CN111553210B - Training method of lane line detection model, lane line detection method and device - Google Patents

Training method of lane line detection model, lane line detection method and device

Info

Publication number
CN111553210B
Authority
CN
China
Prior art keywords
lane line
line detection
lane
detection model
picture
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010298845.9A
Other languages
Chinese (zh)
Other versions
CN111553210A (en)
Inventor
但孝杰
杜青青
汪娟
周俊杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chery Automobile Co Ltd
Lion Automotive Technology Nanjing Co Ltd
Original Assignee
Chery Automobile Co Ltd
Lion Automotive Technology Nanjing Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chery Automobile Co Ltd, Lion Automotive Technology Nanjing Co Ltd filed Critical Chery Automobile Co Ltd
Priority to CN202010298845.9A priority Critical patent/CN111553210B/en
Publication of CN111553210A publication Critical patent/CN111553210A/en
Application granted granted Critical
Publication of CN111553210B publication Critical patent/CN111553210B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a training method of a lane line detection model, a lane line detection method and a lane line detection device, and belongs to the field of image recognition. The method comprises the following steps: acquiring a lane line picture, wherein the lane line picture contains lane lines; labeling the lane lines along their length direction with a plurality of rectangular frames to obtain a sample picture, wherein any side length of each rectangular frame is smaller than the width of the lane line; and training a neural network with the sample picture to obtain the lane line detection model. Because the lane lines in the lane line picture are labeled with rectangular frames whose side lengths are all smaller than the lane line width, the lane line pixels in the picture are marked by the rectangular frames, and the lane line detection model obtained by training the neural network on such sample pictures achieves higher detection accuracy.

Description

Training method of lane line detection model, lane line detection method and device
Technical Field
The disclosure relates to the field of image recognition, and in particular relates to a training method of a lane line detection model, a lane line detection method and a lane line detection device.
Background
With the continuous progress of science and technology, people pay increasing attention to automatic driving technology, whose realization involves many fields such as environment sensing, sensor fusion, data communication, high-precision positioning, path planning and automatic control. Since an autonomous vehicle traveling on a road must plan its route with reference to the lane lines on the road, lane line detection is an essential part of autonomous driving.
A common method for detecting lane lines is to capture a picture of the ground with a device such as an on-vehicle camera and then recognize the captured picture. The recognition method in the related art is a common neural network technique: pictures are labeled, the labeled pictures are used as a sample set to train a neural network model, and the trained model is then used to detect the lane lines in a picture.
In carrying out the present disclosure, the inventors found that the prior art has at least the following problem: during picture labeling, because the clarity of the lane line contours differs between near and far distances, the parts where the lane lines are unclear are left unlabeled, and a neural network model trained on such labeled pictures as a sample set therefore detects lane lines in pictures with low accuracy.
Disclosure of Invention
The embodiment of the disclosure provides a training method of a lane line detection model, a lane line detection method and a lane line detection device, which can improve the accuracy of detecting lane lines from pictures to be detected, and the technical scheme is as follows:
in one aspect, a method for training a lane line detection model is provided, the method comprising:
obtaining a lane line picture, wherein the lane line picture comprises lane lines;
marking lane lines in the lane line pictures along the length direction of the lane lines by adopting a plurality of rectangular frames to obtain sample pictures, wherein any side length of each rectangular frame is smaller than the width of each lane line;
and training the neural network by adopting the sample picture to obtain a lane line detection model.
In one implementation manner of the present disclosure, the marking the lane line in the lane line picture along the length direction of the lane line by using a plurality of rectangular frames includes:
and labeling any two adjacent lane lines by adopting the rectangular frames with different colors.
In one implementation of the present disclosure, the plurality of rectangular boxes are identical in shape and size.
In one implementation of the present disclosure, the rectangular frame ranges in length from 5 to 7 pixels and the rectangular frame ranges in width from 5 to 7 pixels.
In another implementation of the disclosure, at least 80% of the area of each lane line is covered by the rectangular frame.
In another aspect, a lane line detection method is provided, the method including:
obtaining a picture to be detected, and inputting the picture to be detected into a lane line detection model obtained by the training method of the lane line detection model in any one of the embodiments;
and obtaining a lane line detection result output by the lane line detection model.
In another aspect, a training device for a lane line detection model is provided, the device including:
the first acquisition module is used for acquiring lane line pictures, wherein the lane line pictures comprise lane lines;
the image marking module is used for marking the lane lines in the lane line images along the length direction of the lane lines by adopting a plurality of rectangular frames to obtain sample images, and any side length of each rectangular frame is smaller than the width of each lane line;
and the model training module is used for training the neural network by adopting the sample picture to obtain a lane line detection model.
In another aspect, there is provided a lane line detection apparatus, the apparatus including:
the second acquisition module is used for acquiring a picture to be detected, and inputting the picture to be detected into a lane line detection model obtained by the training device of the lane line detection model in the embodiment;
and the third acquisition module is used for acquiring the lane line detection result output by the lane line detection model.
In another aspect, there is provided an electronic device including:
the system comprises a memory and a processor, wherein the memory and the processor are in communication connection, the memory stores computer instructions, and the processor executes the computer instructions, so that the training method of the lane line detection model in any one of the embodiments and the lane line detection method in the embodiment are executed.
In another aspect, a computer-readable storage medium is provided, which stores computer instructions for causing the computer to perform the training method of the lane line detection model as described in any one of the above embodiments and the lane line detection method as described in the above embodiments.
The technical scheme provided by the embodiment of the disclosure has the beneficial effects that:
by implementing the training method of the lane line detection model in the embodiment of the disclosure, a plurality of rectangular frames with any side length smaller than the width of the lane line are used for marking the lane line in the lane line picture, so that the lane line pixels in the lane line picture are marked by the rectangular frames, and the marked lane line picture is used as a sample picture to train the neural network model, so that the lane line detection model is obtained. The accuracy of the lane line detection model in detecting the lane line from the picture to be detected can be improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present disclosure, and other drawings may be obtained according to these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flowchart of a training method of a lane line detection model according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of a lane line detection method provided in another embodiment of the present disclosure;
FIG. 3 is a flowchart of a training method for a lane line detection model provided in another embodiment of the present disclosure;
FIG. 4 is a schematic illustration of lane marking provided by another embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a training device of a lane line detection model according to another embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a lane line detection apparatus according to another embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device according to another embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions and advantages of the present disclosure clearer, the embodiments of the present disclosure are described in further detail below with reference to the accompanying drawings.
Fig. 1 is a flowchart of a training method of a lane line detection model according to an embodiment of the present disclosure, and referring to fig. 1, the method includes:
step S101: and obtaining a lane line picture.
In the embodiment of the disclosure, the lane line picture contains lane lines, that is, the lane line picture is a picture that includes lane lines.
In the embodiment of the present disclosure, the lane line picture may be obtained from a database, where the database may be an existing published database or a database built specifically for training the lane line model, which is not limited in this application.
Step S102: marking the lane lines along the length direction of the lane lines by adopting a plurality of rectangular frames to obtain sample pictures, wherein any side length of each rectangular frame is smaller than the width of each lane line.
In the embodiment of the disclosure, the length direction of the lane line is the direction of travel along the road where the lane line is located, and the width of the lane line refers to the contour width of the lane line perpendicular to the direction of travel of the road.
In step S102, the lane line is labeled manually with rectangular frames. The labeling is not constrained by fixed vertices of the rectangular frames; during manual calibration the lane line area is covered by the rectangular frames as completely as possible. Because the lane line area is manually filled with a plurality of rectangular frames whose side lengths are all smaller than the width of the lane line, both the clear parts and the unclear parts of the lane line in the picture are covered by rectangular frames, and the lane line detection model trained on such pictures as sample pictures detects lane lines from pictures to be detected with high accuracy.
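The labeling in step S102 is performed manually, but the idea of filling a lane line with small boxes can be illustrated programmatically. The following Python sketch places square boxes along a sampled lane-line centerline; the function name, the 6-pixel box size and the polyline representation of the lane line are assumptions made for illustration, not part of the patent's method.

```python
def boxes_along_lane(center_points, lane_width_px, box_size=6):
    """Place square boxes of side box_size along a lane line given by its
    sampled center points (x, y); every box side is smaller than the lane width."""
    assert box_size < lane_width_px, "each side of a box must be smaller than the lane width"
    half = box_size / 2.0
    # Each box is stored as (x_min, y_min, x_max, y_max) in pixel coordinates.
    return [(x - half, y - half, x + half, y + half) for x, y in center_points]

# Example: a slightly slanted lane line sampled every 6 px along its length,
# with a contour width of 20 px (both values are illustrative).
centers = [(100 + 0.3 * t, 400 - t) for t in range(0, 120, 6)]
print(boxes_along_lane(centers, lane_width_px=20)[:3])
```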
Step S103: training a neural network by adopting a sample picture to obtain a lane line detection model.
By implementing the training method of the lane line detection model in the embodiment of the disclosure, the lane lines in the lane line picture are labeled with a plurality of rectangular frames whose side lengths are all smaller than the lane line width, so that the lane line areas in the lane line picture are covered by the rectangular frames, and the lane line detection model obtained by training on such sample pictures detects lane lines from pictures to be detected with high accuracy.
Fig. 2 is a flow chart of a lane line detection method according to another embodiment of the disclosure, referring to fig. 2, the flow chart of the method includes:
step S201: obtaining a picture to be detected, and inputting the picture to be detected into a lane line detection model obtained by the training method of the lane line detection model in the embodiment.
In the embodiment of the disclosure, the picture to be detected may be obtained directly from an image capturing apparatus on a vehicle.
Step S202: and obtaining a lane line detection result output by the lane line detection model.
In the embodiment of the disclosure, the lane line detection result means that the lane line detection model marks, or does not mark, the picture to be detected depending on whether a lane line is detected in it.
By implementing the lane line detection method in the embodiment of the disclosure, the lane line detection result of a picture to be detected can be obtained simply by inputting the picture, captured by the camera equipment on the vehicle, into the lane line detection model, and the detection accuracy is high. The method can overcome the weak robustness of traditional lane line detection methods and the interference of illumination, road vehicles, tree shadows and the like, improve detection efficiency and precision, and broaden the applicable scenarios.
Fig. 3 is a flowchart of a training method of a lane line detection model according to another embodiment of the present disclosure, and referring to fig. 3, the method includes:
step S301: and obtaining lane line pictures from a vehicle picture database.
In the embodiment of the disclosure, the lane line picture contains lane lines, that is, the lane line picture is a picture that includes lane lines.
In the embodiment of the present disclosure, the lane line picture may be obtained from a database, where the database may be an existing published database or a database built specifically for training the lane line model, which is not limited in this application.
Step S302: and receiving a manual filling operation instruction.
In the embodiment of the disclosure, the relevant staff member processes the lane line picture through image editing software on the electronic device, for example Photoshop CS (PS), and fills a plurality of colored rectangular frames into the lane line region of the lane line picture. The electronic device receives the manual filling operation instruction input by the relevant staff member.
Illustratively, the electronic device includes, but is not limited to, a smart phone, a tablet computer, a notebook computer, a desktop computer, or the like.
Step S303: and marking the lane lines in the lane line picture along the length direction of the lane lines by using a plurality of rectangular frames based on the manual filling operation instruction to obtain a sample picture, wherein any side length of each rectangular frame is smaller than the width of each lane line.
In the embodiment of the disclosure, the manual filling operation instruction may be an instruction that delimits the lane line contour (the delimited contour is wider than the lane line, for example, 100% of the lane line width). When the electronic device receives the instruction, a plurality of colored rectangular frames are uniformly filled inside the delimited contour, so that the lane line area is covered by the rectangular frames as completely as possible. The delimited lane line contour itself is not displayed in the filled picture.
In the embodiment of the disclosure, the electronic device marks the lane line area of the lane line picture along the length direction of the lane line by adopting a plurality of colored rectangular frames based on the manual filling operation instruction, and takes the marked picture as a sample picture, wherein the sample picture is subsequently used for training the neural network model.
In the embodiment of the present disclosure, step S303 may include:
and labeling any two adjacent lane lines by adopting the rectangular frames with different colors.
In the embodiment of the disclosure, the lane line picture contains at least one lane line. When there are multiple lane lines, different lane lines need to be distinguished, and any two adjacent lane lines can be labeled with rectangular frames of different colors, so that during subsequent training the model can distinguish different lane lines based on color. For example, as shown in fig. 4, when the lane line picture contains two lane lines 401, the two lane lines 401 are labeled with red and green rectangular boxes 402 respectively; the fill pattern of the rectangular boxes 402 in fig. 4 represents their fill color, for example, the left rectangular boxes are filled red and the right rectangular boxes are filled green. In other embodiments of the present disclosure, when there are three lane lines, the rectangular frames of the three adjacent lane lines may be red, green and red in sequence. Of course, these colors are merely examples, and other arrangements may be employed in other embodiments.
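As a small illustration of the alternating-color rule described above, the snippet below assigns colors to lane lines by index so that adjacent lanes always differ; the RGB values and the function name are illustrative assumptions.

```python
# Two fill colors, as in the red/green example of fig. 4 (RGB values are illustrative).
PALETTE = [(255, 0, 0), (0, 255, 0)]  # red, green

def color_for_lane(lane_index):
    """Adjacent lane lines receive different colors; with two colors they alternate."""
    return PALETTE[lane_index % len(PALETTE)]

for i in range(3):
    print(i, color_for_lane(i))  # 0 -> red, 1 -> green, 2 -> red again
```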
In the embodiment of the present disclosure, the shapes and sizes of the plurality of rectangular frames are the same. The sizes of the rectangular frames are the same, so that the calculation amount required in the process of training the neural network can be reduced.
In an embodiment of the disclosure, the rectangular frame has a length ranging from 5 to 7 pixels and a width ranging from 5 to 7 pixels.
In the embodiment of the present disclosure, the length and width of the rectangular frame may be determined according to the actual situation; for example, in the above embodiment, the length and width of the rectangular frame are both 6 pixels. Because the length and width of the rectangular frame are much smaller than the width of the lane line, lane line pixels at different distances in the lane line picture can be marked by rectangular frames while the rectangular frames contain no non-lane-line pixels.
In an embodiment of the disclosure, at least 80% of the area of each lane line is covered by the rectangular frame.
In the embodiment of the disclosure, each lane line is covered by a plurality of rectangular frames. Since the rectangular frames contain no non-lane-line pixels, not every lane line pixel can be covered by a rectangular frame; to ensure the accuracy of the lane line detection model in detecting lane lines from the picture to be detected, at least 80% of the area of each lane line is covered by the rectangular frames.
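The 80% requirement can be checked once a picture has been labeled. The sketch below computes, under assumed inputs (a binary lane-line mask and the list of labeled boxes), the fraction of lane-line pixels covered by the rectangular frames; it is a verification aid, not part of the patent's training procedure.

```python
import numpy as np

def lane_coverage(lane_mask, boxes):
    """Fraction of lane-line pixels covered by the labeled rectangular frames.

    lane_mask: boolean H x W array, True where the lane line is.
    boxes: list of (x_min, y_min, x_max, y_max) in pixel coordinates.
    """
    h, w = lane_mask.shape
    covered = np.zeros_like(lane_mask, dtype=bool)
    for x0, y0, x1, y1 in boxes:
        x0, y0 = max(int(x0), 0), max(int(y0), 0)
        x1, y1 = min(int(np.ceil(x1)), w), min(int(np.ceil(y1)), h)
        covered[y0:y1, x0:x1] = True
    lane_pixels = lane_mask.sum()
    return float((covered & lane_mask).sum() / lane_pixels) if lane_pixels else 0.0

# A labeled picture satisfies the requirement when lane_coverage(...) >= 0.8.
```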
Step S304: And acquiring the true values of the filled lane lines.
The true value of a lane line is obtained from the lower-left and upper-right corner coordinates of the rectangular frames on that lane line; that is, the true value of the lane line is the set of the lower-left and upper-right corner coordinates of all the rectangular frames on the lane line.
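A minimal sketch of how such a true value could be assembled from the labeled boxes is shown below; the corner convention (image y grows downward, so the lower-left corner is (x_min, y_max)) is an assumption, and the function name is illustrative.

```python
def lane_truth(boxes):
    """True value of one lane line: the set of lower-left and upper-right
    corner coordinates of all of its rectangular frames."""
    truth = []
    for x_min, y_min, x_max, y_max in boxes:
        lower_left = (x_min, y_max)   # image y increases downward, so "lower" means larger y
        upper_right = (x_max, y_min)
        truth.append((lower_left, upper_right))
    return truth

print(lane_truth([(10, 20, 16, 26), (12, 30, 18, 36)]))
```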
Step S305: And training the neural network with the lane line pictures and their corresponding true values as sample pictures to obtain a lane line detection model.
The sample pictures are divided into a training set, a validation set and a test set at a ratio of 6:2:2; in other embodiments other ratios may be used. The neural network is trained with the training set; in this embodiment the neural network may be a convolutional neural network. A trained neural network model is obtained through multiple rounds of iterative optimization (for example, iterating toward a set target). Finally, the detection accuracy of the trained neural network model is calculated on the test set. When the detection accuracy is greater than or equal to a preset threshold (for example, 98%), the trained neural network model is taken as the lane line detection model; when the detection accuracy is smaller than the preset threshold, the convolutional neural network is trained again.
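The 6:2:2 split and the retrain-until-threshold loop described above can be sketched as follows. The training and evaluation routines are left as placeholders (train_fn, eval_fn), since the patent does not fix a particular network architecture; only the split ratio and the 98% threshold come from the text.

```python
import random

ACCURACY_THRESHOLD = 0.98  # preset threshold mentioned above

def split_samples(samples, ratios=(0.6, 0.2, 0.2), seed=0):
    """Split labeled sample pictures into training/validation/test sets at 6:2:2."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)
    n_train = int(ratios[0] * len(samples))
    n_val = int(ratios[1] * len(samples))
    return samples[:n_train], samples[n_train:n_train + n_val], samples[n_train + n_val:]

def train_until_accurate(train_fn, eval_fn, samples, max_attempts=5):
    """Retrain the network until its detection accuracy on the test set reaches the threshold."""
    train_set, val_set, test_set = split_samples(samples)
    for _ in range(max_attempts):
        model = train_fn(train_set, val_set)   # multi-round iterative optimization, e.g. a CNN
        if eval_fn(model, test_set) >= ACCURACY_THRESHOLD:
            return model
    raise RuntimeError("detection accuracy stayed below the preset threshold")
```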
Step S306: And acquiring a picture to be detected taken by the shooting equipment on the vehicle, and then inputting the picture to be detected into the lane line detection model.
Here, the shooting equipment on the vehicle may be an in-vehicle camera. Accordingly, steps S306 and S307 are performed by the in-vehicle terminal, while the foregoing steps S301 to S305 may be performed by another electronic device, such as a server.
In this step, the picture to be detected taken by the vehicle may or may not contain a lane line; either can be used as the picture to be detected for lane line detection.
Before being input into the lane line detection model, the captured picture to be detected may be preprocessed, for example by size normalization, so as to obtain a picture size that the model can process.
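A minimal preprocessing sketch is given below, assuming a fixed model input size and simple scaling of pixel values; the size, the normalization scheme and the use of Pillow are assumptions, since the patent only mentions size normalization in general terms.

```python
from PIL import Image
import numpy as np

MODEL_INPUT_SIZE = (512, 256)  # (width, height) expected by the model; illustrative value

def preprocess(picture_path):
    """Size-normalize a captured picture before feeding it to the lane line detection model."""
    img = Image.open(picture_path).convert("RGB").resize(MODEL_INPUT_SIZE, Image.BILINEAR)
    x = np.asarray(img, dtype=np.float32) / 255.0   # scale pixel values to [0, 1]
    return x[np.newaxis]                            # add a batch dimension: (1, H, W, 3)
```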
Step S307: And obtaining the lane line detection result output by the lane line detection model.
Here, the lane line detection result may include whether a lane line exists and a position of the lane line. The vehicle-mounted terminal assists driving of the vehicle, for example, realizes automatic driving, based on the lane line detection result output by the lane line detection model.
Fig. 5 is a schematic structural diagram of a training device for a lane line detection model according to another embodiment of the present disclosure, and referring to fig. 5, the device includes:
a first obtaining module 501, configured to obtain a lane line image, where the lane line image includes a lane line;
the image marking module 502 is configured to mark the lane line along the length direction of the lane line by using a plurality of rectangular frames, so as to obtain a sample image, where any side length of the rectangular frames is smaller than the width of the lane line;
the model training module 503 is configured to train the neural network by using the sample image to obtain a lane line detection model.
By implementing the training device of the lane line detection model in the embodiment of the disclosure, the lane lines in the lane line picture are labeled with a plurality of rectangular frames whose side lengths are all smaller than the lane line width, so that the lane line pixels in the picture are marked while the rectangular frames contain no non-lane-line pixels, and the lane line detection model obtained by training on such sample pictures detects lane lines from pictures to be detected with high accuracy.
Optionally, in some embodiments of the present disclosure, the image marking module 502 in the foregoing embodiment is configured to label any two adjacent lane lines with rectangular boxes of different colors.
In the embodiment of the disclosure, the lane line picture contains at least one lane line. When there are multiple lane lines, different lane lines need to be distinguished, and any two adjacent lane lines can be labeled with rectangular frames of different colors, so that during subsequent training the model can distinguish different lane lines based on color. For example, as shown in fig. 4, when the lane line picture contains two lane lines 401, the two lane lines 401 are labeled with red and green rectangular boxes 402 respectively; the fill pattern of the rectangular boxes 402 in fig. 4 represents their fill color, for example, the left rectangular boxes are filled red and the right rectangular boxes are filled green. In other embodiments of the present disclosure, when there are three lane lines, the rectangular frames of the three adjacent lane lines may be red, green and red in sequence. Of course, these colors are merely examples, and other arrangements may be employed in other embodiments.
Alternatively, in some embodiments of the present disclosure, the shape and size of the plurality of rectangular frames in the above embodiments are the same. The sizes of the rectangular frames are the same, so that the calculation amount required in the process of training the neural network can be reduced.
Optionally, in some embodiments of the present disclosure, the length of the rectangular frame in the above embodiments ranges between 5 and 7 pixels, and the width of the rectangular frame ranges between 5 and 7 pixels.
In the embodiment of the present disclosure, the length and width of the rectangular frame may be determined according to the actual situation; for example, in the above embodiment, the length and width of the rectangular frame are both 6 pixels. Because the length and width of the rectangular frame are much smaller than the width of the lane line, lane line pixels at different distances in the lane line picture can be marked by rectangular frames while the rectangular frames contain no non-lane-line pixels.
Optionally, in some embodiments of the present disclosure, at least 80% of the area of each lane line is covered by the rectangular frame.
In the embodiment of the disclosure, each lane line is covered by a plurality of rectangular frames. Since the rectangular frames contain no non-lane-line pixels, not every lane line pixel can be covered by a rectangular frame; to ensure the accuracy of the lane line detection model in detecting lane lines from the picture to be detected, at least 80% of the area of each lane line is covered by the rectangular frames.
Fig. 6 is a schematic structural diagram of a lane line detection apparatus according to another embodiment of the present disclosure, and referring to fig. 6, the apparatus includes:
the second obtaining module 601 is configured to obtain a picture to be detected, and input the picture to be detected into a lane line detection model obtained by the training device for a lane line detection model described in the above embodiment;
and a third obtaining module 602, configured to obtain a lane line detection result output by the lane line detection model.
By implementing the lane line detection device in the embodiment of the disclosure, the lane line detection result of the picture to be detected can be obtained only by inputting the picture to be detected, which is shot by the camera equipment on the vehicle, into the lane line detection model, and the detection accuracy is high.
Fig. 7 is a schematic diagram of an electronic device according to an exemplary embodiment. The electronic device 700 includes a Central Processing Unit (CPU) 701, a system memory 704 including a Random Access Memory (RAM) 702 and a Read Only Memory (ROM) 703, and a system bus 705 connecting the system memory 704 and the central processing unit 701. The electronic device 700 also includes a basic input/output system (I/O system) 706, which facilitates the transfer of information between various devices within the computer, and a mass storage device 707 for storing an operating system 713, application programs 714, and other program modules 715.
The basic input/output system 706 includes a display 708 for displaying information and an input device 709, such as a mouse, keyboard, or the like, for a user to input information. Wherein the display 708 and the input device 709 are coupled to the central processing unit 701 through an input output controller 710 coupled to a system bus 705. The basic input/output system 706 may also include an input/output controller 710 for receiving and processing input from a number of other devices, such as a keyboard, mouse, or electronic stylus. Similarly, the input output controller 710 also provides output to a display screen, a printer, or other type of output device.
The mass storage device 707 is connected to the central processing unit 701 through a mass storage controller (not shown) connected to the system bus 705. The mass storage device 707 and its associated computer-readable media provide non-volatile storage for the electronic device 700. That is, the mass storage device 707 may include a computer readable medium (not shown) such as a hard disk or CD-ROM drive.
The computer readable medium may include computer storage media and communication media without loss of generality. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices. Of course, those skilled in the art will recognize that the computer storage medium is not limited to the one described above. The system memory 704 and mass storage device 707 described above may be collectively referred to as memory.
According to various embodiments of the present disclosure, the electronic device 700 may also operate by connecting to a remote computer on a network such as the Internet. That is, the electronic device 700 may be connected to the network 712 through a network interface unit 711 connected to the system bus 705, or the network interface unit 711 may be used to connect to other types of networks or remote computer systems (not shown).
The memory further includes one or more programs stored in the memory, and the central processor 701 implements the training method of the lane line detection model shown in fig. 1 and the lane line detection method shown in fig. 2 by executing the one or more programs.
In an exemplary embodiment, a non-transitory computer-readable storage medium is also provided, such as a memory including instructions executable by a processor of an electronic device to perform the training method of the lane line detection model and the lane line detection method shown in the various embodiments of the present disclosure. For example, the non-transitory computer-readable storage medium may be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, etc.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present disclosure is provided for the purpose of illustration only, and is not intended to limit the disclosure to the particular embodiments disclosed, but on the contrary, the intention is to cover all modifications, equivalents, alternatives, and alternatives falling within the spirit and principles of the disclosure.

Claims (8)

1. A training method of a lane line detection model, characterized by comprising the following steps:
obtaining a lane line picture, wherein the lane line picture comprises lane lines;
marking lane lines in the lane line picture along the length direction of the lane lines by adopting a plurality of rectangular frames to obtain a sample picture, wherein any side length of each rectangular frame is smaller than the width of the lane line, and the rectangular frames are identical in shape and size; at least two rectangular frames are adopted along both the width direction and the length direction of each lane line, and any two adjacent lane lines are marked by adopting rectangular frames of different colors;
and training the neural network by adopting the sample picture to obtain a lane line detection model.
2. The training method of a lane line detection model according to claim 1, wherein the rectangular frame has a length ranging from 5 to 7 pixels and a width ranging from 5 to 7 pixels.
3. The training method of a lane line detection model according to claim 1, wherein at least 80% of the area of each lane line is covered by the rectangular frame.
4. A lane line detection method, characterized by comprising:
obtaining a picture to be detected, and inputting the picture to be detected into a lane line detection model obtained by the training method of the lane line detection model according to any one of claims 1 to 3;
and obtaining a lane line detection result output by the lane line detection model.
5. A training device of a lane line detection model, characterized by comprising the following components:
the first acquisition module is used for acquiring lane line pictures, wherein the lane line pictures comprise lane lines;
the image marking module is used for marking the lane lines in the lane line picture along the length direction of the lane lines by adopting a plurality of rectangular frames to obtain sample pictures, wherein any side length of each rectangular frame is smaller than the width of the lane line, and the rectangular frames are identical in shape and size; at least two rectangular frames are adopted along both the width direction and the length direction of each lane line, and any two adjacent lane lines are marked by adopting rectangular frames of different colors;
and the model training module is used for training the neural network by adopting the sample picture to obtain a lane line detection model.
6. A lane line detection apparatus, comprising:
the second acquisition module is used for acquiring a picture to be detected, and inputting the picture to be detected into a lane line detection model obtained by the training device of the lane line detection model according to claim 5;
and the third acquisition module is used for acquiring the lane line detection result output by the lane line detection model.
7. An electronic device, comprising: a memory and a processor, the memory and the processor being communicatively connected to each other, the memory storing computer instructions, the processor executing the training method of the lane line detection model according to any one of claims 1 to 3 or the lane line detection method according to claim 4 by executing the computer instructions.
8. A computer-readable storage medium storing computer instructions for causing the computer to perform the training method of the lane line detection model according to any one of claims 1 to 3 or the lane line detection method according to claim 4.
CN202010298845.9A 2020-04-16 2020-04-16 Training method of lane line detection model, lane line detection method and device Active CN111553210B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010298845.9A CN111553210B (en) 2020-04-16 2020-04-16 Training method of lane line detection model, lane line detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010298845.9A CN111553210B (en) 2020-04-16 2020-04-16 Training method of lane line detection model, lane line detection method and device

Publications (2)

Publication Number Publication Date
CN111553210A CN111553210A (en) 2020-08-18
CN111553210B true CN111553210B (en) 2024-04-09

Family

ID=71999967

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010298845.9A Active CN111553210B (en) 2020-04-16 2020-04-16 Training method of lane line detection model, lane line detection method and device

Country Status (1)

Country Link
CN (1) CN111553210B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105922991A (en) * 2016-05-27 2016-09-07 广州大学 Lane departure early warning method and system based on generation of virtual lane lines
CN106203237A (en) * 2015-05-04 2016-12-07 杭州海康威视数字技术股份有限公司 The recognition methods of container-trailer numbering and device
CN106203401A (en) * 2016-08-11 2016-12-07 电子科技大学 A kind of method for quick of lane line
CN108009524A (en) * 2017-12-25 2018-05-08 西北工业大学 A kind of method for detecting lane lines based on full convolutional network
CN108154114A (en) * 2017-12-22 2018-06-12 温州大学激光与光电智能制造研究院 A kind of method of lane detection
CN108615358A (en) * 2018-05-02 2018-10-02 安徽大学 A kind of congestion in road detection method and device
CN109389046A (en) * 2018-09-11 2019-02-26 昆山星际舟智能科技有限公司 Round-the-clock object identification and method for detecting lane lines for automatic Pilot

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108764187B (en) * 2018-06-01 2022-03-08 百度在线网络技术(北京)有限公司 Method, device, equipment, storage medium and acquisition entity for extracting lane line

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106203237A (en) * 2015-05-04 2016-12-07 杭州海康威视数字技术股份有限公司 The recognition methods of container-trailer numbering and device
CN105922991A (en) * 2016-05-27 2016-09-07 广州大学 Lane departure early warning method and system based on generation of virtual lane lines
CN106203401A (en) * 2016-08-11 2016-12-07 电子科技大学 A kind of method for quick of lane line
CN108154114A (en) * 2017-12-22 2018-06-12 温州大学激光与光电智能制造研究院 A kind of method of lane detection
CN108009524A (en) * 2017-12-25 2018-05-08 西北工业大学 A kind of method for detecting lane lines based on full convolutional network
CN108615358A (en) * 2018-05-02 2018-10-02 安徽大学 A kind of congestion in road detection method and device
CN109389046A (en) * 2018-09-11 2019-02-26 昆山星际舟智能科技有限公司 Round-the-clock object identification and method for detecting lane lines for automatic Pilot

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
A Fast Learning Method for Accurate and Robust Lane Detection Using Two-Stage Feature Extraction with YOLO v3; Xiang Zhang et al.; Sensors; pages 1-20 of the main text *

Also Published As

Publication number Publication date
CN111553210A (en) 2020-08-18

Similar Documents

Publication Publication Date Title
EP3806064B1 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
WO2021051344A1 (en) Method and apparatus for determining lane lines in high-precision map
CN112735253B (en) Traffic light automatic labeling method and computer equipment
CN111539484A (en) Method and device for training neural network
CN112288716A (en) Steel coil bundling state detection method, system, terminal and medium
US20230360246A1 (en) Method and System of Real-Timely Estimating Dimension of Signboards of Road-side Shops
CN113902740A (en) Construction method of image blurring degree evaluation model
CN111553210B (en) Training method of lane line detection model, lane line detection method and device
CN112434582A (en) Lane line color identification method and system, electronic device and storage medium
CN111951328A (en) Object position detection method, device, equipment and storage medium
CN116386373A (en) Vehicle positioning method and device, storage medium and electronic equipment
CN113763466A (en) Loop detection method and device, electronic equipment and storage medium
CN109903308B (en) Method and device for acquiring information
CN114821513B (en) Image processing method and device based on multilayer network and electronic equipment
CN113112551B (en) Camera parameter determining method and device, road side equipment and cloud control platform
CN115631169A (en) Product detection method and device, electronic equipment and storage medium
CN112489240B (en) Commodity display inspection method, inspection robot and storage medium
CN114792343A (en) Calibration method of image acquisition equipment, and method and device for acquiring image data
CN114842443A (en) Target object identification and distance measurement method, device and equipment based on machine vision and storage medium
CN116266402A (en) Automatic object labeling method and device, electronic equipment and storage medium
CN114283413B (en) Method and system for identifying digital instrument readings in inspection scene
CN112101299B (en) Automatic traffic sign extraction method and system based on binocular CCD camera
CN116266388A (en) Method, device and system for generating annotation information, electronic equipment and storage medium
CN116468787A (en) Position information extraction method and device of forklift pallet and domain controller
CN112325780A (en) Distance measuring and calculating method and device based on community monitoring

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant