CN111815742A - Lane line generation method and system - Google Patents

Lane line generation method and system

Info

Publication number
CN111815742A
CN111815742A
Authority
CN
China
Prior art keywords
lane
lane line
vehicle
image
coordinates
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010961103.XA
Other languages
Chinese (zh)
Inventor
李倩 (Li Qian)
贾双成 (Jia Shuangcheng)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mushroom Car Union Information Technology Co Ltd
Original Assignee
Mushroom Car Union Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mushroom Car Union Information Technology Co Ltd filed Critical Mushroom Car Union Information Technology Co Ltd
Priority to CN202010961103.XA priority Critical patent/CN111815742A/en
Publication of CN111815742A publication Critical patent/CN111815742A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/20 Drawing from basic elements, e.g. lines or circles
    • G06T 11/206 Drawing of charts or graphs
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/588 Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G 1/00 Traffic control systems for road vehicles
    • G08G 1/01 Detecting movement of traffic to be counted or controlled
    • G08G 1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/40 Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application relates to a lane line generation method and system. The method comprises: acquiring an image containing lane lines, together with the vehicle's geographical position, collected while the vehicle is driving; recognizing the image to obtain the pixel coordinates of the lane line pixels in it; calculating the spatial coordinates of the two lane lines of the lane the vehicle currently occupies from the lane line pixel coordinates and the corresponding geographical position; checking whether the spatial coordinates of the two lane lines are already stored; if not, storing them; and, for a lane line whose spatial coordinates are already stored, optimizing the spatial coordinates according to a preset rule. The scheme can produce the lane lines required by a high-precision map from video data and positioning signals collected by a single vehicle. It offers low production cost, a short processing pipeline, a simple algorithm, and high efficiency, and greatly shortens the production and update cycle of high-precision maps.

Description

Lane line generation method and system
Technical Field
The application relates to the technical field of navigation, in particular to a lane line generation method and system.
Background
In the related art, high-precision maps are produced by survey vehicles equipped with panoramic cameras, binocular cameras, and similar devices. This makes production costly, lengthens the processing pipeline, requires complex algorithms, and is inefficient, limiting how quickly high-precision maps can be produced and updated.
Disclosure of Invention
To overcome these problems, the application provides a lane line generation method and system that can produce the lane lines required by a high-precision map from video data and positioning signals collected by a single vehicle.
An embodiment of the application discloses a lane line generation method comprising the following steps: acquiring an image containing lane lines, together with the vehicle's geographical position, collected while the vehicle is driving; recognizing the image to obtain the pixel coordinates of the lane line pixels in it; calculating the spatial coordinates of the two lane lines of the lane the vehicle currently occupies from the lane line pixel coordinates and the corresponding geographical position; checking whether the spatial coordinates of the two lane lines are already stored; if not, storing them; and, for a lane line whose spatial coordinates are already stored, optimizing the spatial coordinates according to a preset rule.
In the method, a vehicle drives along each lane of a specific road in turn, collecting images containing the lane lines together with the vehicle's geographical position; once all lane lines of that road have been obtained, the high-precision map lanes are drawn.
In the above method, the optimization specifically comprises: comparing the newly calculated spatial coordinates of the lane line with the stored ones according to a preset rule, and keeping the higher-precision set.
Alternatively, the optimization specifically comprises: discarding the newly calculated lane line spatial coordinates.
In the above method, acquiring the image containing the lane and the vehicle's geographical position collected while the vehicle is driving specifically comprises: taking images at a preset time interval and recording the current geographical position at the moment each image is taken.
Alternatively, it specifically comprises: acquiring video data containing the lane and collecting the vehicle's geographical position while the vehicle is driving; extracting frames from the video data; and looking up the collected geographical position matching each frame's time information.
A lane line generation method, comprising: acquiring an image containing the lane, together with the vehicle's geographical position, collected while the vehicle is driving; recognizing the image to obtain the pixel coordinates of the lane line pixels in it, and selecting the pixel coordinates of the two lane lines of the vehicle's current lane; calculating the spatial coordinates of the two lane lines from their pixel coordinates and the corresponding geographical position; checking whether the spatial coordinates of the two lane lines are already stored; if not, storing them; and, for a lane line whose spatial coordinates are already stored, optimizing the spatial coordinates according to a preset rule.
In the above method, the optimization specifically comprises: comparing the newly calculated lane line spatial coordinates with the stored ones according to a preset rule and keeping the higher-precision set; or discarding the newly calculated lane line spatial coordinates.
An embodiment of the application further provides a lane line generation system implementing the above method.
The technical scheme provided by the embodiments of the application can have the following beneficial effects: the lane lines required by a high-precision map can be produced from video data and positioning signals collected by a single vehicle, with low production cost, a short processing pipeline, a simple algorithm, and high efficiency, greatly shortening the production and update cycle of high-precision maps.
After the lane line pixel coordinates (or spatial coordinates) are obtained by recognition, only the spatial coordinates of the lane lines of the vehicle's own lane are kept. For a lane line shared by two adjacent lanes, if its spatial coordinates have already been collected and calculated, only the other lane line still needs to be drawn; the shared line's spatial coordinates can be handled by the optimization step, for example by keeping the higher-precision set as the shared line's coordinates, or by keeping the previously measured coordinates and discarding the newly measured ones. This improves the drawing efficiency of the high-precision map.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flowchart of a lane line generation method according to an embodiment of the present application;
fig. 2 is a schematic structural diagram of a lane line generation system according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of a lane line generation system according to another embodiment of the present application.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information, and similarly, second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of a lane line generation method according to an embodiment of the present application.
Step 11: acquire an image containing the lane, together with the vehicle's geographical position, collected while the vehicle is driving.
In this step, video of the lane can be recorded by any device with a camera, such as a vehicle-mounted dashboard camera or the driver's mobile phone; the video is later fed to a recognition model to identify the lane lines. The vehicle's geographical position can be collected by positioning equipment fitted to the vehicle or to a mobile terminal such as a phone. Existing systems such as GPS or BeiDou may be used; this embodiment places no restriction on the choice. The geographical position is used to determine the coordinates of the vehicle track, and the video data and position data can be uploaded to the server through the vehicle's head unit.
In one implementation of this embodiment, the lane lines in front of the vehicle are photographed at preset time intervals under program control, with the camera's shooting period matched to the period at which the on-board GPS receiver records the vehicle's position. For example, if the GPS device records one fix per second, each image is taken at the moment a fix is collected. Because the shooting period is set in software, in other embodiments it need not coincide with the GPS acquisition period, as long as a corresponding geographical position can be obtained for each image after it is taken.
The server or the vehicle can read and process the position data with GIS software. Since GIS software can read txt files, the file of vehicle-track coordinates recorded by the GPS receiver is converted into a txt format the GIS software can read.
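As a hypothetical illustration of that conversion step, assuming the GPS recorder emits one "&lt;time&gt;,&lt;lat&gt;,&lt;lon&gt;" line per fix (a format the patent does not specify), the track file could be parsed like this before handing it to GIS software:

```python
from typing import List, Tuple

def parse_gps_track(lines: List[str]) -> List[Tuple[float, float, float]]:
    """Parse a GPS track exported as plain text.

    Hypothetical line format (the patent does not fix one):
        "<unix_time>,<latitude>,<longitude>"
    Returns a list of (time, lat, lon) tuples sorted by time.
    """
    track = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        t, lat, lon = line.split(",")
        track.append((float(t), float(lat), float(lon)))
    track.sort(key=lambda rec: rec[0])  # GIS import usually expects time order
    return track
```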
The application also does not exclude an arrangement in which the camera faces the rear of the vehicle, photographing the lane lines the vehicle has already passed.
In another implementation of this embodiment, the vehicle's camera records video of the lane lines, yielding a continuous sequence of images containing them. Frames therefore have to be extracted from the video data collected while the vehicle is driving.
Video is typically recorded at 30 frames per second. After frames are extracted according to a preset rule, the capture time of each frame is read, and the geographical position whose timestamp matches it is looked up in the file recorded by the GPS device. The purpose of this step is likewise to make each extracted image correspond accurately to the vehicle's current geographical position.
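As an illustration of this lookup (the patent does not prescribe an implementation), a frame's timestamp can be matched to the nearest GPS fix with a binary search; the (time, lat, lon) record layout is an assumption:

```python
import bisect

def gps_for_frame(frame_time, track):
    """Return the GPS record closest in time to a video frame.

    `track` is a list of (time, lat, lon) tuples sorted by time,
    e.g. parsed from the txt file recorded by the GPS device.
    """
    times = [t for t, _, _ in track]
    i = bisect.bisect_left(times, frame_time)
    # the nearest fix is one of the two neighbours of the insertion point
    candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
    return track[min(candidates, key=lambda j: abs(times[j] - frame_time))]
```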
Step 12: recognize the image to obtain the pixel coordinates of the lane line pixels in it.
This step can be carried out on a server-side GPU and specifically comprises: training on samples with the deep learning framework TensorFlow to build a model, verifying the model's accuracy, and using the verified model to extract the lane line pixels from the image obtained in step 11, thereby obtaining their pixel coordinates.
Step 13: calculate the spatial coordinates of the two lane lines of the vehicle's current lane from the lane line pixel coordinates and the corresponding geographical position.
Converting the pixel coordinates in the image into spatial coordinates based on the geographical position can be done with existing techniques.
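The patent leaves this conversion to prior art; one common prior-art choice is a flat-ground homography obtained from a one-off camera calibration. The sketch below applies an assumed 3x3 homography H to a pixel and is only illustrative, not the patent's method:

```python
def pixel_to_ground(u, v, H):
    """Map an image pixel (u, v) to ground-plane coordinates (x, y)
    using a 3x3 homography H (flat-road assumption).

    H is assumed to come from a prior camera calibration; the resulting
    local (x, y) would then be shifted by the vehicle's GPS position.
    """
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w  # projective division
```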
In general, an image captured from the vehicle may contain several lane lines: those of the vehicle's own lane, possibly those of the adjacent lanes, and even those of the lanes beyond them.
For example, the spatial coordinates of a lane line can be compared with the vehicle's measured geographical position: if the distance between them is below a preset threshold, the line is judged to belong to the current lane; if it exceeds the threshold, the line is judged to belong to another lane.
Step 14: check whether the spatial coordinates of the two lane lines are stored; if not, store them; and, for a lane line whose spatial coordinates are already stored, optimize the spatial coordinates according to a preset rule.
In one embodiment, the search can be implemented in several ways: by the lane's serial number, by the vehicle's geographical position at the time the lane line was measured, or by comparing the measured spatial coordinates of the lane line with those stored in the database; if the difference between the two is below a preset threshold, the line is a common lane line shared with the adjacent lane. If stored spatial coordinates for the same lane line are found, the optimization is performed.
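The coordinate-comparison variant of this search can be sketched as follows; pairing the points one-to-one and the 0.5 m threshold are illustrative assumptions, since the patent only speaks of a preset threshold:

```python
import math

def is_same_lane_line(new_coords, stored_coords, threshold_m=0.5):
    """Treat a newly measured lane line as an already-stored one if the
    mean point-to-point distance is below a threshold.

    Coordinates are (x, y) pairs in a local metric frame, assumed to be
    sampled at corresponding points along both polylines.
    """
    dists = [math.hypot(nx - sx, ny - sy)
             for (nx, ny), (sx, sy) in zip(new_coords, stored_coords)]
    return sum(dists) / len(dists) < threshold_m
```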
To draw the lane lines a high-precision map needs for a given road, the vehicle must collect the lane line spatial coordinates of every lane of that road. Take, for example, a one-way road with two lanes and three lane lines: a common lane line with one further lane line on each side of it.
The vehicle drives along each lane in turn. It collects images on the first lane, finally obtaining the spatial coordinates of the first lane's two lane lines; likewise it collects images on the second lane, finally obtaining the spatial coordinates of the second lane's two lane lines.
When the high-precision map is then drawn, the first and second lanes share a common lane line whose spatial coordinates were obtained once during each lane's collection. The common lane line can be handled in either of two ways:
1) collect the two adjacent lanes to obtain two sets of spatial coordinates for the common lane line, compare them according to a preset rule, and keep the set with the higher coordinate precision as the drawing data for the common line;
2) collect the two adjacent lanes to obtain two sets of spatial coordinates for the common lane line, and if the common line's coordinates have already been collected and stored (for example, calculated when the adjacent lane's two lane lines, including the common one, were collected), discard the second measurement.
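Both ways reduce to choosing between two measurements of the same line. A minimal sketch, assuming each measurement carries a precision figure where smaller is better (e.g. a GPS dilution-of-precision value, an assumed convention; the patent only says to keep the higher-precision set):

```python
def resolve_common_line(stored, candidate):
    """Pick between two measurements of a common lane line.

    Each measurement is a (coords, precision) pair; smaller precision
    values are assumed to mean a better fix. Ties keep the stored set,
    so way 2) of the text is the special case of equal precision.
    """
    return stored if stored[1] <= candidate[1] else candidate
```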
For example, a one-way road with 3 lanes has 4 lane lines, 2 of which are common lane lines shared between adjacent lanes.
Collecting all 3 lanes yields 3 sets of lane line spatial coordinates, which together cover the 4 distinct lane lines, with the 2 common lines each measured twice. After the common lines' coordinates are processed with the method of this embodiment, the 4 lane lines of the 3 lanes can be drawn.
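The counting in this example generalizes: a one-way road with N lanes has N+1 distinct lane lines, N-1 of them common, while per-lane collection yields 2N measured lines. A small sketch of this arithmetic (not part of the patent):

```python
def lane_line_counts(num_lanes):
    """For a one-way road with `num_lanes` lanes, return the number of
    distinct lane lines, common lane lines, and per-lane measurements.
    """
    distinct = num_lanes + 1          # one more line than lanes
    common = max(num_lanes - 1, 0)    # each adjacent pair shares one line
    measured = 2 * num_lanes          # two lines collected per lane pass
    return distinct, common, measured
```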
The method this embodiment provides for identifying a common lane line is: compare the lane line spatial coordinates measured by the vehicle with those stored in the database; if the difference between the two is below a preset threshold, the line is a common lane line shared with the adjacent lane.
In the embodiment above, after the lane line spatial coordinates are obtained in step 13, the method determines whether each line belongs to the vehicle's current lane. In another embodiment of the application, once the on-board device has the pixel coordinates of the lane lines in the image, it can identify the current lane's lines by a preset algorithm and exclude those of other lanes. With a monocular camera, a deep learning model recognizes the lines on the left and right of the current lane well but the lines of other lanes poorly; so after the lane line pixel coordinates are obtained by recognition, the recognition results other than the current lane's left and right lines can be deleted in software. That is, only the pixel coordinates of the two lines flanking the vehicle's current lane are kept, and the subsequent spatial coordinates of those two lines are calculated from them. For example, a line whose vector direction makes less than a preset angle with the image's vertical axis may be judged to belong to the current lane; alternatively, the nearest line on each side of the image's vertical axis may be taken as the current lane's lines. Other identification methods do not limit the application.
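The second heuristic above (nearest line on each side of the image's vertical axis) can be sketched as follows; representing each line by a single bottom-row u-coordinate is a simplifying assumption for illustration:

```python
def own_lane_lines(lines, image_width):
    """Keep only the two lane lines flanking the image's vertical centre.

    Each line is represented by the u-coordinate where it crosses the
    bottom of the image. Returns the nearest line on each side of the
    centre axis (fewer than two if a side has no detection).
    """
    cx = image_width / 2.0
    left = [u for u in lines if u < cx]
    right = [u for u in lines if u >= cx]
    chosen = []
    if left:
        chosen.append(max(left))   # closest detection on the left
    if right:
        chosen.append(min(right))  # closest detection on the right
    return chosen
```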
Once the current lane's lines have been selected by pixel coordinates, only those two lines are converted from pixel coordinates to spatial coordinates, and the subsequent steps proceed as before. This embodiment is not described in further detail.
The application also provides an embodiment of a lane line generation system.
Fig. 2 is a schematic structural diagram of a lane line generation system according to an embodiment of the present application.
Referring to fig. 2, the system includes: the system comprises an image acquisition module 21, a geographic position information acquisition module 22, a first identification module 23, a first calculation module 24, a processor module 25 and a storage module 26.
And the image acquisition module 21 is used for acquiring an image containing a lane acquired during the running of the vehicle.
A geographic position information obtaining module 22, configured to obtain geographic position information of the vehicle corresponding to the image obtained by the image obtaining module 21.
The first identification module 23 is configured to identify the image acquired by the image acquisition module 21, so as to obtain pixel coordinates of lane line pixel points in the image.
The first calculation module 24 is configured to calculate the spatial coordinates of the two lane lines of the vehicle's current lane from the lane line pixel coordinates obtained by the first identification module 23 and the geographical position obtained by the geographic position information acquisition module 22.
The processor module 25 is configured to check whether the spatial coordinates of the two lane lines are stored; if not, to have the storage module 26 store them; and, for a lane line whose spatial coordinates are already stored, to optimize the spatial coordinates according to a preset rule.
And a storage module 26 for storing the spatial coordinates of the lane lines.
In one embodiment, the search can be implemented in several ways: by the lane's serial number, by the vehicle's geographical position at the time the lane line was measured, or by comparing the measured spatial coordinates of the lane line with those stored in the database; if the difference between the two is below a preset threshold, the line is a common lane line shared with the adjacent lane. If stored spatial coordinates for the same lane line are found, the optimization is performed.
As described above, the optimization may discard the newly measured spatial coordinates, or keep the set with the higher coordinate precision as the drawing data for the common lane line.
To improve drawing precision, the video data and positioning information can be collected several times for each lane, giving several sets of pixel-point spatial coordinates for the lane's left and right lines; fusing the sets for each side then yields the two final sets of lane line coordinates.
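The patent does not specify the fusion method; simple per-point averaging of equally sampled polylines is one possible choice, sketched here as an illustration:

```python
def fuse_passes(passes):
    """Fuse several passes' coordinates for one lane line by averaging
    corresponding points.

    `passes` is a list of equally long [(x, y), ...] polylines, assumed
    to be resampled at corresponding stations along the road.
    """
    n = len(passes)
    return [(sum(p[i][0] for p in passes) / n,
             sum(p[i][1] for p in passes) / n)
            for i in range(len(passes[0]))]
```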
Fig. 3 is a schematic structural diagram of a lane line generation system according to another embodiment of the present application.
Referring to fig. 3, a lane marking generation system includes an image acquisition module 21, a geographic position information acquisition module 22, a second identification module 33, a second calculation module 34, a processor module 25, and a storage module 26.
In one embodiment, the functions of the image acquisition module 21, the geographic position information acquisition module 22, the processor module 25, and the storage module 26 may be the same as in the embodiment of fig. 2.
The second recognition module 33 is configured to recognize the image acquired by the image acquisition module 21, obtain the pixel coordinates of the lane line pixels in it, and select the pixel coordinates of the two lane lines of the vehicle's current lane.
The second calculation module 34 is configured to calculate the spatial coordinates of the two lane lines from the pixel coordinates obtained by the second recognition module 33 and the geographical position obtained by the geographic position information acquisition module 22.
To further improve drawing efficiency, the common lane line of two adjacent lanes may be drawn only once rather than processed twice from the two collected recognition results. Specifically, the system determines whether a common lane line exists. This can be done in several ways: for example, by determining the lane number of the lane line and comparing it with the lane numbers stored in the storage module to establish the relationship between lanes; or, as described above, by comparing the lane line spatial coordinates obtained by the calculation module with those stored in the storage module and judging whether they describe the same line, in which case the new measurement is neither processed nor stored.
The aspects of the present application have been described in detail hereinabove with reference to the accompanying drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that the acts and modules referred to in the specification are not necessarily required in the present application. In addition, it can be understood that the steps in the method of the embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and the modules in the device of the embodiment of the present application may be combined, divided, and deleted according to actual needs.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for performing some or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or computing device, server, etc.), causes the processor to perform some or all of the various steps of the above-described method according to the present application.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The foregoing description of embodiments of the present application is intended to be exemplary; it is neither exhaustive nor limited to the embodiments disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein was chosen to best explain the principles of the embodiments and their practical application, or improvements over technologies found in the marketplace, and to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. A lane line generation method, comprising:
acquiring an image containing lane lines and the geographic position information of a vehicle, both collected while the vehicle is driving;
recognizing the image to obtain pixel coordinates of the lane line pixel points in the image;
calculating spatial coordinates of the two lane lines of the lane in which the vehicle is currently located, according to the lane line pixel coordinates and the corresponding geographic position information;
checking whether spatial coordinates of the two lane lines have already been stored;
if not, storing the spatial coordinates of the lane lines; and, for a lane line whose spatial coordinates have already been stored, performing spatial coordinate optimization on the lane line according to a preset rule.
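The patent itself contains no source code. As an illustrative aid only, the claimed flow — detect lane line pixels, convert them to spatial coordinates, then either store them or optimize the stored ones — might be sketched in Python as follows. Every name here is hypothetical, and detection, coordinate conversion, and the preset optimization rule are abstracted into supplied callables, since the claims do not fix their implementations:

```python
from typing import Callable, Dict, List, Tuple

Point = Tuple[float, float]  # a point in some planar spatial frame

def generate_lane_lines(
    frames: List[dict],            # each frame: {"image": ..., "geo": Point}
    detect_lane_pixels: Callable,  # image -> pixel coords of the two lane lines
    pixels_to_spatial: Callable,   # (pixel coords, geo position) -> spatial coords
    store: Dict[str, List[Point]], # persisted lane line spatial coordinates
    optimize: Callable,            # preset rule applied when coords already stored
) -> Dict[str, List[Point]]:
    """Sketch of claim 1: detect, convert, then store or optimize."""
    for frame in frames:
        # Pixel coordinates of the two lane lines of the current lane.
        left_px, right_px = detect_lane_pixels(frame["image"])
        for key, px in (("left", left_px), ("right", right_px)):
            # Spatial coordinates from pixel coords + geographic position.
            coords = pixels_to_spatial(px, frame["geo"])
            if key not in store:
                # Spatial coordinates not yet stored: save them.
                store[key] = coords
            else:
                # Already stored: apply the preset optimization rule.
                store[key] = optimize(store[key], coords)
    return store
```

In this sketch the store is keyed by lane line identity; a real system would key by road segment and lane index so that repeated drives over the same road refine the same stored lines.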
2. The method according to claim 1, wherein:
vehicles are driven along each lane of a specific road, collecting images containing lane lines and the geographic position information of the vehicles;
and after all lane lines of the specific road have been obtained, high-precision map lanes are drawn.
3. The method according to claim 1 or 2, wherein the optimization processing specifically comprises:
comparing the calculated spatial coordinates of the lane line with the stored spatial coordinates of the lane line according to the preset rule, and storing the spatial coordinates of higher precision.
4. The method according to claim 1 or 2, wherein the optimization processing specifically comprises:
discarding the calculated spatial coordinates of the lane line.
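Claims 3 and 4 name two alternative preset rules: keep whichever coordinates are more precise, or simply discard the new calculation. A minimal sketch of both follows; note that the claims do not specify a precision metric, so a per-sample positioning accuracy in metres (lower is better) is assumed here purely for illustration:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class LaneLineSample:
    coords: List[Tuple[float, float]]  # spatial coordinates of the lane line
    accuracy: float                    # assumed metric: positioning error in metres

def keep_higher_precision(stored: LaneLineSample,
                          candidate: LaneLineSample) -> LaneLineSample:
    """Claim 3's rule: retain whichever sample is more precise
    (here, the one with the smaller assumed positioning error)."""
    return candidate if candidate.accuracy < stored.accuracy else stored

def discard_new(stored: LaneLineSample,
                candidate: LaneLineSample) -> LaneLineSample:
    """Claim 4's rule: always discard the newly calculated coordinates."""
    return stored
```

Either function can be passed as the preset rule wherever the claims call for spatial coordinate optimization of an already-stored lane line.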
5. The method according to claim 1, wherein said acquiring the image containing the lane and the geographic position information of the vehicle collected while the vehicle is driving specifically comprises:
capturing images at a preset time interval, and recording the current geographic position information at the moment each image is captured.
6. The method according to claim 1, wherein said acquiring the image containing the lane and the geographic position information of the vehicle collected while the vehicle is driving specifically comprises:
acquiring video data containing the lane, and collecting the geographic position information of the vehicle, while the vehicle is driving;
extracting frames from the video data;
and looking up the collected geographic position information according to the time information of each frame.
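Claim 6's matching of extracted video frames to geographic position fixes can be illustrated with a small timestamp-based lookup. This is a sketch under assumed inputs not stated in the claim: a known video start time, a fixed frame rate, and a GPS track sorted by timestamp:

```python
import bisect
from typing import List, Tuple

GeoFix = Tuple[float, Tuple[float, float]]  # (timestamp, (lon, lat))

def frame_timestamp(video_start: float, fps: float, frame_index: int) -> float:
    """Capture time of a frame, assuming a constant frame rate."""
    return video_start + frame_index / fps

def lookup_geo(gps_track: List[GeoFix], t: float) -> Tuple[float, float]:
    """Return the position of the GPS fix whose timestamp is closest to t.

    gps_track must be sorted by timestamp; bisect narrows the search to
    the two neighbouring fixes, then the nearer one is chosen.
    """
    times = [ts for ts, _ in gps_track]
    i = bisect.bisect_left(times, t)
    candidates = gps_track[max(0, i - 1): i + 1]
    return min(candidates, key=lambda fix: abs(fix[0] - t))[1]
```

A production version would more likely interpolate between the two neighbouring fixes rather than snap to the nearest one, since a vehicle moves several metres between GPS samples.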
7. A lane line generation method, comprising:
acquiring an image containing a lane and the geographic position information of a vehicle, both collected while the vehicle is driving;
recognizing the image to obtain pixel coordinates of the lane line pixel points in the image, and obtaining the pixel coordinates of the two lane lines of the lane in which the vehicle is located;
calculating spatial coordinates of the two lane lines according to their pixel coordinates and the corresponding geographic position information;
checking whether spatial coordinates of the two lane lines have already been stored;
if not, storing the spatial coordinates of the lane lines; and, for a lane line whose spatial coordinates have already been stored, performing spatial coordinate optimization on the lane line according to a preset rule.
8. The method according to claim 7, wherein the optimization processing specifically comprises:
comparing the calculated spatial coordinates of the lane line with the stored spatial coordinates of the lane line according to the preset rule, and storing the spatial coordinates of higher precision;
or discarding the calculated spatial coordinates of the lane line.
9. A lane line generation system, characterized by comprising:
an image acquisition module, configured to acquire an image containing a lane collected while a vehicle is driving;
a geographic position information acquisition module, configured to acquire the geographic position information of the vehicle corresponding to the image acquired by the image acquisition module;
a first recognition module, configured to recognize the image acquired by the image acquisition module to obtain pixel coordinates of the lane line pixel points in the image;
a first calculation module, configured to calculate spatial coordinates of the two lane lines of the lane in which the vehicle is currently located, according to the lane line pixel coordinates obtained by the first recognition module and the geographic position information obtained by the geographic position information acquisition module;
a processor module, configured to check whether spatial coordinates of the two lane lines have already been stored; if not, to cause the storage module to store the spatial coordinates of the two lane lines; and, for a lane line whose spatial coordinates have already been stored, to optimize the spatial coordinates of the lane line according to a preset rule;
and a storage module, configured to store the spatial coordinates of the lane lines.
10. A lane line generation system, characterized by comprising:
an image acquisition module, configured to acquire an image containing a lane collected while a vehicle is driving;
a geographic position information acquisition module, configured to acquire the geographic position information of the vehicle corresponding to the image acquired by the image acquisition module;
a second recognition module, configured to recognize the image acquired by the image acquisition module to obtain pixel coordinates of the lane line pixel points in the image, and to obtain the pixel coordinates of the two lane lines of the lane in which the vehicle is located;
a second calculation module, configured to calculate spatial coordinates of the two lane lines according to the pixel coordinates of the two lane lines obtained by the second recognition module and the geographic position information obtained by the geographic position information acquisition module;
a processor module, configured to check whether spatial coordinates of the two lane lines have already been stored; if not, to cause the storage module to store the spatial coordinates of the two lane lines; and, for a lane line whose spatial coordinates have already been stored, to optimize the spatial coordinates of the lane line according to a preset rule;
and a storage module, configured to store the spatial coordinates of the lane lines.
CN202010961103.XA 2020-09-14 2020-09-14 Lane line generation method and system Pending CN111815742A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010961103.XA CN111815742A (en) 2020-09-14 2020-09-14 Lane line generation method and system


Publications (1)

Publication Number Publication Date
CN111815742A true CN111815742A (en) 2020-10-23

Family

ID=72859269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010961103.XA Pending CN111815742A (en) 2020-09-14 2020-09-14 Lane line generation method and system

Country Status (1)

Country Link
CN (1) CN111815742A (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017041396A1 (en) * 2015-09-10 2017-03-16 百度在线网络技术(北京)有限公司 Driving lane data processing method, device, storage medium and apparatus
CN106570446A (en) * 2015-10-12 2017-04-19 腾讯科技(深圳)有限公司 Lane line extraction method and device
CN109284674A (en) * 2018-08-09 2019-01-29 浙江大华技术股份有限公司 A kind of method and device of determining lane line
CN109583312A (en) * 2018-10-31 2019-04-05 百度在线网络技术(北京)有限公司 Lane detection method, apparatus, equipment and storage medium
CN109858307A (en) * 2017-11-30 2019-06-07 高德软件有限公司 A kind of Lane detection method and apparatus


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112697159A (en) * 2021-01-06 2021-04-23 智道网联科技(北京)有限公司 Map editing method and system
CN112697159B (en) * 2021-01-06 2024-01-23 智道网联科技(北京)有限公司 Map editing method and system
CN112595335A (en) * 2021-01-15 2021-04-02 智道网联科技(北京)有限公司 Method for generating intelligent traffic stop line and related device
CN113465615A (en) * 2021-06-23 2021-10-01 智道网联科技(北京)有限公司 Lane line generation method and related device
CN113465615B (en) * 2021-06-23 2021-11-09 智道网联科技(北京)有限公司 Lane line generation method and related device
CN113449692A (en) * 2021-07-22 2021-09-28 成都纵横自动化技术股份有限公司 Map lane information updating method and system based on unmanned aerial vehicle
CN113607159A (en) * 2021-08-09 2021-11-05 智道网联科技(北京)有限公司 Optimization method, device and equipment for high-precision map lane line
CN113607159B (en) * 2021-08-09 2024-04-12 智道网联科技(北京)有限公司 Optimization method, device and equipment for high-precision map lane line
WO2023088486A1 (en) * 2021-11-22 2023-05-25 中国第一汽车股份有限公司 Lane line extraction method and apparatus, vehicle and storage medium
CN114526746A (en) * 2022-03-15 2022-05-24 智道网联科技(北京)有限公司 Method, device and equipment for generating high-precision map lane line and storage medium

Similar Documents

Publication Publication Date Title
CN111815742A (en) Lane line generation method and system
KR102266830B1 (en) Lane determination method, device and storage medium
JP7326720B2 (en) Mobile position estimation system and mobile position estimation method
CN111830953B (en) Vehicle self-positioning method, device and system
CN111261016B (en) Road map construction method and device and electronic equipment
US20120166080A1 (en) Method, system and computer-readable medium for reconstructing moving path of vehicle
CN115131420A (en) Visual SLAM method and device based on key frame optimization
CN110634306A (en) Method and device for determining vehicle position, storage medium and computing equipment
CN111928842B (en) Monocular vision based SLAM positioning method and related device
CN110599794A (en) Intelligent vehicle finding method and system based on Internet of vehicles
CN111310728B (en) Pedestrian re-identification system based on monitoring camera and wireless positioning
KR20140054710A (en) Apparatus and method for generating 3d map
CN111260549A (en) Road map construction method and device and electronic equipment
CN107506753B (en) Multi-vehicle tracking method for dynamic video monitoring
JP7259454B2 (en) Mobile position estimation system and mobile position estimation method
CN112598743B (en) Pose estimation method and related device for monocular vision image
CN113705271A (en) High-precision map lane generation method and device
CN113256683B (en) Target tracking method and related equipment
WO2020248197A1 (en) Saturation flow estimation for signalized intersections using vehicle trajectory data
CN112595335B (en) Intelligent traffic driving stop line generation method and related device
CN112270748B (en) Three-dimensional reconstruction method and device based on image
CN113569752B (en) Lane line structure identification method, device, equipment and medium
CN113465615B (en) Lane line generation method and related device
CN115761164A (en) Method and device for generating inverse perspective IPM image
CN111368692B (en) Information fusion method and device, and parking position positioning method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20201023