CN113869293A - Lane line recognition method and device, electronic equipment and computer readable medium

Info

Publication number
CN113869293A
Authority
CN
China
Prior art keywords
feature point
conversion
lane line
point group
correction
Legal status
Granted
Application number
CN202111461131.6A
Other languages
Chinese (zh)
Other versions
CN113869293B (en)
Inventor
胡禹超
Current Assignee
Heduo Technology Guangzhou Co ltd
Original Assignee
HoloMatic Technology Beijing Co Ltd
Application filed by HoloMatic Technology Beijing Co Ltd
Priority to CN202111461131.6A
Publication of CN113869293A
Application granted
Publication of CN113869293B
Legal status: Active

Landscapes

  • Traffic Control Systems (AREA)

Abstract

The embodiments of the present disclosure disclose a lane line identification method and apparatus, an electronic device and a computer readable medium. One embodiment of the method comprises: extracting feature points from a pre-acquired road image to obtain a feature point group set; performing coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set; performing lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate correction feature points, so as to obtain a correction feature point group set; and generating a lane line identification result based on the correction feature point group set. This embodiment can improve the accuracy of the generated lane line recognition result.

Description

Lane line recognition method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of computers, in particular to a lane line identification method, a lane line identification device, electronic equipment and a computer readable medium.
Background
Lane line recognition is an indispensable technology in the field of unmanned driving. At present, when lane line identification is performed, the following methods are generally adopted: the lane lines obtained based on the image back projection are represented by a cubic polynomial, and the cubic polynomial is used as a lane line recognition result.
However, when the lane line recognition is performed in the above manner, there are often technical problems as follows:
the lateral uncertainty of each coordinate point in the lane line represented by the cubic polynomial is not considered, so that the lane line identification result is not accurate enough, and further, the safety of automatic driving is reduced.
Disclosure of Invention
Some embodiments of the present disclosure provide a lane line identification method and apparatus, an electronic device and a computer readable medium. The method comprises: extracting feature points from a pre-acquired road image to obtain a feature point group set; performing coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set; performing lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate correction feature points, so as to obtain a correction feature point group set; and generating a lane line identification result based on the correction feature point group set. These embodiments can improve the accuracy of the generated lane line recognition result.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements are not necessarily drawn to scale.
Fig. 1 is a schematic diagram of one application scenario of the lane line identification method of some embodiments of the present disclosure;
FIG. 2 is a flow diagram of some embodiments of a lane line identification method according to the present disclosure;
FIG. 3 is a flow chart of further embodiments of lane line identification methods according to the present disclosure;
FIG. 4 is a schematic structural diagram of some embodiments of lane marking identification devices according to the present disclosure;
FIG. 5 is a schematic structural diagram of an electronic device suitable for use in implementing some embodiments of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for convenience of description, only the portions related to the invention are shown in the drawings. The embodiments and features of the embodiments in the present disclosure may be combined with each other without conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that the modifiers "a", "an" and "the" in this disclosure are intended to be illustrative rather than limiting; those skilled in the art will understand them to mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of the lane line identification method of some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may perform feature point extraction on the pre-acquired road image 102 to obtain a feature point group set 103. Next, the computing device 101 may perform coordinate conversion on each feature point in each feature point group in the feature point group set 103 to obtain a conversion feature point group set 104. Then, the computing device 101 may perform lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set 104 to generate corrected feature points, resulting in a correction feature point group set 105. Finally, the computing device 101 may generate the lane line identification result 106 based on the correction feature point group set 105.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in the hardware devices enumerated above, and may be implemented, for example, as multiple pieces of software or software modules to provide distributed services, or as a single piece of software or software module. No specific limitation is made here.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of lane line identification methods according to the present disclosure is shown. The flow 200 of the lane line identification method comprises the following steps:
step 201, feature point extraction is performed on the pre-acquired road image to obtain a feature point group set.
In some embodiments, the executing body of the lane line identification method (such as the computing device 101 shown in fig. 1) may perform feature point extraction on the pre-acquired road image to obtain a feature point group set. The pre-acquired road image may be a road image of the current time captured by a camera installed on the current vehicle, or a road image in the cache of the executing body. First, edge detection may be performed on the road image through an edge detection algorithm to determine the areas representing lane lines in the road image (i.e., the strip-shaped lane line areas), where each area can correspond to an identifier that uniquely identifies its lane line. Then, the feature points of each area may be extracted to obtain a feature point group; specifically, each pixel point on the center line of each area may be extracted, and these pixels form one feature point group. Thus, if a plurality of lane lines exist in the road image, a plurality of areas representing lane lines can be detected, one feature point group can be extracted for each area, and a feature point group set is obtained. The feature points in the feature point group set are pixel points in the road image, so each feature point corresponds to one pixel coordinate, and each feature point group corresponds to the unique identifier of a lane line (e.g., 1, 2, 3, etc.).
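As a minimal sketch of this extraction step (not the patent's exact procedure), the following uses OpenCV edge detection plus connected components; the Canny thresholds, the morphological closing, and the per-row centroid rule are illustrative assumptions:

```python
import cv2
import numpy as np

def extract_feature_point_groups(road_image_bgr, canny_lo=50, canny_hi=150):
    """Sketch of step 201: detect strip-shaped lane-line areas by edge detection
    and take the per-row center of each area as a feature point.
    Thresholds and the connected-component grouping are illustrative assumptions."""
    gray = cv2.cvtColor(road_image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, canny_lo, canny_hi)
    # Close small gaps so each strip-shaped lane-line area forms one component.
    closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))
    n, labels = cv2.connectedComponents(closed)
    groups = {}  # lane-line identifier -> list of (u, v) pixel coordinates
    for lane_id in range(1, n):
        ys, xs = np.nonzero(labels == lane_id)
        points = []
        for v in np.unique(ys):
            row_xs = xs[ys == v]
            points.append((float(row_xs.mean()), float(v)))  # center of the strip in this row
        groups[lane_id] = points
    return groups
```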
Step 202, performing coordinate transformation on each feature point in each feature point group in the feature point group set to obtain a transformation feature point group set.
In some embodiments, the executing body may perform coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set. The coordinate conversion can transform the pixel coordinates of the feature points into the camera coordinate system through preset internal and external reference matrices of the camera, so the conversion feature points in the resulting conversion feature point group set may be three-dimensional coordinates in the camera coordinate system. Specifically, the negative of the distance value between the camera and the ground may be taken as the height value (i.e., the vertical coordinate value) of each conversion feature point in the camera coordinate system.
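A minimal sketch of such a pixel-to-camera conversion under a flat-ground assumption follows; the pinhole model, the pitch-only rotation and the axis conventions are assumptions rather than the patent's exact formula:

```python
import numpy as np

def pixel_to_camera_ground(u, v, K, height, pitch):
    """Sketch: back-project pixel (u, v) onto the ground plane in the camera frame.
    Assumes a pinhole camera with intrinsic matrix K, rotated only by `pitch`
    about its x-axis, mounted `height` meters above a flat ground plane
    (y-axis pointing down, z-axis pointing forward); these conventions are
    assumptions for illustration."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])      # viewing ray, camera frame
    c, s = np.cos(pitch), np.sin(pitch)
    R_pitch = np.array([[1.0, 0.0, 0.0],
                        [0.0,   c,  -s],
                        [0.0,   s,   c]])               # rotation about the x-axis
    ray = R_pitch @ ray
    scale = height / ray[1]                             # intersect the ray with the plane y = height
    return scale * ray                                  # 3-D point on the ground plane
```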
Step 203, performing lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a corrected feature point, so as to obtain a correction feature point group set.
In some embodiments, the executing body may perform lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a corrected feature point, resulting in a correction feature point group set. Each conversion feature point may be laterally corrected to generate a corrected feature point by the following steps:
in the first step, a Lane line Detection is performed on the road image through a preset Lane line Detection algorithm (for example, UFLD (Ultra Fast Structure-aware Deep Lane Detection), so as to obtain a Lane line equation set. Each lane line equation in the lane line equation set may be an equation in the image coordinate system of the road image, and is used to represent a center line of an area where each lane line in the road image is located. In addition, each lane line equation in the lane line equation set may also correspond to a lane line identifier. E.g., 1, 2, 3, etc.
In the second step, coordinate conversion is performed on each lane line equation in the lane line equation set to obtain a converted lane line equation set, where the lane line equations are transformed from the image coordinate system into the camera coordinate system. Specifically, the negative of the distance value between the camera and the ground may be taken as the height value (i.e., the vertical coordinate value) of the lane line in the camera coordinate system.
In the third step, for each converted lane line equation, the following substeps are performed:
In the first substep, a lane line coordinate point corresponding to each conversion feature point in the matched conversion feature point group is selected from the lane line equation, obtaining a lane line coordinate point group. The matching may be that the lane line identifier corresponding to the lane line equation is the same as the unique identifier corresponding to the conversion feature point group, i.e., both represent the same lane line. The correspondence may be that the abscissa of a point on the lane line equation is the same as the abscissa of the conversion feature point.
In the second substep, the lane line coordinate points in the lane line coordinate point group are laterally fused with the conversion feature points in the matched conversion feature point group to obtain corrected feature points, as sketched below. The lateral fusion may be to take the abscissa of the midpoint of the line connecting a lane line coordinate point and the corresponding conversion feature point as the abscissa of the corrected feature point.
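A toy sketch of the midpoint fusion rule in the second substep; the (x, y, z) tuple layout and the pointwise matching are assumptions:

```python
def fuse_lateral(lane_point, conversion_point):
    """Midpoint fusion of the abscissa; the other coordinates are kept from
    the conversion feature point. The tuple layout (x, y, z) is an assumption."""
    x_lane = lane_point[0]
    x_conv, y_conv, z_conv = conversion_point
    return ((x_lane + x_conv) / 2.0, y_conv, z_conv)
```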
Step 204, generating a lane line identification result based on the correction feature point group set.
In some embodiments, the executing body may generate the lane line recognition result based on the correction feature point group set. Each correction feature point in each correction feature point group in the correction feature point group set may be fitted to generate a fitted lane line, so as to obtain a fitted lane line group. The fitted lane line group may be determined as the lane line recognition result.
Optionally, the executing body may further send the lane line recognition result to a vehicle control end to adjust the distance between the current vehicle and the lane line. After receiving the lane line recognition result, the vehicle control end can adjust the distance according to the distance value between the current vehicle and the nearest lane line in the lane line recognition result. Specifically, when no lane change is needed, if the distance value between the current vehicle and the lane line is smaller than a preset distance threshold, the current vehicle is likely not centered in its lane, which poses a large potential safety hazard. The distance value between the current vehicle and the nearest lane line can then be adjusted to improve safety.
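As a toy illustration of this control-end check (the function name and threshold value are assumptions, not part of the disclosure):

```python
def needs_lateral_adjustment(distance_to_nearest_lane_m, threshold_m=1.2,
                             lane_change_intended=False):
    """When no lane change is intended and the vehicle is closer to the nearest
    lane line than the threshold, request a lateral adjustment.
    The 1.2 m threshold is purely illustrative."""
    return (not lane_change_intended) and distance_to_nearest_lane_m < threshold_m
```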
The above embodiments of the present disclosure have the following advantages: the lane line identification method of some embodiments of the present disclosure can improve the accuracy of lane line identification. Specifically, the reason lane line identification accuracy is reduced is that the lateral uncertainty of each coordinate point in the lane line characterized by the cubic polynomial is not considered. Based on this, in the lane line identification method of some embodiments of the present disclosure, each conversion feature point in each conversion feature point group in the conversion feature point group set is laterally corrected to generate a correction feature point, so as to obtain a correction feature point group set. By laterally correcting each conversion feature point, the lateral uncertainty of each conversion feature point is taken into account. Therefore, a lane line recognition result generated from the correction feature point group set has greater lateral stability and higher accuracy. In this way, the accuracy of the lane line identification result, and further the safety of automatic driving, can be improved.
With further reference to fig. 3, a flow 300 of further embodiments of lane line identification methods is illustrated. The flow 300 of the lane line identification method includes the following steps:
step 301, determining a lane center curve equation set representing a lane line in a road image.
In some embodiments, the executing body of the lane line identification method (e.g., the computing device 101 shown in fig. 1) may determine a lane center curve equation set characterizing the lane lines in the road image. The road image may be detected by a lane line detection model (e.g., LaneNet, a multi-branch lane line detection network) to obtain the detected lane center curve equations. Each lane center curve equation in the lane center curve equation set may be in the image coordinate system of the road image and is used to represent the center line of the area where each lane line in the road image is located. In addition, each lane center curve equation in the lane center curve equation set may also correspond to a lane line identifier, e.g., 1, 2, 3, etc.
Step 302, determining the intersection points between each lane center curve equation in the lane center curve equation set and each row of pixels in the road image to generate a feature point group, so as to obtain a feature point group set.
In some embodiments, the executing body may determine the intersection points between each lane center curve equation in the lane center curve equation set and each row of pixels in the road image to generate a feature point group, resulting in a feature point group set. First, each lane center curve equation in the lane center curve equation set may be converted from the image coordinate system to the pixel coordinate system of the road image to obtain a pixel curve equation set. Then, the intersection points of each pixel curve equation with the rows of pixels in the road image can be taken as one feature point group, thereby obtaining a feature point group set, where each pixel curve equation corresponds to one feature point group.
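A compact sketch of this row-intersection sampling, assuming each pixel-frame curve is a polynomial u = f(v) (the cubic form and the np.polyval convention are assumptions):

```python
import numpy as np

def curve_row_intersections(coeffs, image_height):
    """Sample one feature point per pixel row from a lane center curve.
    `coeffs` are polynomial coefficients of u = f(v) in pixel coordinates,
    highest degree first (np.polyval convention); the form is an assumption."""
    rows = np.arange(image_height)          # v coordinate of every pixel row
    cols = np.polyval(coeffs, rows)         # intersection u = f(v) with each row
    return list(zip(cols.tolist(), rows.tolist()))
```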
Step 303, performing coordinate transformation on each feature point in each feature point group in the feature point group set to obtain a transformation feature point group set.
In some embodiments, the specific implementation manner and technical effects of step 303 may refer to step 203 in those embodiments corresponding to fig. 2, and are not described herein again.
Step 304, performing lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a corrected feature point, so as to obtain a correction feature point group set.
In some embodiments, the executing body may perform lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a corrected feature point, resulting in a correction feature point group set. Each conversion feature point in each conversion feature point group in the conversion feature point group set may be laterally corrected to generate a corrected feature point by the following steps:
the method comprises the steps of firstly, acquiring an internal reference matrix and a detection distance value of a camera for shooting the road image, and acquiring a height value and a pitch angle of the camera relative to the ground. Wherein the detection distance value may be a maximum shooting distance value (e.g., 50 meters) of the above-described camera. In addition, the internal reference matrix and the detection distance value can be acquired once when the current vehicle is started, and then do not need to be acquired for multiple times. The height value and the pitch angle of the camera with the ground can be acquired once or acquired when a road image is shot.
In the second step, a conversion relation (formula one) between the conversion feature point and the corresponding feature point in the feature point group set is determined by using the internal reference matrix, the height value and the pitch angle. The original image of formula one is not recoverable here; in it, K11, K12 and K13 denote the data of the first, second and third columns of the first row of the internal reference matrix (a matrix of three rows and three columns), K21, K22 and K23 denote the data of the first, second and third columns of the second row of the internal reference matrix, x, y and z denote the abscissa value, ordinate value and vertical coordinate value of the conversion feature point, u denotes the abscissa value of the feature point, θ denotes the pitch angle, and h denotes the height value. Specifically, formula one can be solved to obtain the expression f(x) of the lateral coordinate u of the feature point in the pixel coordinate system of the road image as a function of the abscissa x of the conversion feature point.
In the third step, a probability density function of the feature point corresponding to the conversion feature point is determined, and the probability density function of the abscissa of that feature point is taken as the first probability density function. If the prior of the abscissa u of the feature point obeys a uniform distribution, the probability density function of the abscissa of the feature point can be determined as:
p(u) = 1/W for 0 ≤ u ≤ W, and p(u) = 0 otherwise,
where p(u) denotes the probability density function of the abscissa value of the feature point, which may characterize the probability of the lateral position of the feature point within a row of pixels, and W denotes the width value of the road image (in pixels).
In the fourth step, a probability density function of the conversion feature point is determined by using the detection distance value, and the probability density function of the abscissa of the conversion feature point is taken as the second probability density function. If the prior of the abscissa x of the conversion feature point also obeys a uniform distribution, the probability density function of the abscissa of the conversion feature point can be determined as:
p(x) = 1/D for 0 ≤ x ≤ D, and p(x) = 0 otherwise,
where p(x) denotes the probability density function of the abscissa of the conversion feature point and D denotes the detection distance value.
In the fifth step, the conversion feature point is laterally corrected based on the first probability density function and the second probability density function to obtain a corrected feature point.
In some optional implementations of some embodiments, the executing body may laterally correct the conversion feature point based on the first probability density function and the second probability density function to obtain a corrected feature point, which may include the following steps:
and generating a lateral standard deviation of the conversion feature point based on the first probability density function and the second probability density function. First, the lateral position of the conversion feature point in the image coordinate system of the road image may be modeled as a gaussian distribution. Thereby, the mean and standard deviation of the above-described conversion feature points can be obtained. Thereafter, the mean and standard deviation can be regressed by a Deep learning method (e.g., Ultra Fast Structure-aware Deep Lane Detection algorithm). Thus, the regression mean and regression standard deviation of the above-described conversion feature points can be obtained. Finally, the lateral standard deviation of the above-described transformed feature points can be generated by the following formula:
Figure 430516DEST_PATH_IMAGE020
wherein,
Figure 194073DEST_PATH_IMAGE021
a posterior probability density function representing the abscissa of the transformed feature point (the result of which may represent the lateral uncertainty, i.e., the lateral standard deviation, of the transformed feature point in the camera coordinate system).
Figure 496879DEST_PATH_IMAGE022
An expression representing the lateral coordinate (u) of the above-described feature point solved by the above-described formula one in the pixel coordinate system of the road image.
Figure 854042DEST_PATH_IMAGE023
The regression standard deviation is shown.
Figure 2126DEST_PATH_IMAGE024
The regression mean is shown above.
In practice, the posterior probability density of x is calculated from the prior probability densities of x and u together with the probability density of the abscissa u of the feature point obtained by detecting the road image. This can be viewed as fusing empirical information (the prior) with detected information to obtain fused information (the posterior). All available information thus participates in the computation, making the fused information as accurate as possible and improving the accuracy of the generated correction feature points.
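A numerical sketch of this prior-likelihood fusion, computing the posterior of x on a grid under the uniform prior above and a Gaussian detection model; the grid resolution and the example form of f(x) are assumptions:

```python
import numpy as np

def lateral_posterior_std(f, mu, sigma, detection_distance, n_grid=2000):
    """Posterior standard deviation of the conversion feature point's abscissa x.
    `f` maps x to the expected pixel abscissa u (the f(x) solved from formula one);
    the prior of x is uniform on [0, detection_distance]. Grid size is an assumption."""
    x = np.linspace(0.0, detection_distance, n_grid)
    prior = np.full(n_grid, 1.0 / detection_distance)
    likelihood = np.exp(-0.5 * ((f(x) - mu) / sigma) ** 2)  # Gaussian detection model
    posterior = prior * likelihood
    posterior /= np.trapz(posterior, x)                     # normalize the posterior
    mean = np.trapz(x * posterior, x)
    variance = np.trapz((x - mean) ** 2 * posterior, x)
    return float(np.sqrt(variance))                         # lateral standard deviation

# Illustrative call with an assumed projection f(x) = a / x + b:
# std = lateral_posterior_std(lambda x: 800.0 / np.maximum(x, 1e-6) + 640.0,
#                             mu=660.0, sigma=5.0, detection_distance=50.0)
```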
In response to determining that a detection feature point matching the conversion feature point exists in the preset detection feature point group set, a lateral displacement value between the conversion feature point and the detection feature point is determined. The preset detection feature point group set may be the detection result of the previous road image frame adjacent in time to the moment the road image was captured; that is, it characterizes the detection feature point group corresponding to each lane line in the previous road image frame. The matching may be: determining, through a feature matching algorithm (for example, the Scale-Invariant Feature Transform (SIFT) algorithm), whether a detection feature point representing the same coordinate point as the conversion feature point exists in the detection feature point group set. If so, the lateral distance between the conversion feature point and the detection feature point may be determined as the lateral displacement value.
The conversion feature point is then laterally corrected based on the lateral displacement value, the conversion feature point, the lateral standard deviation, and the pre-stored lateral image coordinate, regression standard deviation and fusion standard deviation corresponding to the detection feature point, to obtain a corrected feature point. The lateral correction may be: first, the fused lateral coordinate and fusion standard deviation of the conversion feature point and the matched detection feature point are determined through a filtering algorithm (for example, the Kalman filtering algorithm or the extended Kalman filtering algorithm); then, the abscissa of the conversion feature point is replaced with the fused lateral coordinate, completing the lateral correction and yielding a corrected feature point. In addition, the fusion standard deviation can be used for the lateral correction in the lane line recognition of the next road image frame, and can also be used for lane line tracking, filtering, fusion, and the like.
As an example, the fused lateral coordinate and fusion standard deviation of the conversion feature point and the matched detection feature point may be determined by a filtering update; the original formula images are not recoverable here. In those formulas: J denotes the Jacobian matrix, i.e., the first derivative of f(x); Δu denotes the lateral displacement value; t denotes the time corresponding to the road image (e.g., the current time), and t−1 denotes the time corresponding to the previous road image (e.g., the previous time); x_t denotes the abscissa value of the conversion feature point at the current time; K denotes the internal reference matrix, and K^T its transpose; σ_{t−1} denotes the regression standard deviation of the detection feature point at the previous time; s_t denotes the lateral standard deviation of the conversion feature point at the current time; x̂ denotes the fused lateral coordinate of the conversion feature point and the matched detection feature point; u_{t−1} denotes the lateral image coordinate at the previous time; ρ_{t−1} denotes the fusion standard deviation corresponding to the detection feature point at the previous time; and ρ_t denotes the fusion standard deviation of the conversion feature point and the matched detection feature point.
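The exact update cannot be restored from the formula images, but a standard one-dimensional Kalman-style fusion of the two lateral estimates, consistent with the quantities listed above, might look like the following sketch; the variance-weighted gain is an assumption:

```python
def fuse_lateral_estimates(x_prev_fused, rho_prev, delta_u, x_conv, s_conv):
    """One-dimensional Kalman-style fusion: propagate the previous fused lateral
    coordinate by the lateral displacement value, then blend it with the current
    conversion feature point, weighting by the two variances."""
    x_prior = x_prev_fused + delta_u               # predicted lateral coordinate
    var_prior = rho_prev ** 2                      # variance from previous fusion std
    var_measurement = s_conv ** 2                  # variance from lateral std
    gain = var_prior / (var_prior + var_measurement)
    x_fused = x_prior + gain * (x_conv - x_prior)  # fused lateral coordinate
    rho_fused = (var_prior * var_measurement / (var_prior + var_measurement)) ** 0.5
    return x_fused, rho_fused                      # fused coordinate and fusion std
```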
In some optional implementations of some embodiments, the executing body may laterally correct the conversion feature point based on the first probability density function and the second probability density function to obtain a corrected feature point, which may further include the following step:
and in response to determining that no detection feature point matched with the conversion feature point exists in the preset detection feature point group set, performing transverse correction on the conversion feature point based on the conversion feature point and the transverse standard deviation to obtain a corrected feature point. The transverse coordinate and the fusion standard deviation after the transformation characteristic point and the matched detection characteristic point are fused can be determined through the following formula:
Figure 465895DEST_PATH_IMAGE040
wherein,
Figure 174088DEST_PATH_IMAGE041
and representing the horizontal coordinate after the transformation characteristic point and the matched detection characteristic point are fused.
Figure 800242DEST_PATH_IMAGE024
The regression mean is shown above.
Figure 452940DEST_PATH_IMAGE042
The lateral standard deviation of the above-described conversion feature points is represented.
Figure 963687DEST_PATH_IMAGE039
And representing the current standard deviation after the transformation characteristic points and the matched detection characteristic points are fused.
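One plausible reading of this no-match fallback, stated purely as an assumption, is that the regressed lateral estimate and its lateral standard deviation are carried over unchanged as the fused values:

```python
def correct_without_match(regression_mean, lateral_std):
    """Fallback when no matching detection feature point exists: keep the
    regression mean as the corrected abscissa and the lateral standard deviation
    as the current fused uncertainty. This reading is an assumption."""
    return regression_mean, lateral_std
```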
The above formulas and their related content serve as an invention point of the embodiments of the present disclosure, and solve the technical problem mentioned in the background: the lateral uncertainty of each coordinate point in the lane line represented by the cubic polynomial is not considered, so the lane line identification result is not accurate enough, which in turn reduces the safety of automatic driving. Through the above formulas and their related content, the uncertainty (lateral standard deviation) of the lane line in the camera coordinate system is generated, so the lateral uncertainty of each coordinate point in the lane line characterized by the cubic polynomial is taken into account, and the accuracy of lane line identification can be improved. In addition, in the process of generating the lane line recognition result, the recognition result of the previous road image (i.e., the lateral image coordinate, lateral standard deviation and fusion standard deviation corresponding to the detection feature points) is also introduced and fused with the laterally corrected feature points, which can further improve the accuracy of the lateral position of the corrected feature points. At the same time, the further-fused fusion standard deviation corresponding to the corrected feature points is obtained for recognizing the lane line in the next road image frame. Thus, the accuracy of the lane line recognition result can be further improved.
Step 305, fusing each feature point in each correction feature point group in the correction feature point group set to generate a fused lane line, obtaining a fused lane line group, and determining the fused lane line group as a lane line recognition result.
In some embodiments, the executing entity may fuse the feature points in each of the correction feature point groups in the correction feature point group set to generate a fused lane line, obtain a fused lane line group, and determine the fused lane line group as a lane line recognition result. The fusion may be to perform curve fitting (for example, a cubic curve) on each correction feature point in the correction feature point group to obtain a fused lane line. Thus, a lane line recognition result can be obtained.
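A brief sketch of this cubic fitting step with NumPy; parameterizing the lateral coordinate as a cubic function of the longitudinal coordinate is an assumed convention:

```python
import numpy as np

def fit_lane_line(corrected_points):
    """Fit a cubic lane line x = c3*t**3 + c2*t**2 + c1*t + c0 to one corrected
    feature point group, where t is taken as the longitudinal coordinate and x
    as the lateral coordinate (an assumed axis convention)."""
    pts = np.asarray(corrected_points, dtype=float)
    lateral, longitudinal = pts[:, 0], pts[:, 1]
    return np.polyfit(longitudinal, lateral, deg=3)  # coefficients, highest degree first

# fused_lane_line_group = [fit_lane_line(group) for group in corrected_feature_point_groups]
```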
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the lane line identification method in some embodiments corresponding to fig. 3 embodies the steps of feature point extraction and generation of the corrected feature point group set. Through the steps, the accuracy of lane line identification can be further improved. Thus, the safety of the vehicle driving can be further improved.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a lane marking recognition apparatus, which correspond to those shown in fig. 2, and which may be applied in various electronic devices in particular.
As shown in fig. 4, the lane line recognition apparatus 400 of some embodiments includes: a feature point extraction unit 401, a coordinate conversion unit 402, a lateral correction unit 403, and a generation unit 404. The feature point extraction unit 401 is configured to perform feature point extraction on a pre-acquired road image to obtain a feature point group set; the coordinate conversion unit 402 is configured to perform coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set; the lateral correction unit 403 is configured to perform lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate correction feature points, so as to obtain a correction feature point group set; and the generation unit 404 is configured to generate a lane line recognition result based on the correction feature point group set.
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
Referring now to FIG. 5, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients, servers may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the Internet (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device, or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: extract feature points from a pre-acquired road image to obtain a feature point group set; perform coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set; perform lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate correction feature points, so as to obtain a correction feature point group set; and generate a lane line identification result based on the correction feature point group set.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including object oriented programming languages such as Java, Smalltalk and C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may be described as: a processor comprising a feature point extraction unit, a coordinate conversion unit, a lateral correction unit and a generation unit. The names of these units do not in some cases limit the units themselves; for example, the generation unit may also be described as a "unit that generates a lane line recognition result".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only of preferred embodiments of the present disclosure and an explanation of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above technical features, and also covers other technical solutions formed by any combination of the above technical features or their equivalents without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the embodiments of the present disclosure.

Claims (10)

1. A lane line identification method, comprising:
extracting feature points from a pre-acquired road image to obtain a feature point group set;
performing coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set;
performing lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate correction feature points, so as to obtain a correction feature point group set;
and generating a lane line identification result based on the correction feature point group set.
2. The method of claim 1, wherein the method further comprises:
and sending the lane line identification result to a vehicle control end so as to adjust the distance between the current vehicle and the lane line.
3. The method of claim 1, wherein the extracting feature points from the pre-acquired road image to obtain a feature point group set comprises:
determining a lane center curve equation set representing lane lines in the road image, wherein each lane center curve equation in the lane center curve equation set is in the image coordinate system of the road image;
and determining the intersection points between each lane center curve equation in the lane center curve equation set and each row of pixels in the road image to generate a feature point group, so as to obtain a feature point group set.
4. The method of claim 3, wherein the performing lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate correction feature points comprises:
acquiring an internal reference matrix and a detection distance value of a camera that captures the road image, and a height value and a pitch angle of the camera relative to the ground;
determining a conversion relation between the conversion feature points and the corresponding feature points in the feature point group set by using the internal reference matrix, the height value and the pitch angle;
determining a probability density function of the feature points corresponding to the conversion feature points, and taking the probability density function of the abscissa of the feature points corresponding to the conversion feature points as a first probability density function;
determining a probability density function of the conversion feature points by using the detection distance value, and taking the probability density function of the abscissa of the conversion feature points as a second probability density function;
and performing lateral correction on the conversion feature points based on the first probability density function and the second probability density function to obtain correction feature points.
5. The method of claim 4, wherein the performing lateral correction on the conversion feature points based on the first probability density function and the second probability density function to obtain correction feature points comprises:
generating a lateral standard deviation of the conversion feature points based on the first probability density function and the second probability density function;
in response to determining that a detection feature point matching the conversion feature point exists in a preset detection feature point group set, determining a lateral displacement value between the conversion feature point and the detection feature point;
and performing lateral correction on the conversion feature point based on the lateral displacement value, the conversion feature point, the lateral standard deviation, and a pre-stored lateral image coordinate, lateral standard deviation and fusion standard deviation corresponding to the detection feature point, to obtain a correction feature point.
6. The method of claim 5, wherein the performing lateral correction on the conversion feature points based on the first probability density function and the second probability density function to obtain correction feature points further comprises:
and in response to determining that no detection feature point matching the conversion feature point exists in the preset detection feature point group set, performing lateral correction on the conversion feature point based on the conversion feature point and the lateral standard deviation to obtain a correction feature point.
7. The method of claim 6, wherein the generating a lane line identification result based on the correction feature point group set comprises:
and fusing each feature point in each correction feature point group in the correction feature point group set to generate a fused lane line, obtaining a fused lane line group, and determining the fused lane line group as the lane line identification result.
8. A lane line identification apparatus comprising:
a feature point extraction unit configured to extract feature points from a pre-acquired road image to obtain a feature point group set;
a coordinate conversion unit configured to perform coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set;
a lateral correction unit configured to perform lateral correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate correction feature points, so as to obtain a correction feature point group set;
a generating unit configured to generate a lane line recognition result based on the set of corrected feature point groups.
9. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon,
when executed by the one or more processors, cause the one or more processors to implement the method of any one of claims 1-7.
10. A computer-readable medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the method of any one of claims 1-7.
CN202111461131.6A 2021-12-03 2021-12-03 Lane line recognition method and device, electronic equipment and computer readable medium Active CN113869293B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111461131.6A CN113869293B (en) 2021-12-03 2021-12-03 Lane line recognition method and device, electronic equipment and computer readable medium


Publications (2)

Publication Number Publication Date
CN113869293A true CN113869293A (en) 2021-12-31
CN113869293B CN113869293B (en) 2022-03-11

Family

ID=78985601

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111461131.6A Active CN113869293B (en) 2021-12-03 2021-12-03 Lane line recognition method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN113869293B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102007060856A1 (en) * 2007-12-18 2009-07-09 Siemens Ag Lane determining method, involves determining stopping point of movable objects e.g. lorry, with sensor arrangement, and determining lanes from stopping points of movable objects with statistic process
CN109325388A (en) * 2017-07-31 2019-02-12 BYD Co., Ltd. Lane line recognition method, system and automobile
CN107590470A (en) * 2017-09-18 2018-01-16 Zhejiang Dahua Technology Co., Ltd. Lane line detection method and device
CN109657686A (en) * 2018-10-31 2019-04-19 Baidu Online Network Technology (Beijing) Co., Ltd. Lane line generation method, device, equipment and storage medium
CN111191487A (en) * 2018-11-14 2020-05-22 Beijing SenseTime Technology Development Co., Ltd. Lane line detection and driving control method and device and electronic equipment
KR20200070702A (en) * 2018-12-10 2020-06-18 Renault Samsung Motors Co., Ltd. Method of verifying lane detection in an improved lane detection system
CN111738035A (en) * 2019-03-25 2020-10-02 BYD Co., Ltd. Method, device and equipment for calculating yaw angle of vehicle
CN111242031A (en) * 2020-01-13 2020-06-05 HoloMatic Technology (Beijing) Co., Ltd. Lane line detection method based on high-precision map
CN111353466A (en) * 2020-03-12 2020-06-30 Beijing Baidu Netcom Science and Technology Co., Ltd. Lane line recognition processing method, lane line recognition processing device, and storage medium
CN112598762A (en) * 2020-09-16 2021-04-02 HoloMatic Technology (Beijing) Co., Ltd. Three-dimensional lane line information generation method, device, electronic device, and medium
CN112115857A (en) * 2020-09-17 2020-12-22 Fujian Muyue Technology Co., Ltd. Lane line identification method and device for intelligent automobile, electronic equipment and medium
CN112487861A (en) * 2020-10-27 2021-03-12 Aiways Automobile (Shanghai) Co., Ltd. Lane line recognition method and device, computing equipment and computer storage medium
CN112418037A (en) * 2020-11-12 2021-02-26 Wuhan Kotei Informatics Co., Ltd. Method and system for identifying lane lines in satellite images, electronic device and storage medium
CN112507852A (en) * 2020-12-02 2021-03-16 Shanghai Eye Control Technology Co., Ltd. Lane line identification method, device, equipment and storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
DANILO CACERE HERNANDEZ et al.: "Lane Marking Detection Using Image Features and Line Fitting Model", 2017 10th International Conference on Human System Interactions *
HOUZHONG ZHANG et al.: "Lane line recognition based on improved 2D-gamma function and variable threshold Canny algorithm under complex environment", Measurement and Control *
LIU Yuan et al.: "Lane Line Detection Based on Edge Feature Point Clustering", Science Technology and Engineering *
LYU Ying et al.: "Lane Line Detection and Tracking Based on Road Features and Road Models", Automotive Digest *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114419590A (en) * 2022-01-17 2022-04-29 Beijing Baidu Netcom Science and Technology Co., Ltd. High-precision map verification method, device, equipment and storage medium
CN114419590B (en) * 2022-01-17 2024-03-19 Beijing Baidu Netcom Science and Technology Co., Ltd. High-precision map verification method, device, equipment and storage medium
CN114445597A (en) * 2022-01-28 2022-05-06 HoloMatic Technology (Beijing) Co., Ltd. Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN114445597B (en) * 2022-01-28 2022-11-11 HoloMatic Technology (Beijing) Co., Ltd. Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN114663524A (en) * 2022-03-09 2022-06-24 HoloMatic Technology (Beijing) Co., Ltd. Multi-camera online calibration method and device, electronic equipment and computer readable medium
CN114663524B (en) * 2022-03-09 2023-04-07 HoloMatic Technology (Beijing) Co., Ltd. Multi-camera online calibration method and device, electronic equipment and computer readable medium
CN114863385A (en) * 2022-03-23 2022-08-05 HoloMatic Technology (Beijing) Co., Ltd. Road curved surface information generation method, device, equipment and computer readable medium
CN114842448A (en) * 2022-05-11 2022-08-02 HoloMatic Technology (Beijing) Co., Ltd. Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN114863026A (en) * 2022-05-18 2022-08-05 HoloMatic Technology (Beijing) Co., Ltd. Three-dimensional lane line information generation method, device, equipment and computer readable medium
CN115240435A (en) * 2022-09-21 2022-10-25 Guangzhou Desay SV Intelligent Transportation Technology Co., Ltd. AI technology-based vehicle illegal driving detection method and device
CN115731526A (en) * 2022-11-21 2023-03-03 HoloMatic Technology (Beijing) Co., Ltd. Lane line recognition method, lane line recognition device, electronic equipment and computer readable medium
CN115731526B (en) * 2022-11-21 2023-10-13 HoloMatic Technology (Beijing) Co., Ltd. Lane line recognition method, lane line recognition device, electronic equipment and computer readable medium

Also Published As

Publication number Publication date
CN113869293B (en) 2022-03-11

Similar Documents

Publication Publication Date Title
CN113869293B (en) Lane line recognition method and device, electronic equipment and computer readable medium
CN108229479B (en) Training method and device of semantic segmentation model, electronic equipment and storage medium
CN110517214B (en) Method and apparatus for generating image
CN112733820B (en) Obstacle information generation method and device, electronic equipment and computer readable medium
CN115540894B (en) Vehicle trajectory planning method and device, electronic equipment and computer readable medium
CN112598762A (en) Three-dimensional lane line information generation method, device, electronic device, and medium
CN115257727B (en) Obstacle information fusion method and device, electronic equipment and computer readable medium
CN114399589B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN113255619B (en) Lane line recognition and positioning method, electronic device, and computer-readable medium
CN110827301B (en) Method and apparatus for processing image
CN114993328B (en) Vehicle positioning evaluation method, device, equipment and computer readable medium
CN113607185A (en) Lane line information display method, lane line information display device, electronic device, and computer-readable medium
CN113537153A (en) Meter image identification method and device, electronic equipment and computer readable medium
CN115272182B (en) Lane line detection method, lane line detection device, electronic equipment and computer readable medium
CN111815738A (en) Map construction method and device
CN115393815A (en) Road information generation method and device, electronic equipment and computer readable medium
CN113033377A (en) Character position correction method, character position correction device, electronic equipment and storage medium
CN114445597B (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN116188583B (en) Method, device, equipment and computer readable medium for generating camera pose information
CN115620264B (en) Vehicle positioning method and device, electronic equipment and computer readable medium
CN114723640B (en) Obstacle information generation method and device, electronic equipment and computer readable medium
CN113808134B (en) Oil tank layout information generation method, oil tank layout information generation device, electronic apparatus, and medium
CN115326079A (en) Vehicle lane level positioning method, device, equipment and computer readable medium
CN115393826A (en) Three-dimensional lane line generation method and device, electronic device and computer readable medium
CN115393423A (en) Target detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Lane line identification method, device, electronic equipment and computer-readable medium
Effective date of registration: 20230228
Granted publication date: 20220311
Pledgee: Bank of Shanghai Co.,Ltd. Beijing Branch
Pledgor: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.
Registration number: Y2023980033668

CP03 Change of name, title or address

Address after: 201, 202, 301, No. 56-4 Fenghuang South Road, Huadu District, Guangzhou City, Guangdong Province, 510806
Patentee after: Heduo Technology (Guangzhou) Co.,Ltd.
Address before: 100099 101-15, 3rd Floor, Building 9, Yard 55, Zique Road, Haidian District, Beijing
Patentee before: HOLOMATIC TECHNOLOGY (BEIJING) Co.,Ltd.