Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be noted that, for ease of description, only the portions related to the present disclosure are shown in the drawings. The embodiments and the features of the embodiments in the present disclosure may be combined with each other in the absence of conflict.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" in this disclosure are intended to be illustrative rather than limiting, and those skilled in the art will understand that they mean "one or more" unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 is a schematic diagram of an application scenario of the lane line identification method of some embodiments of the present disclosure.
In the application scenario of fig. 1, first, the computing device 101 may perform feature point extraction on the pre-acquired road image 102 to obtain a feature point group set 103. Next, the computing device 101 may perform coordinate transformation on each feature point in each feature point group in the feature point group set 103 to obtain a transformed feature point group set 104. Then, the computing device 101 may perform lateral rectification on each transformed feature point in each transformed feature point group in the transformed feature point group set 104 to generate a rectified feature point, resulting in a rectified feature point group set 105. Finally, the computing device 101 may generate the lane line identification result 106 based on the rectified feature point group set 105.
The computing device 101 may be hardware or software. When the computing device is hardware, it may be implemented as a distributed cluster composed of multiple servers or terminal devices, or as a single server or a single terminal device. When the computing device is embodied as software, it may be installed in any of the hardware devices listed above. It may be implemented, for example, as multiple pieces of software or multiple software modules for providing distributed services, or as a single piece of software or a single software module, which is not specifically limited herein.
It should be understood that the number of computing devices in FIG. 1 is merely illustrative. There may be any number of computing devices, as implementation needs dictate.
With continued reference to fig. 2, a flow 200 of some embodiments of lane line identification methods according to the present disclosure is shown. The flow 200 of the lane line identification method comprises the following steps:
Step 201, feature point extraction is performed on the pre-acquired road image to obtain a feature point group set.
In some embodiments, an execution subject of the lane line identification method (such as the computing device 101 shown in fig. 1) may perform feature point extraction on the pre-acquired road image to obtain a feature point group set. The pre-acquired road image may be a road image of the current moment captured by a camera installed on the current vehicle, or a road image in the cache of the execution subject. First, edge detection may be performed on the road image through an edge detection algorithm to determine the areas representing lane lines in the road image (i.e., strip-shaped lane line areas), each area corresponding to an identifier that uniquely identifies a lane line. Then, the feature points of each area may be extracted to obtain a feature point group; specifically, the pixel points on the center line of each area may be extracted as a feature point group. If multiple lane lines exist in the road image, multiple areas representing lane lines can be detected, a feature point group can be extracted for each area, and a feature point group set can thus be obtained. The feature points in the feature point group set are pixel points in the road image, so each feature point corresponds to a pixel coordinate, and each feature point group corresponds to the unique identifier of a lane line, e.g., 1, 2, 3, etc.
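As an illustration only, a minimal sketch of such an extraction is given below. It assumes OpenCV-style Canny edge detection and connected-component labeling; the function name, the thresholds, and the use of connected components are assumptions made for the sketch, not part of the disclosed method.

```python
import cv2
import numpy as np

def extract_feature_point_groups(road_image: np.ndarray) -> dict:
    """Return {lane_id: [(u, v), ...]}: one center-line pixel per image row for each lane-line area."""
    gray = cv2.cvtColor(road_image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                      # edge detection on the road image
    num_labels, labels = cv2.connectedComponents(edges)    # strip-shaped candidate areas
    groups = {}
    for lane_id in range(1, num_labels):                   # label 0 is the background
        ys, xs = np.nonzero(labels == lane_id)
        group = []
        for v in np.unique(ys):                            # walk the area row by row
            row_xs = xs[ys == v]
            group.append((int(row_xs.mean()), int(v)))     # center of the strip in this row
        groups[lane_id] = group                            # lane_id uniquely identifies the lane line
    return groups
```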
Step 202, performing coordinate transformation on each feature point in each feature point group in the feature point group set to obtain a transformation feature point group set.
In some embodiments, the execution subject may perform coordinate transformation on each feature point in each feature point group in the feature point group set to obtain a transformed feature point group set. The coordinate transformation may transform the pixel coordinates of the feature points into the camera coordinate system through a preset internal reference (intrinsic) matrix and a preset external reference (extrinsic) matrix of the camera. Thus, the conversion feature points in the resulting conversion feature point group set may be three-dimensional coordinates in the camera coordinate system. Specifically, the negative of the distance value between the camera and the ground may be taken as the height value (i.e., the vertical coordinate value) of the conversion feature point in the camera coordinate system.
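A minimal sketch of such a back-projection is given below. It assumes a pinhole model in which the second camera coordinate is the vertical axis and the ground plane lies at -cam_height; the axis convention, the absence of an extrinsic rotation, and the function name are assumptions of the sketch.

```python
import numpy as np

def pixel_to_camera(u: float, v: float, K: np.ndarray, cam_height: float) -> np.ndarray:
    """Back-project pixel (u, v) onto the ground plane in the camera frame.

    K is the 3x3 intrinsic (internal reference) matrix; cam_height is the
    camera-to-ground distance. The vertical coordinate of the returned point is
    fixed to -cam_height, mirroring the height assignment described above.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction in the camera frame
    scale = -cam_height / ray[1]                     # choose the depth so the vertical coordinate is -cam_height
    return scale * ray                               # 3-D point (x, y, z) in the camera coordinate system
```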
Step 203, performing transverse rectification on each conversion characteristic point in each conversion characteristic point group in the conversion characteristic point group set to generate a rectification characteristic point, so as to obtain a rectification characteristic point group set.
In some embodiments, the executing entity may perform a horizontal rectification on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a rectification feature point, resulting in a rectification feature point group set. Wherein each transformed feature point may be laterally rectified to generate rectified feature points by:
In the first step, lane line detection is performed on the road image through a preset lane line detection algorithm (for example, UFLD (Ultra Fast Structure-aware Deep Lane Detection)) to obtain a lane line equation set. Each lane line equation in the lane line equation set may be an equation in the image coordinate system of the road image and is used to represent the center line of the area where each lane line in the road image is located. In addition, each lane line equation in the lane line equation set may also correspond to a lane line identifier, e.g., 1, 2, 3, etc.
In the second step, coordinate conversion is performed on each lane line equation in the lane line equation set to obtain a converted lane line equation set. The lane line equations can be transformed from the image coordinate system into the camera coordinate system. Specifically, the negative of the distance value between the camera and the ground may be taken as the height value (i.e., the vertical coordinate value) of the lane line in the camera coordinate system.
In the third step, for each lane line equation, the following sub-steps are performed:
In the first sub-step, from the lane line equation, a lane line coordinate point corresponding to each correction feature point in the matched correction feature point group is selected to obtain a lane line coordinate point group. Matching may mean that the lane line identifier corresponding to the lane line equation is the same as the unique identifier corresponding to the correction feature point group, that is, the two represent the same lane line. Correspondence may mean that the abscissa of a certain point in the lane line equation is the same as the abscissa of the correction feature point.
In the second sub-step, the lane line coordinate point group and the correction feature points in the matched correction feature point group are laterally fused to obtain corrected feature points. The lateral fusion may be to take the midpoint of the abscissa of a lane line coordinate point and the abscissa of the corresponding correction feature point as the abscissa of the corrected feature point.
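For illustration, a minimal sketch of the lateral fusion in the sub-steps above is given below. It assumes the identifier matching and the point correspondence have already been established, and the variable names are illustrative only.

```python
def laterally_fuse(lane_line_points, correction_feature_points):
    """Laterally fuse matched lane-line coordinate points with correction feature points.

    Both arguments are lists of (x, y) pairs belonging to the same lane line
    (same identifier) and corresponding one-to-one; the fused abscissa is the
    midpoint of the two abscissas, as in the second sub-step above.
    """
    fused = []
    for (x_line, y_line), (x_feat, y_feat) in zip(lane_line_points, correction_feature_points):
        fused.append(((x_line + x_feat) / 2.0, y_feat))  # midpoint of the lateral coordinates
    return fused
```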
Step 204, generating a lane line identification result based on the correction feature point group set.
In some embodiments, the executing entity may generate the lane line recognition result based on the set of corrected feature points. Each correction feature point in each correction feature point group in the correction feature point group set may be fitted to generate a fitted lane line, so as to obtain a fitted lane line group. The above-described fitted lane line group may be determined as a lane line recognition result.
Optionally, the execution subject may further send the lane line recognition result to a vehicle control terminal to adjust the distance between the current vehicle and the lane line. After the lane line recognition result is sent to the vehicle control terminal, the vehicle control terminal can adjust the distance according to the distance value between the current vehicle and the nearest lane line in the lane line recognition result. Specifically, in the case where no lane change is needed, if the distance value between the current vehicle and the lane line is smaller than a preset distance threshold, this indicates that the current vehicle is not located in the middle of its lane and that a considerable safety hazard exists. The distance value between the current vehicle and the nearest lane line can then be adjusted to improve safety.
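A minimal sketch of the threshold check described above follows; the threshold value and the function name are illustrative assumptions, since the disclosure does not specify a concrete preset distance threshold.

```python
def needs_lateral_adjustment(distance_to_nearest_lane_line: float,
                             distance_threshold: float = 0.5,
                             lane_change_requested: bool = False) -> bool:
    """Return True when the vehicle control terminal should correct the lateral position.

    The 0.5 m threshold is an illustrative value only, not taken from the disclosure.
    """
    if lane_change_requested:      # no correction while a lane change is intended
        return False
    return distance_to_nearest_lane_line < distance_threshold
```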
The above embodiments of the present disclosure have the following advantages: with the lane line identification method of some embodiments of the present disclosure, the accuracy of lane line identification can be improved. Specifically, the reason the accuracy of lane line identification is reduced is that the lateral uncertainty of each coordinate point in a lane line characterized by a cubic polynomial is not considered. Based on this, in the lane line identification method of some embodiments of the present disclosure, each conversion feature point in each conversion feature point group in the conversion feature point group set is laterally rectified to generate a rectified feature point, so as to obtain a rectified feature point group set. By performing lateral rectification on each conversion feature point, the lateral uncertainty of each conversion feature point is taken into account. Therefore, the lane line recognition result generated based on the rectified feature point group set has greater lateral stability and higher accuracy. In this way, the accuracy of the lane line recognition result can be improved, and the safety of automatic driving can be further improved.
With further reference to fig. 3, a flow 300 of further embodiments of lane line identification methods is illustrated. The flow 300 of the lane line identification method includes the following steps:
Step 301, determining a lane center curve equation set representing the lane lines in a road image.
In some embodiments, the execution subject of the lane line identification method (e.g., the computing device 101 shown in fig. 1) may determine a lane center curve equation set characterizing the lane lines in the road image. The road image may be detected by a lane line detection model (e.g., LaneNet (a multi-branch lane line detection network)) to obtain the detected lane center curve equations. Each lane center curve equation in the lane center curve equation set may be in the image coordinate system of the road image and is used to represent the center line of the area where each lane line in the road image is located. In addition, each lane center curve equation in the lane center curve equation set can also correspond to a lane line identifier, e.g., 1, 2, 3, etc.
Step 302, determining the intersection points between each lane center curve equation in the lane center curve equation set and each row of pixels in the road image to generate a feature point group, so as to obtain a feature point group set.
In some embodiments, the execution subject may determine the intersection points between each lane center curve equation in the lane center curve equation set and each row of pixels in the road image to generate a feature point group, resulting in a feature point group set. First, each lane center curve equation in the lane center curve equation set may be converted from the image coordinate system to the pixel coordinate system of the road image to obtain a pixel curve equation set. Then, the intersection points of each pixel curve equation with each row of pixels in the road image can be taken as a feature point group, so as to obtain a feature point group set. Each pixel curve equation corresponds to one feature point group.
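For illustration, a minimal sketch of the row-intersection step is given below; it assumes each pixel curve equation expresses the column u as a polynomial in the row v, and the function name and coefficient ordering are assumptions of the sketch.

```python
import numpy as np

def intersect_curve_with_rows(curve_coeffs, image_height: int, image_width: int):
    """Intersect one pixel curve equation with every pixel row of the road image.

    curve_coeffs are polynomial coefficients in numpy.polyval order, under the
    assumption that the curve expresses the column u as a function of the row v.
    """
    points = []
    for v in range(image_height):                    # one intersection per pixel row
        u = float(np.polyval(curve_coeffs, v))
        if 0.0 <= u < image_width:                   # keep intersections inside the image
            points.append((u, float(v)))
    return points                                    # one feature point group
```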
Step 303, performing coordinate transformation on each feature point in each feature point group in the feature point group set to obtain a transformation feature point group set.
In some embodiments, the specific implementation manner and technical effects of step 303 may refer to step 203 in those embodiments corresponding to fig. 2, and are not described herein again.
Step 304, performing lateral rectification on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a rectified feature point, so as to obtain a rectified feature point group set.
In some embodiments, the executing entity may perform a horizontal rectification on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a rectification feature point, resulting in a rectification feature point group set. Wherein, each conversion feature point in each conversion feature point group in the above conversion feature point group set may be transversely rectified to generate a rectified feature point by:
the method comprises the steps of firstly, acquiring an internal reference matrix and a detection distance value of a camera for shooting the road image, and acquiring a height value and a pitch angle of the camera relative to the ground. Wherein the detection distance value may be a maximum shooting distance value (e.g., 50 meters) of the above-described camera. In addition, the internal reference matrix and the detection distance value can be acquired once when the current vehicle is started, and then do not need to be acquired for multiple times. The height value and the pitch angle of the camera with the ground can be acquired once or acquired when a road image is shot.
In the second step, a conversion relational expression between the conversion feature point and the corresponding feature point in the feature point group set is determined using the internal reference matrix, the height value, and the pitch angle. The conversion relational expression (i.e., Formula One) determined from the internal reference matrix, the height value, and the pitch angle may be:
wherein the symbols in Formula One denote, respectively: the data of the first row, first column; the first row, second column; the first row, third column; the second row, first column; the second row, second column; and the second row, third column of the above-described internal reference matrix (a matrix of three rows and three columns); the abscissa value, the ordinate value, and the vertical coordinate value of the above-described conversion feature point; the abscissa value of the above-described feature point; the above-described pitch angle; and the above-described height value.
Specifically, an expression u = f(x) for the lateral coordinate u of the feature point in the pixel coordinate system of the road image can be solved from Formula One.
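For reference only, under a standard pinhole model with intrinsic entries f_x, f_y, c_x, c_y (zero skew), camera height h, and pitch angle θ, such a relation can be written as below. This is the conventional form given for orientation; the exact Formula One of the disclosure may differ.

```latex
% Projection of a camera-frame ground point (x, -h, z) rotated by the pitch angle theta.
\begin{aligned}
s\begin{pmatrix} u \\ v \\ 1 \end{pmatrix}
  &= K\,R(\theta)\begin{pmatrix} x \\ -h \\ z \end{pmatrix},
\qquad
K = \begin{pmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix},
\quad
R(\theta) = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{pmatrix},
\\[4pt]
u &= f(x) = \frac{f_x\,x}{z\cos\theta - h\sin\theta} + c_x .
\end{aligned}
```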
In the third step, the probability density function of the feature point corresponding to the conversion feature point is determined, and the probability density function of the abscissa of the feature point corresponding to the conversion feature point is taken as a first probability density function. If the prior of the abscissa u of the feature point obeys a uniform distribution, the probability density function of the abscissa of the feature point can be determined as:
p(u) = 1/W, for 0 ≤ u ≤ W,

wherein p(u) denotes the probability density function of the abscissa value u of the feature point, which may be used to characterize the probability of the lateral position of the feature point within a row of pixels, and W denotes the width value (in pixels) of the road image.
In the fourth step, the probability density function of the conversion feature point is determined using the detection distance value, and the probability density function of the abscissa of the conversion feature point is taken as a second probability density function. If the prior of the abscissa x of the conversion feature point also obeys a uniform distribution, the probability density function of the abscissa of the conversion feature point can be determined as:
p(x) = 1/D, for 0 ≤ x ≤ D,

wherein p(x) denotes the probability density function of the abscissa x of the conversion feature point, and D denotes the above-described detection distance value.
In the fifth step, the conversion feature point is laterally corrected based on the first probability density function and the second probability density function to obtain a corrected feature point.
In some optional implementation manners of some embodiments, the executing unit may perform a lateral correction on the conversion feature point based on the first probability density function and the second probability density function to obtain a corrected feature point, and may further include the following steps:
and generating a lateral standard deviation of the conversion feature point based on the first probability density function and the second probability density function. First, the lateral position of the conversion feature point in the image coordinate system of the road image may be modeled as a gaussian distribution. Thereby, the mean and standard deviation of the above-described conversion feature points can be obtained. Thereafter, the mean and standard deviation can be regressed by a Deep learning method (e.g., Ultra Fast Structure-aware Deep Lane Detection algorithm). Thus, the regression mean and regression standard deviation of the above-described conversion feature points can be obtained. Finally, the lateral standard deviation of the above-described transformed feature points can be generated by the following formula:
wherein the quantities in the formula denote, respectively: the posterior probability density function of the abscissa of the conversion feature point (the result of which may represent the lateral uncertainty, i.e., the lateral standard deviation, of the conversion feature point in the camera coordinate system); the expression, solved from Formula One, for the lateral coordinate u of the feature point in the pixel coordinate system of the road image; the above-described regression standard deviation; and the above-described regression mean.
In practice, the posterior probability density of x is calculated through the prior probability densities of x and u and the probability density of the abscissa u of the feature point obtained after the road image is detected. It can be considered that the empirical information (a priori) and the detected information are fused to obtain fused information (a posteriori). Therefore, all available information can participate in the operation, so that the fused information is as accurate as possible. Thus, the accuracy of generating the correction feature points can be improved.
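As a sketch of this fusion (assuming the regression models the detected pixel abscissa as a Gaussian with mean μ and standard deviation σ, and the uniform priors p(u) = 1/W and p(x) = 1/D given above), the posterior over the lateral camera coordinate x can be written as follows; the exact formula of the disclosure may differ.

```latex
% Posterior density of the lateral camera coordinate x, fusing the uniform priors
% with the Gaussian regression result for the detected pixel abscissa u = f(x).
p\bigl(x \mid \text{detection}\bigr)
  \;\propto\; \mathcal{N}\!\bigl(f(x);\,\mu,\,\sigma^{2}\bigr)\, p(x)
  \;\propto\; \exp\!\left(-\frac{\bigl(f(x) - \mu\bigr)^{2}}{2\sigma^{2}}\right),
  \qquad 0 \le x \le D .
```

The spread of this posterior is what the lateral standard deviation above summarizes.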
In response to determining that there is a detection feature point matching the conversion feature point in the preset detection feature point group set, a lateral displacement value between the conversion feature point and the detection feature point is determined. The preset set of detected feature points may be the detection result of the previous frame of road image adjacent to the time when the road image was captured. That is, it can be used to characterize the detected feature point group corresponding to each lane line in the previous frame of road image. The matching may be: whether the detection feature points representing the same coordinate point as the conversion feature point exist in the detection feature point group set is determined through a feature matching algorithm (for example, Scale-invariant feature transform (SIFT) algorithm and the like). If so, the lateral distance between the above-described conversion feature point and the detection feature point may be determined as a lateral displacement value.
Then, lateral correction is performed on the conversion feature point based on the lateral displacement value, the conversion feature point, the lateral standard deviation, and the pre-stored lateral image coordinate, regression standard deviation, and fusion standard deviation corresponding to the detection feature point, so as to obtain a corrected feature point. The lateral correction may be as follows: first, the fused lateral coordinate and fusion standard deviation of the conversion feature point and the matched detection feature point are determined through a filtering algorithm (for example, a Kalman filtering algorithm, an extended Kalman filtering algorithm, and the like); then, the lateral coordinate of the conversion feature point is replaced with the fused lateral coordinate, thereby completing the lateral correction of the conversion feature point and obtaining a corrected feature point. In addition, the fusion standard deviation can be used for the lateral correction in the lane line recognition of the next frame of road image, and can also be used for tracking, filtering, and fusion of lane lines.
As an example, the transverse coordinates and the fusion standard deviation after the fusion of the above conversion feature points and the above matching detection feature points may be determined by the following formulas:
wherein the quantities in the formulas denote, respectively: the Jacobian matrix, i.e., the first derivative of f(x); the above-described lateral displacement value; the time corresponding to the road image (e.g., the current time); the time corresponding to the previous road image (e.g., the previous time); the abscissa value of the conversion feature point at the current time; the reference matrix described above; the regression standard deviation of the detection feature point at the previous time; the transpose of the aforementioned matrix at the current time; the lateral standard deviation of the conversion feature point at the current time; the lateral coordinate obtained by fusing the conversion feature point with the matched detection feature point; the lateral image coordinate at the previous time; the fusion standard deviation corresponding to the detection feature point at the previous time; and the fusion standard deviation of the conversion feature point and the matched detection feature point.
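As an illustrative counterpart to these formulas, a one-dimensional Kalman-style update is sketched below. It assumes the previous frame's fused lateral coordinate (shifted by the lateral displacement value) acts as the prediction and the current conversion feature point as the measurement, which simplifies the disclosed formulas (in particular, the Jacobian of f is not used here).

```python
def fuse_with_previous_frame(prev_lateral: float, prev_fused_std: float,
                             lateral_shift: float,
                             curr_lateral: float, curr_lateral_std: float):
    """One-dimensional Kalman-style update of the lateral coordinate.

    prev_lateral / prev_fused_std  : previous-frame fused coordinate and fusion std.
    lateral_shift                  : lateral displacement value between matched points.
    curr_lateral / curr_lateral_std: current conversion feature point and its lateral
                                     standard deviation (treated as the measurement).
    """
    predicted = prev_lateral + lateral_shift                   # propagate the previous estimate
    gain = prev_fused_std**2 / (prev_fused_std**2 + curr_lateral_std**2)
    fused_lateral = predicted + gain * (curr_lateral - predicted)
    fused_std = ((1.0 - gain) * prev_fused_std**2) ** 0.5      # reduced uncertainty after fusion
    return fused_lateral, fused_std
```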
In some optional implementation manners of some embodiments, the executing unit may perform a lateral correction on the conversion feature point based on the first probability density function and the second probability density function to obtain a corrected feature point, and may further include the following steps:
and in response to determining that no detection feature point matched with the conversion feature point exists in the preset detection feature point group set, performing transverse correction on the conversion feature point based on the conversion feature point and the transverse standard deviation to obtain a corrected feature point. The transverse coordinate and the fusion standard deviation after the transformation characteristic point and the matched detection characteristic point are fused can be determined through the following formula:
wherein the quantities in the formula denote, respectively: the fused lateral coordinate of the conversion feature point; the above-described regression mean; the lateral standard deviation of the conversion feature point; and the current standard deviation after fusion.
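A minimal sketch of this branch follows, under the assumption (one plausible reading of the formula above) that without a match the regression mean and the lateral standard deviation are carried forward directly.

```python
def correct_without_previous_match(regression_mean: float, lateral_std: float):
    """Fallback when no matching detection feature point exists from the previous frame.

    The regression mean serves directly as the corrected lateral coordinate and the
    lateral standard deviation as the fusion standard deviation for the next frame.
    """
    fused_lateral = regression_mean
    fused_std = lateral_std
    return fused_lateral, fused_std
```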
The above formulas and their related content constitute an invention point of the embodiments of the present disclosure and solve the technical problem mentioned in the background art: because the lateral uncertainty of each coordinate point in a lane line represented by a cubic polynomial is not considered, the lane line identification result is not accurate enough, which in turn reduces the safety of automatic driving. Through the above formulas and their related content, the uncertainty (lateral standard deviation) of the lane line in the camera coordinate system is generated, so the lateral uncertainty of each coordinate point in the lane line represented by the cubic polynomial is taken into account, and the accuracy of lane line identification can be improved. In addition, in the process of generating the lane line recognition result, the lane line recognition result of the road image at the previous time (i.e., the lateral image coordinates, the lateral standard deviation, and the fusion standard deviation corresponding to the detection feature points) is also introduced and fused with the laterally corrected feature points, whereby the accuracy of the lateral position of the corrected feature points can be further improved. At the same time, the further-fused fusion standard deviation corresponding to the corrected feature points can be obtained for recognizing the lane lines in the next frame of road image. Thus, the accuracy of the lane line recognition result can be further improved.
Step 305, fusing each feature point in each correction feature point group in the correction feature point group set to generate a fused lane line, obtaining a fused lane line group, and determining the fused lane line group as a lane line recognition result.
In some embodiments, the executing entity may fuse the feature points in each of the correction feature point groups in the correction feature point group set to generate a fused lane line, obtain a fused lane line group, and determine the fused lane line group as a lane line recognition result. The fusion may be to perform curve fitting (for example, a cubic curve) on each correction feature point in the correction feature point group to obtain a fused lane line. Thus, a lane line recognition result can be obtained.
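For illustration, a minimal curve-fitting sketch is given below. It assumes the lateral coordinate is fitted as a cubic polynomial of the longitudinal coordinate; the axis roles and the function name are assumptions of the sketch.

```python
import numpy as np

def fit_lane_line(corrected_points, degree: int = 3):
    """Fit a cubic polynomial to one group of corrected feature points.

    corrected_points is a list of (x, y) pairs; x is treated as the lateral
    coordinate and y as the longitudinal coordinate.
    """
    xs = np.array([p[0] for p in corrected_points])
    ys = np.array([p[1] for p in corrected_points])
    coeffs = np.polyfit(ys, xs, degree)   # cubic curve fitting over the group
    return coeffs                         # coefficients of the fused lane line
```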
As can be seen from fig. 3, compared with the description of some embodiments corresponding to fig. 2, the flow 300 of the lane line identification method in some embodiments corresponding to fig. 3 embodies the steps of feature point extraction and generation of the corrected feature point group set. Through the steps, the accuracy of lane line identification can be further improved. Thus, the safety of the vehicle driving can be further improved.
With further reference to fig. 4, as an implementation of the methods shown in the above figures, the present disclosure provides some embodiments of a lane line recognition apparatus. These apparatus embodiments correspond to the method embodiments shown in fig. 2, and the apparatus may be applied to various electronic devices.
As shown in fig. 4, the lane line recognition apparatus 400 of some embodiments includes: a feature point extraction unit 401, a coordinate conversion unit 402, a lateral correction unit 403, and a generation unit 404. The feature point extraction unit 401 is configured to perform feature point extraction on a pre-acquired road image to obtain a feature point group set; a coordinate conversion unit 402 configured to perform coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a converted feature point group set; a transverse correction unit 403 configured to perform transverse correction on each conversion feature point in each conversion feature point group in the conversion feature point group set to generate a correction feature point, so as to obtain a correction feature point group set; the generating unit 404 is configured to generate a lane line recognition result based on the set of corrected feature point groups.
It will be understood that the elements described in the apparatus 400 correspond to various steps in the method described with reference to fig. 2. Thus, the operations, features and resulting advantages described above with respect to the method are also applicable to the apparatus 400 and the units included therein, and will not be described herein again.
Referring now to FIG. 5, a block diagram of an electronic device (e.g., computing device 101 of FIG. 1) 500 suitable for use in implementing some embodiments of the present disclosure is shown. The electronic device shown in fig. 5 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, electronic device 500 may include a processing means (e.g., central processing unit, graphics processor, etc.) 501 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM) 502 or a program loaded from a storage means 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data necessary for the operation of the electronic apparatus 500 are also stored. The processing device 501, the ROM 502, and the RAM 503 are connected to each other through a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
Generally, the following devices may be connected to the I/O interface 505: input devices 506 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 507 including, for example, a Liquid Crystal Display (LCD), speakers, vibrators, and the like; storage devices 508 including, for example, magnetic tape, hard disk, etc.; and a communication device 509. The communication means 509 may allow the electronic device 500 to communicate with other devices wirelessly or by wire to exchange data. While fig. 5 illustrates an electronic device 500 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 5 may represent one device or may represent multiple devices as desired.
In particular, according to some embodiments of the present disclosure, the processes described above with reference to the flow diagrams may be implemented as computer software programs. For example, some embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In some such embodiments, the computer program may be downloaded and installed from a network via the communication means 509, or installed from the storage means 508, or installed from the ROM 502. The computer program, when executed by the processing device 501, performs the above-described functions defined in the methods of some embodiments of the present disclosure.
It should be noted that the computer readable medium described above in some embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In some embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In some embodiments of the present disclosure, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the apparatus; or may exist separately without being assembled into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: extracting characteristic points of the pre-acquired road image to obtain a characteristic point group set; performing coordinate conversion on each feature point in each feature point group in the feature point group set to obtain a conversion feature point group set; performing transverse correction on each conversion characteristic point in each conversion characteristic point group in the conversion characteristic point group set to generate correction characteristic points, so as to obtain a correction characteristic point group set; and generating a lane line identification result based on the correction feature point group set.
Computer program code for carrying out operations for embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in some embodiments of the present disclosure may be implemented by software, and may also be implemented by hardware. The described units may also be provided in a processor, and may be described as: a processor comprises a feature point extraction unit, a coordinate conversion unit, a transverse correction unit and a generation unit. Here, the names of these units do not constitute a limitation to the unit itself in some cases, and for example, the generation unit may also be described as a "unit that generates a lane line recognition result".
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the invention in the embodiments of the present disclosure is not limited to the specific combinations of the above-mentioned features, but also encompasses other technical solutions formed by any combination of the above-mentioned features or their equivalents without departing from the inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.