CN114283398A - Method and device for processing lane line and electronic equipment - Google Patents

Method and device for processing lane line and electronic equipment

Info

Publication number: CN114283398A
Application number: CN202111574070.4A
Authority: CN (China)
Prior art keywords: lane line, point, lane, determining, vector
Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Other languages: Chinese (zh)
Inventor: 路海涛
Current and Original Assignee (listed assignees may be inaccurate): Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202111574070.4A
Publication of CN114283398A

Abstract

The disclosure provides a lane line processing method and apparatus, and an electronic device, relating to artificial intelligence fields such as environment perception, unmanned driving, and assisted driving. The implementation scheme is as follows: when determining the lane lines in a road, initial lane line information in an image of the lane in which the vehicle is located is first identified, the initial lane line information including a plurality of points that together describe two non-parallel initial lane lines; a starting point, a middle point, and an end point are determined from the points corresponding to each initial lane line; and two target lane lines are then determined jointly from the starting point, middle point, and end point, so that the two target lane lines are parallel to each other and the accuracy of the determined lane lines is improved.

Description

Method and device for processing lane line and electronic equipment
Technical Field
The present disclosure relates to the field of image processing technologies, in particular to artificial intelligence fields such as environment perception, unmanned driving, and assisted driving, and specifically to a method and an apparatus for processing a lane line, and an electronic device.
Background
Lane lines are a principal element of a road: they provide an effective reference for vehicle travel and are important for safe driving.
While a vehicle is being driven, it usually first identifies lane line information in the road ahead in its direction of travel and performs a three-dimensional simulation of the identified information to determine the lane lines in the road; the lane lines are then displayed on the central control platform to serve as a driving reference.
Accurately determining the lane lines in the road is therefore crucial for assisted driving.
Disclosure of Invention
The disclosure provides a processing method and device for a lane line and electronic equipment, which can accurately determine the lane line and improve the accuracy of the determined lane line.
According to a first aspect of the present disclosure, there is provided a lane line processing method, which may include:
identifying initial lane line information in an image of a lane in which a vehicle is located, wherein the initial lane line information includes a plurality of points that together describe two non-parallel initial lane lines;
determining a starting point, a middle point, and an end point from the plurality of points corresponding to each initial lane line; and
determining two target lane lines corresponding to the lane according to the starting point, the middle point, and the end point, wherein the two target lane lines are parallel.
According to a second aspect of the present disclosure, there is provided a lane line processing apparatus, which may include:
an identification unit, configured to identify initial lane line information in an image of a lane in which a vehicle is located, wherein the initial lane line information includes a plurality of points that together describe two non-parallel initial lane lines;
a determining unit, configured to determine a starting point, a middle point, and an end point from the plurality of points corresponding to each initial lane line; and
a processing unit, configured to determine two target lane lines corresponding to the lane according to the starting point, the middle point, and the end point, wherein the two target lane lines are parallel.
According to a third aspect of the present disclosure, there is provided an electronic device, which may include:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of lane line processing of the first aspect.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to execute the method for processing a lane line of the first aspect.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of an electronic device can read the computer program, execution of the computer program by the at least one processor causing the electronic device to perform the method of the first aspect.
According to the technical scheme, the lane line can be accurately determined, and the accuracy of the determined lane line is improved.
It should be understood that the statements in this section do not necessarily identify key or critical features of the embodiments of the present disclosure, nor do they limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram of lane line information identified by the prior art;
fig. 2 is a schematic flow chart of a method for processing a lane line according to a first embodiment of the present disclosure;
fig. 3 is a schematic diagram of lane line information provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a determined target lane line provided by an embodiment of the present disclosure;
fig. 5 is a schematic flowchart of a method for determining two target lane lines corresponding to lanes according to a second embodiment of the present disclosure;
FIG. 6 is a schematic diagram of a first vector and a second vector constructed based on a starting point, a middle point, and an ending point provided by embodiments of the present disclosure;
fig. 7 is a schematic structural diagram of a lane line processing apparatus according to a third embodiment of the present disclosure;
fig. 8 is a schematic block diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
In embodiments of the present disclosure, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association between related objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone, where A and B may each be singular or plural. In the text of the present disclosure, the character "/" generally indicates an "or" relationship between the preceding and following objects. In addition, in the embodiments of the present disclosure, "first", "second", "third", "fourth", "fifth", and "sixth" are used only to distinguish different objects and carry no other special meaning.
The technical scheme provided by the embodiment of the disclosure can be applied to scenes such as environment perception, unmanned driving, auxiliary driving and the like. In the driving process of a vehicle, a camera arranged in front of the vehicle usually collects road images of a road in the driving direction of the vehicle; carrying out image recognition on the collected road image, carrying out three-dimensional simulation on the recognized lane line information, and determining a lane line in the road; and displaying the lane line through the central control platform so as to provide reference for the vehicle to run through the lane line.
In the prior art, the collected road images are identified and the resulting lane line information is a series of points describing the lane lines; however, the two described lane lines are parallel only at the near end, converge at the far end, and contain a step jump. For example, referring to fig. 1, a schematic diagram of lane line information identified by the prior art, the two lane lines formed by the lane line information are parallel at the near end, converge at the far end, and contain a step jump. When three-dimensional simulation is performed on lane line information containing such a step jump, the determined lane lines are likewise parallel at the near end and convergent at the far end, so the lane lines in the road cannot be determined accurately and the accuracy of the determined lane lines is low.
In order to determine the lane lines in the road accurately, after the road image is identified and lane line information with a step jump as shown in fig. 1 is obtained, the lane lines are not determined directly from that information. Instead, a starting point, a middle point, and an end point are determined from the plurality of points included in the lane line information with the step jump, and two target lane lines are then determined jointly from these three points. Because the two target lane lines are parallel to each other, the accuracy of the determined lane lines is improved.
Based on the above technical concept, embodiments of the present disclosure provide a method for processing a lane line, and the method for processing a lane line provided by the present disclosure will be described in detail through specific embodiments. It is to be understood that the following detailed description may be combined with other embodiments, and that the same or similar concepts or processes may not be repeated in some embodiments.
Example one
Fig. 2 is a flowchart illustrating a method for processing a lane line according to a first embodiment of the present disclosure, where the method for processing the lane line may be performed by software and/or a hardware device, for example, the hardware device may be a terminal or a server. For example, referring to fig. 2, the method for processing the lane line may include:
s201, identifying initial lane line information in an image of a lane where a vehicle is located; wherein the initial lane line information includes a plurality of points for describing two non-parallel initial lane lines, respectively.
For example, the image of the lane in which the vehicle is located may be collected by the vehicle's front-view camera, received from another device, obtained from local storage, or obtained in some other manner, and this can be set according to actual needs.
After the image of the lane in which the vehicle is located is obtained, it can be processed with an image recognition technique to identify the initial lane line information it contains. As shown in fig. 1, the identified initial lane line information includes a plurality of points that together describe two non-parallel initial lane lines: the points corresponding to each initial lane line do not all lie on one straight line, and the two initial lane lines are parallel at the near end but converge at the far end.
Considering that the two lane lines defining a lane are normally two parallel lane lines, after the plurality of points describing the two initial lane lines is acquired, the starting point, middle point, and end point of the points corresponding to each initial lane line may be further determined, that is, the following S202 is performed:
s202, determining a starting point, a middle point and an end point of a plurality of points corresponding to the initial lane line.
Among the plurality of points corresponding to an initial lane line, the two points at the two ends can be understood as the starting point and the end point of that lane line, respectively.
For example, the midpoint corresponding to an initial lane line may be determined from the number of points corresponding to that line, from the length of the line, or by some other method, as actual needs dictate.
With reference to the initial lane line information shown in fig. 1, the points of each initial lane line can be considered along the driving direction of the vehicle. The end point nearer the vehicle's current position may be taken as the starting point, the end point farther from the vehicle's current position as the end point, and the point at the middle position as the midpoint, as illustrated in fig. 3, a schematic diagram of lane line information provided by an embodiment of the present disclosure. Alternatively, the nearer end point may be taken as the end point and the farther end point as the starting point, again with the point at the middle position as the midpoint; the choice can be made according to actual needs.
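A minimal sketch of this selection step is shown below; the function name and the index-based midpoint choice are illustrative assumptions (the patent also allows choosing the midpoint by line length), not a required implementation:

```python
def select_key_points(points):
    """Pick the starting, middle, and end point from an ordered list of
    (x, y) lane-line points, assuming the list runs from the end nearest
    the vehicle to the end farthest away."""
    if len(points) < 3:
        raise ValueError("need at least three points")
    start = points[0]               # end nearest the vehicle's current position
    end = points[-1]                # end farthest from the vehicle
    mid = points[len(points) // 2]  # middle position, chosen by point count
    return start, mid, end
```

For five ordered points, `select_key_points` returns the first, third, and fifth point.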
After determining the starting point, the middle point, and the end point of the plurality of points corresponding to the initial lane line, two target lane lines corresponding to the lanes may be determined according to the starting point, the middle point, and the end point, that is, the following S203 is performed:
and S203, determining two target lane lines corresponding to the lanes according to the starting point, the middle point and the end point, wherein the two target lane lines are parallel.
Illustratively, the two target lane lines determined from the starting point, middle point, and end point are two mutually parallel lane lines. For example, referring to fig. 4, a schematic diagram of a determined target lane line provided by an embodiment of the present disclosure, the two target lane lines corresponding to the lane in which the vehicle is located are parallel to each other.
It can be seen that in the embodiment of the present disclosure, when determining the lane lines in a road, the initial lane line information in an image of the lane in which the vehicle is located is first identified, the information comprising a plurality of points that together describe two non-parallel initial lane lines; a starting point, a middle point, and an end point are determined from the points corresponding to each initial lane line; and the two target lane lines are then determined jointly from these three points, so that the determined target lane lines are parallel to each other and the accuracy of the determined lane lines is improved.
Based on the embodiment shown in fig. 2, in order to prevent invalid points such as noise points from affecting the subsequent determination of the target lane lines, when determining the starting point, middle point, and end point in S202, invalid points may first be identified among the plurality of points corresponding to the initial lane line and removed; the starting point, middle point, and end point are then determined from the remaining points. This avoids invalid points such as noise, reduces the amount of data to be processed, and improves processing efficiency. For example, the invalid points may be removed from the plurality of points using the Ramer-Douglas-Peucker algorithm.
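The Ramer-Douglas-Peucker simplification mentioned above can be sketched as follows. This is a textbook recursive implementation with an assumed tolerance parameter `epsilon`; treating the dropped points as the "invalid" noise points is an interpretation, not code from the disclosure:

```python
def perpendicular_distance(p, a, b):
    """Distance from point p to the line through points a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:  # a and b coincide; fall back to point distance
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    return abs(dx * (ay - py) - dy * (ax - px)) / norm

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: recursively keep only points that deviate
    from the start-end chord by more than epsilon; points within the
    tolerance (e.g. noise) are discarded."""
    if len(points) < 3:
        return list(points)
    # find the point farthest from the chord between the two endpoints
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax > epsilon:
        left = rdp(points[: idx + 1], epsilon)
        right = rdp(points[idx:], epsilon)
        return left[:-1] + right  # drop the duplicated split point
    return [points[0], points[-1]]
```

A nearly collinear run of points collapses to its two endpoints, while a genuine bend above the tolerance is preserved.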
Based on the above-mentioned embodiment shown in fig. 2, in order to facilitate understanding of how to determine two target lane lines corresponding to lanes according to the starting point, the middle point and the ending point in the above-mentioned S203, a detailed description will be given below by using a second embodiment shown in fig. 5.
Example two
Fig. 5 is a flowchart illustrating a method for determining two target lane lines corresponding to a lane according to a second embodiment of the present disclosure, where the method for determining two target lane lines corresponding to a lane may also be performed by a software and/or hardware device. For example, please refer to fig. 5, the method for determining two target lane lines corresponding to a lane may include:
s501, constructing a first vector according to the starting point and the middle point, and constructing a second vector according to the middle point and the end point.
For example, as shown in fig. 6, a schematic diagram of the first and second vectors constructed from the starting point, midpoint, and end point: when constructing the first vector, the coordinates of the starting point are subtracted from the coordinates of the midpoint to obtain a vector pointing from the starting point to the midpoint, denoted the first vector; when constructing the second vector, the coordinates of the midpoint are subtracted from the coordinates of the end point to obtain a vector pointing from the midpoint to the end point, denoted the second vector. In this way the first and second vectors are constructed from the starting point, midpoint, and end point.
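The coordinate subtraction just described reduces to two lines of arithmetic; a minimal sketch (function and variable names are assumptions):

```python
def build_vectors(start, mid, end):
    """Construct the first vector (start -> mid) and the second vector
    (mid -> end) by coordinate subtraction, as described above."""
    first = (mid[0] - start[0], mid[1] - start[1])
    second = (end[0] - mid[0], end[1] - mid[1])
    return first, second
```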
After the first and second vectors are constructed from the starting point, midpoint, and end point, and considering that the included angle between the two vectors can, to some extent, accurately indicate whether the lane line in the actual scene is straight, the two target lane lines corresponding to the lane can be further determined from this included angle, that is, the following S502 is executed:
and S502, determining two target lane lines corresponding to the lanes according to the included angle between the first vector and the second vector.
For example, the included angle between the first vector and the second vector may be determined by performing a dot-product operation on the two vectors.
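The dot-product computation of the included angle can be sketched as follows; returning the angle in degrees and the function name are assumptions, since the disclosure does not fix a unit:

```python
import math

def angle_between(v1, v2):
    """Included angle (degrees) between two 2-D vectors via the dot
    product: cos(theta) = (v1 . v2) / (|v1| * |v2|)."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    if n1 == 0 or n2 == 0:
        raise ValueError("zero-length vector")
    # clamp the cosine for floating-point safety before acos
    c = max(-1.0, min(1.0, dot / (n1 * n2)))
    return math.degrees(math.acos(c))
```

Collinear vectors give an angle near 0, so a small angle indicates the three key points lie almost on one straight line.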
For example, in the embodiment of the present disclosure, when determining the two target lane lines from the included angle between the first and second vectors, the included angle is compared with a preset value. If the included angle is greater than or equal to the preset value, the lane line on which the vectors lie may be a turning lane line in the actual scene. Conversely, if the included angle is smaller than the preset value, the lane line is straight in the actual scene and was merely misidentified during image recognition as a plurality of points describing two non-parallel initial lane lines. In that case the line is processed as a straight line: during simulation, a new lane line is drawn based on the starting point and a point within a preset range after the starting point, as shown in fig. 4. The jump in the identified initial lane line information is thereby eliminated, the determined target lane lines are parallel to each other, and with the perspective effect added during rendering, the result better matches the lane lines in the actual scene and the accuracy of the determined lane lines is improved.
For example, in the embodiments of the present disclosure, the preset value is determined from the wide angle and the recognition distance of the capturing device that captures the image. In general, the larger the wide angle of the capturing device and the longer its recognition distance, the larger the corresponding preset value; conversely, the smaller the wide angle and the shorter the recognition distance, the smaller the preset value.
When drawing the new lane line based on the starting point and a point within the preset range after the starting point, for example, new straight-line points may be constructed from the starting point and the first point after it, and the original points replaced with the new straight-line points to draw the new lane line. New straight-line points may equally be constructed from the starting point and the second or third point after it, as long as that point does not jump in position relative to the starting point. In this way the jump in the identified initial lane line information is eliminated, the determined target lane lines are parallel to each other, and with the perspective effect added during rendering, the result better matches the lane lines in the actual scene.
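Putting the threshold test and the redraw step together, one possible sketch is shown below; the resampling scheme and all names are illustrative assumptions about how "drawing a new lane line from the starting point and a nearby point" might be realized:

```python
def straighten_if_needed(points, angle_deg, preset_deg, near_index=1):
    """If the included angle between the two constructed vectors is below
    the preset value, the identified bend is treated as a recognition
    artifact: a new straight lane line is drawn through the starting point
    and a nearby point (here the point at near_index), replacing the
    original points. Otherwise the points are kept as a possible turn."""
    if angle_deg >= preset_deg:
        return list(points)  # possibly a genuine turning lane line; keep
    sx, sy = points[0]
    nx, ny = points[near_index]
    dx, dy = nx - sx, ny - sy  # direction from the start to the near point
    # re-sample the same number of points along the straight direction
    return [(sx + i * dx, sy + i * dy) for i in range(len(points))]
```

Applying the same straightening to both initial lane lines yields two lines with consistent directions, i.e. the mutually parallel target lane lines of fig. 4.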
It can be seen that in the embodiment of the present disclosure, when determining the lane lines in a road, a first vector may be constructed from the starting point and the midpoint, and a second vector from the midpoint and the end point; the two target lane lines corresponding to the lane are then determined from the included angle between the first vector and the second vector. Because this included angle can, to some extent, accurately indicate whether the lane line in the actual scene is straight, the two mutually parallel target lane lines can be determined accurately from it, which improves the accuracy of the determined lane lines to a certain degree.
Based on any of the above embodiments, in order to display the mutually parallel target lane lines accurately on the central control platform and provide a driving reference through them, after the two mutually parallel target lane lines corresponding to the lane are drawn, the initial lane line information in the identified image can be replaced with the two target lane lines and the replaced image displayed. The target lane lines in the replaced image then serve as the driving reference, improving the accuracy of the reference provided for vehicle travel.
EXAMPLE III
Fig. 7 is a schematic structural diagram of a lane line processing device 70 provided according to a third embodiment of the present disclosure, and for example, referring to fig. 7, the lane line processing device 70 may include:
an identifying unit 701 configured to identify initial lane line information in an image of a lane in which a vehicle is located; wherein the initial lane line information includes a plurality of points for describing two non-parallel initial lane lines, respectively.
A determining unit 702, configured to determine a starting point, a middle point, and an end point of a plurality of points corresponding to the initial lane line.
The processing unit 703 is configured to determine two target lane lines corresponding to the lane according to the starting point, the middle point, and the end point, where the two target lane lines are parallel.
Optionally, the processing unit 703 comprises a first processing module and a second processing module.
And the first processing module is used for constructing a first vector according to the starting point and the middle point and constructing a second vector according to the middle point and the end point.
And the second processing module is used for determining two target lane lines corresponding to the lanes according to the included angle between the first vector and the second vector.
Optionally, the second processing module comprises a first processing sub-module and a second processing sub-module.
And the first processing submodule is used for drawing a new lane line according to the starting point and points in the preset range after the starting point if the included angle between the first vector and the second vector is smaller than the preset value.
And the second processing submodule is used for determining the new lane line as the target lane line.
Optionally, the preset value is determined according to a wide angle and a recognition distance of a capturing device that captures the image.
Optionally, the determining unit 702 comprises a first determining module and a second determining module.
A first determining module to determine invalid points of the plurality of points.
And a second determining module for determining a starting point, a middle point and an end point from the plurality of points except the invalid point.
Optionally, the lane line processing device 70 further includes a replacement unit and a display unit.
And the replacing unit is used for replacing the initial lane line information in the image with the target lane line.
And the display unit is used for displaying the replaced image.
The lane line processing apparatus 70 provided in the embodiment of the present disclosure may execute the technical solution of the lane line processing method shown in any of the above embodiments; its implementation principle and beneficial effects are similar to those of the method and are not repeated here.
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
According to an embodiment of the present disclosure, the present disclosure also provides a computer program product comprising: a computer program, stored in a readable storage medium, from which at least one processor of the electronic device can read the computer program, the at least one processor executing the computer program causing the electronic device to perform the solution provided by any of the embodiments described above.
Fig. 8 is a schematic block diagram of an electronic device 80 provided by an embodiment of the disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital processing, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 8, the device 80 includes a computing unit 801 that can perform various appropriate actions and processes in accordance with a computer program stored in a read-only memory (ROM) 802 or loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store the various programs and data required for the operation of the device 80. The computing unit 801, the ROM 802, and the RAM 803 are connected to each other by a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Various components in device 80 are connected to I/O interface 805, including: an input unit 806, such as a keyboard, a mouse, or the like; an output unit 807 such as various types of displays, speakers, and the like; a storage unit 808, such as a magnetic disk, optical disk, or the like; and a communication unit 809 such as a network card, modem, wireless communication transceiver, etc. The communication unit 809 allows the device 80 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various dedicated Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, or microcontroller. The computing unit 801 performs the methods and processes described above, such as the lane line processing method. For example, in some embodiments, the lane line processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 80 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the lane line processing method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the lane line processing method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on a Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, and which receives data and instructions from, and transmits data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system that addresses the drawbacks of difficult management and weak service scalability in traditional physical hosts and Virtual Private Server (VPS) services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be understood that various forms of the flows shown above may be used, with steps reordered, added, or deleted. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, so long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.

Claims (15)

1. A method of lane line processing, comprising:
identifying initial lane line information in an image of a lane in which a vehicle is located; wherein the initial lane line information includes a plurality of points respectively used to describe two non-parallel initial lane lines;
determining a starting point, a middle point and an end point of a plurality of points corresponding to the initial lane line;
and determining two target lane lines corresponding to the lane according to the starting point, the middle point and the end point, wherein the two target lane lines are parallel.
2. The method of claim 1, wherein determining two target lane lines corresponding to the lane according to the starting point, the midpoint, and the ending point comprises:
constructing a first vector according to the starting point and the middle point, and constructing a second vector according to the middle point and the end point;
and determining two target lane lines corresponding to the lane according to the included angle between the first vector and the second vector.
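One standard way to compute the included angle of claim 2 is via the dot product of the two vectors; the patent does not fix a particular formula, so this is an illustrative assumption:

```latex
\theta = \arccos \frac{\vec{v}_1 \cdot \vec{v}_2}{\lVert \vec{v}_1 \rVert \, \lVert \vec{v}_2 \rVert},
\qquad
\vec{v}_1 = P_{\mathrm{mid}} - P_{\mathrm{start}},\;\;
\vec{v}_2 = P_{\mathrm{end}} - P_{\mathrm{mid}}
```

A small \(\theta\) means the start, middle, and end points are nearly collinear, i.e. the detected lane line is nearly straight, which is the condition claim 3 tests against the preset value.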
3. The method of claim 2, wherein the determining two target lane lines corresponding to the lane according to the included angle between the first vector and the second vector comprises:
if the included angle between the first vector and the second vector is smaller than a preset value, drawing a new lane line according to the starting point and a point in a preset range behind the starting point, and determining the new lane line as the target lane line.
4. The method of claim 3, wherein the predetermined value is determined based on a wide angle and an identification distance of an acquisition device acquiring the image.
5. The method of any of claims 1-4, wherein the determining a start point, a midpoint, and an end point of a plurality of points corresponding to the initial lane line comprises:
determining an invalid point of the plurality of points;
determining the start point, the midpoint, and the end point from the plurality of points other than the invalid point.
6. The method according to any one of claims 1-5, further comprising:
and replacing the initial lane line information in the image with the target lane line, and displaying the replaced image.
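The method claims above can be sketched as follows for a single detected lane line. This is a minimal, hedged illustration, not the patent's implementation: the function names, the radian threshold, and reading the "preset range after the starting point" as a point count are all assumptions introduced here.

```python
import math

def angle_between(v1, v2):
    """Included angle (radians) between two 2-D vectors via the dot product."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(*v1)
    n2 = math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point drift before acos.
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def straighten_lane(points, preset_angle=0.1, preset_range=5):
    """Sketch of claims 1-3 for one lane line.

    points: ordered (x, y) samples describing one detected lane line,
            with invalid points already removed (claim 5).
    Returns the points used to draw the target lane line.
    """
    start, mid, end = points[0], points[len(points) // 2], points[-1]
    v1 = (mid[0] - start[0], mid[1] - start[1])  # first vector: start -> midpoint
    v2 = (end[0] - mid[0], end[1] - mid[1])      # second vector: midpoint -> end
    if angle_between(v1, v2) < preset_angle:
        # Nearly straight: redraw the lane line from the starting point and
        # the points within the preset range after it (claim 3).
        return points[: 1 + preset_range]
    return points
```

Applying this to the point sets of two non-parallel detected lines would, under the same threshold, redraw both from their near-start segments, which is how the scheme yields two mutually parallel target lane lines.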
7. A lane line processing apparatus comprising:
the identification unit is used for identifying initial lane line information in an image of a lane where the vehicle is located; wherein the initial lane line information includes a plurality of points respectively used to describe two non-parallel initial lane lines;
a determining unit, configured to determine a starting point, a middle point, and an end point of a plurality of points corresponding to the initial lane line;
and the processing unit is used for determining two target lane lines corresponding to the lane according to the starting point, the middle point and the end point, wherein the two target lane lines are parallel.
8. The apparatus of claim 7, wherein the processing unit comprises a first processing module and a second processing module;
the first processing module is used for constructing a first vector according to the starting point and the middle point and constructing a second vector according to the middle point and the end point;
and the second processing module is used for determining two target lane lines corresponding to the lane according to an included angle between the first vector and the second vector.
9. The apparatus of claim 8, wherein the second processing module comprises a first processing sub-module and a second processing sub-module;
the first processing submodule is used for drawing a new lane line according to the starting point and a point in a preset range behind the starting point if an included angle between the first vector and the second vector is smaller than a preset value;
and the second processing submodule is used for determining the new lane line as the target lane line.
10. The apparatus of claim 9, wherein the predetermined value is determined based on a wide angle and an identification distance of an acquisition device that acquires the image.
11. The apparatus according to any one of claims 7-10, wherein the determining unit comprises a first determining module and a second determining module;
the first determining module is used for determining invalid points in the plurality of points;
the second determining module is configured to determine the starting point, the middle point, and the ending point from other points of the plurality of points except the invalid point.
12. The apparatus according to any one of claims 7-11, further comprising a replacement unit and a display unit;
the replacing unit is used for replacing the initial lane line information in the image with the target lane line;
and the display unit is used for displaying the replaced image.
13. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of lane line processing of any of claims 1-6.
14. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to execute the method of processing a lane line according to any one of claims 1 to 6.
15. A computer program product comprising a computer program which, when being executed by a processor, carries out the steps of the method of processing a lane line according to any one of claims 1 to 6.
CN202111574070.4A 2021-12-21 2021-12-21 Method and device for processing lane line and electronic equipment Pending CN114283398A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111574070.4A CN114283398A (en) 2021-12-21 2021-12-21 Method and device for processing lane line and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111574070.4A CN114283398A (en) 2021-12-21 2021-12-21 Method and device for processing lane line and electronic equipment

Publications (1)

Publication Number Publication Date
CN114283398A true CN114283398A (en) 2022-04-05

Family

ID=80873784

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111574070.4A Pending CN114283398A (en) 2021-12-21 2021-12-21 Method and device for processing lane line and electronic equipment

Country Status (1)

Country Link
CN (1) CN114283398A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998477A * 2022-07-14 2022-09-02 高德软件有限公司 Method, device, equipment and product for drawing center line of turning area lane
CN114998477B * 2022-07-14 2022-10-28 高德软件有限公司 Method, device, equipment and product for drawing center line of lane in U-turn area

Similar Documents

Publication Publication Date Title
CN112560684B (en) Lane line detection method, lane line detection device, electronic equipment, storage medium and vehicle
CN112634343A (en) Training method of image depth estimation model and processing method of image depth information
CN112785625A (en) Target tracking method and device, electronic equipment and storage medium
CN113205041B (en) Structured information extraction method, device, equipment and storage medium
CN113392794B (en) Vehicle line crossing identification method and device, electronic equipment and storage medium
CN113657289A (en) Training method and device of threshold estimation model and electronic equipment
CN113223113A (en) Lane line processing method and device, electronic equipment and cloud control platform
CN112580666A (en) Image feature extraction method, training method, device, electronic equipment and medium
CN113378712A (en) Training method of object detection model, image detection method and device thereof
CN113362420A (en) Road marking generation method, device, equipment and storage medium
CN114283398A (en) Method and device for processing lane line and electronic equipment
CN113033346A (en) Text detection method and device and electronic equipment
CN112150380B (en) Method, apparatus, electronic device, and readable storage medium for correcting image
CN114708580A (en) Text recognition method, model training method, device, apparatus, storage medium, and program
CN114882313A (en) Method and device for generating image annotation information, electronic equipment and storage medium
CN114166238A (en) Lane line identification method and device and electronic equipment
CN114119990A (en) Method, apparatus and computer program product for image feature point matching
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN113362218B (en) Data processing method and device, electronic equipment and storage medium
CN113408592B (en) Feature point matching method, device, electronic equipment and computer readable storage medium
CN114359513A (en) Method and device for determining position of obstacle and electronic equipment
US20220383626A1 (en) Image processing method, model training method, relevant devices and electronic device
CN113033659A (en) Method and device for training image recognition model and image recognition
CN115880684A (en) Training of three-dimensional object detection model and three-dimensional object detection method and device
CN114972469A (en) Method and device for generating depth map, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20220405