CN115620258A - Lane line detection method, device, storage medium and vehicle - Google Patents

Info

Publication number: CN115620258A
Authority: CN (China)
Prior art keywords: lane line, lane, lines, category, alternative
Legal status: Pending (the legal status is an assumption and is not a legal conclusion)
Application number: CN202211464330.7A
Other languages: Chinese (zh)
Inventor: 王加华
Current assignee: Beijing Xiaomi Pinecone Electronic Co Ltd; Xiaomi Automobile Technology Co Ltd
Original assignee: Beijing Xiaomi Pinecone Electronic Co Ltd; Xiaomi Automobile Technology Co Ltd
Application filed by Beijing Xiaomi Pinecone Electronic Co Ltd and Xiaomi Automobile Technology Co Ltd
Priority application: CN202211464330.7A

Classifications

    • G06V 20/588: Recognition of the road, e.g. of lane markings; recognition of the vehicle driving pattern in relation to the road (G: Physics; G06: Computing, calculating or counting; G06V: Image or video recognition or understanding; G06V 20/00: Scenes, scene-specific elements; G06V 20/50: Context or environment of the image; G06V 20/56: exterior to a vehicle, using sensors mounted on the vehicle)
    • G06N 3/02, G06N 3/08: Neural networks and their learning methods (G06N: Computing arrangements based on specific computational models; G06N 3/00: based on biological models)
    • G06V 10/764: Recognition or understanding using pattern recognition or machine learning; classification, e.g. of video objects
    • G06V 10/774: Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting (G06V 10/77: Processing image or video features in feature spaces, e.g. PCA, ICA or SOM)
    • G06V 10/82: Recognition or understanding using neural networks

Abstract

The disclosure relates to a lane line detection method, a lane line detection device, a storage medium and a vehicle. The method includes the following steps: acquiring a lane line recognition result, wherein the lane line recognition result includes a plurality of candidate lane lines; dividing the plurality of candidate lane lines into a plurality of groups according to the positions of the candidate lane lines relative to the vehicle; for each group, acquiring the category prediction results of all candidate lane lines in the group, and taking the candidate lane line with the highest probability value as a target lane line according to the category prediction results, wherein the category prediction results include the predicted category of each candidate lane line in the group and the probability value of it belonging to that category; and acquiring the target lane line of each group, and saving all the target lane lines as the detection result. By suppressing the candidate lane lines repeatedly through relative position and category information, the method improves the accuracy of lane line detection while consuming less time and less computing power.

Description

Lane line detection method, device, storage medium and vehicle
Technical Field
The present disclosure relates to the field of target detection technologies, and in particular, to a lane line detection method, apparatus, storage medium, and vehicle.
Background
In the automatic driving related art, object detection algorithms are used to detect vehicles, pedestrians, signs, and the like on the road.
When identifying lane lines, an algorithm such as an instance-segmentation-based lane line detection algorithm is used on the environment perceived by the vehicle. Such an algorithm can detect all lane lines in an image, but a non-maximum suppression algorithm is required to avoid excessive false detections and duplicate detections. In instance segmentation and object detection for natural scenes, the non-maximum suppression algorithm resolves duplicate detections between instances by computing the intersection-over-union (IoU) between different targets.
Although the non-maximum suppression algorithm can resolve duplicate detections through the intersection-over-union between different lane lines, computing the intersection-over-union between lane lines of the same category is time-consuming and occupies considerable computing power.
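For context, the following is a minimal sketch of the conventional IoU-based non-maximum suppression described above (an illustrative reconstruction under an assumed boolean-mask representation and an assumed 0.5 threshold, not the method of the present disclosure):

```python
from typing import List
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection-over-union of two boolean lane line masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter) / float(union) if union > 0 else 0.0

def nms(masks: List[np.ndarray], scores: List[float],
        iou_threshold: float = 0.5) -> List[int]:
    """Visit detections in descending score order, dropping any detection
    whose IoU with an already-kept detection exceeds the threshold.
    The pairwise IoU computation makes this O(n^2) in the detections."""
    order = sorted(range(len(masks)), key=lambda i: scores[i], reverse=True)
    kept: List[int] = []
    for i in order:
        if all(iou(masks[i], masks[j]) <= iou_threshold for j in kept):
            kept.append(i)
    return kept
```

It is this pairwise intersection-over-union loop that the grouping-and-category scheme of the present disclosure avoids.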
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a lane line detection method, apparatus, storage medium, and vehicle.
According to a first aspect of the embodiments of the present disclosure, there is provided a lane line detection method, including: acquiring a lane line recognition result, wherein the lane line recognition result includes a plurality of candidate lane lines; dividing the plurality of candidate lane lines into a plurality of groups according to the positions of the candidate lane lines relative to the vehicle; for each group, acquiring the category prediction results of all candidate lane lines in the group, and taking the candidate lane line with the highest probability value as a target lane line according to the category prediction results, wherein the category prediction results include the predicted category of each candidate lane line in the group and the probability value of it belonging to that category; and acquiring the target lane line of each group, and saving all the target lane lines as the detection result.
Optionally, the acquiring the category prediction results of all the candidate lane lines in the group and taking the candidate lane line with the highest probability value as the target lane line according to the category prediction results includes: dividing all the candidate lane lines in the group into a plurality of categories according to the category prediction results; and acquiring the candidate lane line with the highest probability value under each category, and taking the acquired candidate lane lines as target lane lines.
Optionally, the dividing all the candidate lane lines in the group into a plurality of categories according to the category prediction results further includes: when at least two conflicting categories exist among the plurality of categories, acquiring the candidate lane line with the highest probability value under each of the at least two categories and taking it as a lane line to be selected; and screening the lane lines to be selected of the at least two categories according to the distance between the lane lines to be selected to obtain the target lane line.
Optionally, the screening the lane lines to be selected of the at least two categories according to the distance between the lane lines to be selected to obtain the target lane line includes: acquiring the center of each lane line to be selected, and calculating the distance between the centers of every two lane lines to be selected; and when the distance between the centers of every two lane lines to be selected is smaller than a threshold, taking the lane line to be selected with the highest probability value among the lane lines to be selected as the target lane line.
Optionally, the method further includes: when the distance between the centers of any two lane lines to be selected is greater than the threshold, taking both of the two lane lines to be selected as target lane lines.
Optionally, the method further includes: acquiring a road image; and inputting the road image into a lane line detection model to acquire the lane line recognition result, wherein the lane line detection model is trained in advance and is configured to recognize the lane lines in the image together with their shape, category, and position relative to the vehicle.
According to a second aspect of the embodiments of the present disclosure, there is provided a lane line detection apparatus including: an acquisition module configured to acquire a lane line recognition result, the lane line recognition result including a plurality of candidate lane lines; a grouping module configured to divide the plurality of candidate lane lines into a plurality of groups according to the positions of the candidate lane lines relative to the vehicle; a classification module configured to, for each group, acquire the category prediction results of all the candidate lane lines in the group and take the candidate lane line with the highest probability value as a target lane line according to the category prediction results, wherein the category prediction results include the predicted category of each candidate lane line in the group and the probability value of it belonging to that category; and a detection result generation module configured to acquire the target lane line of each group and save all the target lane lines as the detection result.
According to a third aspect of the embodiments of the present disclosure, there is provided a lane line detection apparatus including: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to implement the steps of the lane line detection method provided by the first aspect of the present disclosure.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the lane line detection method provided by the first aspect of the present disclosure.
According to a fifth aspect of the embodiments of the present disclosure, there is provided a vehicle configured to implement the steps of the lane line detection method provided by the first aspect of the present disclosure.
The technical scheme provided by the embodiments of the present disclosure can have the following beneficial effects: the candidate lane lines are first grouped by their positions relative to the vehicle, and the lane lines within each group are then screened using their category information, so that the candidate lane lines in the detection result are suppressed multiple times through relative position and category information. This improves the precision of lane line detection and reduces false reports of lane lines; and, compared with the related art, the intersection-over-union between different lane lines does not need to be computed, so less time and less computing power are consumed.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a lane line detection method in accordance with an exemplary embodiment;
FIG. 2 is a block diagram illustrating a lane marking detection apparatus according to an exemplary embodiment;
FIG. 3 is a block diagram illustrating another apparatus for implementing the lane marking detection method described above, according to an exemplary embodiment;
FIG. 4 is a functional block diagram schematic of a vehicle shown in accordance with an exemplary embodiment;
fig. 5 is a block diagram illustrating still another apparatus for implementing the lane line detection method according to an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Fig. 1 is a flowchart illustrating a lane line detection method according to an exemplary embodiment. The lane line detection method may be used in a terminal and, as shown in fig. 1, includes the following steps.
In step S110, a lane line recognition result is obtained, and the lane line recognition result includes a plurality of candidate lane lines.
A recognition result for the lane lines around the vehicle is acquired; the recognition result contains a plurality of recognized lane lines, and these recognized lane lines are the candidate lane lines. Several candidate lane lines in the recognition result may be different recognitions of the same lane line; for example, for lane line 1, candidate lane line 1 and candidate lane line 2 in the recognition result may both be recognitions of that line. The candidate lane lines may also be recognitions of a plurality of different lane lines; for example, when the vehicle is turning, the lane lines involved may include the lane line 2 of the lane currently driven in, the lane line 3 of the lane about to be entered, and the lane line 4 of an adjacent lane, and the recognition results for these three lane lines may include: candidate lane line 1, candidate lane line 2, candidate lane line 3, candidate lane line 4, candidate lane line 5, and candidate lane line 6.
Optionally, the method further includes: acquiring a road image; and inputting the road image into a lane line detection model to acquire the lane line recognition result, wherein the lane line detection model is trained in advance and is configured to recognize the lane lines in the image together with their shape, category, and position relative to the vehicle.
In one embodiment of the present application, the lane line recognition result is generated by a computer vision method: a road image is acquired through a camera module on the vehicle, and the acquired road image is input into a pre-trained lane line detection model to generate the recognition result.
The training process of the lane line detection model may include: training a base model with training data, where the training data include a plurality of examples; each example is an image containing lane lines, and all the lane lines appearing in the image are annotated. The annotation content includes: the shape of the lane line, the category of the lane line, and the position of the lane line relative to the vehicle. The shape of a lane line is defined by key points, with the line type being a solid or dashed line; the category of a lane line can be defined by combining function and semantics, for example, white solid line, double yellow line, and merge line; and the position of a lane line relative to the vehicle may be, for example: the left line and right line of the current lane, and the left side line and right side line, where the left side line refers to the lane line of the lane to the left of the current lane and the right side line refers to the lane line of the lane to the right of the current lane. The base model refers to a pre-trained model such as a CNN (Convolutional Neural Network), an R-CNN (Region-based Convolutional Neural Network), or Faster R-CNN.
After the base model is trained with the training data, the lane line detection model is obtained. The lane line detection model can detect the lane lines in an input image and output, for each lane line, the shape information formed by its key points, its category information, and its position information relative to the vehicle.
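Purely as an illustration, the model output described above could be carried by a structure along the following lines (a sketch; the field names and label values are assumptions, not part of the disclosure):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class CandidateLaneLine:
    keypoints: List[Tuple[float, float]]  # shape of the line as image key points
    category: str            # e.g. "white_solid", "double_yellow", "merge"
    category_prob: float     # probability of belonging to that category
    relative_position: str   # e.g. "left", "right", "left_side", "right_side"

# A lane line recognition result is then simply a list of candidates:
recognition_result: List[CandidateLaneLine] = []
```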
In step S120, the plurality of candidate lane lines are divided into a plurality of groups according to their positions relative to the vehicle.
For all the candidate lane lines in the recognition result, the position of each candidate lane line relative to the vehicle is acquired, and the candidate lane lines are grouped according to these relative positions. For example, the relative position may be the direction of the lane line relative to the vehicle, such as left line, right line, left side line, and right side line, and the candidate lane lines are accordingly divided into a left-line group, a right-line group, a left-side-line group, and a right-side-line group.
In an embodiment of the present application, each candidate lane line in the recognition result carries relative position information, where the relative position information is one of: left line, right line, left side line, and right side line; the candidate lane lines are divided into a left-line group, a right-line group, a left-side-line group, and a right-side-line group according to this relative position information.
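A minimal sketch of this grouping step, reusing the illustrative CandidateLaneLine structure sketched earlier:

```python
from collections import defaultdict
from typing import Dict, List

def group_by_relative_position(
    candidates: List[CandidateLaneLine],
) -> Dict[str, List[CandidateLaneLine]]:
    """Divide the candidates into left-line, right-line, left-side-line and
    right-side-line groups using the relative position carried by each one."""
    groups: Dict[str, List[CandidateLaneLine]] = defaultdict(list)
    for candidate in candidates:
        groups[candidate.relative_position].append(candidate)
    return dict(groups)
```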
In step S130, for each of the groups, a category prediction result of all the candidate lane lines in the group is obtained, and the candidate lane line with the highest probability value is used as a target lane line according to the category prediction result, where the category prediction result includes a prediction category of each of the candidate lane lines in the group and a probability value belonging to the prediction category.
For each candidate lane line group obtained, category prediction is performed on all the candidate lane lines in the group, and the category probabilities of the lane lines are retained during prediction. For example, three categories may be set: indication markings, prohibition markings, and warning markings, where indication markings indicate the roadway and driving direction, prohibition markings convey special traffic regulations such as compliance requirements and prohibitions, and warning markings prompt drivers and pedestrians to be aware of special conditions on the road. For the left-line group, the right-line group, the left-side-line group, and the right-side-line group, classification prediction is performed according to the features of the candidate lane lines in each group, and the probability value of each lane line belonging to its category is retained in the prediction result. For example, when a neural network is used to classify a candidate lane line in the left-line group, the network usually outputs the probability of the candidate belonging to each category, such as (indication marking, 0.7; prohibition marking, 0.2; warning marking, 0.1). The category with the highest probability value is taken as the classification prediction of the candidate lane line, and the candidate lane line retains the probability of that category.
After all the candidate lane lines in each group are classified and the category prediction results are obtained, the candidate lane line with the highest probability value in the group is taken as the target lane line; that is, the candidate lane line with the highest probability value is saved as the result of duplicate suppression.
In an embodiment of the present application, each candidate lane line in the recognition result carries category prediction information, which indicates the category the lane line belongs to and the probability value of it belonging to that category, such as (white solid line, 0.8).
According to the category prediction information carried by the candidate lane lines, the candidate lane line with the highest probability value in the group is retained as the target lane line.
Optionally, the acquiring the category prediction results of all the candidate lane lines in the group and taking the candidate lane line with the highest probability value as the target lane line according to the category prediction results includes: dividing all the candidate lane lines in the group into a plurality of categories according to the category prediction results; and acquiring the candidate lane line with the highest probability value under each category, and taking the acquired candidate lane lines as target lane lines.
For each candidate lane line group, all the candidate lane lines in the group are divided into categories according to the group's category prediction results, and the candidate lane line with the highest probability value under each category is found and saved; that is, the lane line with the highest probability value under each category in the group is taken as a target lane line.
In this embodiment of the application, the method thus performs a further subdivision and suppresses the candidate lane lines within each category, acquiring the candidate lane line of each category; this widens the acquisition range so that all possible lane lines are obtained.
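Continuing the same illustrative structure, a sketch of this per-category suppression within one group:

```python
from typing import Dict, List

def suppress_within_group(group: List[CandidateLaneLine]) -> List[CandidateLaneLine]:
    """Within one position group, keep only the highest-probability candidate
    under each predicted category; each survivor is a target lane line
    for that category."""
    best: Dict[str, CandidateLaneLine] = {}
    for candidate in group:
        kept = best.get(candidate.category)
        if kept is None or candidate.category_prob > kept.category_prob:
            best[candidate.category] = candidate
    return list(best.values())
```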
In an embodiment of the present application, the dividing all the candidate lane lines in the group into a plurality of categories according to the category prediction results further includes: when at least two conflicting categories exist among the plurality of categories, acquiring the candidate lane line with the highest probability value under each of the at least two categories and taking it as a lane line to be selected; and screening the lane lines to be selected of the at least two categories according to the distance between the lane lines to be selected to obtain the target lane line.
After all the candidate lane lines in the group are classified, a classification result is obtained, and it is determined whether conflicting categories exist in the classification result. The conflict conditions are preset; for example, the white solid line and the merge line may be set as conflicting categories. It can be understood that lane lines are laid out according to certain rules, such as intersection rules, and under intersection rules a solid line and a merge line usually cannot exist at the same time. When conflicting categories exist in the classification result, the lane line with the highest probability value under each of the conflicting categories is acquired, and the acquired lane lines are taken as the lane lines to be selected. For example, the lane line with the highest probability value under the white solid line category, say line 1, is acquired as lane line 1 to be selected, and the lane line with the highest probability value under the merge line category, say line 2, is acquired as lane line 2 to be selected.
The lane lines are then screened according to the distance between lane line 1 to be selected and lane line 2 to be selected, and the target lane lines of the conflicting categories are obtained.
In this embodiment of the application, the conflict relationships between lane line categories are used to suppress the lane lines further, achieving suppression over the widest possible range and eliminating erroneous lane lines.
Further, the screening the lane lines to be selected of the at least two categories according to the distance between the lane lines to be selected to obtain the target lane line includes: acquiring the center of each lane line to be selected, and calculating the distance between the centers of every two lane lines to be selected; and when the distance between the centers of every two lane lines to be selected is smaller than a threshold, taking the lane line to be selected with the highest probability value among the lane lines to be selected as the target lane line.
Further, when the distance between the centers of any two lane lines to be selected is greater than the threshold, both of the two lane lines to be selected are taken as target lane lines.
In an embodiment of the present application, the screening of the lane lines to be selected may proceed as follows.
The center of each lane line to be selected is acquired. The center may be specified according to the specific situation; for example, in a recognition result obtained by a computer vision method, each lane line to be selected may be output in the form of an anchor box, and the anchor point of the anchor box may then be used as the center.
The distance between the centers of the lane lines to be selected is calculated. When the distance between the centers is smaller than a threshold, duplicate detection may exist, and the lane line to be selected with the highest probability value is taken as the target lane line for the conflicting categories.
When the distance between the centers of the lane lines to be selected is greater than the threshold, all the lane lines to be selected are taken as target lane lines of the conflicting categories.
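A sketch of this conflict screening, assuming the anchor point of each line's anchor box serves as its center (both the threshold value and the reading that any sufficiently close pair triggers suppression are assumptions of the sketch):

```python
import math
from itertools import combinations
from typing import List, Tuple

def screen_conflicting(
    shortlisted: List[CandidateLaneLine],
    centers: List[Tuple[float, float]],
    dist_threshold: float = 20.0,  # assumed threshold, in pixels
) -> List[CandidateLaneLine]:
    """Given the highest-probability line under each conflicting category and
    its center, keep only the most probable line when two centers lie closer
    than the threshold (likely duplicate detection); otherwise keep all of
    the shortlisted lines as target lane lines."""
    for center_a, center_b in combinations(centers, 2):
        if math.dist(center_a, center_b) < dist_threshold:
            return [max(shortlisted, key=lambda line: line.category_prob)]
    return shortlisted
```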
In step S140, the target lane line of each group is acquired, and all the target lane lines are saved as the detection result.
Step S130 above is performed for each lane line group to obtain the target lane line of each group, and the target lane lines of all the lane line groups are taken as the lane line detection result.
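Tying the steps together under the same illustrative assumptions, the whole suppression pipeline of steps S110 to S140 reduces to a few lines:

```python
from typing import List

def detect_lane_lines(candidates: List[CandidateLaneLine]) -> List[CandidateLaneLine]:
    """Group candidates by position relative to the vehicle (S120), suppress
    within each group by category (S130), and collect every surviving target
    lane line as the detection result (S140)."""
    targets: List[CandidateLaneLine] = []
    for group in group_by_relative_position(candidates).values():
        targets.extend(suppress_within_group(group))
    return targets
```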
In the above lane line detection method, the candidate lane lines are first grouped by their positions relative to the vehicle, and the lane lines within each group are then screened using their category information, so that the candidate lane lines in the detection result are suppressed multiple times through relative position and category information. This improves the precision of lane line detection and reduces false reports of lane lines; and, compared with the related art, the intersection-over-union between different lane lines does not need to be computed, so less time and less computing power are consumed.
Fig. 2 is a block diagram illustrating a lane line detection apparatus according to an exemplary embodiment. Referring to fig. 2, the apparatus includes an obtaining module 210, a grouping module 220, a classification module 230, and a detection result generating module 240.
An obtaining module 210 configured to obtain a lane line recognition result, where the lane line recognition result includes a plurality of candidate lane lines;
a grouping module 220 configured to divide the plurality of candidate lane lines into a plurality of groups according to the positions of the candidate lane lines relative to the vehicle;
a classification module 230 configured to, for each of the groups, obtain a class prediction result of all the candidate lane lines in the group, and take the candidate lane line with the highest probability value as a target lane line according to the class prediction result, where the class prediction result includes a prediction class of each of the candidate lane lines in the group and a probability value belonging to the prediction class;
and a detection result generating module 240 configured to obtain the target lane line of each group, and store all the target lane lines as the detection result.
With regard to the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
The present disclosure also provides a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the lane line detection method provided by the present disclosure.
Fig. 3 is a block diagram illustrating another apparatus 300 for implementing the lane line detection method described above according to an example embodiment. For example, the apparatus 300 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 3, the apparatus 300 may include one or more of the following components: a first processing component 302, a first memory 304, a first power component 306, a multimedia component 308, an audio component 310, a first input/output interface 312, a sensor component 314, and a communication component 316.
The first processing component 302 generally controls overall operation of the device 300, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The first processing component 302 may include one or more first processors 320 to execute instructions to perform all or a portion of the steps of the method described above. Further, the first processing component 302 may include one or more modules that facilitate interaction between the first processing component 302 and other components. For example, the first processing component 302 may include a multimedia module to facilitate interaction between the multimedia component 308 and the first processing component 302.
The first memory 304 is configured to store various types of data to support operations at the apparatus 300. Examples of such data include instructions for any application or method operating on device 300, contact data, phonebook data, messages, pictures, videos, and so forth. The first memory 304 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The first power supply component 306 provides power to the various components of the device 300. The first power component 306 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the apparatus 300.
The multimedia component 308 includes a screen that provides an output interface between the device 300 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 308 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 300 is in an operation mode, such as a photographing mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 310 is configured to output and/or input audio signals. For example, audio component 310 includes a Microphone (MIC) configured to receive external audio signals when apparatus 300 is in an operating mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the first memory 304 or transmitted via the communication component 316. In some embodiments, audio component 310 also includes a speaker for outputting audio signals.
The first input/output interface 312 provides an interface between the first processing component 302 and a peripheral interface module, which may be a keyboard, click wheel, button, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 314 includes one or more sensors for providing various aspects of status assessment for the device 300. For example, sensor assembly 314 may detect the open/closed status of device 300, the relative positioning of components, such as a display and keypad of device 300, the change in position of device 300 or a component of device 300, the presence or absence of user contact with device 300, the orientation or acceleration/deceleration of device 300, and the change in temperature of device 300. Sensor assembly 314 may include a proximity sensor configured to detect the presence of a nearby object in the absence of any physical contact. The sensor assembly 314 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 314 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 316 is configured to facilitate wired or wireless communication between the apparatus 300 and other devices. The apparatus 300 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 316 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 316 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra-Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 300 may be implemented by one or more Application Specific Integrated Circuits (ASICs), digital Signal Processors (DSPs), digital Signal Processing Devices (DSPDs), programmable Logic Devices (PLDs), field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors, or other electronic components for performing the lane line detection method described above.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, for example, including the first memory 304 storing instructions, executable by the first processor 320 of the apparatus 300 to perform the lane line detection method described above is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
The apparatus may be part of a stand-alone electronic device. For example, in an embodiment, the apparatus may be an Integrated Circuit (IC) or a chip, where the IC may be a single IC or a set of multiple ICs; the chip may include, but is not limited to, the following categories: a GPU (Graphics Processing Unit), a CPU (Central Processing Unit), an FPGA (Field Programmable Gate Array), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), an SoC (System on Chip), and the like. The integrated circuit or chip may be configured to execute executable instructions (or code) to implement the lane line detection method. The executable instructions may be stored in the integrated circuit or chip, or may be retrieved from another device or apparatus; for example, the integrated circuit or chip may include a processor, a memory, and an interface for communicating with other devices. The executable instructions may be stored in the memory and, when executed by the processor, implement the lane line detection method described above; alternatively, the integrated circuit or chip may receive the executable instructions through the interface and transmit them to the processor for execution, so as to implement the lane line detection method.
Referring to fig. 4, fig. 4 is a functional block diagram of a vehicle 400 according to an exemplary embodiment. The vehicle 400 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 400 may acquire environmental information around it through the perception system 420 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement fully automatic driving, or present the analysis results to the user to implement partially automatic driving.
The vehicle 400 may include various subsystems such as an infotainment system 410, a perception system 420, a decision control system 430, a drive system 440, and a computing platform 450. Alternatively, vehicle 400 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 400 may be interconnected by wire or wirelessly.
In some embodiments, infotainment system 410 may include a communication system 411, an entertainment system 412, and a navigation system 413.
The communication system 411 may comprise a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication, such as CDMA, EV-DO, or GSM/GPRS; 4G cellular communication, such as LTE; or 5G cellular communication. The wireless communication system may communicate with a Wireless Local Area Network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols, such as various vehicular communication systems, may also be used; for example, the wireless communication system may include one or more Dedicated Short Range Communications (DSRC) devices, which may include public and/or private data communication between vehicles and/or roadside stations.
The entertainment system 412 may include a display device, a microphone, and a speaker. Based on the entertainment system, a user may listen to broadcasts and play music in the car; alternatively, a mobile phone may communicate with the vehicle and mirror its screen onto the display device. The display device may be touch-controlled, and a user may operate it by touching the screen.
In some cases, the user's voice signal may be captured by a microphone and certain controls of the vehicle 400 may be implemented by the user, such as adjusting the temperature in the vehicle, etc., depending on the analysis of the user's voice signal. In other cases, music may be played to the user through a stereo.
The navigation system 413 may include a map service provided by a map provider to provide navigation of the route traveled by the vehicle 400, and the navigation system 413 may be used in conjunction with the global positioning system 421 and the inertial measurement unit 422 of the vehicle. The map service provided by the map supplier can be a two-dimensional map or a high-precision map.
The perception system 420 may include several types of sensors that sense information about the environment surrounding the vehicle 400. For example, the perception system 420 may include a global positioning system 421 (which may be a GPS system, a Compass/BeiDou system, or another positioning system), an Inertial Measurement Unit (IMU) 422, a lidar 423, a millimeter-wave radar 424, an ultrasonic radar 425, and a camera 426. The perception system 420 may also include sensors that monitor the internal systems of the vehicle 400 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 400.
Global positioning system 421 is used to estimate the geographic location of vehicle 400.
The inertial measurement unit 422 is used to sense a pose change of the vehicle 400 based on the inertial acceleration. In some embodiments, the inertial measurement unit 422 may be a combination of an accelerometer and a gyroscope.
Lidar 423 utilizes laser light to sense objects in the environment in which vehicle 400 is located. In some embodiments, lidar 423 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
Millimeter-wave radar 424 utilizes radio signals to sense objects within the surrounding environment of vehicle 400. In some embodiments, in addition to sensing objects, the millimeter-wave radar 424 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 425 may sense objects around the vehicle 400 using ultrasonic signals.
The camera 426 is used to capture image information of the surroundings of the vehicle 400. The camera 426 may include a monocular camera, a binocular camera, a structured light camera, a panoramic camera, etc., and the image information acquired by the camera 426 may include still images and may also include video stream information.
The decision control system 430 includes a computing system 431 that makes analytical decisions based on information obtained by the perception system 420; the decision control system 430 further includes a vehicle control unit 432 that controls the powertrain of the vehicle 400, as well as a steering system 433, a throttle 434, and a braking system 435 for controlling the vehicle 400.
The computing system 431 may be operable to process and analyze the various information acquired by the perception system 420 in order to identify targets and/or features in the environment surrounding the vehicle 400. The targets may include pedestrians or animals, and the features may include traffic signals, road boundaries, and obstacles. The computing system 431 may use object recognition algorithms, Structure From Motion (SFM) algorithms, video tracking, and the like. In some embodiments, the computing system 431 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 431 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle control unit 432 may be used to perform coordinated control on the power battery and the engine 441 of the vehicle to improve the power performance of the vehicle 400.
The steering system 433 is operable to adjust the heading of the vehicle 400. For example, in one embodiment, it may be a steering wheel system.
The throttle 434 is used to control the operating speed of the engine 441 and thus the speed of the vehicle 400.
The braking system 435 is used to control the deceleration of the vehicle 400. The braking system 435 may use friction to slow the wheels 444. In some embodiments, the braking system 435 may convert the kinetic energy of the wheels 444 into electrical current. The braking system 435 may take other forms to slow the rotational speed of the wheels 444 to control the speed of the vehicle 400.
The drive system 440 may include components that provide powered motion to the vehicle 400. In one embodiment, drive system 440 may include an engine 441, an energy source 442, a transmission 443, and wheels 444. The engine 441 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine consisting of a gasoline engine and an electric motor, a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 441 converts the energy source 442 into mechanical energy.
Examples of energy source 442 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 442 may also provide energy to other systems of the vehicle 400.
The transmission system 443 may transmit mechanical power from the engine 441 to the wheels 444. The driveline 443 may include a gearbox, a differential, and a driveshaft. In one embodiment, the transmission system 443 may also include other devices, such as clutches. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 444.
Some or all of the functions of the vehicle 400 are controlled by the computing platform 450. The computing platform 450 may include at least one second processor 451, and the second processor 451 may execute instructions 453 stored in a non-transitory computer readable medium, such as a second memory 452. In some embodiments, the computing platform 450 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 400 in a distributed manner.
The second processor 451 may be any conventional processor, such as a commercially available CPU. Alternatively, the second processor 451 may also include, for example, a Graphics Processor (GPU), a Field Programmable Gate Array (FPGA), a System On Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although fig. 4 functionally illustrates processors, memories, and other elements of the computer in the same block, one of ordinary skill in the art will appreciate that the processors, computers, or memories may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors or computers or memories which may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some of the components, such as the steering and deceleration components, may each have their own processor that performs only computations related to the component-specific functions.
In the disclosed embodiment, the second processor 451 may perform the lane line detection method described above.
In various aspects described herein, the second processor 451 can be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle and others are executed by a remote processor, including taking the steps necessary to perform a single maneuver.
In some embodiments, the second memory 452 may contain instructions 453 (e.g., program logic), the instructions 453 being executable by the second processor 451 to perform various functions of the vehicle 400. The second memory 452 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 410, the perception system 420, the decision control system 430, the drive system 440.
In addition to instructions 453, the second memory 452 may also store data such as road maps, route information, the position, direction, speed of the vehicle, and other such vehicle data, among other information. Such information may be used by the vehicle 400 and the computing platform 450 during operation of the vehicle 400 in autonomous, semi-autonomous, and/or manual modes.
Computing platform 450 may control the functions of vehicle 400 based on inputs received from various subsystems, such as drive system 440, perception system 420, and decision control system 430. For example, computing platform 450 may utilize input from decision control system 430 in order to control steering system 433 to avoid obstacles detected by perception system 420. In some embodiments, the computing platform 450 is operable to provide control over many aspects of the vehicle 400 and its subsystems.
Optionally, one or more of these components described above may be mounted or associated separately from the vehicle 400. For example, the second memory 452 may be partially or completely separate from the vehicle 400. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example, in an actual application, components in the above modules may be added or deleted according to an actual need, and fig. 4 should not be construed as limiting the embodiment of the present disclosure.
An autonomous automobile traveling on a roadway, such as the vehicle 400 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The objects may be other vehicles, traffic control devices, or other types of objects. In some examples, each identified object may be considered independently, and the respective characteristics of the object, such as its current speed, acceleration, and separation from the vehicle, may be used to determine the speed to which the autonomous vehicle is to be adjusted.
Optionally, the vehicle 400 or a sensory and computing device associated with it (e.g., the computing system 431 or the computing platform 450) may predict the behavior of an identified object based on the characteristics of the object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, since the behaviors of the identified objects depend on one another, all the identified objects can also be considered together to predict the behavior of a single identified object. The vehicle 400 is able to adjust its speed based on the predicted behavior of the identified objects. In other words, the autonomous vehicle is able to determine what steady state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the objects. In this process, other factors may also be considered to determine the speed of the vehicle 400, such as the lateral position of the vehicle 400 in the road being traveled, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 400 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
The vehicle 400 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, a train, etc., and the disclosed embodiment is not particularly limited.
In another exemplary embodiment, a computer program product is also provided, which contains a computer program executable by a programmable apparatus, the computer program having code portions for performing the lane line detection method described above when executed by the programmable apparatus.
Fig. 5 is a block diagram illustrating yet another apparatus 500 for implementing the lane line detection method described above according to an exemplary embodiment. For example, the apparatus 500 may be provided as a server. Referring to fig. 5, the apparatus 500 includes a second processing component 522, which further includes one or more processors, and memory resources represented by a third memory 532 for storing instructions, e.g., application programs, executable by the second processing component 522. The application programs stored in the third memory 532 may include one or more modules, each corresponding to a set of instructions. Further, the second processing component 522 is configured to execute the instructions to perform the lane line detection method described above.
The apparatus 500 may also include a second power component 526 configured to perform power management of the apparatus 500, a wired or wireless network interface 550 configured to connect the apparatus 500 to a network, and a second input/output interface 558. The apparatus 500 may operate based on an operating system stored in the third memory 532, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™, or the like.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A lane line detection method, the method comprising:
acquiring a lane line identification result, wherein the lane line identification result comprises a plurality of candidate lane lines;
dividing the plurality of candidate lane lines into a plurality of groups according to the relative positions of the candidate lane lines with respect to the vehicle;
for each group, acquiring category prediction results of all the candidate lane lines in the group, and taking the candidate lane line with the highest probability value as a target lane line according to the category prediction results, wherein the category prediction results comprise the prediction category of each candidate lane line in the group and the probability value of the candidate lane line belonging to that prediction category;
and acquiring the target lane line of each group, and storing all the target lane lines as the detection result.
2. The method of claim 1, wherein acquiring the category prediction results of all the candidate lane lines in the group and taking the candidate lane line with the highest probability value as the target lane line according to the category prediction results comprises:
dividing all the candidate lane lines in the group into a plurality of categories according to the category prediction results;
and acquiring the candidate lane line with the highest probability value under each category, and taking each acquired candidate lane line as a target lane line.
3. The method according to claim 2, wherein dividing all the candidate lane lines in the group into a plurality of categories according to the category prediction results further comprises:
when at least two conflicting categories exist among the plurality of categories, acquiring the candidate lane line with the highest probability value under each of the at least two categories and taking it as a lane line to be selected;
and screening the lane lines to be selected of the at least two categories according to the distance between the lane lines to be selected, to obtain the target lane line.
4. The method according to claim 3, wherein screening the lane lines to be selected of the at least two categories according to the distance between the lane lines to be selected to obtain the target lane line comprises:
acquiring the center of each lane line to be selected, and calculating the distance between the centers of every two lane lines to be selected;
and when the distance between the centers of two lane lines to be selected is smaller than a threshold value, taking the lane line to be selected with the highest probability value among them as the target lane line.
5. The method of claim 4, further comprising:
and when the distance between the centers of any two lane lines to be selected is greater than the threshold value, taking both of the two lane lines to be selected as target lane lines.
6. The method of claim 1, further comprising:
acquiring a road image;
inputting the road image into a lane line detection model to acquire the lane line identification result, wherein the lane line detection model is trained in advance and is configured to recognize a lane line in the image, the shape and category of the lane line, and the position of the lane line relative to the vehicle.
7. A lane line detection apparatus, comprising:
an acquisition module configured to acquire a lane line identification result, wherein the lane line identification result comprises a plurality of candidate lane lines;
a grouping module configured to divide the plurality of candidate lane lines into a plurality of groups according to the relative positions of the candidate lane lines with respect to the vehicle;
a classification module configured to, for each group, acquire the category prediction results of all the candidate lane lines in the group, and take the candidate lane line with the highest probability value as a target lane line according to the category prediction results, wherein the category prediction results comprise the prediction category of each candidate lane line in the group and the probability value of the candidate lane line belonging to that prediction category;
and a detection result generation module configured to acquire the target lane line of each group and store all the target lane lines as the detection result.
8. A lane line detection apparatus, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
implementing the steps of the method of any one of claims 1 to 6.
9. A computer-readable storage medium, on which computer program instructions are stored, which program instructions, when executed by a processor, carry out the steps of the method according to any one of claims 1 to 6.
10. A vehicle configured to carry out the steps of the method according to any one of claims 1 to 6.
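To make the claimed post-processing concrete, the following minimal Python sketch renders the steps of claims 1 to 5. It is illustrative only: the grouping key, the category names, the conflicting-category pairs, and the center-distance threshold are all assumptions not fixed by the claims.

```python
# Minimal sketch of the post-processing in claims 1-5; all names, category
# labels, and thresholds are hypothetical assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class CandidateLaneLine:
    points: list        # polyline points (x, y) in the image
    side: str           # relative position to the vehicle, e.g. "left"/"right"
    category: str       # predicted category, e.g. "solid_white"
    probability: float  # probability of belonging to that category

CONFLICTING = {("solid_white", "dashed_white")}  # assumed conflicting pairs
CENTER_DISTANCE_THRESHOLD = 30.0                 # pixels, assumed

def center(line: CandidateLaneLine) -> tuple:
    xs = [p[0] for p in line.points]
    ys = [p[1] for p in line.points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def detect(candidates: list) -> list:
    # Claim 1: divide the candidates into groups by their relative
    # position with respect to the vehicle.
    groups = defaultdict(list)
    for line in candidates:
        groups[line.side].append(line)

    targets = []
    for group in groups.values():
        # Claim 2: divide the group by predicted category and keep the
        # highest-probability candidate of each category.
        by_category = defaultdict(list)
        for line in group:
            by_category[line.category].append(line)
        kept = {cat: max(lines, key=lambda l: l.probability)
                for cat, lines in by_category.items()}

        # Claims 3-5: resolve conflicting categories by center distance.
        for cat_a, cat_b in CONFLICTING:
            if cat_a in kept and cat_b in kept:
                a, b = kept[cat_a], kept[cat_b]
                (ax, ay), (bx, by) = center(a), center(b)
                dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
                if dist < CENTER_DISTANCE_THRESHOLD:
                    # Claim 4: keep only the higher-probability line.
                    loser = cat_a if a.probability < b.probability else cat_b
                    del kept[loser]
                # Claim 5: if the distance exceeds the threshold,
                # both lines remain target lane lines.
        targets.extend(kept.values())
    return targets
```

In use, a caller would obtain `candidates` from the pre-trained lane line detection model of claim 6 and pass them to `detect`, which returns the target lane lines to be stored as the detection result of claim 1.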
CN202211464330.7A 2022-11-17 2022-11-17 Lane line detection method, device, storage medium and vehicle Pending CN115620258A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211464330.7A CN115620258A (en) 2022-11-17 2022-11-17 Lane line detection method, device, storage medium and vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211464330.7A CN115620258A (en) 2022-11-17 2022-11-17 Lane line detection method, device, storage medium and vehicle

Publications (1)

Publication Number Publication Date
CN115620258A 2023-01-17

Family

ID=84879223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211464330.7A Pending CN115620258A (en) 2022-11-17 2022-11-17 Lane line detection method, device, storage medium and vehicle

Country Status (1)

Country Link
CN (1) CN115620258A (en)

Similar Documents

Publication Publication Date Title
CN114882464B (en) Multi-task model training method, multi-task processing method, device and vehicle
CN115222941A (en) Target detection method and device, vehicle, storage medium, chip and electronic equipment
CN114935334A (en) Method and device for constructing topological relation of lanes, vehicle, medium and chip
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN114842455B (en) Obstacle detection method, device, equipment, medium, chip and vehicle
CN115170630B (en) Map generation method, map generation device, electronic equipment, vehicle and storage medium
CN115164910B (en) Travel route generation method, travel route generation device, vehicle, storage medium, and chip
CN114756700B (en) Scene library establishing method and device, vehicle, storage medium and chip
CN114863717B (en) Parking stall recommendation method and device, storage medium and vehicle
CN114880408A (en) Scene construction method, device, medium and chip
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN115620258A (en) Lane line detection method, device, storage medium and vehicle
CN114771514B (en) Vehicle running control method, device, equipment, medium, chip and vehicle
CN115535004B (en) Distance generation method, device, storage medium and vehicle
CN115221260B (en) Data processing method, device, vehicle and storage medium
CN114572219B (en) Automatic overtaking method and device, vehicle, storage medium and chip
CN114842454B (en) Obstacle detection method, device, equipment, storage medium, chip and vehicle
CN115042813B (en) Vehicle control method and device, storage medium and vehicle
CN115115822B (en) Vehicle-end image processing method and device, vehicle, storage medium and chip
CN114789723B (en) Vehicle running control method and device, vehicle, storage medium and chip
CN114802217B (en) Method and device for determining parking mode, storage medium and vehicle
CN114780226B (en) Resource scheduling method and device, computer readable storage medium and vehicle
US20230415570A1 (en) Vehicle control method and vehicle, non-transitory storage medium and chip
CN115205804A (en) Image processing method, image processing apparatus, vehicle, medium, and chip
CN114802258A (en) Vehicle control method, device, storage medium and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination