CN115042814A - Traffic light state identification method and device, vehicle and storage medium - Google Patents


Info

Publication number
CN115042814A
CN115042814A
Authority
CN
China
Prior art keywords
traffic light
state
vehicle
information
target
Prior art date
Legal status
Pending
Application number
CN202210726827.5A
Other languages
Chinese (zh)
Inventor
张琼
Current Assignee
Xiaomi Automobile Technology Co Ltd
Original Assignee
Xiaomi Automobile Technology Co Ltd
Priority date
Application filed by Xiaomi Automobile Technology Co Ltd filed Critical Xiaomi Automobile Technology Co Ltd
Priority to CN202210726827.5A
Publication of CN115042814A

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit, e.g. by using mathematical models, related to ambient conditions
    • B60W40/04 Traffic conditions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001 Details of the control system
    • B60W2050/0002 Automatic control, details of type of controller or control system architecture
    • B60W2050/0004 In digital systems, e.g. discrete-time systems involving sampling
    • B60W2050/0005 Processor details or data handling, e.g. memory registers or chip architecture

Abstract

The disclosure relates to the field of automatic driving, and in particular to a traffic light state identification method, a traffic light state identification device, a vehicle, and a storage medium. The method collects multiple frames of traffic light state images within a preset time length through an image acquisition device on the vehicle; determines state time sequence information of the front traffic light within the preset time length according to the multiple frames of traffic light state images; and determines the current target traffic light state according to the state time sequence information, the target traffic light image at the current moment, and the driving state information of the vehicle in front. This can effectively improve the accuracy of the traffic light state identification result and reduce the probability that an unmanned vehicle drives in violation of traffic regulations, thereby effectively improving the driving safety of the unmanned vehicle and helping to improve the experience of its users.

Description

Traffic light state identification method and device, vehicle and storage medium
Technical Field
The present disclosure relates to the field of automatic driving, and in particular, to a method and an apparatus for identifying a traffic light status, a vehicle, and a storage medium.
Background
At present, perception algorithms used in unmanned driving can accurately identify the traffic light state in each individual frame of an image. However, statistics gathered on identification results over a period of time show that the overall accuracy of traffic light state identification is still low; misidentification of the traffic light state easily leads to driving that violates traffic regulations, which is detrimental to the vehicle user's experience.
Disclosure of Invention
To overcome the problems in the related art, the present disclosure provides a traffic light state recognition method, apparatus, vehicle, and storage medium.
According to a first aspect of the embodiments of the present disclosure, there is provided a traffic light status identification method applied to a vehicle, the method including:
under the condition that the distance between the vehicle and the position of the front traffic light is determined to be smaller than or equal to a preset distance threshold, collecting a plurality of frames of traffic light state images within a preset time length through an image collecting device on the vehicle;
determining state time sequence information of the front traffic light within the preset time length according to the plurality of frames of traffic light state images, wherein the state time sequence information comprises the traffic light state corresponding to each time point within the preset time length, and the traffic light state comprises at least one of a red light bright-dark state, a yellow light bright-dark state and a green light bright-dark state;
acquiring a target traffic light image at the current moment and driving state information of a vehicle in front of the vehicle;
and determining the current state of the target traffic light according to the state time sequence information, the target traffic light image and the driving state information.
Optionally, the determining, according to the multiple frames of traffic light state images, the state timing information of the front traffic light within the preset time duration includes:
identifying the traffic light state in each frame of traffic light state image to obtain the traffic light state corresponding to each image acquisition time point in the preset time length;
and determining the traffic light state corresponding to each time point in the preset time length according to the traffic light state corresponding to each image acquisition time point in the preset time length so as to obtain the state time sequence information.
Optionally, the determining the current state of the target traffic light according to the state timing information, the target traffic light image and the driving state information includes:
and inputting the state time sequence information, the target traffic light image and the driving state information into a first preset identification model to acquire the target traffic light state output by the first preset identification model.
Optionally, the first preset recognition model is obtained by pre-training in the following manner:
acquiring multiple groups of historical sample data corresponding to multiple traffic light intersections, wherein each group of historical sample data comprises state time sequence information, historical traffic light images and historical driving state information of one traffic light intersection within historical preset time;
and performing model training on a first initial model by taking the multiple groups of historical sample data as first training data to obtain the first preset recognition model, wherein the first training data comprises historical traffic light state marking data within a specified time length after the preset time length.
Optionally, the determining the current state of the target traffic light according to the state timing information, the target traffic light image and the driving state information includes:
inputting the state time sequence information into a second preset identification model to obtain predicted state time sequence information in a target time period after the preset time length output by the second preset identification model;
determining a first traffic light signal state corresponding to the current time point according to the predicted state time sequence information;
acquiring a second traffic light signal state in the target traffic light image;
determining a third traffic light signal state according to the driving state information of the preceding vehicle;
determining the target traffic light state based on the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state.
Optionally, said determining the target traffic light state from the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state comprises:
obtaining target weights for the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state;
determining the target traffic light state from the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state by the target weight.
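The weighted determination above is not spelled out further in this section. As a non-authoritative sketch, one plausible reading is a weighted vote over the three candidate states; the weight values and function name below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: combine three candidate traffic light signal states
# by a weighted vote. The weights (0.3, 0.5, 0.2) are assumed for
# illustration only.

def fuse_states(first, second, third, weights=(0.3, 0.5, 0.2)):
    """Return the traffic light state with the highest total weight."""
    scores = {}
    for state, weight in zip((first, second, third), weights):
        scores[state] = scores.get(state, 0.0) + weight
    return max(scores, key=scores.get)

# If the timeline prediction and the current image both say red,
# the lead-vehicle heuristic alone cannot override them.
fused = fuse_states("red", "red", "green")
```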
Optionally, the second preset recognition model is obtained by pre-training in the following manner:
acquiring a plurality of time sequence state sample information corresponding to a plurality of traffic light intersections within historical time, wherein the time sequence state sample information comprises state marking data within a target time period;
and performing model training on a second initial model by taking the plurality of time sequence state sample information as second training data to obtain the second preset identification model.
According to a second aspect of the embodiments of the present disclosure, there is provided a traffic light state recognition apparatus applied to a vehicle, the apparatus including:
the first determination module is configured to acquire a plurality of frames of traffic light state images within a preset time length through an image acquisition device on the vehicle under the condition that the distance between the vehicle and the position of the front traffic light is determined to be smaller than or equal to a preset distance threshold value;
a second determining module configured to determine, according to the multiple frames of traffic light state images, state timing information of the front traffic light within the preset time period, where the state timing information includes a traffic light state corresponding to each time point within the preset time period, and the traffic light state includes at least one of a red light bright-dark state, a yellow light bright-dark state, and a green light bright-dark state;
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is configured to acquire a target traffic light image at the current moment and driving state information of a vehicle in front of the vehicle;
a third determination module configured to determine a current target traffic light state according to the state timing information, the target traffic light image, and the driving state information.
Optionally, the second determining module is configured to:
identifying the traffic light state in each frame of traffic light state image to obtain the traffic light state corresponding to each image acquisition time point in the preset time length;
and determining the traffic light state corresponding to each time point in the preset time length according to the traffic light state corresponding to each image acquisition time point in the preset time length so as to obtain the state time sequence information.
Optionally, the third determining module is configured to:
and inputting the state time sequence information, the target traffic light image and the driving state information into a first preset identification model to acquire the target traffic light state output by the first preset identification model.
Optionally, the first preset recognition model is obtained by pre-training in the following manner:
acquiring multiple groups of historical sample data corresponding to multiple traffic light intersections, wherein each group of historical sample data comprises state time sequence information, historical traffic light images and historical driving state information of one traffic light intersection within historical preset time;
and performing model training on a first initial model by taking the multiple groups of historical sample data as first training data to obtain the first preset recognition model, wherein the first training data comprises historical traffic light state marking data within a specified time length after the preset time length.
Optionally, the third determining module is configured to:
inputting the state time sequence information into a second preset identification model to obtain predicted state time sequence information in a target time period after the preset time length output by the second preset identification model;
determining a first traffic light signal state corresponding to the current time point according to the predicted state time sequence information;
acquiring a second traffic light signal state in the target traffic light image;
determining a third traffic light signal state according to the driving state information of the preceding vehicle;
determining the target traffic light state based on the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state.
Optionally, the third determining module is configured to:
obtaining target weights for the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state;
determining the target traffic light state from the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state by the target weight.
Optionally, the second preset recognition model is obtained by pre-training in the following manner:
acquiring a plurality of time sequence state sample information corresponding to a plurality of traffic light intersections within historical time, wherein the time sequence state sample information comprises state marking data within a target time period;
and performing model training on a second initial model by taking the plurality of time sequence state sample information as second training data to obtain the second preset identification model.
According to a third aspect of the embodiments of the present disclosure, there is provided a vehicle, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to implement the steps of the method of the first aspect above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the steps of the method of the first aspect above.
The technical scheme provided by the embodiment of the disclosure can have the following beneficial effects:
the method comprises the following steps that a plurality of frames of traffic light state images within a preset time length can be collected through an image collecting device on the vehicle; the state time sequence information of the front traffic light within the preset time is determined according to the multi-frame traffic light state images, the current target traffic light state is determined according to the state time sequence information, the target traffic light image at the current moment and the running state information of the front vehicle, the accuracy of a traffic light state identification result can be effectively improved, the probability that the unmanned vehicle runs against the traffic regulations is reduced, therefore, the running safety performance of the unmanned vehicle can be effectively improved, and the user experience of the unmanned vehicle is favorably improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
FIG. 1 is a flow chart illustrating a method of traffic light status identification in accordance with an exemplary embodiment of the present disclosure;
FIG. 2 is a schematic diagram of state time sequence information, according to an exemplary embodiment of the present disclosure;
FIG. 3 is a flow chart illustrating a method of traffic light status identification according to the embodiment shown in FIG. 1 of the present disclosure;
FIG. 4 is a schematic diagram of a model architecture shown in an exemplary embodiment of the present disclosure;
FIG. 5 is a block diagram of a traffic light state identification device shown in an exemplary embodiment of the present disclosure;
FIG. 6 is a functional block diagram of a vehicle shown in an exemplary embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It should be noted that all actions of acquiring signals, information or data in the present application are performed under the premise of complying with the corresponding data protection regulation policy of the country of the location and obtaining the authorization given by the owner of the corresponding device.
Before describing the embodiments of the present disclosure in detail, its application scenario is first introduced. The disclosure may be applied to vehicles, especially unmanned vehicles or autonomous vehicles; an unmanned vehicle is taken as the example here. Current unmanned driving technology requires a perception algorithm to analyze the surrounding environment according to the data acquired by each sensor; perception plays a role similar to that of human eyes. Current perception algorithms understand a given image well, but their ability to understand a changing scene is limited. For example, if the frames before and after a given frame all show a red light, it can be determined that the current scene is a red light scene, and the vehicle must comply with the traffic rule: stop within the stop line and not move. However, when a yellow light flickers, at least one of the frames captured before or after shows no light on; the same no-light condition can also appear while the traffic light is switching (for example, from red to yellow, yellow to green, red to green, or green to yellow). Current unmanned vehicle perception algorithms generally cannot accurately judge the state of a flickering traffic light, so driving that violates traffic regulations due to a misidentified traffic light state easily occurs. This is one of the reasons the accuracy of current traffic light state identification results is low, which is detrimental both to improving vehicle safety and to improving the vehicle user's experience.
In order to solve the above technical problems, the present disclosure provides a traffic light state identification method, a device, a vehicle, and a storage medium. The method collects multiple frames of traffic light state images within a preset time length through an image acquisition device on the vehicle; determines state time sequence information of the front traffic light within the preset time length according to the multiple frames of traffic light state images; and determines the current target traffic light state according to the state time sequence information, the target traffic light image at the current moment, and the driving state information of the vehicle in front. This can effectively improve the accuracy of the traffic light state identification result and reduce the probability that an unmanned vehicle drives in violation of traffic regulations, thereby effectively improving the driving safety of the unmanned vehicle and helping to improve the experience of its users.
The technical scheme of the disclosure is explained in detail by combining specific embodiments.
FIG. 1 is a flow chart illustrating a method of traffic light status identification in accordance with an exemplary embodiment of the present disclosure; as shown in fig. 1, the traffic light state recognition method may be applied to a vehicle, including:
step 101, collecting multiple frames of traffic light state images within a preset time length through an image collecting device on the vehicle under the condition that the distance between the vehicle and the position where the front traffic light is located is smaller than or equal to a preset distance threshold value.
The traffic light state image can be an image in a traffic light state video, and the image acquisition device can comprise a vehicle-mounted camera.
For example, in a case that it is determined that the distance between the vehicle and the position of the traffic light in front is less than or equal to 500 meters, a traffic light state video of 15 seconds is acquired, where the traffic light state video includes multiple frames of traffic light state images, each frame of traffic light state image includes an image capable of representing a traffic light state, and the traffic light state may be at least one of a red light bright-dark state, a yellow light bright-dark state, and a green light bright-dark state.
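As a minimal illustration of this capture trigger (the 500-meter threshold and 15-second window are the example values above; the names are ours):

```python
# Hedged sketch of the capture trigger described above. The values come
# from the patent's example; the function and constant names are assumed.

DISTANCE_THRESHOLD_M = 500.0  # preset distance threshold to the traffic light
CAPTURE_WINDOW_S = 15.0       # preset duration of the captured state video

def should_start_capture(distance_to_light_m):
    """Start collecting traffic light state frames once close enough."""
    return distance_to_light_m <= DISTANCE_THRESHOLD_M
```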
And 102, determining state time sequence information of the front traffic light in the preset time length according to the plurality of frames of traffic light state images, wherein the state time sequence information comprises the state of the traffic light corresponding to each time point in the preset time length.
The traffic light state includes at least one of a red light bright-dark state, a yellow light bright-dark state and a green light bright-dark state, and the preset duration may be one third or one half of a traffic light state period.
In the step, the traffic light state in each frame of the traffic light state image can be identified to obtain the traffic light state corresponding to each image acquisition time point in the preset time length; and determining the traffic light state corresponding to each time point in the preset time length according to the traffic light state corresponding to each image acquisition time point in the preset time length so as to obtain the state time sequence information.
The traffic light state corresponding to each time point within the preset duration may be determined from the traffic light states at the image acquisition time points as follows. When two adjacent image acquisition time points share the same traffic light state (for example, red on with green and yellow dark at both), that state is taken as the traffic light state for every time point between the two acquisition time points. When the traffic light states at two adjacent image acquisition time points differ (for example, red on with green and yellow dark at the earlier point, and yellow on with green and red dark at the later point), the traffic light state for the intermediate time points between them is determined to be a switching state between the two (for example, a red-to-yellow switching state), the span of these intermediate time points being bounded by the two acquisition time points. The state time sequence information within the preset duration is thereby obtained; it may take the form of a state timing diagram as shown in fig. 2, where fig. 2 is a schematic diagram of state time sequence information according to an exemplary embodiment of the present disclosure.
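The interpolation rule described above can be sketched as follows; the function name, the one-second time resolution, and the string encoding of the switching state are assumptions for illustration, not taken from the patent:

```python
# Hypothetical sketch of building the per-timepoint state timeline from
# sparse capture points, following the rule described in the text.

def build_state_timeline(samples):
    """samples: time-ordered list of (time_sec, state) at capture points.
    Returns a dict mapping each whole second to a traffic light state."""
    timeline = {}
    for (t0, s0), (t1, s1) in zip(samples, samples[1:]):
        timeline[t0] = s0
        if s0 == s1:
            # Same state at both adjacent capture points: fill the gap.
            for t in range(t0 + 1, t1):
                timeline[t] = s0
        else:
            # Different states: in-between points get a switching state.
            for t in range(t0 + 1, t1):
                timeline[t] = f"{s0}->{s1}"
    timeline[samples[-1][0]] = samples[-1][1]
    return timeline

timeline = build_state_timeline([(0, "red"), (4, "red"), (8, "yellow")])
```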
It should be noted that the traffic light state in the traffic light state image may be identified by using a perception algorithm in the prior art, and the perception algorithm may be a neural network algorithm, and may also include other algorithms besides the neural network algorithm.
Step 103, collecting the target traffic light image at the current moment and the driving state information of the vehicle in front of the vehicle.
The target traffic light image is the traffic light image at the current moment acquired by the image acquisition device on the vehicle; the front vehicle is a vehicle that travels in the same direction as the vehicle, is located in front of it, and is in the same lane; and the driving state information includes at least the forward driving speed.
And 104, determining the current state of the target traffic light according to the state time sequence information, the target traffic light image and the driving state information.
In this step, one possible implementation manner is: inputting the state time sequence information, the target traffic light image and the driving state information into a first preset identification model, so as to acquire the target traffic light state output by the first preset identification model.
The first preset recognition model is obtained by pre-training in the following mode:
acquiring multiple groups of historical sample data corresponding to multiple traffic light intersections, wherein each group of historical sample data comprises state time sequence information, historical traffic light images and historical driving state information of one traffic light intersection within historical preset time; and performing model training on the first initial model by taking the multiple groups of historical sample data as first training data to obtain the first preset recognition model, wherein the first training data comprises traffic light state marking data in a specified time length after the historical preset time length.
It should be noted that the historical traffic light image is a traffic light image acquired by a vehicle at a historical time. For example, if the state time sequence information within the historical preset duration is a timing diagram covering 15:01:20 to 15:01:30 on January 2, 2021, then the historical traffic light image may be a traffic light image acquired within a target time period (for example, 20 minutes) after 15:01:30 on January 2, 2021. The historical driving state information is the driving state information, at the historical time (for example, the moment at which the historical traffic light image was acquired, or a moment a specified duration from that acquisition moment), of the vehicle in front of the vehicle that acquired the historical traffic light image. It should also be noted that the first initial model may be a neural network model, or may be another machine learning model in the prior art.
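As a hedged sketch, one group of historical sample data described above might be organized like this; the field names are illustrative assumptions, not taken from the patent:

```python
# Hypothetical layout of one training sample for the first preset
# recognition model: the state timeline over the window, the historical
# traffic light image, the lead vehicle's driving state, and the label.

from dataclasses import dataclass, field

@dataclass
class HistoricalSample:
    state_timeline: list        # traffic light state per time point in the window
    traffic_light_image: bytes  # raw historical traffic light image
    lead_vehicle_speed: float   # historical driving state info (m/s, assumed unit)
    label: str                  # annotated traffic light state after the window

sample = HistoricalSample(
    state_timeline=["red", "red", "red->yellow", "yellow"],
    traffic_light_image=b"",    # image bytes omitted in this sketch
    lead_vehicle_speed=0.0,
    label="green",
)
```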
Another possible implementation may include the steps shown in FIG. 3, where FIG. 3 is a flow chart of a traffic light state identification method according to the embodiment shown in fig. 1 of the present disclosure; as shown in fig. 3:
step 1041, inputting the state timing sequence information into a second preset identification model to obtain predicted state timing sequence information within a target time period after the preset duration output by the second preset identification model.
The second preset recognition model is obtained by pre-training in the following mode:
acquiring a plurality of time sequence state sample information corresponding to a plurality of traffic light intersections within historical time, wherein the time sequence state sample information comprises state marking data within a target time period; and performing model training on the second initial model by taking the plurality of time sequence state sample information as second training data to obtain the second preset identification model.
It should be noted that the second initial model may be a Transformer model. As shown in fig. 4, which is a schematic structural diagram of a model according to an exemplary embodiment of the present disclosure, the Transformer model may include an Encoder and a Decoder, and may be used for time sequence prediction. The state labeling data within the target time period may include the traffic light state labeling data at each time point within the target time period.
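The patent's second model is a learned Transformer encoder-decoder. As a stand-in that only illustrates the input/output shape of this prediction step (not the learned model itself), the toy sketch below extrapolates a periodic state sequence by detecting its cycle length, relying on the fact that traffic light phases repeat:

```python
# Toy stand-in for the second preset recognition model: given an observed
# per-timestep state history, predict the next `horizon` states by finding
# the smallest period that explains the history. Purely illustrative.

def predict_future_states(history, horizon):
    """history: list of states, one per time step. Returns `horizon` more."""
    n = len(history)
    # Smallest period p such that the history is p-periodic.
    period = next(
        (p for p in range(1, n)
         if all(history[i] == history[i % p] for i in range(n))),
        n,
    )
    return [history[(n + k) % period] for k in range(horizon)]

hist = ["red", "red", "green", "green"] * 3  # a 4-step cycle observed 3 times
future = predict_future_states(hist, 4)
```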
Step 1042: determining the first traffic light signal state corresponding to the current time point according to the predicted state timing information.
The predicted state timing information represents the traffic light state at each time point after the preset duration. Since the predicted state timing information includes a predicted traffic light state for each time point after the preset duration, and the current time point is one of those time points, the first traffic light signal state corresponding to the current time point can be determined from the predicted state timing information.
For example, taking the state timing shown in FIG. 2: if the timing state information of the preset duration is the part of the diagram before t2, and the predicted state timing information is the part after t2, then for a current time point t5 the first traffic light signal state corresponding to t5 can be determined from the predicted state timing information.
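The lookup described in this step can be sketched as follows; representing the predicted state timing information as ascending `(time_point, state)` change points is an assumption made for illustration.

```python
def state_at(predicted_timing, t):
    """Return the traffic-light state at time t.

    `predicted_timing` is assumed to be a list of (time_point, state)
    change points in ascending order, e.g. the segments after t2 in
    FIG. 2. The state at t is that of the latest change point not
    later than t; before the first change point, None is returned.
    """
    state = None
    for time_point, s in predicted_timing:
        if time_point <= t:
            state = s
        else:
            break
    return state
```

With change points at t2 = 2 ("green"), t4 = 4 ("red"), and t6 = 6 ("green"), querying t5 = 5 returns "red".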
Step 1043, acquiring a second traffic light signal state in the target traffic light image.
In this step, the traffic light state in the target traffic light image can be identified through an image recognition algorithm to obtain the second traffic light signal state.
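The patent does not specify the image recognition algorithm. As one hedged possibility, a minimal colour-dominance classifier over the RGB pixels of the light's region of interest could produce the second traffic light signal state; the channel thresholds below are assumed values, not parameters from the disclosure.

```python
def classify_light(pixels):
    """Classify a traffic-light region by its dominant lit colour.

    `pixels` is an iterable of (r, g, b) tuples from the light's
    region of interest. Simple channel thresholds (assumed values)
    separate the lit red, yellow and green pixels; the colour with
    the most lit pixels wins.
    """
    counts = {"red": 0, "yellow": 0, "green": 0}
    for r, g, b in pixels:
        if r > 200 and g > 200 and b < 100:
            counts["yellow"] += 1
        elif r > 200 and g < 100:
            counts["red"] += 1
        elif g > 200 and r < 100:
            counts["green"] += 1
    return max(counts, key=counts.get)
```

A production system would more likely use a trained detector over an HSV or learned feature space, but the interface — image region in, signal state out — is the same.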
Step 1044, determining a third traffic light signal status according to the driving status information of the preceding vehicle.
In this step, the third traffic light signal state may be determined as follows: when the forward travel speed in the driving state information of the preceding vehicle is greater than a preset speed threshold, the third traffic light signal state is a state in which the green light is on and the yellow and red lights are dark; when the forward travel speed is greater than zero but less than or equal to the preset speed threshold, the third traffic light signal state is a state in which the yellow light is on and the green and red lights are dark; and when the forward travel speed is zero, the third traffic light signal state is a state in which the red light is on and the yellow and green lights are dark.
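The rule above maps directly to code; the concrete speed threshold is an assumed value, since the patent only refers to a preset speed threshold.

```python
def third_signal_state(forward_speed, speed_threshold=5.0):
    """Infer a light state from the preceding vehicle's forward speed.

    Implements the rule from the text. `speed_threshold` (here in
    m/s) is an assumed stand-in for the patent's unspecified preset
    speed threshold.
    """
    if forward_speed > speed_threshold:
        return "green on, yellow and red dark"
    if forward_speed > 0:
        return "yellow on, green and red dark"
    return "red on, yellow and green dark"
```

A preceding vehicle moving briskly implies a green light, one crawling implies a yellow, and one stopped implies a red — exactly the three branches of the rule.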
Step 1045, determining the target traffic light state according to the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state.
In this step, the target weights of the first traffic light signal state, the second traffic light signal state and the third traffic light signal state may be obtained; determining the target traffic light state according to the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state by the target weight.
Illustratively, let 1 represent the state in which the green light is on and the red and yellow lights are dark, and let 0 represent either non-green state (red on with yellow and green dark, or yellow on with red and green dark). When the target weight of the first traffic light signal state is q1, that of the second traffic light signal state is q2, and that of the third traffic light signal state is q3, the three signal states are weighted and summed according to q1, q2 and q3 to obtain a result value. When the result value is greater than or equal to a preset threshold, the target traffic light state is determined to be the state in which the green light is on and the red and yellow lights are dark; when the result value is less than the preset threshold, the target traffic light state is determined to be a non-green state, which may be the state in which the red light is on and the yellow and green lights are dark, or the state in which the yellow light is on and the red and green lights are dark.
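The weighted-sum decision can be sketched as follows, using the encoding from the text (1 for the green-on state, 0 for either non-green state); the default threshold and the example weights are assumed stand-ins for the patent's preset values.

```python
def fuse_states(s1, s2, s3, q1, q2, q3, result_threshold=0.5):
    """Weighted vote over the three traffic-light signal states.

    s1..s3 encode the first, second and third signal states as
    1 (green on) or 0 (non-green); q1..q3 are their target weights.
    `result_threshold` is an assumed value for the preset threshold.
    """
    result = q1 * s1 + q2 * s2 + q3 * s3
    return "green" if result >= result_threshold else "not green"
```

With weights q1 = q2 = 0.4 and q3 = 0.2, two of the three sources agreeing on green (result 0.8) crosses the threshold, while a single green vote from the image alone (result 0.4) does not.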
According to the above technical solution, a plurality of frames of traffic light state images within a preset duration are collected by an image collecting device on the vehicle; state timing information of the front traffic light within the preset duration is determined from these images; and the current target traffic light state is determined from the state timing information, the target traffic light image at the current moment, and the driving state information of the preceding vehicle. This can effectively improve the accuracy of the traffic light state identification result and reduce the probability that an unmanned vehicle violates traffic regulations, thereby effectively improving the driving safety of the unmanned vehicle and benefiting the user experience.
FIG. 5 is a block diagram of a traffic light state identification device according to an exemplary embodiment of the present disclosure. As shown in FIG. 5, the traffic light state identification device, applied to a vehicle, may include:
the first determining module 501 is configured to acquire multiple frames of traffic light state images within a preset time length through an image acquisition device on the vehicle under the condition that the distance between the vehicle and the position of the front traffic light is determined to be smaller than or equal to a preset distance threshold;
a second determining module 502, configured to determine, according to the multiple frames of traffic light state images, state timing information of the front traffic light within the preset time period, where the state timing information includes a traffic light state corresponding to each time point within the preset time period, and the traffic light state includes at least one of a red light bright-dark state, a yellow light bright-dark state, and a green light bright-dark state;
an acquisition module 503 configured to acquire a target traffic light image at a current time and driving state information of a vehicle ahead of the vehicle;
a third determining module 504 configured to determine a current target traffic light status according to the status timing information, the target traffic light image, and the driving status information.
Optionally, the second determining module 502 is configured to:
identifying the traffic light state in each frame of traffic light state image to obtain the traffic light state corresponding to each image acquisition time point in the preset time length;
and determining the traffic light state corresponding to each time point in the preset time length according to the traffic light state corresponding to each image acquisition time point in the preset time length so as to obtain the state time sequence information.
Optionally, the third determining module 504 is configured to:
and inputting the state time sequence information, the target traffic light image and the running state information into a first preset identification model so as to acquire the state of the target traffic light output by the first preset identification model.
Optionally, the first preset recognition model is obtained by pre-training in the following manner:
acquiring multiple groups of historical sample data corresponding to multiple traffic light intersections, wherein each group of historical sample data comprises state time sequence information, historical traffic light images and historical driving state information of one traffic light intersection within the historical preset time;
and performing model training on the first initial model by taking the multiple groups of historical sample data as first training data to obtain the first preset recognition model, wherein the first training data comprises traffic light state marking data in a specified time length after the historical preset time length.
Optionally, the third determining module 504 is configured to:
inputting the state time sequence information into a second preset identification model to obtain predicted state time sequence information in a target time period after the preset time length output by the second preset identification model;
determining a first traffic light signal state corresponding to the current time point according to the predicted state time sequence information;
acquiring a second traffic light signal state in the target traffic light image;
determining a third traffic light signal state according to the driving state information of the preceding vehicle;
determining the target traffic light state based on the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state.
Optionally, the third determining module 504 is configured to:
acquiring target weights of the first traffic light signal state, the second traffic light signal state and the third traffic light signal state;
determining the target traffic light state according to the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state by the target weight.
Optionally, the second preset recognition model is obtained by pre-training in the following manner:
acquiring a plurality of time sequence state sample information corresponding to a plurality of traffic light intersections within historical time, wherein the time sequence state sample information comprises state marking data within a target time period;
and performing model training on the second initial model by taking the plurality of time sequence state sample information as second training data to obtain the second preset identification model.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Referring to fig. 6, fig. 6 is a functional block diagram of a vehicle according to an exemplary embodiment. The vehicle 600 may be configured in a fully or partially autonomous driving mode. For example, the vehicle 600 may acquire environmental information of its surroundings through the sensing system 620 and derive an automatic driving strategy based on an analysis of the surrounding environmental information to implement full automatic driving, or present the analysis result to the user to implement partial automatic driving.
Vehicle 600 may include various subsystems such as infotainment system 610, perception system 620, decision control system 630, drive system 640, and computing platform 650. Alternatively, vehicle 600 may include more or fewer subsystems, and each subsystem may include multiple components. In addition, each of the sub-systems and components of the vehicle 600 may be interconnected by wire or wirelessly.
In some embodiments, the infotainment system 610 may include a communication system 611, an entertainment system 612, and a navigation system 613.
The communication system 611 may comprise a wireless communication system that may communicate wirelessly with one or more devices, either directly or via a communication network. For example, the wireless communication system may use 3G cellular communication such as CDMA, EVDO, or GSM/GPRS; 4G cellular communication such as LTE; or 5G cellular communication. The wireless communication system may communicate with a wireless local area network (WLAN) using WiFi. In some embodiments, the wireless communication system may communicate directly with a device using an infrared link, Bluetooth, or ZigBee. Other wireless protocols may also be used, such as various vehicular communication systems; for example, the wireless communication system may include one or more dedicated short range communications (DSRC) devices, which may carry public and/or private data communications between vehicles and/or roadside stations.
The entertainment system 612 may include a display device, a microphone, and a sound box. Based on the entertainment system, a user may listen to broadcasts or play music in the car; alternatively, a mobile phone may communicate with the vehicle to project its screen onto the display device. The display device may be touch-sensitive, and the user may operate it by touching the screen.
In some cases, the user's voice signal may be acquired through the microphone, and certain controls of the vehicle 600 by the user, such as adjusting the in-vehicle temperature, may be implemented according to an analysis of the voice signal. In other cases, music may be played to the user through the sound box.
The navigation system 613 may include a map service provided by a map provider to provide navigation of a route of travel for the vehicle 600, and the navigation system 613 may be used in conjunction with a global positioning system 621 and an inertial measurement unit 622 of the vehicle. The map service provided by the map provider can be a two-dimensional map or a high-precision map.
The sensing system 620 may include several types of sensors that sense information about the environment surrounding the vehicle 600. For example, the sensing system 620 may include a global positioning system 621 (which may be a GPS system, a BeiDou system, or another positioning system), an inertial measurement unit (IMU) 622, a lidar 623, a millimeter-wave radar 624, an ultrasonic radar 625, and a camera 626. The sensing system 620 may also include sensors that monitor internal systems of the vehicle 600 (e.g., an in-vehicle air quality monitor, a fuel gauge, an oil temperature gauge, etc.). Sensor data from one or more of these sensors may be used to detect objects and their corresponding characteristics (position, shape, orientation, velocity, etc.). Such detection and identification is a critical function for the safe operation of the vehicle 600.
Global positioning system 621 is used to estimate the geographic location of vehicle 600.
The inertial measurement unit 622 is used to sense a pose change of the vehicle 600 based on the inertial acceleration. In some embodiments, inertial measurement unit 622 may be a combination of accelerometers and gyroscopes.
Lidar 623 utilizes laser light to sense objects in the environment in which vehicle 600 is located. In some embodiments, lidar 623 may include one or more laser sources, laser scanners, and one or more detectors, among other system components.
The millimeter-wave radar 624 utilizes radio signals to sense objects within the surrounding environment of the vehicle 600. In some embodiments, in addition to sensing objects, the millimeter-wave radar 624 may also be used to sense the speed and/or heading of objects.
The ultrasonic radar 625 may sense objects around the vehicle 600 using ultrasonic signals.
The camera 626 is used to capture image information of the surroundings of the vehicle 600. The camera 626 may include a monocular camera, a binocular camera, a structured-light camera, a panoramic camera, and the like, and the image information it acquires may include still images or video streams.
The decision control system 630 includes a computing system 631 that makes analytical decisions based on information acquired by the sensing system 620. The decision control system 630 further includes a vehicle control unit 632 that controls the powertrain of the vehicle 600, as well as a steering system 633, a throttle 634, and a brake system 635 for controlling the vehicle 600.
The computing system 631 may operate to process and analyze the various information acquired by the sensing system 620 in order to identify objects and/or features in the environment surrounding the vehicle 600. The objects may comprise pedestrians or animals, and the features may comprise traffic signals, road boundaries, and obstacles. The computing system 631 may use techniques such as object recognition algorithms, Structure from Motion (SfM), and video tracking. In some embodiments, the computing system 631 may be used to map the environment, track objects, estimate the speed of objects, and so forth. The computing system 631 may analyze the various information obtained and derive a control strategy for the vehicle.
The vehicle control unit 632 may be used to coordinate control of the power battery and the engine 641 of the vehicle to improve the power performance of the vehicle 600.
The steering system 633 is operable to adjust the heading of the vehicle 600; for example, in one embodiment, it may be a steering wheel system.
The throttle 634 is used to control the operating speed of the engine 641 and thus the speed of the vehicle 600.
The brake system 635 is used to control the deceleration of the vehicle 600. The braking system 635 may use friction to slow the wheel 644. In some embodiments, the braking system 635 may convert the kinetic energy of the wheels 644 into electrical current. The braking system 635 may also take other forms to slow the rotational speed of the wheels 644 to control the speed of the vehicle 600.
The drive system 640 may include components that provide powered motion to the vehicle 600. In one embodiment, the drive system 640 may include an engine 641, an energy source 642, a transmission 643, and wheels 644. The engine 641 may be an internal combustion engine, an electric motor, an air compression engine, or other types of engine combinations, such as a hybrid engine consisting of a gasoline engine and an electric motor, a hybrid engine consisting of an internal combustion engine and an air compression engine. The engine 641 converts the energy source 642 into mechanical energy.
Examples of energy sources 642 include gasoline, diesel, other petroleum-based fuels, propane, other compressed gas-based fuels, ethanol, solar panels, batteries, and other sources of electrical power. The energy source 642 may also provide energy to other systems of the vehicle 600.
The transmission 643 may transmit mechanical power from the engine 641 to the wheels 644. The transmission 643 may include a gearbox, a differential, and a drive shaft. In one embodiment, the transmission 643 may also include other components, such as clutches. Wherein the drive shaft may include one or more axles that may be coupled to one or more wheels 644.
Some or all of the functionality of the vehicle 600 is controlled by the computing platform 650. Computing platform 650 can include at least one processor 651, which processor 651 can execute instructions 653 stored in a non-transitory computer-readable medium, such as memory 652. In some embodiments, the computing platform 650 may also be a plurality of computing devices that control individual components or subsystems of the vehicle 600 in a distributed manner.
The processor 651 may be any conventional processor, such as a commercially available CPU. Alternatively, the processor 651 may include a Graphics Processing Unit (GPU), a Field Programmable Gate Array (FPGA), a System On Chip (SOC), an Application Specific Integrated Circuit (ASIC), or a combination thereof. Although FIG. 6 functionally illustrates the processor, memory, and other elements of the computer in the same block, those skilled in the art will appreciate that the processor, computer, or memory may actually comprise multiple processors, computers, or memories that may or may not be stored within the same physical housing. For example, the memory may be a hard drive or other storage medium located in a different enclosure than the computer. Thus, references to a processor or computer are to be understood as including references to a collection of processors, computers, or memories that may or may not operate in parallel. Rather than using a single processor to perform the steps described herein, some components, such as the steering component and the deceleration component, may each have their own processor that performs only computations related to that component's function.
In the disclosed embodiment, the processor 651 may perform the traffic light state identification method described above.
In various aspects described herein, the processor 651 may be located remotely from the vehicle and in wireless communication with the vehicle. In other aspects, some of the processes described herein are executed on a processor disposed within the vehicle while others are executed by a remote processor, including taking the actions necessary to perform a single maneuver.
In some embodiments, the memory 652 may contain instructions 653 (e.g., program logic), which instructions 653 may be executed by the processor 651 to perform various functions of the vehicle 600. The memory 652 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the infotainment system 610, the perception system 620, the decision control system 630, the drive system 640.
In addition to instructions 653, memory 652 may store data such as road maps, route information, the location, direction, speed of the vehicle, and other such vehicle data, as well as other information. Such information may be used by the vehicle 600 and the computing platform 650 during operation of the vehicle 600 in autonomous, semi-autonomous, and/or manual modes.
Computing platform 650 may control functions of vehicle 600 based on inputs received from various subsystems (e.g., drive system 640, perception system 620, and decision control system 630). For example, computing platform 650 may utilize input from decision control system 630 in order to control steering system 633 to avoid obstacles detected by perception system 620. In some embodiments, the computing platform 650 is operable to provide control over many aspects of the vehicle 600 and its subsystems.
Optionally, one or more of these components described above may be mounted or associated separately from the vehicle 600. For example, the memory 652 may exist partially or completely separate from the vehicle 600. The above components may be communicatively coupled together in a wired and/or wireless manner.
Optionally, the above components are only an example; in practical applications, components in the above modules may be added or removed according to actual needs, and FIG. 6 should not be construed as limiting the embodiments of the present disclosure.
An autonomous automobile traveling on a roadway, such as the vehicle 600 above, may identify objects within its surrounding environment to determine an adjustment to its current speed. The object may be another vehicle, a traffic control device, or another type of object. In some examples, each identified object may be considered independently, and the object's characteristics, such as its current speed, acceleration, and separation from the vehicle, may be used to determine the speed to which the autonomous vehicle is to adjust.
Optionally, the vehicle 600 or a sensing and computing device associated with the vehicle 600 (e.g., computing system 631, computing platform 650) may predict the behavior of an identified object based on the characteristics of the identified object and the state of the surrounding environment (e.g., traffic, rain, ice on the road, etc.). Optionally, the behaviors of the identified objects may depend on one another, so the behavior of a single identified object may also be predicted by considering all identified objects together. The vehicle 600 is able to adjust its speed based on the predicted behavior of the identified object. In other words, the autonomous vehicle can determine what stable state the vehicle will need to adjust to (e.g., accelerate, decelerate, or stop) based on the predicted behavior of the object. In this process, other factors may also be considered in determining the speed of the vehicle 600, such as the lateral position of the vehicle 600 on the road being traveled, the curvature of the road, and the proximity of static and dynamic objects.
In addition to providing instructions to adjust the speed of the autonomous vehicle, the computing device may also provide instructions to modify the steering angle of the vehicle 600 to cause the autonomous vehicle to follow a given trajectory and/or maintain a safe lateral and longitudinal distance from objects in the vicinity of the autonomous vehicle (e.g., vehicles in adjacent lanes on the road).
The vehicle 600 may be any type of vehicle, such as a car, a truck, a motorcycle, a bus, a boat, an airplane, a helicopter, a recreational vehicle, a train, etc., and the disclosed embodiment is not particularly limited.
In another exemplary embodiment, a computer program product is also provided, comprising a computer program executable by a programmable apparatus, the computer program having code portions for performing the above traffic light state identification method when executed by the programmable apparatus.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A traffic light status identification method, applied to a vehicle, the method comprising:
under the condition that the distance between the vehicle and the position of the front traffic light is determined to be smaller than or equal to a preset distance threshold, collecting a plurality of frames of traffic light state images within a preset time length through an image collecting device on the vehicle;
determining state time sequence information of the front traffic light in the preset time length according to the plurality of frames of traffic light state images, wherein the state time sequence information comprises traffic light states corresponding to each time point in the preset time length, and the traffic light states comprise at least one of red light bright and dark states, yellow light bright and dark states and green light bright and dark states;
acquiring a target traffic light image at the current moment and driving state information of a vehicle in front of the vehicle;
and determining the current state of the target traffic light according to the state time sequence information, the target traffic light image and the driving state information.
2. The method according to claim 1, wherein the determining the state timing information of the front traffic light within the preset time period according to the plurality of frames of traffic light state images comprises:
identifying the traffic light state in each frame of traffic light state image to obtain the traffic light state corresponding to each image acquisition time point in the preset time length;
and determining the traffic light state corresponding to each time point in the preset time length according to the traffic light state corresponding to each image acquisition time point in the preset time length so as to obtain the state time sequence information.
3. The method of claim 1, wherein determining a current target traffic light state from the state timing information, the target traffic light image, and the driving state information comprises:
and inputting the state time sequence information, the target traffic light image and the running state information into a first preset identification model to acquire the state of the target traffic light output by the first preset identification model.
4. The method of claim 3, wherein the first pre-set recognition model is pre-trained by:
acquiring multiple groups of historical sample data corresponding to multiple traffic light intersections, wherein each group of historical sample data comprises state time sequence information, historical traffic light images and historical driving state information of one traffic light intersection within historical preset time;
and performing model training on a first initial model by taking the multiple groups of historical sample data as first training data to obtain the first preset recognition model, wherein the first training data comprises historical traffic light state marking data within a specified time length after the preset time length.
5. The method of claim 1, wherein determining a current target traffic light status from the status timing information, the target traffic light image, and the travel status information comprises:
inputting the state time sequence information into a second preset identification model to obtain predicted state time sequence information in a target time period after the preset time length output by the second preset identification model;
determining a first traffic light signal state corresponding to the current time point according to the predicted state time sequence information;
acquiring a second traffic light signal state in the target traffic light image;
determining a third traffic light signal state according to the driving state information of the preceding vehicle;
determining the target traffic light state based on the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state.
6. The method of claim 5, wherein determining the target traffic light state from the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state comprises:
obtaining target weights for the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state; and
determining the target traffic light state from the first traffic light signal state, the second traffic light signal state, and the third traffic light signal state according to the target weights.
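A minimal sketch of the weighted fusion in claim 6, assuming the target weights are non-negative scores summed per candidate state, with the highest-scoring state winning. The patent does not specify how the weights are obtained or combined, so the function name and combination rule are illustrative only.

```python
# Hedged sketch of claim 6's weighted fusion: sum the target weight of each
# candidate signal state per distinct state, then pick the highest total.
from collections import defaultdict

def fuse_states(states, weights):
    """states: the three candidate signal states; weights: matching target weights."""
    score = defaultdict(float)
    for state, weight in zip(states, weights):
        score[state] += weight
    return max(score, key=score.get)

# Prediction says green, the image says green, preceding-vehicle motion says red.
result = fuse_states(["green", "green", "red"], [0.3, 0.5, 0.2])
print(result)  # green (0.8 vs 0.2)
```

With this rule, two agreeing low-weight sources can outvote one high-weight source, which is the usual motivation for fusing redundant signals.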
7. The method of claim 5, wherein the second preset recognition model is pre-trained by:
acquiring a plurality of pieces of timing state sample information corresponding to a plurality of traffic light intersections within a historical period, wherein the timing state sample information comprises state annotation data within the target time period; and
performing model training on a second initial model using the plurality of pieces of timing state sample information as second training data to obtain the second preset recognition model.
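Claim 7 leaves the second model's architecture open. As a hedged stand-in showing only the input/output shape (historical state sequences in, a predicted future sequence out), one could fit a first-order transition table; this is not the patent's model, merely the simplest object with the claimed interface.

```python
# Hedged stand-in for the second preset recognition model of claims 5 and 7:
# learn the most frequent next state for each state from historical timing
# sequences, then roll the table forward to predict a future timing sequence.
from collections import Counter, defaultdict

def train_transition_model(sequences):
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

def predict_timing(model, last_state, horizon):
    out, state = [], last_state
    for _ in range(horizon):
        state = model.get(state, state)  # hold the state if unseen in training
        out.append(state)
    return out

model = train_transition_model([["red", "green", "yellow", "red", "green", "yellow"]])
print(predict_timing(model, "red", 3))  # ['green', 'yellow', 'red']
```

Reading the predicted sequence at the index of the current time point then yields the "first traffic light signal state" of claim 5.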
8. A traffic light state recognition apparatus, applied to a vehicle, the apparatus comprising:
a first determining module configured to acquire multiple frames of traffic light state images within a preset duration through an image acquisition device on the vehicle when it is determined that the distance between the vehicle and the position of a traffic light ahead is less than or equal to a preset distance threshold;
a second determining module configured to determine, according to the multiple frames of traffic light state images, state timing information of the traffic light ahead within the preset duration, wherein the state timing information comprises a traffic light state corresponding to each time point within the preset duration, and the traffic light state comprises at least one of a red light on/off state, a yellow light on/off state, and a green light on/off state;
an acquisition module configured to acquire a target traffic light image at the current moment and driving state information of a vehicle ahead of the vehicle; and
a third determination module configured to determine a current target traffic light state according to the state timing information, the target traffic light image, and the driving state information.
9. A vehicle, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to:
execute the steps of the method according to any one of claims 1 to 7.
10. A computer-readable storage medium having computer program instructions stored thereon, wherein the program instructions, when executed by a processor, implement the steps of the method according to any one of claims 1 to 7.
CN202210726827.5A 2022-06-23 2022-06-23 Traffic light state identification method and device, vehicle and storage medium Pending CN115042814A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210726827.5A CN115042814A (en) 2022-06-23 2022-06-23 Traffic light state identification method and device, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210726827.5A CN115042814A (en) 2022-06-23 2022-06-23 Traffic light state identification method and device, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN115042814A true CN115042814A (en) 2022-09-13

Family

ID=83163941

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210726827.5A Pending CN115042814A (en) 2022-06-23 2022-06-23 Traffic light state identification method and device, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN115042814A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115984825A (en) * 2023-03-02 2023-04-18 安徽蔚来智驾科技有限公司 Signal lamp flicker perception method, vehicle control method, device, medium and vehicle


Similar Documents

Publication Publication Date Title
CN115042821B (en) Vehicle control method, vehicle control device, vehicle and storage medium
CN115123257A (en) Method and device for identifying position of road deceleration strip, vehicle, storage medium and chip
CN115100377A (en) Map construction method and device, vehicle, readable storage medium and chip
CN115035494A (en) Image processing method, image processing device, vehicle, storage medium and chip
CN115042814A (en) Traffic light state identification method and device, vehicle and storage medium
CN115202234B (en) Simulation test method and device, storage medium and vehicle
CN115056784B (en) Vehicle control method, device, vehicle, storage medium and chip
CN114842440B (en) Automatic driving environment sensing method and device, vehicle and readable storage medium
CN114782638B (en) Method and device for generating lane line, vehicle, storage medium and chip
CN115203457B (en) Image retrieval method, device, vehicle, storage medium and chip
CN115205311B (en) Image processing method, device, vehicle, medium and chip
CN114771539B (en) Vehicle lane change decision method and device, storage medium and vehicle
CN112829762A (en) Vehicle running speed generation method and related equipment
CN115205848A (en) Target detection method, target detection device, vehicle, storage medium and chip
CN115221151A (en) Vehicle data transmission method and device, vehicle, storage medium and chip
CN115205179A (en) Image fusion method and device, vehicle and storage medium
CN114537450A (en) Vehicle control method, device, medium, chip, electronic device and vehicle
CN115063639B (en) Model generation method, image semantic segmentation device, vehicle and medium
CN115139946B (en) Vehicle falling water detection method, vehicle, computer readable storage medium and chip
CN115115707B (en) Vehicle falling water detection method, vehicle, computer readable storage medium and chip
CN114852092B (en) Steering wheel hands-off detection method and device, readable storage medium and vehicle
CN115407344B (en) Grid map creation method, device, vehicle and readable storage medium
CN114877911B (en) Path planning method, device, vehicle and storage medium
CN115082772B (en) Location identification method, location identification device, vehicle, storage medium and chip
CN115082886B (en) Target detection method, device, storage medium, chip and vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination