WO2022228251A1 - Vehicle driving method, device and system - Google Patents

Vehicle driving method, device and system

Info

Publication number
WO2022228251A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
environment
noise
vehicle
recognition model
Prior art date
Application number
PCT/CN2022/088025
Other languages
English (en)
French (fr)
Inventor
罗达新
高鲁涛
马莎
Original Assignee
Huawei Technologies Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd.
Publication of WO2022228251A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 - Planning or execution of driving tasks
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00 - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W50/08 - Interaction between the driver and the control system
    • B60W50/082 - Selecting or switching between different modes of propelling
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 - Planning or execution of driving tasks
    • B60W60/0027 - Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00 - Computer-aided design [CAD]
    • G06F30/20 - Design optimisation, verification or simulation
    • G06F30/27 - Design optimisation, verification or simulation using machine learning, e.g. artificial intelligence, neural networks, support vector machines [SVM] or training a model
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2420/00 - Indexing codes relating to the type of sensors based on the principle of their operation
    • B60W2420/40 - Photo, light or radio wave sensitive means, e.g. infrared sensors
    • B60W2420/403 - Image sensing, e.g. optical camera

Definitions

  • the present application relates to the technical field of Internet of Vehicles, and in particular, to a vehicle driving method, device and system.
  • When the vehicle supports the automatic driving function, it photographs the surrounding environment in real time during autonomous driving and inputs the surrounding-environment images into an environment recognition model to identify obstacles, lane lines, drivable space and other information around the vehicle, which is then used to automatically guide the vehicle's subsequent travel.
  • Special weather, lighting and other meteorological conditions affect the quality of the images captured by the vehicle.
  • If the robustness of the environment recognition model is poor, the model is less able to adapt to abnormal weather conditions, and its recognition of surrounding-environment images captured under such conditions will likewise be poor.
  • The prior art typically evaluates the environment recognition model before providing it to the vehicle. That is, after the environment recognition model is trained, a dedicated robustness detection tool first checks how well the model recognizes the images in a preset image library, and from those results determines whether the model's robustness meets a preset robustness requirement. Only when the requirement is met is the environment recognition model provided to the vehicle, which then always uses that model for autonomous driving.
  • However, the images in the preset image library are relatively limited and cannot cover all the environments a vehicle may encounter while travelling.
  • As a result, the environment recognition model may not actually apply well to the vehicle's current environment. In other words, the robustness evaluation method of the existing scheme does not support accurate implementation of the vehicle's automatic driving function.
  • In view of this, the present application provides a vehicle driving method, device and system to improve the accuracy with which a vehicle driving strategy (for example, an automatic driving strategy) is implemented.
  • In a first aspect, the present application provides a vehicle driving method, which is applicable to a vehicle driving device.
  • The method includes: the vehicle driving device obtains an environmental image captured by a vehicle of its current passing area, uses the environmental image to evaluate the model robustness of the environment recognition model under the current driving strategy, and, when the model robustness is lower than a preset robustness threshold, adjusts the current driving strategy.
  • The environment recognition model is used, under the current driving strategy, to perceive the surrounding environment in the current traffic area, and the perception result is used to guide the vehicle through the current traffic area.
  • In this method, the model robustness of the environment recognition model can be evaluated in a targeted manner using the environmental images captured as the vehicle travels, so that the robustness evaluation result accurately characterizes the applicability of the environment recognition model to the current traffic environment, effectively improving the quality of the robustness evaluation. By adjusting the current driving strategy in time when the environment recognition model adopted by the current driving strategy is no longer suitable for the current environment, an appropriate driving strategy can instead be used to guide the vehicle, which helps improve the accuracy with which the current driving strategy is implemented.
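The first-aspect flow above can be sketched roughly as follows; `drive_step`, `evaluate_robustness`, and the threshold value are illustrative names and assumptions, not anything defined by the application.

```python
# Illustrative sketch: evaluate the environment recognition model on a
# freshly captured image and adjust the driving strategy when robustness
# falls below a preset threshold.

PRESET_ROBUSTNESS_THRESHOLD = 0.8  # assumed value for illustration


def drive_step(capture_image, evaluate_robustness, adjust_strategy):
    env_image = capture_image()                  # photograph current passing area
    robustness = evaluate_robustness(env_image)  # score in [0, 1], hypothetical scale
    if robustness < PRESET_ROBUSTNESS_THRESHOLD:
        adjust_strategy()                        # e.g. denoise, switch model, manual
    return robustness


# Toy usage with stub functions standing in for real sensors and models:
score = drive_step(lambda: "img", lambda img: 0.5, lambda: None)
```

The stubs make the control flow visible without committing to any particular sensor API or robustness metric.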
  • In one possible design, before using the environmental image to evaluate the model robustness of the environment recognition model under the current driving strategy, the vehicle driving device may also determine the target environmental noise type present in the current passing area and determine that the current passing area is under attack by noise corresponding to that noise type. This design effectively adds an execution condition to the evaluation: the environment recognition model is evaluated only when the current passing area is determined to be under attack, and need not be evaluated otherwise, avoiding unnecessary waste of resources.
  • In one possible design, the vehicle driving device can evaluate the model robustness of the environment recognition model under the current driving strategy in any of the following ways:
  • Way 1: the vehicle driving device acquires, from a preset image library, a target preset image whose similarity to the environmental image is not lower than a preset similarity threshold, uses the environmental image and the target preset image together as reference images, and evaluates the model robustness of the environment recognition model with those reference images. Using target preset images that share the environmental conditions of the environmental image effectively increases the number of evaluation samples and improves the reliability of the evaluation result.
  • Way 2: the vehicle driving device directly uses the environmental image as the reference image and evaluates the model robustness of the environment recognition model with it, without spending resources storing preset images or querying target preset images, improving the efficiency of the robustness evaluation.
  • Way 3: the vehicle driving device acquires, from the preset image library, a target preset image whose similarity to the environmental image is not lower than the preset similarity threshold, and uses the target preset image alone as the reference image to evaluate the model robustness of the environment recognition model, instead of using the environmental image, reducing the amount of data to analyze and improving the efficiency of the robustness evaluation.
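The similarity-based selection of reference images in Way 1 can be sketched as follows; the histogram-overlap similarity metric, the function names, and the threshold are stand-ins for whatever an implementation would actually use.

```python
from collections import Counter

# Sketch of "Way 1": keep preset images whose similarity to the environment
# image is not lower than a preset threshold, then evaluate on the combined
# reference set. Images here are just lists of pixel values for simplicity.


def similarity(img_a, img_b):
    """Illustrative metric: overlap of pixel-value histograms, in [0, 1]."""
    ha, hb = Counter(img_a), Counter(img_b)
    overlap = sum(min(ha[v], hb[v]) for v in ha)
    return overlap / max(len(img_a), len(img_b))


def select_reference_images(env_image, preset_library, threshold=0.7):
    targets = [p for p in preset_library if similarity(env_image, p) >= threshold]
    return [env_image] + targets  # Way 1: environment image plus matches


env = [0, 0, 1, 2, 3]
library = [[0, 0, 1, 2, 4], [9, 9, 9, 9, 9]]
refs = select_reference_images(env, library)  # keeps env plus the first preset
```

Ways 2 and 3 would simply return `[env_image]` or `targets` alone, trading sample count against storage and lookup cost.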
  • In one possible design, using the reference image to evaluate the model robustness of the environment recognition model includes: the vehicle driving device first determines the target environmental noise type present in the current passing area and generates several pieces of noise information conforming to that type, each piece corresponding to a different noise level; it then repeats the following operation until every piece of noise information has been traversed, obtaining a disturbance image for each: select a piece of noise information that has not yet been traversed and add it to the reference image to obtain the corresponding disturbance image. Afterwards, the vehicle driving device recognizes the reference image and each disturbance image with the environment recognition model, determines which disturbance images the model can accurately recognize, and then determines the maximum noise level among those recognized disturbance images.
  • In this way, the model's ability to recognize the disturbance images accurately characterizes how well the environment recognition model adapts to noise variation in the current environment, which helps improve the accuracy of the evaluation result.
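The noise-level traversal described above, together with the preset-noise-level comparison discussed next, can be sketched like this; the toy model, `add_noise`, the level values, and the preset threshold are purely illustrative assumptions.

```python
# Sketch of the traversal: perturb the reference image at each noise level,
# check which perturbed images the model still recognizes correctly, and
# take the maximum recognized level.


def max_recognized_noise_level(reference, model, add_noise, levels):
    truth = model(reference)              # recognition result on the clean image
    recognized = []
    for level in levels:                  # traverse every noise level exactly once
        disturbed = add_noise(reference, level)
        if model(disturbed) == truth:     # accurately recognized?
            recognized.append(level)
    return max(recognized) if recognized else None


# Toy model that "recognizes" only while the perturbed value stays under 3.
result = max_recognized_noise_level(
    reference=0,
    model=lambda img: "lane" if img < 3 else "unknown",
    add_noise=lambda img, lvl: img + lvl,
    levels=[1, 2, 3, 4],
)
# result is the largest still-recognized level, here 2

PRESET_NOISE_LEVEL = 3                    # assumed required level
robust = result is not None and result >= PRESET_NOISE_LEVEL  # False here
```

When `robust` is false, the design that follows would trigger an adjustment of the current driving strategy.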
  • In one possible design, the vehicle driving device may determine that the model robustness of the environment recognition model is lower than the preset robustness threshold as follows: the vehicle driving device obtains a preset noise level and determines that the maximum noise level among the accurately recognized disturbance images is lower than the preset noise level.
  • The preset noise level indicates the maximum noise level of disturbance images that the environment recognition model is required to recognize accurately. By comparing the maximum noise level the currently used environment recognition model can recognize with the maximum noise level it needs to recognize, the current driving strategy can be adjusted in time when the model's adaptability to current environmental changes cannot meet requirements, effectively improving the accuracy with which the current driving strategy is implemented.
  • In one possible design, when there are at least two reference images, recognizing the reference images and the disturbance images with the environment recognition model and determining which disturbance images the model can accurately recognize includes: the vehicle driving device first determines, using the environment recognition model, the recognition accuracy of each disturbance image relative to its reference image; then, for each piece of noise information, it determines the recognition accuracy corresponding to that noise information from the recognition accuracies of the at least two disturbance images to which that noise information was added; if that accuracy is not lower than a preset accuracy threshold, the at least two disturbance images with that noise information added are determined to be disturbance images the environment recognition model can accurately recognize. By taking the noise information as the benchmark and judging the model's adaptability to each noise disturbance, this design helps find the maximum noise level the environment recognition model can tolerate.
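The per-noise-information accuracy aggregation can be sketched as follows; the accuracy threshold, toy model, and noise function are assumptions for illustration only.

```python
# Sketch of the multi-reference design: for each piece of noise information,
# average the recognition accuracy of the disturbed copies of all reference
# images; a noise level counts as "accurately recognized" only when that
# average clears a preset accuracy threshold.


def accurately_recognized_levels(references, model, add_noise, levels,
                                 accuracy_threshold=0.5):
    good_levels = []
    for level in levels:
        correct = sum(
            1 for ref in references
            if model(add_noise(ref, level)) == model(ref)
        )
        accuracy = correct / len(references)   # accuracy for this noise info
        if accuracy >= accuracy_threshold:
            good_levels.append(level)
    return good_levels


good = accurately_recognized_levels(
    references=[0, 1],
    model=lambda img: img < 3,          # toy "recognition"
    add_noise=lambda img, lvl: img + lvl,
    levels=[1, 2, 3],
)
```

Here level 1 is recognized on both references, level 2 on only one (accuracy 0.5, which still meets the assumed threshold), and level 3 on neither.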
  • In one possible design, the target environmental noise type may be obtained in any of the following ways: classifying the environmental image with a noise classification model; analyzing the weather forecast information for the current passing area; reading the dynamic layer of the navigation map; or requesting it from a server.
  • the vehicle driving device can adjust the current driving strategy by any of the following operations:
  • Operation 1: the vehicle driving device denoises the environmental image before inputting it into the environment recognition model. This improves the recognition effect by improving the quality of the image input to the environment recognition model, without replacing the model itself, and is relatively simple to implement.
  • Operation 2: the vehicle driving device adopts another environment recognition model, different from the one under the current driving strategy, to continue executing the current driving strategy. This allows flexible switching to another environment recognition model when the currently used one no longer suits the current environment, helping keep the vehicle driving smoothly and improving the owner's driving experience.
  • Operation 3: the vehicle driving device switches to a manual driving strategy. This can be realized by exiting the automatic driving mode and prompting the owner to take over driving, enabling flexible switching between the automatic and manual driving strategies.
  • Operation 4: the vehicle driving device takes emergency measures, such as emergency braking. This helps stop the vehicle in an emergency when the current environment is so bad that it is unsuitable for driving; the environment recognition model can be re-enabled once the environment improves, which helps protect the owner's driving safety.
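The four adjustment operations can be sketched as a simple dispatch; the operation names and context hooks are hypothetical, and the real policy logic for choosing among them is not specified by the application.

```python
# Sketch of the four adjustment operations as a dispatch table. Each hook in
# `context` stands in for real vehicle functionality.


def adjust_driving_strategy(operation, context):
    actions = {
        "denoise": lambda: context["denoise"](context["image"]),   # Operation 1
        "switch_model": lambda: context["use_backup_model"](),     # Operation 2
        "manual": lambda: context["prompt_takeover"](),            # Operation 3
        "emergency": lambda: context["emergency_brake"](),         # Operation 4
    }
    return actions[operation]()


log = []
ctx = {
    "image": "raw",
    "denoise": lambda img: log.append(("denoise", img)),
    "use_backup_model": lambda: log.append("backup"),
    "prompt_takeover": lambda: log.append("manual"),
    "emergency_brake": lambda: log.append("brake"),
}
adjust_driving_strategy("denoise", ctx)
adjust_driving_strategy("emergency", ctx)
```

A table keeps the four operations independent, so a policy layer can pick any of them without the others being affected.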
  • In a second aspect, the present application provides a vehicle driving device comprising a processor and a memory connected to each other, the memory storing a computer program which, when executed by the processor, causes the vehicle driving device to: obtain an environmental image captured by a vehicle of its current passing area, use the environmental image to evaluate the model robustness of the environment recognition model under the current driving strategy, and, when the model robustness is lower than a preset robustness threshold, adjust the current driving strategy.
  • The environment recognition model is used, under the current driving strategy, to perceive the surrounding environment in the current traffic area, and the perception result is used to guide the vehicle through the current traffic area.
  • In one possible design, when the computer program stored in the memory is executed by the processor, the vehicle driving device further executes: before using the environmental image to evaluate the model robustness of the environment recognition model under the current driving strategy, determining the target environmental noise type present in the current passing area and determining that the current passing area is under attack by noise corresponding to that noise type.
  • In one possible design, when the computer program stored in the memory is executed by the processor, the vehicle driving device specifically executes one of the following: using the environmental image as the reference image and evaluating the model robustness of the environment recognition model with it; or acquiring, from a preset image library, a target preset image whose similarity to the environmental image is not lower than a preset similarity threshold, using the target preset image and the environmental image together as reference images, and evaluating the model robustness of the environment recognition model with them; or acquiring such a target preset image from the preset image library and using it alone as the reference image to evaluate the model robustness of the environment recognition model.
  • In one possible design, when the computer program stored in the memory is executed by the processor, the vehicle driving device specifically executes: first determining the target environmental noise type present in the current passing area and generating several pieces of noise information conforming to that type, each corresponding to a different noise level; then repeating the following until every piece of noise information has been traversed, obtaining a disturbance image for each: selecting a piece of noise information that has not yet been traversed and adding it to the reference image to obtain the corresponding disturbance image; and then recognizing the reference image and each disturbance image with the environment recognition model, determining which disturbance images the model can accurately recognize, and determining the maximum noise level among those recognized disturbance images.
  • In one possible design, when the computer program stored in the memory is executed by the processor, the vehicle driving device specifically executes: obtaining a preset noise level and determining that the maximum noise level among the accurately recognized disturbance images is lower than the preset noise level. The preset noise level indicates the maximum noise level of disturbance images that the environment recognition model is required to recognize accurately.
  • In one possible design, when the computer program stored in the memory is executed by the processor, the vehicle driving device specifically executes: when there are at least two reference images, determining, according to the environment recognition model, the recognition accuracy of each disturbance image relative to its reference image, and, for each piece of noise information, determining the recognition accuracy corresponding to that noise information from the recognition accuracies of the at least two disturbance images to which that noise information was added; if that accuracy is not lower than a preset accuracy threshold, the at least two disturbance images with that noise information added are determined to be disturbance images the environment recognition model can accurately recognize.
  • In one possible design, the target environmental noise type may be obtained in any of the following ways: classifying the environmental image with a noise classification model; analyzing the weather forecast information for the current passing area; reading the dynamic layer of the navigation map; or requesting it from a server.
  • In one possible design, the vehicle driving device adjusts the current driving strategy by any of the following: denoising the environmental image before inputting it into the environment recognition model; adopting another environment recognition model, different from the one under the current driving strategy, to continue the current driving strategy; switching to a manual driving strategy; or taking emergency measures.
  • the vehicle driving device may be a vehicle or a server.
  • In a third aspect, the present application provides a vehicle driving device including modules/units for executing the method of any one of the designs of the first aspect.
  • The modules/units may be implemented by hardware, or by hardware executing corresponding software.
  • In a fourth aspect, the present application provides a vehicle driving device including a processor and a communication interface, where the communication interface is used to receive signals from communication devices other than the vehicle driving device and transmit them to the processor, or to send signals from the processor to such communication devices, and the processor implements the method of any one of the designs of the first aspect through a logic circuit or by executing code instructions.
  • In a fifth aspect, the present application provides a computer-readable storage medium storing a computer program which, when executed, implements the method of any one of the designs of the first aspect.
  • In a sixth aspect, the present application provides a chip including a processor and an interface; the processor reads instructions through the interface to execute the method of any one of the designs of the first aspect.
  • In a seventh aspect, the present application provides a computer program product comprising a computer program which, when run on a computer, causes the computer to execute the method of any one of the designs of the first aspect.
  • In an eighth aspect, the present application provides an Internet of Vehicles system including a vehicle and a server. The vehicle captures an environmental image of its current passing area and sends it to the server; the server executes the method of any one of the designs of the first aspect and sends a driving strategy adjustment instruction to the vehicle; and the vehicle adjusts the current driving strategy according to the instruction.
  • FIG. 1 is a schematic diagram of a possible system architecture to which an embodiment of the present application is applicable;
  • FIG. 2 exemplarily shows a schematic diagram of a model robustness evaluation method commonly used in the industry;
  • FIG. 3 exemplarily shows a schematic diagram of another model robustness evaluation method commonly used in the industry;
  • FIG. 4 exemplarily shows a schematic diagram of the hardware architecture of a vehicle provided by an embodiment of the present application;
  • FIG. 5 exemplarily shows a schematic flowchart of a vehicle driving method provided in Embodiment 1 of the present application;
  • FIG. 6 exemplarily shows a schematic flowchart of a vehicle driving method provided in Embodiment 2 of the present application;
  • FIG. 7 exemplarily shows a schematic flowchart of a robustness evaluation provided by an embodiment of the present application;
  • FIG. 8 exemplarily shows a schematic diagram of disturbance images with different weather types added, provided by an embodiment of the present application;
  • FIG. 9 exemplarily shows a schematic structural diagram of a vehicle driving device provided by an embodiment of the present application;
  • FIG. 10 exemplarily shows a schematic structural diagram of another vehicle driving device provided by an embodiment of the present application.
  • The vehicle driving solution in the embodiments of the present application can be applied to the Internet of Vehicles, such as vehicle-to-everything (V2X), long-term evolution-vehicle (LTE-V) communication, vehicle-to-vehicle (V2V) communication, etc.
  • The other devices include but are not limited to: vehicle-mounted terminals, vehicle-mounted controllers, vehicle-mounted modules, vehicle-mounted components, vehicle-mounted chips, vehicle-mounted units, and sensors such as vehicle-mounted radars or cameras, through which the vehicle driving method provided in this application can be implemented.
  • The vehicle driving solutions in the embodiments of the present application may also be used in intelligent terminals other than vehicles, or be disposed in such intelligent terminals or in their components.
  • the intelligent terminal may be other terminal equipment such as intelligent transportation equipment, smart home equipment, and robots.
  • These include, but are not limited to, a controller, a chip, a sensor such as a radar or a camera, and other components in the smart terminal.
  • The intelligent terminal can implement the vehicle driving method provided by the present application by connecting to the vehicle.
  • FIG. 1 is a schematic diagram of a possible system architecture to which the embodiments of the present application are applied.
  • the system architecture shown in FIG. 1 includes a server 110 and a vehicle 120 .
  • The server 110 may be an apparatus, device or chip with processing functions, such as a physical device (a host or a processor), a virtual device (a virtual machine or a container), or a chip or integrated circuit.
  • The server 110 can usually be an Internet of Vehicles server, also called a cloud server, cloud, or cloud controller. The Internet of Vehicles server can be a single server or a server cluster composed of multiple servers, which is not specifically limited.
  • Vehicle 120 may be any vehicle capable of autonomous driving, including but not limited to cars, vans, trucks, motorcycles, buses, boats, airplanes, helicopters, lawn mowers, recreational vehicles, playground vehicles, construction equipment, trams, golf carts, trains, carts, etc.
  • The vehicle 120 is also generally registered on the server 110 so as to obtain various services provided by the server 110, such as voice service, navigation service, flight inquiry service or voice broadcast service.
  • the embodiments of the present application do not limit the number of servers 110 and the number of vehicles 120 in the system architecture. Normally, one server 110 can be connected to multiple vehicles 120 at the same time (for example, as shown in FIG. 1 , it can be connected to three vehicles 120 at the same time).
  • the system architecture to which the embodiments of the present application are applicable may also include other devices, such as core network devices, wireless relay devices, and wireless backhaul devices, to which the embodiments of the present application also apply.
  • the server 110 in the embodiment of the present application may integrate all functions on an independent physical device, and may also deploy different functions on multiple independent physical devices, which are not limited in the embodiment of the present application.
  • When the vehicle supports the automatic driving function, the vehicle may store an automatic driving map. The automatic driving map may be sent to the vehicle by the server in advance and updated periodically, or may be constructed by the vehicle from data it collects itself; autonomous driving can then be performed using the locally stored automatic driving map.
  • Automatic driving can be completed by the automatic driving system (ADS) in the vehicle.
  • the whole process mainly includes the following four stages: the positioning stage, which means that the ADS calls the on-board camera to shoot the surrounding environment during the driving process of the vehicle.
  • The control stage refers to the ADS automatically controlling the vehicle to drive to the destination according to the target route determined in the decision stage and the specific traffic mode used when the vehicle travels at each location.
  • the environment recognition model in the embodiment of the present application is mainly applied to the perception stage of automatic driving, and is used to perceive the environment information related to vision, such as the surrounding environment image captured by the vehicle camera.
  • ADS will input the surrounding environment image captured by the vehicle camera into the environment recognition model, and obtain the output result of the environment recognition model.
  • The environment recognition model may include any one of the following models: a feature recognition model, also called a feature recognition algorithm, which extracts image features from the surrounding environment image and compares the extracted image features with the preset image features of target objects to determine the target objects and their positions in the surrounding environment;
  • A depth recognition model, also known as a classification recognition model, which is obtained by training an initial neural network on a large number of environmental images with pre-labeled objects and positions.
  • After the surrounding environment image is input, the target object and position output by the depth recognition model can be obtained directly.
  • the model robustness in this embodiment of the present application refers to the ability of the model to adapt to abnormal environments, in other words, the ability to adapt to changes in the environment.
  • Vehicles may face a variety of weather and environmental conditions while traveling, and it is impossible for the environment recognition model to traverse all environmental images during design and training.
  • For environmental images not covered in training, the recognition effect may fluctuate; the smaller the degree of fluctuation, the stronger the model robustness of the environment recognition model.
  • For example, suppose the environment recognition model is applied to automatic driving in the current rainy environment.
  • If a small change in rainfall changes the recognition results of the environment recognition model, the environment recognition model is less robust: even a very slight change in the current rainfall may cause errors in its recognition results.
  • Figure 2 exemplarily shows a schematic diagram of a model robustness evaluation method commonly used in the industry.
  • The method mainly relies on an existing set of specified noises to evaluate model robustness.
  • The specific implementation process includes: first, obtain some standard images with very little, even negligible, noise; then, scramble these standard images with the existing specified noise set to obtain, for each kind of noise, the attack images produced by scrambling the standard images, where the specified noise set may include, but is not limited to, Gaussian noise, Poisson noise, impulse noise, defocus noise, glass noise, motion noise, zoom noise, snow noise, frost noise, fog noise, brightness noise, contrast noise, elastic noise, pixelation noise, and image compression noise, as shown in Figure 2; finally, according to the recognition effect of the model to be evaluated on the standard images and on the attack images, determine the degree to which the specified noise set influences the recognition effect of the model to be evaluated, which can be used to characterize the model robustness of the model.
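The Figure 2 style of evaluation can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: the toy model, the corruption functions, and all names are invented, and real systems would use image classifiers and the corruption types listed above.

```python
# Sketch: score robustness as the accuracy drop between clean standard images
# and noise-corrupted copies, per corruption type. All names are illustrative.
import random

def add_gaussian_noise(image, sigma):
    rng = random.Random(0)  # fixed seed so the sketch is reproducible
    return [pixel + rng.gauss(0, sigma) for pixel in image]

def add_contrast_noise(image, factor):
    mean = sum(image) / len(image)
    return [mean + (p - mean) * factor for p in image]

def accuracy(model, images, labels):
    correct = sum(1 for img, lbl in zip(images, labels) if model(img) == lbl)
    return correct / len(labels)

def corruption_robustness(model, images, labels, corruptions):
    """Per-corruption accuracy drop; a smaller drop means a more robust model."""
    clean_acc = accuracy(model, images, labels)
    report = {}
    for name, corrupt in corruptions.items():
        noisy = [corrupt(img) for img in images]
        report[name] = clean_acc - accuracy(model, noisy, labels)
    return report

# Toy "model": classifies a 1-D image by its mean brightness.
model = lambda img: "bright" if sum(img) / len(img) > 0.5 else "dark"
images = [[0.9, 0.8, 0.95], [0.1, 0.2, 0.05]]
labels = ["bright", "dark"]
corruptions = {
    "gaussian": lambda im: add_gaussian_noise(im, 0.05),
    "contrast": lambda im: add_contrast_noise(im, 0.1),
}
report = corruption_robustness(model, images, labels, corruptions)
```

As the surrounding text notes, a report like this aggregates over a fixed noise set and fixed reference images, which is exactly the limitation the patent criticizes.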
  • The evaluation method shown in Figure 2 actually uses standard images obtained in advance to comprehensively evaluate a set of pre-specified noises, so the evaluation result cannot represent the robustness of the model to be evaluated under any single noise type, is not guaranteed to be valid for noise types outside the specified noise set, and does not apply to environments other than those represented by the standard images.
  • Moreover, this evaluation method is still performed offline; that is, when it is applied in the field of autonomous driving, the model robustness of the environment recognition model is evaluated first according to this method, and models with better robustness are then sent to the vehicle for use. Obviously, this kind of evaluation result cannot cover all environmental conditions that may occur during automatic driving, which is not conducive to the accurate implementation of vehicle automatic driving.
  • Figure 3 exemplarily shows a schematic diagram of another model robustness evaluation method commonly used in the industry. As shown in Figure 3, this evaluation method does not need a specified noise set, but uses a neural network to find the maximum error distance of the recognition results of the model to be evaluated, which serves as the maximum safe radius of the model to be evaluated.
  • The specific implementation process is as follows: first obtain a specified model to be evaluated and a standard image, and use the model to be evaluated to identify the standard image to obtain its label ("label 1" in Figure 3); then continuously add disturbances to the standard image in order of noise from small to large, and use the model to be evaluated to identify the label of the attack image after each disturbance. As the noise increases, the identified label gradually moves away from the label of the standard image, until an added noise disturbance causes the model to be evaluated to identify the attack image as one of the remaining labels ("label 2" or "label 3" in Figure 3), thereby finding the decision boundary of the model to be evaluated under that label. The minimum of the noise disturbance amplitudes corresponding to the decision boundaries of the other labels is taken as the maximum safe radius of the model to be evaluated (as shown in Figure 3, the disturbance amplitude R2 corresponding to "label 2" is significantly smaller than the disturbance amplitude R3 corresponding to "label 3", so the maximum safe radius is R2). That is to say, as long as the noise disturbance added to the standard image is within the maximum safe radius, the model to be evaluated will produce the same recognition result as for the standard image.
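The search described above (grow the disturbance until the label flips, then take the smallest flipping amplitude over the label boundaries) can be sketched as follows. The threshold model, the perturbation directions, and the step size are all invented for illustration; a real attack search would operate on images and a neural classifier.

```python
# Sketch of the Figure-3-style maximum-safe-radius search: increase a
# perturbation amplitude until the model's label changes, per direction,
# then take the minimum flipping amplitude as the maximum safe radius.

def minimal_flip_amplitude(model, image, direction, step=0.01, max_amp=10.0):
    """Smallest amplitude (in `step` increments) along `direction` that changes
    the model's label, or None if no flip is found within max_amp."""
    base = model(image)
    amp = step
    while amp <= max_amp:
        perturbed = [p + amp * d for p, d in zip(image, direction)]
        if model(perturbed) != base:
            return amp
        amp += step
    return None  # the search failed: no safe radius can be reported

def maximum_safe_radius(model, image, directions, step=0.01):
    amps = [minimal_flip_amplitude(model, image, d, step) for d in directions]
    found = [a for a in amps if a is not None]
    return min(found) if found else None

# Toy model: thresholds the first pixel at 0.5 ("label1" below, "label2" above).
model = lambda img: "label1" if img[0] < 0.5 else "label2"
image = [0.30, 0.70]
# Two perturbation "directions" standing in for the label-2 / label-3 boundaries.
radius = maximum_safe_radius(model, image, directions=[[1.0, 0.0], [0.5, 0.0]])
```

The `return None` branch mirrors the drawback discussed next: the search may simply fail to produce a maximum safe radius within its budget.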
  • However, this evaluation method may fail to determine a maximum safe radius at all. Therefore, when this evaluation method is applied to the field of automatic driving, not only is the evaluation efficiency of the environment recognition model low, but the vehicle may also be unable to obtain an environment recognition model for a long time because the maximum safe radius cannot be determined, which is not conducive to implementing the automatic driving strategy. In addition, this evaluation method also uses a specified environmental image to comprehensively measure various noises in order to find an optimal maximum safe radius, so it cannot solve the technical problems of the evaluation method in Figure 2 above.
  • In view of this, the present application provides a vehicle driving method, which uses the environmental images captured by the vehicle in the current passing area to evaluate the model robustness of the environment recognition model adopted by the current driving strategy, so as to determine whether the environment recognition model adopted by the current driving strategy is sufficient to deal with the current driving environment, thereby helping the automatic driving system decide whether to adjust the current driving strategy.
  • vehicle driving method in the embodiment of the present application may be executed on the vehicle side or the server side.
  • The following takes the vehicle driving method executed on the vehicle side as an example.
  • FIG. 4 exemplarily shows a schematic diagram of a hardware architecture of a vehicle provided by an embodiment of the present application.
  • The vehicle 400 is only an example; the vehicle 400 may have more or fewer components than those shown in the figures, may combine two or more components, or may have a different component configuration.
  • the various components shown in the figures may be implemented in hardware, software, or a combination of hardware and software, including one or more signal processing and/or application specific integrated circuits.
  • the vehicle 400 may include a processor 410, a memory 420, a transceiver 430, at least one camera 440, at least one display screen 450, and the like.
  • The processor 410 may include one or more processing units, for example, an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices, or may be integrated in one or more processors.
  • the processor 410 may execute the vehicle driving method provided by the embodiment of the present application, for example, in response to the current environment, take corresponding measures when the environment recognition model is no longer suitable for the current environment, such as giving a reminder on the display screen 450 .
  • When the processor 410 integrates different devices, such as a CPU and a GPU, the CPU and the GPU may cooperate to execute the vehicle driving method provided by the embodiments of the present application. For example, some algorithms in the vehicle driving method are executed by the CPU and the others by the GPU, in order to obtain faster processing efficiency.
  • memory 420 may contain instructions (eg, program logic) executable by processor 410 to perform various functions of vehicle 400 , including the vehicle driving functions described above. Memory 420 may also contain additional instructions, including instructions to send data to, receive data from, interact with, and/or control one or more of the vehicle's propulsion system, sensor system, control system, and peripherals. In addition to instructions, memory 420 may also store data such as autonomous driving maps, route information, vehicle location, direction, speed, and other such vehicle data, among other information. Such information may be used by processor 410 or other components in vehicle 400 during operation of vehicle 400 in an autonomous driving mode, a semi-autonomous driving mode, and/or a manual driving mode.
  • the transceiver 430 may be a transmitting unit or a transmitter when sending information, and a receiving unit or a receiver when receiving information, and the transceiver, transmitter or receiver may be a radio frequency circuit.
  • transceiver 430 may also be an input and/or output interface, pin or circuit, etc., for providing information to or receiving information from a user of vehicle 400 .
  • The transceiver 430 may include one or more input/output devices within the set of peripheral devices of the vehicle 400, such as a wireless communication system, a touch screen, a microphone, and a speaker, and based on the transceiver 430 the processor 410 may control the functions of the vehicle 400 according to inputs received from various subsystems (e.g., the propulsion system, sensor system, and control system) and from the user interface.
  • the camera 440 may include various types of sensors for sensing the surrounding environment, such as camera sensors, radar sensors, and the like.
  • the camera sensor may include any camera used to obtain an image of the environment in which the vehicle 400 is located, such as a static camera sensor, a dynamic camera sensor, an infrared camera sensor, or a visible light camera sensor, and the like.
  • the radar sensors may include Long Range Radar (LRR), Middle Range Radar (MRR), Short Range Radar (SRR), and the like.
  • the display screen 450 is used to display images, videos, and the like.
  • the display screen 450 includes a display panel.
  • The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, or a quantum dot light-emitting diode (QLED), etc.
  • The display screen 450 may be an integrated flexible display screen, or a spliced display screen composed of two rigid screens and a flexible screen located between the two rigid screens.
  • the processor 410 may prompt the user to switch the driving strategy on the display screen 450 when it is determined that the current driving strategy is not suitable for the current environment.
  • the vehicle 400 may further include a positioning device, a transmission device, a braking device, a steering unit, and the like, which will not be repeated here.
  • the current driving strategy may be an automatic driving strategy, or may be other driving strategies that need to use the environment recognition model to perceive the surrounding environment, which is not specifically limited.
  • the memory 420 may also store model robustness evaluation results of the environment recognition model under various environment types.
  • When the processor 410 obtains the environment image of the current passing area from the camera 440, it may first judge the target environment type to which the environment image belongs, and then directly obtain from the local memory 420 the model robustness evaluation result of the environment recognition model under the target environment type.
  • The evaluation result of the environment recognition model under any environment type can be represented by the maximum noise level that the environment recognition model can accurately identify under that environment type, and the server can evaluate the environment recognition model in advance using a large number of images belonging to that environment type.
  • Images under the same environment type may include images with the same road type and the same environmental noise type whose noise information does not differ greatly.
  • These images can be obtained from a network query, by entrusting a special collection vehicle to record the real road environment, or through interaction with a third-party device, which is not specifically limited. This embodiment can directly acquire the pre-stored robustness evaluation results without real-time evaluation, which helps to improve the processing efficiency of vehicle driving.
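The lookup described above can be sketched as a simple table keyed by environment type. The table contents, key structure, and level numbers below are invented placeholders; in the embodiment the server would populate such results in advance and the vehicle would read them from the memory 420.

```python
# Sketch: look up a pre-stored robustness result by environment type instead of
# evaluating in real time. Table entries are invented for illustration.

# Hypothetical server-computed table:
# (road type, weather type) -> maximum noise level still identified accurately.
ROBUSTNESS_TABLE = {
    ("urban_road", "rain"): 3,   # tolerates up to level-3 rain noise
    ("urban_road", "fog"): 2,
    ("expressway", "snow"): 1,
}

def lookup_robustness(road_type, weather_type):
    """Return the stored maximum tolerable noise level, or None if unevaluated."""
    return ROBUSTNESS_TABLE.get((road_type, weather_type))

def model_is_robust_enough(road_type, weather_type, current_noise_level):
    max_level = lookup_robustness(road_type, weather_type)
    if max_level is None:
        return False  # no pre-stored result: fall back to real-time evaluation
    return current_noise_level <= max_level
```

A miss in the table (the `None` branch) is where the real-time evaluation of the other embodiments would take over.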
  • The processor 410 may end the robustness evaluation of the current cycle, wait for the timer to reach a preset duration, and then capture a new environment image to start the robustness evaluation of the next cycle.
  • In step 504, the vehicle adjusts the current driving strategy.
  • the processor 410 adds an image processing module to the automatic driving strategy, uses the image processing module to de-noise the surrounding environment image, and then inputs the de-noised surrounding environment image into the environment recognition model for recognition.
  • the image processing module may exist in the form of software or hardware.
  • the image processing module may specifically refer to a piece of program code stored in the memory 420, and the processor 410 implements the addition operation to the image processing module by calling the program code.
  • the image processing module can be connected to the camera 440 and the memory 420 respectively.
  • When the image processing module is not enabled, the processor 410 directly inputs the surrounding environment image captured by the camera 440 into the environment recognition model stored in the memory 420.
  • When the image processing module is enabled, the processor 410 first sends the surrounding environment image captured by the camera 440 to the image processing module for denoising, and then inputs the denoised surrounding environment image into the environment recognition model stored in the memory 420.
  • This operation relatively improves the recognition effect of the environment recognition model by improving the image quality of the environment image input to it, without disabling the environment recognition model.
  • the operation is relatively simple and easy to implement.
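Operation one, the optional denoising stage between the camera and the environment recognition model, can be sketched as follows. The mean filter, the toy recognizer, and the class names are invented stand-ins; the embodiment does not specify a particular denoising algorithm.

```python
# Sketch of operation one: an image processing module that, when enabled,
# denoises the camera image before it reaches the environment recognition model.

def mean_filter(image, k=1):
    """Simple 1-D neighborhood mean as a stand-in denoiser."""
    n = len(image)
    out = []
    for i in range(n):
        window = image[max(0, i - k): min(n, i + k + 1)]
        out.append(sum(window) / len(window))
    return out

class DrivingPipeline:
    def __init__(self, recognize, denoise=None):
        self.recognize = recognize      # the environment recognition model
        self.denoise = denoise          # the image processing module (optional)
        self.denoise_enabled = False

    def enable_image_processing(self):
        self.denoise_enabled = True

    def process(self, camera_image):
        image = camera_image
        if self.denoise_enabled and self.denoise is not None:
            image = self.denoise(image)  # clean the image before recognition
        return self.recognize(image)

recognize = lambda img: "obstacle" if max(img) > 0.8 else "clear"
pipeline = DrivingPipeline(recognize, denoise=mean_filter)
noisy = [0.1, 0.9, 0.1, 0.1]          # a single noise spike above the threshold
before = pipeline.process(noisy)       # module disabled: spike reaches the model
pipeline.enable_image_processing()
after = pipeline.process(noisy)        # module enabled: spike is averaged away
```

The point of the sketch matches the text: the recognition model itself is untouched; only the quality of its input changes.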
  • In operation two, the processor 410 adopts another environment recognition model, different from the one under the current automatic driving strategy, to complete the current automatic driving strategy, for example, enabling another environment recognition model in the automatic driving strategy that is suitable for the current environment.
  • the memory 420 may also store a plurality of environment identification models locally, and each environment identification model may have a corresponding applicable type label, and the applicable type label is used to indicate the applicable road type, weather type and noise value.
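Selecting a locally stored model by its applicable-type label can be sketched as follows. The library entries, label fields, and model names are invented placeholders for the road type, weather type, and noise value the text says each label indicates.

```python
# Sketch of operation two: pick the stored environment recognition model whose
# applicable-type label covers the current conditions. Entries are illustrative.

# Each stored model carries a label: (road type, weather type, max noise value).
MODEL_LIBRARY = [
    {"name": "model_clear", "road": "urban_road", "weather": "clear", "max_noise": 0},
    {"name": "model_rain",  "road": "urban_road", "weather": "rain",  "max_noise": 5},
    {"name": "model_fog",   "road": "expressway", "weather": "fog",   "max_noise": 3},
]

def select_model(road_type, weather_type, noise_value):
    """Return the first model whose applicable-type label covers the current
    road type, weather type, and noise value, or None if none is suitable."""
    for entry in MODEL_LIBRARY:
        if (entry["road"] == road_type
                and entry["weather"] == weather_type
                and noise_value <= entry["max_noise"]):
            return entry["name"]
    return None
```

When `select_model` returns None, none of the stored models suits the current environment, and the vehicle would fall back to operations three or four (manual driving or emergency measures).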
  • the processor 410 switches to the manual driving strategy.
  • Specifically, the processor 410 can stop the automatic driving strategy, that is, exit the automatic driving mode, and prompt the owner through the display screen 450 to take over the driving operation so as to switch to the manual driving mode, thereby realizing flexible switching between the automatic driving strategy and the manual driving strategy.
  • the processor 410 takes emergency measures, such as emergency braking.
  • For example, the processor 410 can directly brake and stop; when the environment improves and the user restarts the vehicle, the environment recognition model is re-enabled for automatic driving, so as to improve the driving safety of the owner.
  • It can be seen that operation one relatively improves the recognition effect of the environment recognition model by improving the image quality of the surrounding environment image input to it, without deactivating the environment recognition model, while operations two to four disable the environment recognition model directly.
  • The above content only introduces, by way of example, several possible ways to adjust the current driving strategy. Any solution that can improve the adaptability of the current driving strategy to the current environment by adjusting the current driving strategy is within the scope of protection of the present application, and they will not be listed one by one here.
  • In the above solution, since the vehicle adjusts its current driving strategy whenever the robustness of the environment recognition model fails to meet the requirements, as long as the environment recognition model under the current driving strategy remains in use, its robustness meets the requirements; that is, the recognition result of the environment recognition model for environmental images in the current environment is considered to still be accurate.
  • In addition, by using the current environment image as the reference image instead of the standard image used in the prior art, the real weather conditions of the current environment can be referenced to find the maximum disturbance that the environment recognition model can adapt to in the current environment, which accurately reflects the real robustness of the environment recognition model in the current environment.
  • the solution in the above-mentioned first embodiment is equivalent to setting up a complete robustness evaluation process during the passing process of the vehicle, and evaluating the robustness of the environment recognition model in a targeted manner by using the environment images during the passing process of the vehicle.
  • the robustness evaluation results can accurately represent the applicability of the environment recognition model to the current traffic environment.
  • the robustness of the model evaluated in this way is more accurate, which helps to provide a reference for the accurate implementation of driving strategies.
  • In addition, by adjusting the current driving strategy in time when it is determined that the environment recognition model in the current driving strategy is not suitable for the current environment, the vehicle can also try to use a driving strategy suitable for the current environment to guide its passing, effectively improving the accuracy of implementing the current driving strategy.
  • FIG. 6 exemplarily shows a schematic flowchart of a vehicle driving method provided in Embodiment 2 of the present application, and the method is applicable to various vehicles, such as the vehicle 400 shown in FIG. 4 .
  • the process includes the following steps:
  • In step 601, the vehicle acquires an environmental image obtained by photographing the current passing area.
  • In step 602, the vehicle determines the target weather type existing in the current passing area.
  • the weather types may include, but are not limited to, rain, snow, fog, frost, and the like.
  • the processor 410 may determine the weather type existing in the current passing area in any of the following ways:
  • a weather classification model may also be stored in the memory 420, and the processor 410 classifies the weather type to which the environment image belongs by inputting the environment image into the weather classification model stored in the memory 420.
  • the meteorological classification model can be trained according to a large number of environmental images marked with meteorological types in advance.
  • the meteorological classification model has certain robustness, that is, it can accurately identify the correct meteorological type under certain abnormal shooting conditions.
  • The processor 410 analyzes the weather forecast information of the current passing area to obtain the weather type. For example, the processor 410 obtains the weather forecast information of the passing area in advance before departure to estimate the weather types that may occur, or obtains the weather forecast information of the current passing area in real time or periodically while driving, so as to analyze the current weather type in real time.
  • The weather forecast information may be obtained by the processor 410 through the transceiver 430 by requesting it from the server side, by parsing the broadcast information listened to by the voice module in the vehicle 400, or through interaction between the transceiver 430 and a third-party device.
  • the third-party device may, for example, include other vehicles in the same area as the vehicle, or a roadside unit that the vehicle currently passes through, and so on.
  • the memory 420 may also store a navigation map.
  • the processor 410 parses the dynamic layer of the navigation map to learn the weather type of the current passing area.
  • the navigation map may specifically refer to an automatic driving map, which is usually delivered by the server to the transceiver 430 in the vehicle 400, and then stored in the memory 420, and can also be updated in real time.
  • The navigation map includes static layers and dynamic layers: the static layer is a mapping of the real road, including roads, lanes, lane lines, etc.; the dynamic layer is a mapping of the road environment, including the current weather conditions on each road, such as the visibility value, weather type, and noise value under the weather type. For example, when the weather type is rain, the noise value under the weather type is used to indicate the rainfall, such as indicative information like light rain, moderate rain, or heavy rain, or a specific rainfall value, which is not limited.
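The two-layer map just described can be sketched as a small data structure. The schema, field names, and road identifiers below are invented purely to illustrate reading a weather type and noise value from the dynamic layer; the actual automatic driving map format is not specified here.

```python
# Sketch of the navigation map's layers: static road geometry plus a dynamic
# layer of per-road weather conditions. The schema is invented for illustration.

navigation_map = {
    "static_layer": {
        "road_42": {"lanes": 3, "lane_lines": ["solid", "dashed"]},
    },
    "dynamic_layer": {
        "road_42": {"visibility_m": 800, "weather": "rain", "noise_value": "moderate"},
    },
}

def weather_for_road(nav_map, road_id):
    """Read the current weather type and its noise value for one road,
    or None when the dynamic layer has no entry for that road."""
    entry = nav_map["dynamic_layer"].get(road_id)
    if entry is None:
        return None
    return entry["weather"], entry["noise_value"]
```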
  • the processor 410 requests the server for the weather type of the current passing area through the transceiver 430 .
  • the server can obtain the weather type of the current passing area of the vehicle through a network or interact with a third-party device, and return it to the transceiver 430 .
  • The processor 410 may select, from the multiple weather types, the weather type with the most severe weather environment as the target weather type, disregarding the other weather types, so as to save resources by carrying out the robustness evaluation using the most severe weather environment. For example, when rain, snow, and fog exist at the same time, if the rain level is light rain, the snow level is light snow, and the fog level is medium fog, the foggy weather environment is obviously more severe than the rainy and snowy weather conditions, so the processor 410 may take fog as the target weather type. Of course, in other solutions, the processor 410 may also use multiple weather types as target weather types, so as to use the comprehensive weather environment for the robustness evaluation.
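Picking the most severe of several coexisting weather types can be sketched as follows. The numeric severity ordering is an invented stand-in for however the embodiment ranks weather environments.

```python
# Sketch: choose the harshest coexisting weather as the target weather type.
# The severity scale is an illustrative ordering, not part of the embodiment.

SEVERITY = {"light": 1, "medium": 2, "heavy": 3}

def pick_target_weather(observed):
    """observed: dict mapping weather type -> level name.
    Returns the weather type with the most severe level."""
    return max(observed, key=lambda w: SEVERITY[observed[w]])

# Light rain + light snow + medium fog -> fog is the target weather type,
# matching the example in the text above.
target = pick_target_weather({"rain": "light", "snow": "light", "fog": "medium"})
```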
  • In step 603, the vehicle determines whether the current passing area is attacked by noise corresponding to the target weather type; if yes, it executes step 604, and if not, it returns to step 601.
  • Critical noise levels corresponding to various weather types may also be stored in the memory 420, and the critical noise level corresponding to any weather type is used to indicate the minimum noise level required to detect the robustness of the model under that weather type. Taking rain as an example, the critical noise level of rain is related to the minimum rainfall that will affect the recognition results of the environment recognition model, which may be indicated as light, moderate, or heavy rain, etc.
  • The processor 410 may also obtain the current noise level of the current passing area under the target weather type, and obtain from the memory 420 the critical noise level corresponding to the target weather type.
  • If the current noise level is greater than or equal to the critical noise level, the weather features in the current passing area are very obvious (such as heavy rain, heavy snow, or heavy fog), and the quality of images captured under such features is strongly disturbed by the current environment. In this case, the current passing area can be considered to be attacked by noise corresponding to the target weather type, and the processor 410 needs to evaluate the model robustness of the environment recognition model in the current environment. Conversely, if the current noise level is lower than the critical noise level, the weather features in the current passing area are relatively slight (for example, drizzle), the quality of the captured images is less disturbed by the current environment, and the recognition effect of the environment recognition model will not be affected. In this case, the current passing area can be considered not to be attacked by noise corresponding to the target weather type, and the processor 410 does not need to waste additional resources detecting the model robustness of the environment recognition model.
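The gate in step 603 reduces to a threshold comparison, sketched below. The numeric levels (e.g. 1 for light rain, 3 for heavy rain) and the table contents are invented placeholders for the critical noise levels stored in the memory 420.

```python
# Sketch of step 603's gate: only evaluate robustness when the current noise
# level reaches the stored critical level for the target weather type.
# Level numbers and table entries are illustrative placeholders.

CRITICAL_NOISE_LEVEL = {"rain": 2, "snow": 2, "fog": 1, "frost": 1}

def is_under_noise_attack(weather_type, current_noise_level):
    """True when the weather is strong enough to warrant a robustness check."""
    critical = CRITICAL_NOISE_LEVEL.get(weather_type)
    if critical is None:
        return False  # unknown weather type: skip the evaluation
    return current_noise_level >= critical

# Drizzle (level-1 rain) stays below the critical level; heavy rain does not.
```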
  • the weather classification model stored in the memory 420 may also be obtained by training using the environmental images marked with the weather type and noise value at the same time. After the current environment image is input into the weather classification model stored in the memory 420, the weather classification model will simultaneously output the weather type and the corresponding noise level.
  • The memory 420 may also store a noise identification model corresponding to each weather type. The processor 410 first inputs the environmental image into the weather classification model stored in the memory 420 to obtain the target weather type, and then inputs the environmental image into the noise identification model corresponding to the target weather type stored in the memory 420 to obtain the noise level.
  • In another possible implementation, the processor 410 can obtain the weather type and the noise level at the same time by parsing the weather forecast information obtained by the transceiver 430 or the voice module, by querying the dynamic layer, or by requesting the server through the transceiver 430; alternatively, it can first obtain the weather type in one of these ways, and then obtain the noise value from the information related to that weather type in the weather forecast information, from the map elements related to that weather type in the dynamic layer, or by a further request to the server through the transceiver 430, which is not specifically limited.
  • Step 603 is an optional step. It is equivalent to adding an execution condition to the operation of evaluating the environment recognition model: the environment recognition model is evaluated only when the current passing area is determined to be under attack, so that no unnecessary resources are wasted when it is not under attack.
  • the processor 410 can also directly evaluate the robustness in a periodic manner without analyzing whether it is attacked before the evaluation.
  • In step 604, the vehicle takes the environmental image as a reference image, and scrambles the reference image with the noise information corresponding to each noise level under the target weather type to obtain the corresponding disturbance images.
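Step 604 can be sketched as follows: one scrambled copy of the reference image per noise level under the target weather type. The additive "rain" pattern is a toy stand-in for the real noise information of each level.

```python
# Sketch of step 604: perturb the reference image with the noise information of
# each noise level, producing one disturbance image per level. The "rain"
# disturbance below is an invented toy pattern, not the embodiment's noise model.

def rain_noise(image, level):
    """Toy rain disturbance: brighten every third pixel in proportion to level."""
    return [min(1.0, p + 0.1 * level) if i % 3 == 0 else p
            for i, p in enumerate(image)]

def disturbance_images(reference, noise_fn, levels):
    """One scrambled copy of the reference image per noise level."""
    return {level: noise_fn(reference, level) for level in levels}

reference = [0.2, 0.4, 0.6, 0.2, 0.4, 0.6]
images = disturbance_images(reference, rain_noise, levels=[1, 2, 3])
```

Each disturbance image would then be fed to the environment recognition model to find the highest level it still recognizes correctly.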
  • the memory 420 may further store a preset image library, and the preset image library stores multiple preset images.
  • The processor 410 can also find, from the preset image library stored in the memory 420, a target preset image whose similarity with the environment image is not less than a preset similarity threshold, and use the target preset image as a reference image as well.
  • the specific implementation process can refer to the following steps:
  • In step one, the processor 410 determines the road type of the current passing area and the current noise level of the target weather type.
  • the road types include but are not limited to: expressways, highways, urban roads, factory and mine roads, forest roads, rural roads, ramps or intersections, and the like.
  • For example, if the camera 440 captures 5 environmental images, the processor 410 may arbitrarily select one of them for identification to obtain the road type and the current noise level.
  • In step two, the processor 410 obtains the preset image library from the memory 420, queries the preset image library according to the road type of the current passing area and the current noise level of the target weather type, and finds from it the target preset images whose road type is the same and whose noise level under the target weather type differs little from the current noise level.
  • the preset images in the preset image library may be stored in partitions according to road types.
  • The processor 410 may first find the partition corresponding to the road type from the preset image library in the memory 420 according to the road type of the current passing area, then determine the weather type corresponding to each preset image in the partition and the noise level under that weather type, and then find in the partition the images whose corresponding weather type is the same as the target weather type of the current passing area and whose noise level under that weather type differs from the current noise level of the target weather type by less than a preset difference threshold; that portion of images are the target preset images.
  • the weather type corresponding to a preset image and the noise level under that weather type may be inherent attributes pre-labeled on the preset image, so that the processor 410 can quickly determine whether each preset image in each partition of the memory 420 meets the requirements by directly reading these inherent attributes; alternatively, the processor 410 may obtain them by inputting each preset image in each partition of the memory 420 into the meteorological classification model in the memory 420, so as to save the storage resources occupied by the preset image library.
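  • The Step 2 query above can be sketched as follows. This is an illustrative Python fragment only: the patent does not prescribe a data structure for the preset image library, so the `PresetImage` record, the dictionary-of-partitions layout, and the 0.1 difference threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class PresetImage:
    path: str          # location of the stored image (hypothetical field)
    weather_type: str  # pre-labeled inherent attribute, e.g. "fog"
    noise_level: float # pre-labeled noise level under that weather type

def query_target_presets(library: dict, road_type: str,
                         target_weather: str, current_level: float,
                         diff_threshold: float = 0.1) -> list:
    """Find preset images in the partition for this road type whose weather
    type matches the target weather type and whose noise level differs from
    the current level by less than the preset difference threshold."""
    partition = library.get(road_type, [])   # sub-area for this road type
    return [p for p in partition
            if p.weather_type == target_weather
            and abs(p.noise_level - current_level) < diff_threshold]
```

  • For example, with an "urban" partition holding fog presets at levels 0.32 and 0.60 and a rain preset at 0.30, a query for fog at current level 0.3 returns only the 0.32 preset.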
  • Step 3: the processor 410 uses the environment image and the target preset image together as reference images. For example, if the processor 410 obtains 5 preset images that meet the requirements from the preset image library of the memory 420, the processor 410 can use the 5 environment images captured by the camera 440 together with the 5 preset images as reference images, obtaining 10 reference images in total.
  • evaluating the model robustness of the environment recognition model with target preset images that share the same road conditions and weather conditions as the environment image can effectively increase the number of evaluation samples and helps improve the reliability of the evaluation result.
  • the processor 410 can also directly use the environment image captured by the camera 440 as the reference image, neither spending the resources of the memory 420 on storing preset images nor spending the resources of the processor 410 on querying target preset images, thereby improving the efficiency of the robustness evaluation.
  • the processor 410 may also only use the target preset image similar to the environment image as the reference image, instead of using the environment image, so as to reduce the amount of data to be analyzed and improve the efficiency of robustness evaluation.
  • the processor 410 may first generate N pieces of noise information corresponding to the N noise levels under the target weather type, then traverse the N pieces of noise information in sequence, adding each piece of noise information to the reference image to obtain a disturbed image corresponding to that reference image. In this way, the processor 410 obtains N disturbed images for each reference image.
  • N is an integer greater than or equal to 2.
  • the N disturbed images corresponding to any reference image are obtained by applying N attacks of different degrees to the reference image; as the applied noise level increases, the degree to which the reference image is attacked by the target weather type also increases, and the image quality of the disturbed image correspondingly worsens.
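  • The generation of the N disturbed images can be sketched as follows. The patent does not specify how the noise information is synthesized, so this illustrative fragment models "fog" as a simple blend toward a white layer; the blend formula and the 9 levels 0.1–0.9 are assumptions for demonstration.

```python
import numpy as np

def add_fog(reference: np.ndarray, level: float) -> np.ndarray:
    """Blend the reference image toward a white 'fog' layer; level in [0, 1]
    plays the role of the noise level (0 = no disturbance)."""
    fog_layer = np.full_like(reference, 255.0)
    return (1.0 - level) * reference + level * fog_layer

def make_disturbed_images(reference: np.ndarray, n_levels: int = 9) -> dict:
    """Traverse N noise levels (0.1 ... 0.9) in sequence and add each piece
    of noise information to the reference image, returning one disturbed
    image per noise level."""
    levels = [round(0.1 * k, 1) for k in range(1, n_levels + 1)]
    return {lvl: add_fog(reference, lvl) for lvl in levels}
```

  • As the level grows, the disturbed image drifts further from the reference image, matching the observation that image quality worsens as the attack strengthens.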
  • FIG. 7 exemplarily shows a schematic flow chart of a robustness evaluation provided by an embodiment of the present application.
  • assuming that the target weather type of the current passing area of the vehicle is “fog”:
  • Fig. 7(A) shows the original reference image without “fog”
  • Fig. 7(B) shows the disturbed image obtained after adding “fog” with a noise level of 0.1 to the reference image
  • Fig. 7(C) shows the disturbed image obtained after adding “fog” with a noise level of 0.2 to the reference image
  • Fig. 7(D) shows the disturbed image obtained after adding “fog” with a noise level of 0.3 to the reference image
  • Fig. 7(E) shows the disturbed image obtained after adding “fog” with a noise level of 0.4 to the reference image
  • Fig. 7(F) shows the disturbed image obtained after adding “fog” with a noise level of 0.5 to the reference image
  • Fig. 7(G) shows the disturbed image obtained after adding “fog” with a noise level of 0.6 to the reference image
  • Fig. 7(H) shows the disturbed image obtained after adding “fog” with a noise level of 0.7 to the reference image
  • Fig. 7(I) shows the disturbed image obtained after adding “fog” with a noise level of 0.8 to the reference image
  • Fig. 7(J) shows the disturbed image obtained after adding “fog” with a noise level of 0.9 to the reference image.
  • FIG. 7 is introduced only by taking the addition of a disturbance of the meteorological type “fog” as an example; for other meteorological types, disturbances can likewise be added by directly referring to the method of FIG. 7.
  • FIG. 8 exemplarily shows a schematic diagram of disturbance images with different meteorological types added, provided by an embodiment of the present application, wherein Fig. 8(A) shows the original undisturbed reference image, Fig. 8(B) shows the disturbance image obtained by adding the disturbance of “rain” to the reference image, Fig. 8(C) shows the disturbance image obtained by adding the disturbance of “snow” to the reference image, and Fig. 8(D) shows the disturbance image obtained by adding the disturbance of “fog” to the reference image. It can be seen that, at the same noise level, the disturbance of fog to the reference image may be more severe than that of rain or snow.
  • Step 605 the vehicle identifies the reference image and each disturbance image according to the environment identification model, and determines the maximum noise level corresponding to the disturbance image that can be accurately identified by the environment identification model.
  • the processor 410 uses 5 environment images and 5 target preset images together as 10 reference images, then the processor 410 can determine the maximum noise level of the environment recognition model according to the following steps:
  • Step 1: the processor 410 performs the following analysis for each of the 10 reference images: input the reference image and its N corresponding disturbed images into the environment recognition model to obtain the recognition result of the reference image and the recognition results of the N disturbed images; then use a preset similarity algorithm to calculate the similarity between the recognition result of each disturbed image and the recognition result of the reference image, which serves as the recognition accuracy of that disturbed image relative to the reference image.
  • the preset similarity algorithm can be set by those skilled in the art based on experience; for example, the Euclidean distance algorithm, the Pearson correlation coefficient algorithm, the cosine similarity algorithm, or a Map-Reduce-based similarity calculation algorithm can be used.
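  • One of the listed options, cosine similarity, can be sketched as follows. This is an illustrative fragment, assuming the recognition results are numeric score vectors (the patent does not fix their representation).

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine similarity between two recognition-result vectors, e.g. the
    score vectors produced by the environment recognition model for a
    disturbed image and for the reference image."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0   # degenerate vector: treat as dissimilar
    return dot / (norm_a * norm_b)
```

  • The recognition accuracy of a disturbed image relative to the reference image would then be `cosine_similarity(model(disturbed), model(reference))`, with identical results giving 1.0 and orthogonal results giving 0.0.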
  • Step 2: the processor 410 obtains, from the 10×N disturbed images, the 10 disturbed images to which the noise information corresponding to the same noise level has been added, and determines the recognition accuracy corresponding to that noise level according to the recognition accuracies of the 10 disturbed images; if it determines that the recognition accuracy is not lower than the preset accuracy threshold, it determines that the environment recognition model can accurately recognize those 10 disturbed images.
  • the recognition accuracy corresponding to the noise level may be, for example, the average or weighted average of the recognition accuracy of the 10 disturbed images.
  • the preset accuracy threshold is used to indicate the minimum accuracy at which the environment recognition model still produces a good recognition result, and the threshold can be set by those skilled in the art based on experience. Under normal circumstances, when the accuracy of the recognition result of a disturbed image relative to the recognition result of the reference image is above 0.5, the recognition result of the disturbed image tends toward that of the reference image; therefore, the preset accuracy threshold can be set to a value above 0.5. However, considering that the lower the preset accuracy threshold is set, the greater the discrepancy may be between the recognition result of the critical disturbed image found for the environment recognition model and the recognition result of the reference image, the preset accuracy threshold can be set to a value near 0.7, so that the processor 410 can disable the environment recognition model in time when the current environment deteriorates and begins to affect the recognition effect of the environment recognition model, rather than deactivating the environment recognition model only after it has already become unusable.
  • for example, continuing with Fig. 7(A) to Fig. 7(J), disturbance images of “fog” are added at the 9 noise levels 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9, and each noise level corresponds to 10 disturbance images. Suppose the average recognition accuracies relative to the reference images are 0.95, 0.9, 0.8, 0.6, 0.45, 0.3, 0.2, 0.15 and 0.1, respectively. The average recognition accuracies not less than 0.7 among these 9 values are 0.95, 0.9 and 0.8, which means that the disturbance images with “fog” added at noise levels 0.1, 0.2 and 0.3 are the disturbed images that the environment recognition model can accurately recognize.
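  • The selection in this example can be sketched as follows: an illustrative Python fragment (not from the patent) that, given per-level average recognition accuracies, picks the largest noise level still meeting the 0.7 threshold.

```python
def safety_radius(level_accuracy: dict, threshold: float = 0.7):
    """Return the maximum noise level whose recognition accuracy is still
    not lower than the preset accuracy threshold; return None if even the
    smallest level fails the threshold."""
    passing = [lvl for lvl, acc in level_accuracy.items() if acc >= threshold]
    return max(passing) if passing else None

# Average accuracies per fog noise level, taken from the Fig. 7 example.
example = {0.1: 0.95, 0.2: 0.9, 0.3: 0.8, 0.4: 0.6, 0.5: 0.45,
           0.6: 0.3, 0.7: 0.2, 0.8: 0.15, 0.9: 0.1}
```

  • With the example accuracies, the levels passing the 0.7 threshold are 0.1, 0.2 and 0.3, so the function returns 0.3, matching the safety radius discussed below.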
  • Step 3: the processor 410 determines the maximum noise level corresponding to the disturbance images that can be accurately recognized; the disturbance image corresponding to this maximum noise level corresponds to the critical disturbance under which the accuracy of the environment recognition model still meets the preset accuracy requirement.
  • the maximum noise level can be used to characterize the model robustness of the environment recognition model.
  • the maximum noise level determined in step 3 above is actually the noise level with the smallest recognition accuracy among those noise levels whose recognition accuracy is not less than the preset accuracy threshold. For example, continuing to refer to Fig. 7(A) to Fig. 7(J), the minimum accuracy not less than 0.7 among the 9 average recognition accuracies is 0.8, and the accuracy 0.8 corresponds to the noise level 0.3. That is to say, the environment recognition model can still accurately analyze an environment image with “fog” added at a noise level of 0.3; if the noise level of the “fog” in the current environment rises above 0.3, the environment recognition model either cannot be used, or will with high probability produce a recognition result different from that of the environment image without “fog”. Therefore, the maximum noise level is also called the safety radius of the environment recognition model.
  • if the safety radius is large, even after considerable “fog” is added to the current environment, the environment recognition model can still relatively accurately produce the same recognition result as for the original environment image, and the model robustness of the environment recognition model is good. If the safety radius is small, adding even a little “fog” to the current environment may cause the environment recognition model to produce a recognition result different from that of the original environment image, and the model robustness of the environment recognition model is poor.
  • in another optional implementation, the processor 410 may analyze the N noise levels in ascending order of noise level: for each noise level, first perturb the 10 reference images with that level to obtain 10 disturbed images, calculate the recognition accuracy corresponding to that noise level from the 10 disturbed images, and judge whether the recognition accuracy is less than the preset accuracy threshold; if it is not less, continue the above analysis with the next larger noise level. Once a noise level whose recognition accuracy falls below the threshold is found, the previous, smaller noise level is taken as the safety radius, and subsequent noise levels need not be analyzed. In this way, this example can not only improve the efficiency of determining the safety radius but also save computing resources as much as possible.
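  • The ascending early-stop variant can be sketched as follows. This is an illustrative fragment: `accuracy_at` stands for the whole perturb-and-recognize step for one noise level (a callback assumed here for brevity), and the patent does not prescribe this interface.

```python
def safety_radius_early_stop(levels, accuracy_at, threshold: float = 0.7):
    """Scan noise levels in ascending order; stop at the first level whose
    recognition accuracy falls below the threshold and return the previous
    level as the safety radius, so higher levels are never evaluated."""
    radius = None
    for lvl in sorted(levels):
        if accuracy_at(lvl) < threshold:   # first failing level found
            break
        radius = lvl                       # still accurately recognized
    return radius
```

  • Because the scan stops at the first failing level, the costly perturbation and recognition work for all larger noise levels is skipped, which is exactly the resource saving described above.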
  • Step 606: the vehicle determines whether the maximum noise level corresponding to the disturbance images that can be accurately recognized by the environment recognition model is less than the preset noise level; if so, go to step 607, and if not, go to step 601.
  • the preset noise level is used to indicate the maximum noise level corresponding to the disturbed images that the environment recognition model is expected or required to recognize accurately. It can be set by those skilled in the art based on experience, or set by the user according to actual scene requirements, and is not limited here.
  • the processor 410 may also set different preset noise levels for different weather types, for example, a preset noise level of 0.6 for “fog”, a preset noise level of 0.5 for “rain” and a preset noise level of 0.4 for “snow”; these weather types and their respective preset noise levels may be stored in the memory 420.
  • when the processor 410 determines that the safety radius of the environment recognition model is 0.3, it compares the safety radius 0.3 with the preset noise level 0.6 corresponding to “fog”. It can be seen that, in the current environment, the environment recognition model can at most recognize an environment image with “fog” added at a noise level of 0.3, and cannot accurately recognize an environment image with “fog” added at the preset noise level of 0.6; that is, the adaptability of the environment recognition model to the current environmental changes is lower than the preset adaptability requirement, and the environment recognition model is no longer suitable for the current driving environment.
  • if the safety radius of the environment recognition model is 0.8, it means that, in the current environment, the environment recognition model can at most recognize an environment image with “fog” added at a noise level of 0.8, and obviously it can also accurately recognize an environment image with “fog” added at the preset noise level of 0.6; that is, the adaptability of the environment recognition model to the current environmental changes meets the preset adaptability requirement, and the environment recognition model can continue to be used to execute the current driving strategy.
  • Step 607 the vehicle adjusts the current driving strategy.
  • in the above solution, the model robustness of the environment recognition model under the target noise type existing in the current environment is evaluated using the current environment image, instead of evaluating, with a pre-specified standard image, the model's robustness under a specified set of noises or its comprehensive robustness under various kinds of noise. This not only ensures that the evaluation result is valid for the target noise type in the current environment, but also makes it applicable to the vehicle's current traffic environment, which helps the vehicle's current driving strategy to be implemented accurately.
  • in addition, the above-mentioned embodiment 2 calculates the maximum noise level at which the environment recognition model can still be used by dividing the noise into levels, and takes it as the safety radius of the environment recognition model, instead of using a neural network to compute an optimal solution that distorts the environment recognition model. This calculation method is not only simpler but also guarantees that a safety radius is obtained, which helps reduce the impact on vehicle traffic caused by adding the robustness evaluation.
  • Embodiment 1 and Embodiment 2 are only introduced by taking the vehicle driving method executed on the vehicle side as an example.
  • the vehicle driving method in this application can also be executed by a server.
  • the specific implementation includes: the processor 410 in the vehicle 400 obtains the environment image captured by the camera 440 of the current passing area, and reports it to the server through the transceiver 430.
  • the server determines the model robustness of the environmental recognition model used under the current driving strategy of the vehicle according to the environmental image.
  • when the model robustness is lower than the preset robustness threshold, a driving strategy adjustment instruction is sent to the transceiver 430, and the processor 410 adjusts the current driving strategy according to the driving strategy adjustment instruction after acquiring it from the transceiver 430.
  • the driving strategy adjustment instruction may include only an instruction, with the processor 410 determining how to adjust the current driving strategy according to the instruction; the driving strategy adjustment instruction may also include a specific adjustment method, with the processor 410 performing the corresponding adjustment operation according to the indicated method; this is not specifically limited.
  • each network element in the above-mentioned implementation includes corresponding hardware structures and/or software modules for executing each function.
  • the present invention can be implemented in hardware or a combination of hardware and computer software in conjunction with the units and algorithm steps of each example described in the embodiments disclosed herein. Whether a function is performed by hardware or computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled artisans may implement the described functionality using different methods for each particular application, but such implementations should not be considered beyond the scope of the present invention.
  • FIG. 9 exemplarily shows a schematic structural diagram of a vehicle driving device provided by an embodiment of the present application.
  • the vehicle driving device 900 may be a vehicle or a server, and the vehicle driving device 900 may include an image acquisition unit 910, a robustness evaluation unit 950 and a driving strategy adjustment unit 960 that are connected in sequence.
  • the image acquisition unit 910 is used to acquire the environmental image in the current passing area of the vehicle and send it to the robustness evaluation unit 950.
  • the robustness evaluation unit 950 is used to use the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy
  • the driving strategy adjustment unit 960 is used to adjust the vehicle's current driving strategy when the model robustness obtained by the evaluation is lower than the preset robustness threshold. The environment recognition model is used to perceive the surrounding environment in the current passing area under the current driving strategy, and the perception result is used to guide the vehicle through the current passing area.
  • the image acquisition unit 910 may specifically be a camera in the vehicle, such as a vehicle-mounted camera, and the image acquisition unit 910 obtains an environment image by photographing the current passing area of the vehicle.
  • when the vehicle driving device 900 is a server, the image acquisition unit 910 may specifically be a transceiver unit in the server; the transceiver unit receives the environment image sent by the transceiver in the vehicle, where the environment image is obtained by a camera in the vehicle photographing the current passing area.
  • the vehicle driving device 900 may further include an environment perception unit 920 connected to the robustness evaluation unit 950. The environment perception unit 920 may first perceive the target environmental noise type existing in the current passing area, determine whether the current passing area is attacked by noise corresponding to the target environmental noise type, and send the judgment result to the robustness evaluation unit 950; when the robustness evaluation unit 950 determines that the area is attacked by the noise, it uses the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy.
  • the environment sensing unit 920 can sense the target ambient noise type in various ways, such as:
  • the environment perception unit 920 can also be connected to the image acquisition unit 910; the environment perception unit 920 acquires the captured environment image from the image acquisition unit 910, and inputs the environment image into the noise classification model for classification to obtain the target environmental noise type.
  • the environment perception unit 920 may acquire weather forecast information of the current passing area of the vehicle, and obtain the target ambient noise type by parsing the weather forecast information.
  • the environment perception unit 920 may acquire the target environment noise type from the dynamic layer of the navigation map.
  • the environment perception unit 920 may also connect to the server, and obtain the target environmental noise type from the server by sending an acquisition request to the server.
  • the robustness evaluation unit 950 is specifically configured to: take the environment image as the reference image, and use the reference image to evaluate the model robustness of the environment recognition model; or find, from the preset image library, a target preset image whose similarity with the environment image is not lower than the preset similarity threshold, take the target preset image, or the target preset image together with the environment image, as the reference image, and use the reference image to evaluate the model robustness of the environment recognition model.
  • the vehicle driving device 900 may further include a noise generation unit 930 and an image generation unit 940; the noise generation unit 930 is connected to the environment perception unit 920 and the image generation unit 940, and the image generation unit 940 is also connected to the image acquisition unit 910 and the robustness evaluation unit 950.
  • the environment perception unit 920 may also send the determined target environmental noise type existing in the current passing area to the noise generation unit 930; the noise generation unit 930 generates various pieces of noise information conforming to the target environmental noise type, each corresponding to a different noise level, and sends them to the image generation unit 940; the image generation unit 940 obtains the captured environment image from the image acquisition unit 910 and the pieces of noise information from the noise generation unit 930, and repeats the following operation until all of the noise information has been traversed, obtaining a disturbance image for each piece of noise information: select a piece of noise information that has not yet been traversed and add it to the reference image to obtain a corresponding disturbance image.
  • the image generation unit 940 sends the reference image and each generated disturbance image to the robustness evaluation unit 950; the robustness evaluation unit 950 recognizes the reference image and each disturbance image according to the environment recognition model, determines the disturbance images that the environment recognition model can accurately recognize, and determines the maximum noise level corresponding to the recognized disturbance images.
  • the robustness evaluation unit 950 is specifically configured to: acquire a preset noise level, and determine that the maximum noise level corresponding to the identified disturbance image is less than the preset noise level.
  • the preset noise level is used to indicate the maximum noise level corresponding to the disturbance image that needs to be accurately identified by the environment identification model.
  • the robustness evaluation unit 950 may be specifically configured to: determine, according to the environment recognition model, the recognition accuracy of each disturbance image corresponding to each reference image relative to its reference image, and then perform the following operations for each piece of noise information: determine the recognition accuracy corresponding to the noise information according to the recognition accuracies, relative to their respective reference images, of the at least two disturbance images to which the noise information was added; if the recognition accuracy is not lower than the preset accuracy threshold, determine the at least two disturbance images as disturbance images that the environment recognition model can accurately recognize.
  • the driving strategy adjustment unit 960 is specifically configured to: denoise the environment image before inputting it into the environment recognition model, or adopt another environment recognition model different from the one under the current driving strategy to complete the current driving strategy, or switch to a manual driving strategy, or take emergency measures.
  • the above division of the units of the vehicle driving device 900 is only a division of logical functions, and may be fully or partially integrated into a physical entity in actual implementation, or may be physically separated.
  • the image acquisition unit 910 can be implemented by the transceiver 430 of the above-mentioned FIG. 4
  • the environment perception unit 920, the noise generation unit 930, the image generation unit 940, the robustness evaluation unit 950 and the driving strategy adjustment unit 960 can be implemented by the processor 410 of FIG. 4 described above.
  • FIG. 9 only takes a vehicle as an example to introduce the structure of the vehicle driving device.
  • when the vehicle driving device is a server, the server may also have the environment perception unit 920, the noise generation unit 930, the image generation unit 940, the robustness evaluation unit 950 and the driving strategy adjustment unit 960 shown in FIG. 9, and may further have a transceiver unit, which is used to receive the environment image sent by the vehicle and send it to the image generation unit 940, and to send the driving strategy adjustment instruction generated by the driving strategy adjustment unit 960 to the vehicle.
  • FIG. 10 is a schematic structural diagram of a vehicle driving device provided by an embodiment of the application.
  • the device may be a vehicle or a server, or may be a chip or a circuit, for example a chip or circuit that can be provided in a vehicle, or a chip or circuit that can be provided in a server.
  • the vehicle driving device 1001 may further include a bus system, wherein the processor 1002, the memory 1004 and the communication interface 1003 may be connected through the bus system.
  • the above-mentioned processor 1002 may be a chip.
  • the processor 1002 may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processing circuit (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
  • each step of the above-mentioned method can be completed by an integrated logic circuit of hardware in the processor 1002 or an instruction in the form of software.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor 1002 .
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory 1004, and the processor 1002 reads the information in the memory 1004, and completes the steps of the above method in combination with its hardware.
  • processor 1002 in this embodiment of the present application may be an integrated circuit chip, which has a signal processing capability.
  • each step of the above method embodiments may be completed by a hardware integrated logic circuit in a processor or an instruction in the form of software.
  • the aforementioned processor may be a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the methods, steps, and logic block diagrams disclosed in the embodiments of this application can be implemented or executed.
  • a general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
  • the steps of the method disclosed in conjunction with the embodiments of the present application may be directly embodied as executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor.
  • the software modules may be located in random access memory, flash memory, read-only memory, programmable read-only memory or electrically erasable programmable memory, registers and other storage media mature in the art.
  • the storage medium is located in the memory, and the processor reads the information in the memory, and completes the steps of the above method in combination with its hardware.
  • the memory 1004 in this embodiment of the present application may be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory.
  • the non-volatile memory may be read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM) or flash memory.
  • Volatile memory may be random access memory (RAM), which acts as an external cache.
  • by way of example but not limitation, many forms of RAM are available, such as dynamic random access memory (DRAM), synchronous dynamic random access memory (SDRAM), double data rate synchronous dynamic random access memory (DDR SDRAM), enhanced synchronous dynamic random access memory (ESDRAM), synchronous link dynamic random access memory (SLDRAM) and direct rambus random access memory (DR RAM).
  • the vehicle driving device 1001 may include a processor 1002 , a communication interface 1003 and a memory 1004 .
  • the memory 1004 is used to store instructions
  • the processor 1002 is used to execute the instructions stored in the memory 1004 to implement the relevant solutions of the vehicle in any one or more of the corresponding methods shown in FIG. 5 or FIG. 6 above, or to execute the method executed by the vehicle in Embodiment 1 or Embodiment 2.
  • for example, the vehicle driving device 1001 may: obtain, through the communication interface 1003, the environment image captured by the camera of the current passing area; evaluate, through the processor 1002, the model robustness of the environment recognition model under the current driving strategy using the environment image; and adjust the current driving strategy of the vehicle when the model robustness is lower than a preset robustness threshold.
  • the environment recognition model is used to perceive the surrounding environment in the current traffic area under the current driving strategy, and the perceived perception result is used to guide the vehicle to pass in the current traffic area.
  • the vehicle driving device 1001 may include a processor 1002 , a communication interface 1003 and a memory 1004 .
  • the memory 1004 is used for storing instructions
  • the processor 1002 is used to execute the instructions stored in the memory 1004, so as to implement the solution relating to the server in any one or more of the methods corresponding to FIG. 5 or FIG. 6 above, or to perform the method performed by the server in Embodiment 1 or Embodiment 2.
  • for example, the vehicle driving device 1001 may: obtain, through the communication interface 1003, an environment image captured by a camera in the vehicle of the current traffic area; use the environment image, through the processor 1002, to evaluate the model robustness of the environment recognition model under the current driving strategy; when the model robustness is lower than the preset robustness threshold, generate a driving strategy adjustment instruction; and send the instruction to the vehicle through the communication interface 1003 to instruct the vehicle to adjust the current driving strategy.
  • the environment recognition model is used to perceive the surroundings in the current traffic area under the current driving strategy, and the perception result is used to guide the vehicle through the current traffic area.
  • the present application also provides a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method shown in FIG. 5 or FIG. 6.
  • the present application further provides a computer-readable storage medium storing program code which, when executed on a computer, causes the computer to perform the method shown in FIG. 5 or FIG. 6.
  • the present application further provides a vehicle, which may include a camera and a processor, wherein the camera is used to capture the current traffic area to obtain an environment image, and the processor is used to perform the steps performed by the vehicle in any one or more of the methods corresponding to FIG. 5 or FIG. 6 above.
  • the present application further provides an Internet of Vehicles system, which includes the aforementioned vehicle and a server.
  • the above-described embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof.
  • when implemented in software, the embodiments may be implemented in whole or in part in the form of a computer program product.
  • the computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, all or part of the processes or functions described in the embodiments of the present application are generated.
  • the computer may be a general purpose computer, special purpose computer, computer network, or other programmable device.
  • the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from a website, computer, server or data center to another website, computer, server or data center by wire (eg, coaxial cable, optical fiber, digital subscriber line (DSL)) or wirelessly (eg, infrared, radio, microwave).
  • the computer-readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that includes an integration of one or more available media.
  • the available media may be magnetic media (eg, floppy disks, hard disks, magnetic tapes), optical media (eg, high-density digital video discs (DVDs)), semiconductor media (eg, solid state drives (SSDs)), etc.


Abstract

A vehicle driving method, comprising: obtaining an environment image captured by a vehicle of its current traffic area (501); using the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy (502); and, when the model robustness is lower than a preset robustness threshold (503), adjusting the current driving strategy (504). The robustness evaluation result thus accurately characterizes how well the environment recognition model adapts to changes in the current traffic environment, helping to improve the quality of the robustness evaluation. By promptly adjusting the current driving strategy once its environment recognition model is no longer suited to the current environment, a driving strategy suited to that environment is used to guide the vehicle, effectively improving the accuracy with which the driving strategy is implemented. A vehicle driving apparatus and system are also provided.

Description

A vehicle driving method, apparatus and system
Cross-reference to related applications
This application claims priority to Chinese patent application No. 202110451942.1, entitled "A vehicle driving method, apparatus and system" and filed with the China National Intellectual Property Administration on April 26, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of Internet of Vehicles (IoV) technologies, and in particular to a vehicle driving method, apparatus and system.
Background
In the IoV field, a vehicle that supports autonomous driving continuously captures images of its surroundings while driving and feeds them into an environment recognition model to identify obstacles, lane lines, drivable space and other information, which then guides the vehicle's next maneuver. However, special weather or lighting conditions degrade the quality of the captured images. In such scenarios, if the environment recognition model is not robust, it adapts poorly to abnormal meteorological conditions and recognizes images captured under them poorly, which hinders the accurate implementation of the autonomous driving function and may even cause autonomous driving errors. Accurately evaluating the robustness of the environment recognition model is therefore crucial to the accurate implementation of autonomous driving.
In the prior art, the environment recognition model is usually evaluated before being provided to the vehicle: after training, a dedicated robustness detection tool measures how well the model recognizes the images in a preset image library, and the model is provided to the vehicle only when this evaluation meets a preset robustness requirement; from then on, the vehicle always uses that model for autonomous driving. The preset image library, however, is limited and cannot cover every environment the vehicle may encounter, so when the vehicle drives in an environment absent from the library, the model may in fact fit the vehicle's current environment poorly. In other words, the prior-art robustness evaluation does not actually support accurate implementation of the vehicle's autonomous driving function.
Summary
This application provides a vehicle driving method, apparatus and system for improving the accuracy with which a vehicle driving strategy (for example, an autonomous driving strategy) is implemented.
In a first aspect, this application provides a vehicle driving method applicable to a vehicle driving apparatus. The method includes: the vehicle driving apparatus obtains an environment image captured by the vehicle of its current traffic area, uses the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy, and adjusts the current driving strategy when the model robustness is lower than a preset robustness threshold. The environment recognition model is used under the current driving strategy to perceive the surroundings in the current traffic area, and the perception result guides the vehicle through that area.
In this design, evaluating the model robustness specifically with environment images collected while the vehicle is actually driving makes the evaluation result accurately characterize how well the environment recognition model fits the current traffic environment, effectively improving evaluation quality. Moreover, adjusting the current driving strategy promptly once its environment recognition model no longer fits the current environment keeps the vehicle guided by a suitable driving strategy, helping the current driving strategy be implemented accurately.
In a possible design, before using the environment image to evaluate the model robustness, the vehicle driving apparatus may first determine the target environment noise type present in the current traffic area and determine that the area is under a noise attack of that type. This design adds an execution condition to the evaluation: the model is evaluated only when the current traffic area is determined to be under attack, and need not be evaluated otherwise, which avoids unnecessary resource waste.
In a possible design, the vehicle driving apparatus may evaluate the model robustness of the environment recognition model under the current driving strategy in any of the following manners:
Manner 1: the apparatus obtains, from a preset image library, target preset images whose similarity to the environment image is not lower than a preset similarity threshold, uses the environment image and the target preset images together as benchmark images, and evaluates the model robustness with the benchmark images. Evaluating with preset images that share the environment image's environmental conditions effectively enlarges the evaluation sample and improves the credibility of the result.
Manner 2: the apparatus uses the environment image directly as the benchmark image, avoiding the resources spent storing preset images and querying target preset images, to improve evaluation efficiency.
Manner 3: the apparatus obtains the target preset images as in Manner 1 but uses only them as benchmark images, without the environment image, reducing the amount of data to analyze and improving evaluation efficiency.
In a possible design, evaluating the model robustness with the benchmark images includes: the vehicle driving apparatus first determines the target environment noise type present in the current traffic area and generates noise information items of that type, each corresponding to a different noise level; it then repeats the following until all items have been traversed, obtaining the perturbed image for each item: traverse an untraversed noise information item and add it to the benchmark image to obtain a corresponding perturbed image; finally, it uses the environment recognition model to recognize the benchmark images and the perturbed images, determines which perturbed images the model recognizes accurately, and determines the maximum noise level among them. Because the perturbed images are generated with the noise type actually present in the current environment, the model's ability to recognize them accurately characterizes its adaptability to noise variation in that environment, which improves the accuracy of the evaluation result.
In a possible design, the vehicle driving apparatus determines that the model robustness is lower than the preset robustness threshold as follows: it obtains a preset noise level, which indicates the maximum noise level of perturbed images the environment recognition model is required to recognize accurately, and determines that the maximum noise level among the accurately recognized perturbed images is smaller than that preset level. By comparing the maximum level the model can currently recognize with the maximum level it must recognize, the current driving strategy can be adjusted promptly whenever the model's adaptability to the current environmental variation falls short of the requirement, effectively improving the accuracy of its implementation.
In a possible design, when there are at least two benchmark images, determining the accurately recognized perturbed images includes: using the environment recognition model, the apparatus first determines the recognition accuracy of each perturbed image relative to its benchmark image; then, for each noise information item, it derives that item's recognition accuracy from the accuracies of the at least two perturbed images carrying it, and if that accuracy is not lower than a preset accuracy threshold, the at least two perturbed images carrying the item are deemed accurately recognized by the model. Judging the model's adaptability per noise information item helps find the maximum noise level the model can tolerate.
In a possible design, the target environment noise type may be any of the following: a type obtained by classifying the environment image with a noise classification model; a type obtained by parsing weather forecast information of the current traffic area; a type obtained from the dynamic layer of a navigation map; or a type requested from a server. Providing several ways to determine the target environment noise type makes it possible to select the best one according to actual needs, effectively improving the flexibility of the robustness evaluation.
In a possible design, the vehicle driving apparatus may adjust the current driving strategy through any of the following operations:
Operation 1: the apparatus denoises the environment image before feeding it into the environment recognition model. This improves the quality of the model's input and thus relatively improves its recognition results, without disabling the model; it is relatively simple and easy to implement.
Operation 2: the apparatus completes the current driving strategy with an environment recognition model different from the current one. With multiple models preconfigured, the apparatus can flexibly switch to another model to continue the current driving strategy when the current model no longer fits the environment, helping keep the drive smooth and improving the driver's experience.
Operation 3: the apparatus switches to a manual driving strategy, for example by exiting the autonomous driving mode and prompting the driver to take over, enabling flexible switching between autonomous and manual driving strategies.
Operation 4: the apparatus takes emergency measures such as emergency braking, stopping the vehicle when the current environment is too poor for driving and re-enabling the environment recognition model once conditions improve, which helps protect the driver's safety.
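The four adjustment operations above can be sketched as a simple dispatch (a minimal illustration; the function name, policy labels and parameters are assumptions of this sketch, not part of this application):

```python
def adjust_driving_strategy(robustness, threshold, alternative_models,
                            environment_ok=True):
    """Pick one of the four adjustment operations described above.

    Returns a string naming the chosen operation; all labels here are
    hypothetical names used only for this sketch.
    """
    if robustness >= threshold:
        return "keep_current_strategy"
    if not environment_ok:
        # Operation 4: conditions too poor to drive at all.
        return "emergency_brake"
    if alternative_models:
        # Operation 2: switch to another pre-stored recognition model.
        return "switch_model:" + alternative_models[0]
    # Operation 1/3: denoise inputs first, otherwise hand over to the driver.
    return "denoise_then_retry_or_manual"
```

A caller would typically try the least disruptive operation first, which is why the sketch prefers model switching over braking.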
In a second aspect, this application provides a vehicle driving apparatus including a processor and a memory connected to each other. The memory stores a computer program which, when executed by the processor, causes the apparatus to: obtain an environment image captured by the vehicle of its current traffic area; use the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy; and adjust the current driving strategy when the model robustness is lower than a preset robustness threshold. The environment recognition model perceives the surroundings in the current traffic area under the current driving strategy, and the perception result guides the vehicle through that area.
In a possible design, the computer program further causes the apparatus to determine, before the evaluation, the target environment noise type present in the current traffic area and that the area is under a noise attack of that type.
In a possible design, the computer program causes the apparatus to use as benchmark images: the environment image alone; or target preset images, obtained from a preset image library with a similarity to the environment image not lower than a preset similarity threshold, together with the environment image; or the target preset images alone; and to evaluate the model robustness of the environment recognition model with the benchmark images.
In a possible design, the computer program causes the apparatus to: first determine the target environment noise type present in the current traffic area and generate noise information items of that type corresponding to different noise levels; then repeat the following until all items have been traversed, obtaining the perturbed image for each item: traverse an untraversed noise information item and add it to the benchmark image to obtain a corresponding perturbed image; and finally recognize the benchmark images and the perturbed images with the environment recognition model, determine the accurately recognized perturbed images, and determine the maximum noise level among them.
In a possible design, the computer program causes the apparatus to obtain a preset noise level, which indicates the maximum noise level of perturbed images the model is required to recognize accurately, and to determine that the maximum noise level among the recognized perturbed images is smaller than that preset level.
In a possible design, when there are at least two benchmark images, the computer program causes the apparatus to determine, using the environment recognition model, the recognition accuracy of each perturbed image relative to its benchmark image and, for each noise information item, to derive the item's accuracy from the accuracies of the at least two perturbed images carrying it; if that accuracy is not lower than a preset accuracy threshold, those perturbed images are deemed accurately recognized by the model.
In a possible design, the target environment noise type may be obtained by classifying the environment image with a noise classification model, by parsing weather forecast information of the current traffic area, from the dynamic layer of a navigation map, or by requesting it from a server.
In a possible design, adjusting the current driving strategy includes any of the following: denoising the environment image before feeding it into the environment recognition model; completing the current driving strategy with a different environment recognition model; switching to a manual driving strategy; or taking emergency measures.
In a possible design, the vehicle driving apparatus may be a vehicle or a server.
In a third aspect, this application provides a vehicle driving apparatus including modules/units that perform the method of any design of the first aspect. These modules/units may be implemented by hardware, or by hardware executing corresponding software.
In a fourth aspect, this application provides a vehicle driving apparatus including a processor and a communication interface. The communication interface receives signals from communication devices other than the vehicle driving apparatus of the second aspect and transmits them to the processor, or sends signals from the processor to such devices; the processor implements the method of any design of the first aspect through logic circuits or by executing code instructions.
In a fifth aspect, this application provides a computer-readable storage medium storing a computer program which, when run, implements the method of any design of the first aspect.
In a sixth aspect, this application provides a chip including a processor and an interface; the processor reads instructions through the interface to perform the method of any design of the first aspect.
In a seventh aspect, this application provides a computer program product including a computer program which, when run on a computer, enables the computer to perform the method of any design of the first aspect.
In an eighth aspect, this application provides an IoV system including a vehicle and a server. The vehicle captures an environment image of its current traffic area and sends it to the server; the server performs the method of any design of the first aspect and, on determining to adjust the vehicle's current driving strategy, sends the vehicle a driving strategy adjustment instruction; the vehicle then adjusts its current driving strategy according to the instruction.
For the beneficial effects of the designs in the second through eighth aspects, refer to the corresponding designs of the first aspect; they are not repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of a possible system architecture to which embodiments of this application apply;
FIG. 2 is a schematic diagram of one model robustness evaluation method commonly used in the industry;
FIG. 3 is a schematic diagram of another model robustness evaluation method commonly used in the industry;
FIG. 4 is a schematic diagram of a vehicle hardware architecture provided by an embodiment of this application;
FIG. 5 is a schematic flowchart of a vehicle driving method provided by Embodiment 1 of this application;
FIG. 6 is a schematic flowchart of a vehicle driving method provided by Embodiment 2 of this application;
FIG. 7 is a schematic flowchart of a robustness evaluation provided by an embodiment of this application;
FIG. 8 is a schematic diagram of perturbed images with different weather types added, provided by an embodiment of this application;
FIG. 9 is a schematic structural diagram of a vehicle driving apparatus provided by an embodiment of this application;
FIG. 10 is a schematic structural diagram of another vehicle driving apparatus provided by an embodiment of this application.
Detailed Description
It should be noted that the vehicle driving solutions in the embodiments of this application may be applied to the Internet of Vehicles, such as vehicle-to-everything (V2X), long term evolution-vehicle (LTE-V) and vehicle-to-vehicle (V2V) communication. For example, they may be applied to a vehicle, or to another apparatus in a vehicle that has a driving function, including but not limited to an on-board terminal, on-board controller, on-board module, on-board component, on-board chip, on-board unit, or sensors such as an on-board radar or on-board camera, through which the vehicle may implement the vehicle driving method provided by this application. Of course, the solutions may also be used in, or deployed in, intelligent terminals other than vehicles, or in components of such terminals. The intelligent terminal may be an intelligent transport device, a smart home device, a robot or another terminal device, including but not limited to the terminal itself or its controllers, chips, sensors such as radars or cameras, and other components; such a terminal may implement the vehicle driving method provided by this application by connecting to a vehicle.
The technical solutions in the embodiments of this application are described below with reference to the accompanying drawings. It should be understood that the embodiments described below are only some, not all, of the embodiments of this application.
FIG. 1 is a schematic diagram of a possible system architecture to which embodiments of this application apply; the architecture shown in FIG. 1 includes a server 110 and a vehicle 120. The server 110 may be any apparatus, device or chip with a processing function: it may include a physical device such as a host or processor, a virtual device such as a virtual machine or container, or a chip or integrated circuit. In the IoV, the server 110 is usually an IoV server, also called a cloud server, the cloud, or a cloud controller, and may be a single server or a cluster of servers; this is not specifically limited. The vehicle 120 may be any vehicle with an autonomous driving function, including but not limited to cars, vans, trucks, motorcycles, buses, boats, airplanes, helicopters, lawn mowers, recreational vehicles, amusement park vehicles, construction equipment, trams, golf carts, trains and trolleys. The vehicle 120 can usually also register with the server 110 to obtain the services it provides, such as voice services, navigation services, flight inquiry services or voice broadcast services.
It should be understood that the embodiments of this application limit neither the number of servers 110 nor the number of vehicles 120 in the architecture; one server 110 may usually connect to multiple vehicles 120 at the same time (for example, three vehicles 120 as shown in FIG. 1). Besides the server 110 and the vehicle 120, the architecture may also include other devices such as core network devices, wireless relay devices and wireless backhaul devices, which is likewise not limited. Furthermore, the server 110 may integrate all functions on one independent physical device or deploy different functions on multiple independent physical devices; this is also not limited.
Before the specific implementations are introduced, some terms used below are explained.
(1) Autonomous driving.
In the embodiments of this application, a vehicle supporting autonomous driving may store an autonomous driving map, which may be delivered in advance by the server and updated periodically, or built by the vehicle from data it collects itself; the vehicle completes autonomous driving using the locally stored map. In practice, autonomous driving may be performed by the vehicle's automated driving system (ADS), whose overall flow has four main stages. In the positioning stage, the ADS calls the on-board camera to capture images of the surroundings while driving, obtains the current position from the on-board positioning module, first locates in the autonomous driving map the local map of the region where the vehicle currently is, and then matches the surrounding images against the local map to determine the vehicle's exact position in it. In the perception stage, the ADS calls sensor components such as the on-board camera and lidar to obtain sensor information and perceives from it the dynamic and static objects present in the surroundings and their attributes. In the decision and planning stage, the ADS uses the exact position from the positioning stage to find in the map a target route from the current position to the destination and, while driving, uses the perceived objects and attributes to decide the specific way the vehicle should proceed at its current position, including how to avoid obstacles. In the control stage, the ADS controls the vehicle to drive autonomously to the destination along the decided target route and in the decided manner at each position.
(2) Environment recognition model.
The environment recognition model in the embodiments of this application is mainly applied in the perception stage of autonomous driving to perceive vision-related environment information, such as the surrounding images captured by the on-board camera. In practice, the ADS feeds those images into the environment recognition model and obtains its output, which indicates the target objects present in the surroundings and their positions, as one reference for planning how the vehicle proceeds. The environment recognition model may be either of the following: a feature recognition model, also called a feature recognition algorithm, which extracts image features from the surrounding images and compares them with preset image features of target objects to determine the objects present and their positions; or a deep recognition model, also called a classification recognition model, obtained by training an initial neural network with a large number of environment images pre-annotated with objects and positions — when used, feeding a surrounding image into the trained model directly yields the target objects and positions it outputs.
(3) Model robustness.
Model robustness in the embodiments of this application means a model's ability to adapt to abnormal environments — in other words, to environmental variation. In autonomous driving, a vehicle may face all kinds of weather and environmental conditions, and the environment recognition model cannot be designed or trained over every possible environment image, so its recognition of unseen environment images may fluctuate; the smaller the fluctuation, the stronger the model's robustness. For example, if rain begins while the vehicle is driving autonomously and changes in rainfall have little effect on the model's recognition results, the model's robustness is strong: even a large change in rainfall leaves the recognition result unchanged, and the model remains suitable for autonomous driving in the current rainy environment. If, however, a small change in rainfall already alters the model's recognition result, its robustness is poor, and even a very slight variation in rainfall may make the result wrong. In that case, continuing to use the model would very likely cause errors in the autonomous driving process due to inaccurate recognition, and the vehicle's driving safety could not be guaranteed. Evaluating the robustness of the environment recognition model is therefore vital to maintaining driving safety in the autonomous driving field.
Two model robustness evaluation methods commonly used in the industry are first introduced as examples.
FIG. 2 illustrates one such method, which relies on an existing set of specified noises. First, standard images with very small, even negligible, noise are obtained. Then each standard image is perturbed with every noise in the specified set to obtain the corresponding attack images; the set may include, but is not limited to, the noises shown in FIG. 2: Gaussian noise, Poisson (shot) noise, impulse noise, defocus blur, glass blur, motion blur, zoom blur, snow, frost, fog, brightness, contrast, elastic, pixelation and image compression noise. Finally, from the model's recognition of the standard images versus the attack images, the degree to which the specified noise set influences the model's recognition is determined; that degree characterizes the model's robustness. The less obvious the influence, the stronger the model's adaptability to variation of that noise set on the standard images and the better its robustness; the more obvious the influence, the weaker the adaptability and the worse the robustness.
As can be seen, the method of FIG. 2 is in fact a comprehensive evaluation of a pre-specified noise set using standard images obtained in advance. Its result can neither characterize the model's robustness under one particular noise type, nor guarantee validity for noise types outside the set, nor apply to environments other than those the standard images represent. Moreover, the evaluation is performed offline: applied to autonomous driving, the environment recognition model is first evaluated this way and delivered to the vehicle only if its robustness is deemed good. Clearly such a result cannot cover all environmental conditions that may arise during autonomous driving, which hinders its accurate implementation.
FIG. 3 illustrates another common evaluation method, which needs no specified noise set; instead, a neural network is used to find the largest distance within which the model's recognition result does not go wrong, taken as the model's maximum safety radius. Specifically: a specified model under evaluation and a standard image are obtained, and the model recognizes the standard image to obtain its label ("label 1" in FIG. 3); perturbations are then added to the standard image in order of increasing noise, and the model recognizes the labels of the attacked images. As the noise grows, the recognized label gradually moves away from the standard image's label, until some noise perturbation makes the model recognize the attack image as another label ("label 2" or "label 3" in FIG. 3), at which point the model's decision boundary under that label is found. The minimum among the perturbation amplitudes corresponding to the decision boundaries of the other labels is taken as the model's maximum safety radius (as shown in FIG. 3, the amplitude R2 for "label 2" is clearly smaller than R3 for "label 3", so the maximum safety radius is R2). In other words, as long as the noise added to the standard image stays within the maximum safety radius, the model recognizes the same result as for the standard image.
The method of FIG. 3 thus provides a means of certifying model robustness: it judges robustness by computing the distance from the standard image to its nearest attack image with a different label — that is, by finding the minimum noise that distorts the standard image. However, neural network computation is very complex and heavy; solving the minimum distortion on a neural network with activation functions is a non-deterministic polynomial (NP)-complete problem of nonlinear complexity and is computationally intractable. Although the minimum distortion can already be solved on some small, shallow networks, it cannot be obtained by direct solution when extended to medium or large networks, so this method may fail to derive the maximum safety radius. Applied to autonomous driving, not only is the evaluation of the environment recognition model inefficient, but the vehicle may also be left without a model for a long time because the radius cannot be determined, which hinders implementation of the autonomous driving strategy. Furthermore, this method likewise measures all noises comprehensively on specified environment images to find one optimal safety radius, and so cannot solve the technical problems of the method of FIG. 2 either.
In view of this, this application provides a vehicle driving method that evaluates the model robustness of the environment recognition model used by the current driving strategy with environment images the vehicle captures in its current traffic area, so as to determine specifically whether that model is adequate for the current driving environment, thereby helping the automated driving system decide whether to adjust the current driving strategy.
This application is described in further detail below with reference to the drawings. It should be understood that the specific operations in the method embodiments may also be applied in the apparatus or system embodiments. In the description of this application, "at least one" means one or more, where "multiple" means two or more; accordingly, "multiple" may also be understood as "at least two" in these embodiments. "And/or" describes an association between objects and indicates three possible relationships: for example, A and/or B may mean A alone, both A and B, or B alone. The character "/", unless otherwise specified, generally indicates an "or" relationship between the objects before and after it.
It should be noted that the vehicle driving method in the embodiments of this application may be executed on the vehicle side or on the server side; execution on the vehicle side is taken as the example below.
FIG. 4 shows a schematic diagram of a vehicle hardware architecture provided by an embodiment of this application.
It should be understood that the illustrated vehicle 400 is only an example; the vehicle 400 may have more or fewer components than shown, may combine two or more components, or may have a different component configuration. The illustrated components may be implemented in hardware, in software, or in a combination of hardware and software, including one or more signal processing and/or application-specific integrated circuits.
As shown in FIG. 4, the vehicle 400 may include a processor 410, a memory 420, a transceiver 430, at least one camera 440, and at least one display 450.
The components of the vehicle 400 are introduced below with reference to FIG. 4:
The processor 410 may include one or more processing units, for example an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU). Different processing units may be independent devices or integrated into one or more processors. The processor 410 can run the vehicle driving method provided by the embodiments of this application, for example responding to the current environment by taking corresponding measures — such as giving a reminder on the display 450 — when the environment recognition model is no longer suitable for it. When the processor 410 integrates different devices, for example a CPU and a GPU, they can cooperate to execute the method, with part of the algorithm executed by the CPU and another part by the GPU, for faster processing.
In some embodiments, the memory 420 may contain instructions (for example, program logic) executable by the processor 410 to perform various functions of the vehicle 400, including the vehicle driving function described above. The memory 420 may also contain additional instructions, including instructions to send data to, receive data from, interact with and/or control one or more of the vehicle's propulsion system, sensor system, control system and peripherals. Besides instructions, the memory 420 may store data such as the autonomous driving map, route information, the vehicle's position, heading and speed, and other such vehicle data and information, which may be used by the processor 410 or other components of the vehicle 400 during operation in the autonomous, semi-autonomous and/or manual driving modes.
The transceiver 430 may be a sending unit or transmitter when sending information and a receiving unit or receiver when receiving it; the transceiver, transmitter or receiver may be a radio frequency circuit. Alternatively, the transceiver 430 may be an input and/or output interface, pin or circuit for providing information to, or receiving information from, the user of the vehicle 400. Optionally, the transceiver 430 may include one or more input/output devices among the peripherals of the vehicle 400, such as a wireless communication system, a touchscreen, a microphone and a speaker; the processor 410 may control the functions of the vehicle 400 based on input the transceiver 430 receives from the various subsystems (for example, the propulsion, sensor and control systems) and from the user interface.
The camera 440 may include various sensors for perceiving the surroundings, such as camera sensors and radar sensors. The camera sensors may include any camera for obtaining images of the environment of the vehicle 400, such as still, motion, infrared or visible-light camera sensors. The radar sensors may include long range radar (LRR), middle range radar (MRR) and short range radar (SRR).
The display 450 is used to display images, videos and the like and includes a display panel, which may be a liquid crystal display (LCD), organic light-emitting diodes (OLED), active-matrix organic light-emitting diodes (AMOLED), flexible light-emitting diodes (FLED), a MiniLED, a MicroLED, a Micro-OLED or quantum dot light-emitting diodes (QLED). The display may be a single flexible display, or a spliced display composed of two rigid screens and one flexible screen between them. For example, when the processor 410 runs the vehicle driving method provided by the embodiments of this application and determines that the current driving strategy is unsuitable for the current environment, it may prompt the user on the display 450 to switch driving strategies.
Although not shown in FIG. 4, the vehicle 400 may further include a positioning device, a transmission, a braking device, a steering unit and so on, which are not described here.
Based on the vehicle architecture shown in FIG. 4, the specific implementation of the vehicle driving method on the vehicle side is introduced below.
[Embodiment 1]
Based on the vehicle architecture shown in FIG. 4, FIG. 5 shows a schematic flowchart of a vehicle driving method provided by an embodiment of this application; the method applies to various vehicles, such as the vehicle 400 of FIG. 4. It should be noted that the vehicle driving method in this application may be executed periodically; one period is taken as the example below to introduce the specific implementation in each period. As shown in FIG. 5, the flow includes the following steps:
Step 501: the vehicle obtains an environment image captured of the current traffic area.
For example, in the vehicle 400, the processor 410 may maintain a timer; whenever the timer reaches a preset duration, the processor 410 calls the camera 440 to capture the current traffic area, obtains the resulting environment image, and starts the model robustness evaluation of the current period based on it. On each call, the processor 410 may also control the camera 440 to capture multiple environment images in succession; evaluating the model robustness with multiple images helps improve the accuracy and persuasiveness of the result.
Step 502: the vehicle uses the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy.
In step 502, the current driving strategy may be an autonomous driving strategy or any other driving strategy that needs the environment recognition model to perceive the surroundings; this is not specifically limited.
In a specific implementation, the environment recognition model may be stored in the memory 420; the processor 410 obtains it by accessing the memory 420 and may then evaluate its model robustness, based on the environment image from the camera 440 and the model from the memory 420, in either of the following ways:
In one implementation, the processor 410 first determines the environment noise type present in the current traffic area, perturbs the environment image with noise information of that type at different noise levels to obtain perturbed images, and feeds the environment image and the perturbed images into the environment recognition model to obtain their recognition results. By comparing how much the perturbed images' results differ from the environment image's result, it determines the maximum noise level among the perturbed images the model recognizes accurately; that maximum level is the model's robustness in the current environment and characterizes its maximum adaptability to noise variation there. The environment noise may be any environmental element that can affect the model's recognition result, for example weather. This implementation uses images captured in real time while driving, so the accuracy of its result is high; for the specific process, see Embodiment 2 below.
In another implementation, the memory 420 may also locally store the model's robustness evaluation results under various environment types. After obtaining the environment image of the current traffic area from the camera 440, the processor 410 first determines the target environment type the image belongs to and then reads the model's evaluation result for that type directly from the memory 420. The result for any environment type may be characterized by the maximum noise level the model can accurately recognize under that type, and may have been obtained in advance by the server using a large number of images of that type and delivered to the vehicle. Images of one environment type may share the same road type and environment noise type, with little difference in noise information; they may be retrieved from the network, collected from real roads by dedicated collection vehicles, or obtained through interaction with third-party devices — this is not specifically limited. This implementation reads pre-stored evaluation results instead of evaluating in real time, helping improve the processing efficiency of vehicle driving.
Step 503: the vehicle judges whether the model robustness of the environment recognition model is lower than the preset robustness threshold; if yes, step 504 is executed; if not, step 501 is executed.
In step 503, the preset robustness threshold may be stored in the memory 420 and indicates the minimum adaptability to environmental variation the environment recognition model must have; it may be characterized by the maximum noise level of perturbed images the model is required to recognize accurately. After determining from the environment image the maximum noise level among the perturbed images the model can currently recognize, the processor 410 reads the required maximum level from the memory 420 and judges whether the recognizable level is smaller than the required level. If so, the model's adaptability to the current environmental variation falls short of the minimum requirement, the current driving strategy no longer fits the current environment, and measures must be taken to adjust it. If not, the model's adaptability meets the minimum requirement and it still adapts well to the current environment, so the current driving strategy can continue; the processor 410 then ends the robustness evaluation of the current period and, once the timer reaches the preset duration again, captures new environment images to start the next period's evaluation.
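The judgment of one evaluation cycle can be sketched as follows (a minimal illustration; the function and return labels are assumptions of this sketch):

```python
def evaluate_cycle(max_recognizable_level, required_level):
    """Step 503 as described above: compare the maximum noise level the
    model can still recognize accurately against the preset requirement."""
    if max_recognizable_level < required_level:
        return "adjust_strategy"        # proceed to step 504
    return "continue_current_strategy"  # wait for the next period (step 501)
```

The same comparison applies regardless of whether the measured level came from real-time evaluation or from a pre-stored result.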
Step 504: the vehicle adjusts the current driving strategy.
For example, assuming the current driving strategy is an autonomous driving strategy, the processor 410 calls the camera 440 while driving, feeds the captured surrounding images into the environment recognition model stored in the memory 420, obtains the obstacles, lane lines, drivable space and other information the model perceives and outputs, uses this information to decide a safe maneuver that avoids obstacles without breaking the rules, and controls the vehicle accordingly. The surrounding images used for autonomous driving and the environment images used for robustness evaluation may be the same or different; this is not specifically limited. In this case, if the processor 410 determines that the adaptability of the model used by the autonomous driving strategy no longer meets the minimum requirement, it may adjust the current driving strategy through any of the following operations:
Operation 1: the processor 410 adds an image processing module to the autonomous driving strategy, denoises the surrounding images with it, and then feeds the denoised images into the environment recognition model for recognition. The image processing module may exist as software or as hardware. As software, it is a piece of program code stored in the memory 420, and the processor 410 adds the module by invoking that code. As hardware, it connects the camera 440 and the memory 420; by default it is disabled and the processor 410 feeds the surrounding images captured by the camera 440 directly to the model in the memory 420, while once enabled, the processor 410 first sends those images to the module for denoising and then feeds the denoised images to the model in the memory 420. This operation relatively improves the model's recognition results by improving the quality of its input, does not disable the model, and is relatively simple and easy to implement.
Operation 2: the processor 410 completes the current autonomous driving strategy with an environment recognition model different from the current one, for example by enabling another model suited to the current environment within the strategy. In practice, the memory 420 may locally store multiple environment recognition models, each with an applicability label indicating the road type, weather type and noise value it suits. On determining that the current model no longer fits the current environment, the processor 410 stops using it in the autonomous driving strategy, finds among the locally stored models a target model suited to the current area's road type, weather type and noise value, and enables that target model in the strategy. Before enabling it, the processor 410 may also run the robustness evaluation described above on the target model and enable it only after determining that its robustness meets the preset robustness threshold, ensuring that only a target model whose adaptability to the current environment meets the minimum requirement is enabled. By preconfiguring multiple models, this operation allows flexible switching to another model to continue the current driving strategy when the current one does not fit, helping keep the vehicle driving smoothly and improving the driver's experience.
Operation 3: the processor 410 switches to a manual driving strategy. In practice, when the memory 420 stores only the current environment recognition model, or none of the locally stored models fits the current environment, the processor 410 may stop the autonomous driving strategy, that is, exit the autonomous driving mode, and prompt the driver through the display 450 to take over, switching to the manual driving mode and achieving flexible switching between autonomous and manual driving strategies.
Operation 4: the processor 410 takes emergency measures such as emergency braking. In practice, if the model's robustness in the current environment is found to be very poor, the environmental noise may already be too severe for driving, so the processor 410 may brake to an emergency stop; when the environment improves and the user restarts the vehicle, the environment recognition model is re-enabled for autonomous driving, improving the driver's safety.
Among these four operations, Operation 1 relatively improves the model's recognition results by raising the quality of the surrounding images fed to it, without disabling the model, whereas Operations 2 to 4 disable the model directly. It should be understood that the above are only examples of possible ways to adjust the current driving strategy; any solution that adjusts the current driving strategy to improve its adaptability to the current environment falls within the protection scope of this application, and they are not enumerated one by one here.
In Embodiment 1 above, because the vehicle adjusts its current driving strategy whenever the model's robustness fails to meet the requirement, the very availability of the model under the current driving strategy implies that its robustness does meet the requirement — that is, its recognition of environment images in the current environment is still considered accurate. In this case, taking the current environment image as the benchmark image, rather than the standard image of the prior art, makes it possible to find, with reference to the real meteorological conditions of the current environment, the maximum perturbation the model can tolerate there, which accurately reflects the model's true robustness in the current environment. Embodiment 1 thus amounts to a complete robustness evaluation flow embedded in the vehicle's driving: evaluating the model specifically with environment images from the actual drive makes the evaluation result accurately characterize how well the model fits the current traffic environment; the robustness evaluated this way is quite accurate and provides a reference for the accurate implementation of the driving strategy. Moreover, promptly adjusting the current driving strategy on determining that its model does not fit the current environment keeps the vehicle guided as far as possible by a strategy suited to that environment, effectively improving the accuracy of implementing the current driving strategy.
The specific implementation of the vehicle driving method is further introduced below through Embodiment 2, taking weather as the example of environment noise. It should be understood that "weather" below may be replaced by any other type of environment noise.
[Embodiment 2]
Based on the in-vehicle architecture shown in FIG. 4, FIG. 6 shows a schematic flowchart of a vehicle driving method provided by Embodiment 2 of this application; the method applies to various vehicles, such as the vehicle 400 of FIG. 4. As shown in FIG. 6, the flow includes the following steps:
Step 601: the vehicle obtains an environment image captured of the current traffic area.
Step 602: the vehicle determines the target weather type present in the current traffic area.
In step 602, the weather types may include, but are not limited to, rain, snow, fog and frost.
In this embodiment, the processor 410 may determine the weather type present in the current traffic area in any of the following manners:
Manner 1: the memory 420 may also store a weather classification model; the processor 410 feeds the environment image into it to classify the weather type the image belongs to. The weather classification model may be trained on a large number of environment images pre-annotated with weather types and has a certain robustness of its own, that is, it can identify the correct weather type even under somewhat abnormal shooting conditions.
Manner 2: the processor 410 parses weather forecast information of the current traffic area to obtain the weather type — for example, obtaining forecasts for the areas it will pass through before departure to estimate the weather types that may occur, or obtaining forecasts for the area it is passing through in real time or periodically while driving to determine in real time the weather already present. The forecast information may be requested by the processor 410 from the server side through the transceiver 430, obtained by parsing broadcast information received by the voice module of the vehicle 400, or learned through the transceiver 430 by interacting with third-party devices, such as other vehicles in the same area or roadside units the vehicle is currently passing.
Manner 3: the memory 420 may also store a navigation map; after reading it from the memory 420, the processor 410 parses its dynamic layer to learn the weather type of the current traffic area. The navigation map may specifically be the autonomous driving map, usually delivered by the server to the transceiver 430 of the vehicle 400, stored in the memory 420 and updatable in real time. The navigation map includes a static layer and a dynamic layer: the static layer maps the real roads, including roads, lanes and lane lines; the dynamic layer maps the road environment, including the current weather on each road, such as visibility, weather type and the noise value under that weather type. For example, when the weather type is rain, the noise value indicates rainfall, and may be indicative information such as light, moderate or heavy rain, or a specific rainfall value; this is not limited.
Manner 4: the processor 410 requests the weather type of the current traffic area from the server through the transceiver 430; on receiving the request, the server may obtain it through the network or by interacting with third-party devices and return it to the transceiver 430.
It should be understood that the above are only examples of possible ways to obtain the weather type, and this application is not limited to them; any way of obtaining the weather type of the vehicle's current traffic area falls within the protection scope of this application, which is not specifically limited.
For example, if multiple weather types are present in the current traffic area, the processor 410 may select the most severe one as the target weather type and disregard the others, so as to evaluate robustness against the currently harshest weather while saving resources. For instance, when rain, snow and fog occur simultaneously as light rain, light snow and moderate fog, the fog is clearly more severe than the rain and snow, so the processor 410 may take fog as the target weather type. Of course, in other solutions the processor 410 may take all the weather types as target weather types and evaluate robustness against the combined weather environment.
Step 603: the vehicle judges whether the current traffic area is under the noise attack corresponding to the target weather type; if yes, step 604 is executed; if not, step 601 is executed.
For example, the memory 420 may also store a critical noise level for each weather type, indicating the minimum noise level of that type at which model robustness needs to be checked. Taking rain as an example, the critical noise level relates to the minimum rainfall that would affect the model's recognition result, and may be a specific rainfall value or indicative information such as drizzle, light, moderate or heavy rain. After determining the target weather type present in the current traffic area, the processor 410 also obtains the current noise level of that type in the area and reads the type's critical level from the memory 420. If the current level is greater than or equal to the critical level, the weather feature in the area is very pronounced (for example, heavy rain, heavy snow or thick fog), images captured under it are of poor quality, and the current environment affects the model's recognition; in this case the area is considered under the noise attack corresponding to the target weather type, and the processor 410 needs to evaluate the model's robustness in the current environment. Conversely, if the current level is below the critical level, the weather feature is slight (for example, drizzle), the captured images are barely disturbed by the current environment, and the model's recognition is essentially unaffected; the area is then considered not under attack, and the processor 410 need not spend extra resources checking the model's robustness.
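The gating check described above can be sketched as follows (a minimal illustration; the critical levels shown are made-up values for this sketch, not from this application):

```python
# Hypothetical per-weather critical noise levels (illustrative values only).
CRITICAL_LEVELS = {"rain": 0.3, "snow": 0.3, "fog": 0.2}

def under_noise_attack(weather_type, current_level):
    """The area counts as 'under attack' only when the current noise level
    reaches the critical level stored for that weather type; below it the
    robustness evaluation is skipped to save resources."""
    critical = CRITICAL_LEVELS.get(weather_type)
    if critical is None:
        return False  # unknown weather type: no evaluation triggered
    return current_level >= critical
```

Only when this check returns true does the flow continue to the perturbation and evaluation steps.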
In the above example, if the processor 410 obtains the weather type by Manner 1, the weather classification model stored in the memory 420 may have been trained with environment images annotated with both weather type and noise value, so that feeding in the current environment image makes the classification model output both the weather type and the corresponding noise level. Alternatively, the memory 420 may also store a noise recognition model for each weather type: the processor 410 first feeds the environment image into the weather classification model to obtain the target weather type, and then into that type's noise recognition model to obtain the noise level. If the processor 410 obtains the weather type by Manners 2 to 4, it may obtain the type and the noise level together — by parsing the forecast information received through the transceiver 430 or the voice module, querying the dynamic layer, or requesting the server through the transceiver 430 — or obtain the type first in one of those ways and then obtain the noise value from the type-related information in the forecast, from the type-related map elements in the dynamic layer, or through a further server request; this is not specifically limited.
It should be noted that step 603 is optional. It adds an execution condition to the model evaluation: the model is evaluated only when the current traffic area is determined to be under attack, and not otherwise, which helps save unnecessary resources. Alternatively, the processor 410 may evaluate robustness directly on a periodic basis without first analyzing whether an attack is occurring.
Step 604: the vehicle takes the environment image as the benchmark image and perturbs it with the noise information corresponding to each noise level under the target weather type, obtaining the perturbed images.
In an optional implementation, the memory 420 may also store a preset image library containing multiple preset images. Before perturbing the benchmark image with noise information, the processor 410 may also find, in the library stored in the memory 420, target preset images whose similarity to the environment image is not lower than a preset similarity threshold, and take them as benchmark images too. The specific process may follow these steps:
Step 1: the processor 410 determines the road type of the current traffic area and the current noise level of the target weather type. Road types include, but are not limited to, expressways, highways, urban roads, factory and mining roads, forest roads, country lanes, ramps and intersections. For example, if the camera 440 captures five environment images in succession, their road type and the target weather type's current noise level are essentially the same because they were taken consecutively, so the processor 410 may select any one of the five for recognition to obtain the road type and the current noise level.
Step 2: the processor 410 reads the preset image library from the memory 420 and queries it by the current traffic area's road type and the target weather type's current noise level, finding target preset images with the same road type and a noise level close to the current level of the target weather type.
For example, in the memory 420, the preset images may be stored in partitions by road type. In practice, the processor 410 first locates, from the current traffic area's road type, the corresponding partition of the library, then determines each preset image's weather type and the noise level under it, and selects from the partition those target preset images whose weather type matches the target weather type of the current traffic area and whose noise level differs from the target type's current level by less than a preset difference threshold. The weather type and noise level of a preset image may be inherent attributes annotated in advance, so the processor 410 can quickly check whether each image satisfies the requirements by reading those attributes directly from each partition; alternatively, the processor 410 may obtain them by feeding each preset image in each partition into the weather classification model in the memory 420, saving the resources the library would otherwise occupy.
Step 3: the processor 410 takes the environment images and the target preset images as benchmark images. For example, if five qualifying preset images are retrieved from the library in the memory 420, the processor 410 may use them together with the five environment images captured by the camera 440 as benchmark images, giving ten benchmark images in total.
In steps 1 to 3 above, evaluating the model's robustness with target preset images that share the environment image's road and weather conditions effectively enlarges the evaluation sample and helps improve the credibility of the result.
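Steps 1 to 3 above can be sketched as follows (a minimal illustration; the dictionary keys, function name and level-difference threshold are assumptions of this sketch):

```python
def select_benchmark_images(env_images, preset_library, road_type,
                            weather_type, current_level, max_level_diff=0.1):
    """Build the benchmark set: the captured environment images plus any
    preset images with the same road type, the same weather type, and a
    noise level close to the current one.

    Library entries are dicts with hypothetical keys:
    'road', 'weather', 'level', 'image'.
    """
    matches = [p["image"] for p in preset_library
               if p["road"] == road_type
               and p["weather"] == weather_type
               and abs(p["level"] - current_level) <= max_level_diff]
    return list(env_images) + matches
```

In practice the library lookup would be partitioned by road type, as the text describes, rather than a linear scan.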
It should be understood that steps 1 to 3 above are optional. In practice, the processor 410 may also use the environment images captured by the camera 440 directly as benchmark images, sparing the memory 420 from storing preset images and the processor 410 from querying target preset images, to improve evaluation efficiency; or it may use only the target preset images similar to the environment image as benchmark images, without the environment image, to reduce the amount of data to analyze and improve evaluation efficiency.
In this embodiment, to obtain the perturbed images, the processor 410 may first generate N noise information items corresponding to N noise levels under the target weather type, then traverse the N items in turn, adding each to the benchmark image to obtain one perturbed image of it; once all N items are traversed, the processor 410 has N perturbed images for each benchmark image, where N is an integer greater than or equal to 2. In this case, the N perturbed images of any benchmark image result from N attacks of different intensity on it: as the added noise level rises, the attack of the target weather type on the benchmark image intensifies, and the perturbed image's quality degrades accordingly.
For example, FIG. 7 shows a schematic flowchart of a robustness evaluation provided by an embodiment of this application, in which the target weather type of the vehicle's current traffic area is fog. FIG. 7(A) shows the original benchmark image without fog, and FIGS. 7(B) through 7(J) show the perturbed images obtained by adding fog at noise levels 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9 respectively. As FIGS. 7(A) to 7(J) show, the higher the added fog level, the more pronounced the fog in the resulting perturbed image and the less clear the image.
It should be noted that FIG. 7 only takes adding the fog weather type as the example; perturbations of other weather types may be added in the same way. For example, FIG. 8 shows perturbed images with different weather types added: FIG. 8(A) is the original unperturbed benchmark image, while FIGS. 8(B), 8(C) and 8(D) show the perturbed images obtained by adding rain, snow and fog respectively. It can be seen that, at the same noise level, fog may disturb the benchmark image more severely than rain or snow.
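A fog-style perturbation like the one illustrated can be synthesized by alpha-blending a uniform fog layer over the image (a minimal grayscale sketch; real weather perturbation rendering is more elaborate, and this linear blend is only an assumption of the sketch):

```python
def add_fog(image, level, fog_value=255):
    """Perturb a grayscale image (list of rows of 0-255 ints) by alpha-
    blending a uniform fog layer. 'level' in [0, 1] is the noise level,
    matching the 0.1 ... 0.9 levels illustrated above: the higher the
    level, the more the fog dominates and the less clear the image."""
    return [[round((1 - level) * px + level * fog_value) for px in row]
            for row in image]
```

Rain or snow perturbations would replace the uniform layer with streak or flake patterns, but the blending principle is the same.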
Step 605: the vehicle uses the environment recognition model to recognize the benchmark images and the perturbed images, and determines the maximum noise level among the perturbed images the model recognizes accurately.
In a specific implementation, assuming the processor 410 uses the five environment images and five target preset images together as ten benchmark images, it may determine the model's maximum noise level through the following steps:
Step 1: for each of the ten benchmark images, the processor 410 performs the following analysis: it feeds the benchmark image and its N perturbed images into the environment recognition model, obtains the recognition results of the benchmark image and of the N perturbed images, and computes, with a preset similarity algorithm, the similarity between each perturbed image's result and the benchmark image's result, yielding each perturbed image's recognition accuracy relative to its benchmark image. The preset similarity algorithm may be set empirically by those skilled in the art, for example the Euclidean metric, the Pearson correlation coefficient, cosine similarity or a Map-Reduce-based similarity computation.
Step 2: from the 10×N perturbed images, the processor 410 takes the ten that carry the noise information of one noise level and determines that level's recognition accuracy from those ten images' accuracies — for example, as their average or weighted average; if that accuracy is not lower than a preset accuracy threshold, the model is deemed to recognize those ten perturbed images accurately.
The preset accuracy threshold indicates the minimum accuracy at which the model's recognition results are still good, and may be set empirically by those skilled in the art. Generally, when a perturbed image's recognition accuracy relative to the benchmark image is above 0.5, the model classifies the perturbed image on the same side as the benchmark image's result, so the threshold may be set to a value above 0.5. However, the lower the threshold is set, the greater the possible difference between the recognition results of the critical perturbed images the model finds and those of the benchmark images; to ensure the critical perturbed images are also recognized well, the threshold is preferably set to a value near 0.7, so that the processor 410 disables the model promptly as soon as the deteriorating environment begins to affect its recognition, rather than only after the model has become unusable.
Continuing with FIGS. 7(A) to 7(J), assume the preset accuracy threshold is 0.7 and the nine classes of perturbed images with fog added at levels 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8 and 0.9 (each class containing, for example, ten perturbed images at the same level) have average recognition accuracies of 0.95, 0.9, 0.8, 0.6, 0.45, 0.3, 0.2, 0.15 and 0.1 relative to the benchmark images. The averages not less than 0.7 are 0.95, 0.9 and 0.8, meaning the perturbed images with fog added at levels 0.1, 0.2 and 0.3 are the ones the environment recognition model can recognize accurately.
Step 3: the processor 410 determines the maximum noise level among the accurately recognized perturbed images; the perturbed images at that level are the critical perturbed images at which the model's accuracy still satisfies the preset accuracy requirement, and the maximum level can characterize the model's robustness.
In theory, the larger the added noise level, the less clear the perturbed image and the worse the model's recognition accuracy for it. For example, as the added fog level rises, the level's recognition accuracy falls correspondingly: accuracy and level vary together. In this case, the maximum level determined in step 3 is in fact the level with the smallest recognition accuracy among those levels whose accuracy is not less than the preset threshold. Continuing with FIGS. 7(A) to 7(J), the smallest average accuracy not less than 0.7 is 0.8, which corresponds to level 0.3; that is, in the current environment the model can at most accurately analyze environment images with fog added at level 0.3, and if the fog in the current environment rises above 0.3, the model becomes unusable — it will very likely produce a recognition result different from that of the fog-free image. This maximum noise level is therefore also called the model's safety radius. When the safety radius is large, the model still recognizes roughly the same result as for the original environment image even with heavy added fog, and its robustness is good; when the radius is small, a little more fog in the current environment may already make the model produce a different result, and its robustness is poor.
For example, since a level's recognition accuracy decreases as the level increases, the processor 410 may also analyze the N levels in ascending order: for each level, it first perturbs the ten benchmark images with that level to obtain ten perturbed images, computes the level's recognition accuracy from them, and judges whether it is below the preset accuracy threshold. If not, it continues the analysis with the next, larger level; if so, the critical boundary of recognition accuracy has been reached, and the previous, smaller level can be taken as the safety radius without analyzing the remaining levels. This example not only improves the efficiency of determining the safety radius but also saves computing resources.
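The ascending search for the safety radius can be sketched as follows (a minimal illustration; the function names and data layout are assumptions of this sketch):

```python
def safety_radius(base_results, perturbed_results_by_level, accuracy_fn,
                  accuracy_threshold=0.7):
    """Scan noise levels in ascending order, as in the optimization above,
    and return the largest level whose mean recognition accuracy over all
    benchmark images still meets the threshold; None if even the smallest
    level fails. accuracy_fn compares one perturbed result with its base
    result and returns a similarity in [0, 1]."""
    radius = None
    for level in sorted(perturbed_results_by_level):
        results = perturbed_results_by_level[level]
        accs = [accuracy_fn(r, b) for r, b in zip(results, base_results)]
        mean_acc = sum(accs) / len(accs)
        if mean_acc < accuracy_threshold:
            break  # critical boundary reached; stop early
        radius = level
    return radius
```

The early `break` is what saves the work of evaluating levels beyond the critical boundary.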
Step 606: the vehicle judges whether the maximum noise level among the perturbed images the model recognizes accurately is smaller than the preset noise level; if yes, step 607 is executed; if not, step 601 is executed.
The preset noise level indicates the maximum noise level of perturbed images the model is desired or required to recognize accurately, and may be set empirically by those skilled in the art or by the user according to actual scenario requirements; this is not specifically limited.
For example, to refine the robustness analysis, the processor 410 may also set different preset noise levels for different weather types — say 0.6 for fog, 0.5 for rain and 0.4 for snow — with the weather types and their preset levels stored in the memory 420. In practice, continuing with FIGS. 7(A) to 7(J), if the processor 410 determines that the model's safety radius is 0.3, comparing it with fog's preset level 0.6 shows that the model can at most recognize environment images with fog added at level 0.3, while fog's preset level requires it to recognize images with fog added at level 0.6 and below: clearly the model's adaptability to the current environmental variation is below the preset requirement, and the model no longer fits the current driving environment. Conversely, if the safety radius is 0.8, the model can recognize images with fog up to level 0.8 and therefore certainly those at the preset level 0.6: its adaptability meets the preset requirement, and it can continue to be used to execute the current driving strategy.
Step 607: the vehicle adjusts the current driving strategy.
In Embodiment 2 above, evaluating the model's robustness under the target noise type present in the current environment with the current environment images — rather than evaluating its comprehensive robustness under a specified noise set or all noise types with pre-specified standard images — not only ensures the evaluation result is valid for the target noise type in the current environment but also makes it applicable to the vehicle's current traffic environment, helping the current driving strategy be implemented accurately. Moreover, Embodiment 2 computes the maximum noise level the model can handle by dividing noise into levels, taking it as the model's safety radius, instead of using a neural network to compute the optimal distortion that breaks the model; the computation is simpler and the safety radius is guaranteed to be obtained, helping reduce the impact that adding robustness evaluation has on the vehicle's driving.
It should be noted that Embodiments 1 and 2 above only take execution of the vehicle driving method on the vehicle side as the example; the method may also be executed by the server. Specifically: the processor 410 of the vehicle 400 obtains the environment image captured by the camera 440 of the current traffic area and reports it to the server through the transceiver 430; the server determines from the image the model robustness of the environment recognition model used under the vehicle's current driving strategy and, if it is below the preset robustness threshold, delivers a driving strategy adjustment instruction to the transceiver 430; the processor 410 obtains the instruction from the transceiver 430 and adjusts the current driving strategy accordingly. The instruction may contain only an indication, with the processor 410 itself deciding how to adjust the current driving strategy, or it may contain a specific adjustment manner which the processor 410 follows in performing the corresponding adjustment; this is not specifically limited.
It should be noted that the names of the above information items are only examples; as communication technology evolves, any of them may change its name, but however the name changes, as long as its meaning is the same as that of the corresponding information in this application, it falls within the protection scope of this application.
The solutions provided by this application have been introduced above mainly from the perspective of interaction between network elements. It can be understood that, to implement the above functions, each network element includes corresponding hardware structures and/or software modules for performing each function. Those skilled in the art will readily appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in hardware or in a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and design constraints of the technical solution; skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Based on the foregoing method, FIG. 9 shows a schematic structural diagram of a vehicle driving apparatus provided by an embodiment of this application. As shown in FIG. 9, the vehicle driving apparatus 900 may be a vehicle or a server and may include an image obtaining unit 910, a robustness evaluation unit 950 and a driving strategy adjustment unit 960 connected in sequence. The image obtaining unit 910 obtains an environment image of the vehicle's current traffic area and sends it to the robustness evaluation unit 950; the robustness evaluation unit 950 uses the image to evaluate the model robustness of the environment recognition model under the current driving strategy; and the driving strategy adjustment unit 960 adjusts the vehicle's current driving strategy when the evaluated robustness is lower than the preset robustness threshold. The environment recognition model perceives the surroundings in the current traffic area under the current driving strategy, and the perception result guides the vehicle through that area.
In this embodiment, when the vehicle driving apparatus 900 is a vehicle, the image obtaining unit 910 may specifically be a camera in the vehicle, such as an on-board camera, which obtains the environment image by capturing the vehicle's current traffic area. When the apparatus 900 is a server, the image obtaining unit 910 may specifically be a transceiver unit in the server, which receives the environment image sent by the vehicle's transceiver, the image having been captured of the current traffic area by the vehicle's camera.
In an optional implementation, the apparatus 900 may further include an environment perception unit 920 connected to the robustness evaluation unit 950. The environment perception unit 920 may first perceive the target environment noise type present in the current traffic area, judge whether the area is under the corresponding noise attack, and send the judgment result to the robustness evaluation unit 950, which, on determining that a noise attack is occurring, uses the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy.
在实施中,环境感知单元920可以通过多种方式感知目标环境噪声类型,例如:
In one approach, the environment sensing unit 920 may also be connected to the image acquisition unit 910; it obtains the captured environment image from the image acquisition unit 910 and inputs the environment image into a noise classification model to classify it and obtain the target environmental noise type.
In another approach, the environment sensing unit 920 may obtain weather forecast information for the vehicle's current traffic area and parse it to obtain the target environmental noise type.
In yet another approach, the environment sensing unit 920 may obtain the target environmental noise type from a dynamic layer of a navigation map.
In still another approach, the environment sensing unit 920 may also be connected to a server and request the target environmental noise type from the server by sending an acquisition request.
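The several acquisition paths above can be combined into a simple fallback chain (a sketch only; the embodiment does not prescribe an ordering, and the provider abstraction is an assumption):

```python
def sense_noise_type(providers):
    """Try each noise-type provider in order (e.g. image classifier,
    weather forecast parser, navigation-map layer, server request)
    and return the first target environmental noise type found."""
    for provider in providers:
        noise_type = provider()
        if noise_type is not None:
            return noise_type
    return None  # no provider could determine a noise type
```

Each provider is any zero-argument callable returning a noise type string or `None`, so the four approaches can be mixed freely.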
In an optional implementation, the robustness evaluation unit 950 is specifically configured to: use the environment image as a reference image and evaluate the model robustness of the environment recognition model using the reference image; or, obtain from a preset image library a target preset image whose similarity to the environment image is not less than a preset similarity threshold, use the target preset image, or the target preset image together with the environment image, as the reference image, and evaluate the model robustness of the environment recognition model using the reference image.
In an optional implementation, the vehicle driving apparatus 900 may further include a noise generation unit 930 and an image generation unit 940; the noise generation unit 930 is connected to the environment sensing unit 920 and the image generation unit 940, and the image generation unit 940 is further connected to the image acquisition unit 910 and the robustness evaluation unit 950. In implementation, the environment sensing unit 920 may also send the determined target environmental noise type present in the current traffic area to the noise generation unit 930; the noise generation unit 930 generates pieces of noise information conforming to the target environmental noise type, each corresponding to a different noise level, and sends them to the image generation unit 940; the image generation unit 940 obtains the captured environment image from the image acquisition unit 910 and the pieces of noise information from the noise generation unit 930, and repeats the following operation until every piece of noise information has been traversed, obtaining perturbed images corresponding respectively to the pieces of noise information: traverse a not-yet-traversed piece of noise information among the pieces of noise information, and add the traversed noise information to the reference image to obtain a corresponding perturbed image. The image generation unit 940 sends the reference image and the generated perturbed images to the robustness evaluation unit 950; the robustness evaluation unit 950 uses the environment recognition model to recognize the reference image and the perturbed images, determines which perturbed images the environment recognition model can accurately recognize, and determines the maximum noise level among the recognized perturbed images.
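The generate-perturb-recognize traversal above can be sketched end to end (a hypothetical illustration: `add_noise` stands in for adding weather noise of a given level, an "image" is simplified to a list of pixel values, and the recognition model is supplied as a callable):

```python
def generate_noise_levels(n_levels=10, step=0.1):
    """Pieces of noise information, one per level: 0.1, 0.2, ..., 1.0."""
    return [round(step * (i + 1), 1) for i in range(n_levels)]

def add_noise(image, level):
    """Stand-in for adding noise of the given level to the reference image."""
    return [p + level for p in image]

def safety_radius(reference, recognizes, levels):
    """Traverse every noise level, build the corresponding perturbed image,
    and return the maximum level the model still recognizes accurately
    (the safety radius of the environment recognition model)."""
    best = 0.0
    for level in levels:
        perturbed = add_noise(reference, level)
        if recognizes(perturbed):
            best = max(best, level)
    return best
```

With a toy "model" that only tolerates total added noise up to 0.3, the safety radius computed this way is 0.3, matching the worked example earlier in the description.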
In an optional implementation, the robustness evaluation unit 950 is specifically configured to: obtain a preset noise level, and determine that the maximum noise level among the recognized perturbed images is less than the preset noise level. The preset noise level indicates the maximum noise level of the perturbed images that the environment recognition model is required to recognize accurately.
In an optional implementation, where there are at least two reference images, the robustness evaluation unit 950 may be specifically configured to: use the environment recognition model to determine, for each reference image, the recognition accuracy of its perturbed images relative to that reference image; and then, for each piece of noise information: determine the recognition accuracy corresponding to that noise information from the recognition accuracies, relative to their respective reference images, of the at least two perturbed images to which that noise information was added; if the recognition accuracy is not below a preset accuracy threshold, determine that the at least two perturbed images are perturbed images that the environment recognition model can accurately recognize.
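With several reference images, the per-noise-level decision above amounts to aggregating accuracies across references and comparing against the threshold. One natural aggregation is the mean, though the embodiment does not fix how the per-reference accuracies are combined (a sketch under that assumption):

```python
def accurately_recognized(accuracies_per_reference, accuracy_threshold):
    """Given the recognition accuracies of the perturbed images carrying one
    piece of noise information (one accuracy per reference image), decide
    whether the model is considered to recognize them accurately.
    Aggregation by mean is an assumption of this sketch."""
    mean_accuracy = sum(accuracies_per_reference) / len(accuracies_per_reference)
    return mean_accuracy >= accuracy_threshold
```

Perturbed images for a given noise level count as "accurately recognized" only when the aggregated accuracy clears the preset accuracy threshold.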
In an optional implementation, the driving strategy adjustment unit 960 is specifically configured to: denoise the environment image before inputting it into the environment recognition model; or complete the current driving strategy using another environment recognition model different from the one under the current driving strategy; or switch to a manual driving strategy; or take emergency measures.
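The four adjustment options can be expressed as a simple dispatch (a sketch only; the option names and the callables standing in for the denoiser and the recognition models are assumptions):

```python
def adjust_strategy(option, image, denoise, recognize, alt_recognize):
    """Apply one of the four adjustment options from the paragraph above."""
    if option == "denoise":
        # Denoise the environment image, then rerun the same model.
        return recognize(denoise(image))
    if option == "switch_model":
        # Complete the current driving strategy with a different model.
        return alt_recognize(image)
    if option == "manual":
        return "switched-to-manual"
    if option == "emergency":
        return "emergency-measures"
    raise ValueError(f"unknown option: {option!r}")
```

In practice the choice among these options would itself be policy-dependent; here each branch simply returns the outcome of the corresponding action.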
It should be understood that the above division of the units of the vehicle driving apparatus 900 is only a division of logical functions; in actual implementation they may be fully or partially integrated into one physical entity, or be physically separate. For example, the image acquisition unit 910 may be implemented by the transceiver 430 of FIG. 4 above, and the environment sensing unit 920, noise generation unit 930, image generation unit 940, robustness evaluation unit 950, and driving strategy adjustment unit 960 may be implemented by the processor 410 of FIG. 4 above.
It should be noted that FIG. 9 above describes the structure of the vehicle driving apparatus only by taking a vehicle as an example. When the vehicle driving apparatus is a server, the server may also have the environment sensing unit 920, noise generation unit 930, image generation unit 940, robustness evaluation unit 950, and driving strategy adjustment unit 960 shown in FIG. 9, and may further have a transceiver unit, configured to receive the environment data sent by the vehicle and forward it to the image generation unit 940, and to send the driving-strategy adjustment instruction generated by the driving strategy adjustment unit 960 to the vehicle.
Based on the foregoing method, FIG. 10 is a schematic structural diagram of a vehicle driving apparatus provided by an embodiment of this application. As shown in FIG. 10, the apparatus may be a vehicle or a server, or may be a chip or circuit, for example a chip or circuit that can be provided in a vehicle, or a chip or circuit that can be provided in a server.
Further, the vehicle driving apparatus 1001 may further include a bus system, through which the processor 1002, the memory 1004, and the communication interface 1003 may be connected.
It should be understood that the processor 1002 may be a chip. For example, the processor 1002 may be a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), a system on chip (SoC), a central processing unit (CPU), a network processor (NP), a digital signal processor (DSP), a micro controller unit (MCU), a programmable logic device (PLD), or another integrated chip.
During implementation, the steps of the above methods may be completed by integrated logic circuits of the hardware in the processor 1002 or by instructions in software form. The steps of the methods disclosed in connection with the embodiments of this application may be embodied directly as being executed by a hardware processor, or executed by a combination of hardware and software modules in the processor 1002. A software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory 1004; the processor 1002 reads the information in the memory 1004 and completes the steps of the above methods in combination with its hardware.
It should be noted that the processor 1002 in the embodiments of this application may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method embodiments may be completed by integrated logic circuits of the hardware in the processor or by instructions in software form. The above processor may be a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of this application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the methods disclosed in connection with the embodiments of this application may be embodied directly as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. A software module may be located in a storage medium mature in the art, such as random access memory, flash memory, read-only memory, programmable read-only memory, electrically erasable programmable memory, or a register. The storage medium is located in the memory; the processor reads the information in the memory and completes the steps of the above methods in combination with its hardware.
It can be understood that the memory 1004 in the embodiments of this application may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memory. The nonvolatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable PROM (EPROM), an electrically EPROM (EEPROM), or flash memory. The volatile memory may be a random access memory (RAM), used as an external cache. By way of example but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DR RAM). It should be noted that the memories of the systems and methods described herein are intended to include, but not be limited to, these and any other suitable types of memory.
Where the vehicle driving apparatus 1001 corresponds to the vehicle in the above methods, the vehicle driving apparatus 1001 may include a processor 1002, a communication interface 1003, and a memory 1004. The memory 1004 is configured to store instructions, and the processor 1002 is configured to execute the instructions stored in the memory 1004 to implement the vehicle-related solutions of any one or more of the corresponding methods shown in FIG. 5 or FIG. 6 above, or to execute the methods performed by the vehicle in Embodiment 1 or Embodiment 2 above. For example, when executing Embodiment 1, the vehicle driving apparatus 1001 may: obtain, through the communication interface 1003, the environment image captured by the camera of the current traffic area; use the environment image, through the processor 1002, to evaluate the model robustness of the environment recognition model under the current driving strategy; and adjust the vehicle's current driving strategy when the model robustness is below a preset robustness threshold. The environment recognition model is used to perceive the surrounding environment in the current traffic area under the current driving strategy, and the perception result is used to guide the vehicle through the current traffic area.
Where the vehicle driving apparatus 1001 corresponds to the server in the above methods, the vehicle driving apparatus 1001 may include a processor 1002, a communication interface 1003, and a memory 1004. The memory 1004 is configured to store instructions, and the processor 1002 is configured to execute the instructions stored in the memory 1004 to implement the server-related solutions of any one or more of the corresponding methods shown in FIG. 5 or FIG. 6 above, or to execute the methods performed by the server in Embodiment 1 or Embodiment 2 above. For example, when executing Embodiment 1, the vehicle driving apparatus 1001 may: obtain, through the communication interface 1003, the environment image captured of the current traffic area by the camera in the vehicle; use the environment image, through the processor 1002, to evaluate the model robustness of the environment recognition model under the current driving strategy; when the model robustness is below a preset robustness threshold, generate a driving-strategy adjustment instruction; and send the driving-strategy adjustment instruction to the vehicle through the communication interface 1003 to instruct the vehicle to adjust the current driving strategy. The environment recognition model is used to perceive the surrounding environment in the current traffic area under the current driving strategy, and the perception result is used to guide the vehicle through the current traffic area.
For the concepts, explanations, detailed descriptions, and other steps of the vehicle driving apparatus 1001 that relate to the technical solutions provided by the embodiments of this application, refer to the descriptions of these contents in the foregoing methods or other embodiments; they are not repeated here.
According to the methods provided by the embodiments of this application, this application further provides a computer program product comprising computer program code which, when run on a computer, causes the computer to perform the method of any one of the embodiments shown in FIG. 5 or FIG. 6.
According to the methods provided by the embodiments of this application, this application further provides a computer-readable storage medium storing program code which, when run on a computer, causes the computer to perform the method of any one of the embodiments shown in FIG. 5 or FIG. 6.
According to the methods provided by the embodiments of this application, this application further provides a vehicle, which may include a camera and a processor, wherein the camera is configured to photograph the current traffic area to obtain an environment image, and the processor is configured to execute the steps performed by the vehicle in any one or more of the corresponding methods shown in FIG. 5 or FIG. 6 above.
According to the methods provided by the embodiments of this application, this application further provides an Internet of Vehicles system, which includes the aforementioned vehicle and server.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented using software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium, or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless means (e.g., infrared, radio, microwave). The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disk, hard disk, magnetic tape), optical media (e.g., digital video disc (DVD)), or semiconductor media (e.g., solid state drive (SSD)), and so on.
Obviously, those skilled in the art can make various modifications and variations to this application without departing from its spirit and scope. Thus, if these modifications and variations of this application fall within the scope of the claims of this application and their equivalent technologies, this application is also intended to encompass them.

Claims (33)

  1. A vehicle driving method, characterized in that the method comprises:
    obtaining an environment image captured by a vehicle of a current traffic area;
    using the environment image to evaluate model robustness of an environment recognition model under a current driving strategy;
    when the model robustness is below a preset robustness threshold, adjusting the current driving strategy;
    wherein the environment recognition model is used to perceive the surrounding environment in the current traffic area under the current driving strategy, and a perception result of the perceiving is used to guide the vehicle through the current traffic area.
  2. The method according to claim 1, characterized in that, before using the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy, the method further comprises:
    determining a target environmental noise type present in the current traffic area;
    determining that the current traffic area is subject to a noise attack corresponding to the target environmental noise type.
  3. The method according to claim 1 or 2, characterized in that using the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy comprises:
    using the environment image as a reference image; or, obtaining from a preset image library a target preset image whose similarity to the environment image is not less than a preset similarity threshold, and using the target preset image, or the target preset image and the environment image, as the reference image;
    using the reference image to evaluate the model robustness of the environment recognition model.
  4. The method according to claim 3, characterized in that using the reference image to evaluate the model robustness of the environment recognition model comprises:
    determining a target environmental noise type present in the current traffic area;
    generating pieces of noise information conforming to the target environmental noise type, the pieces of noise information corresponding to different noise levels;
    repeating the following operation until every piece of noise information has been traversed, to obtain perturbed images corresponding respectively to the pieces of noise information: traversing a not-yet-traversed piece of noise information among the pieces of noise information, and adding the traversed noise information to the reference image to obtain a corresponding perturbed image;
    recognizing the reference image and the perturbed images according to the environment recognition model, and determining perturbed images that the environment recognition model can accurately recognize;
    determining a maximum noise level corresponding to the recognized perturbed images.
  5. The method according to claim 4, characterized in that the model robustness being below the preset robustness threshold is determined in the following way:
    obtaining a preset noise level, the preset noise level indicating a maximum noise level of perturbed images that the environment recognition model is required to recognize accurately;
    determining that the maximum noise level corresponding to the recognized perturbed images is less than the preset noise level.
  6. The method according to claim 4 or 5, characterized in that there are at least two reference images;
    the recognizing the reference image and the perturbed images according to the environment recognition model and determining the perturbed images that the environment recognition model can accurately recognize comprises:
    determining, according to the environment recognition model, a recognition accuracy of the perturbed images corresponding to each reference image relative to that reference image;
    for each piece of noise information among the pieces of noise information: determining a recognition accuracy corresponding to the noise information according to the recognition accuracies, relative to their respective reference images, of at least two perturbed images to which the noise information was added; and if the recognition accuracy is not below a preset accuracy threshold, determining that the at least two perturbed images are perturbed images that the environment recognition model can accurately recognize.
  7. The method according to claim 2 or 4, characterized in that the target environmental noise type comprises any one of the following:
    the target environmental noise type obtained by classifying the environment image using a noise classification model;
    the target environmental noise type obtained by parsing weather forecast information of the current traffic area;
    the target environmental noise type obtained from a dynamic layer of a navigation map;
    the target environmental noise type requested from a server.
  8. The method according to any one of claims 1 to 7, characterized in that adjusting the current driving strategy comprises any one of the following:
    denoising the environment image before inputting it into the environment recognition model;
    completing the current driving strategy using another environment recognition model different from the environment recognition model under the current driving strategy;
    switching to a manual driving strategy;
    taking emergency measures.
  9. A vehicle driving apparatus, characterized by comprising a processor and a memory, the processor being connected to the memory, the memory storing a computer program which, when executed by the processor, causes the vehicle driving apparatus to:
    obtain an environment image captured by a vehicle of a current traffic area;
    use the environment image to evaluate model robustness of an environment recognition model under a current driving strategy;
    when the model robustness is below a preset robustness threshold, adjust the current driving strategy;
    wherein the environment recognition model is used to perceive the surrounding environment in the current traffic area under the current driving strategy, and a perception result of the perceiving is used to guide the vehicle through the current traffic area.
  10. The apparatus according to claim 9, characterized in that, when the computer program stored in the memory is executed by the processor, the vehicle driving apparatus is further caused to:
    before using the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy, determine a target environmental noise type present in the current traffic area, and determine that the current traffic area is subject to a noise attack corresponding to the target environmental noise type.
  11. The apparatus according to claim 9 or 10, characterized in that, when the computer program stored in the memory is executed by the processor, the vehicle driving apparatus is specifically caused to:
    use the environment image as a reference image; or, obtain from a preset image library a target preset image whose similarity to the environment image is not less than a preset similarity threshold, and use the target preset image, or the target preset image and the environment image, as the reference image;
    use the reference image to evaluate the model robustness of the environment recognition model.
  12. The apparatus according to claim 11, characterized in that, when the computer program stored in the memory is executed by the processor, the vehicle driving apparatus is specifically caused to:
    determine a target environmental noise type present in the current traffic area;
    generate pieces of noise information conforming to the target environmental noise type, the pieces of noise information corresponding to different noise levels;
    repeat the following operation until every piece of noise information has been traversed, to obtain perturbed images corresponding respectively to the pieces of noise information: traversing a not-yet-traversed piece of noise information among the pieces of noise information, and adding the traversed noise information to the reference image to obtain a corresponding perturbed image;
    recognize the reference image and the perturbed images according to the environment recognition model, and determine perturbed images that the environment recognition model can accurately recognize;
    determine a maximum noise level corresponding to the recognized perturbed images.
  13. The apparatus according to claim 12, characterized in that, when the computer program stored in the memory is executed by the processor, the vehicle driving apparatus is specifically caused to:
    obtain a preset noise level, the preset noise level indicating a maximum noise level of perturbed images that the environment recognition model is required to recognize accurately;
    determine that the maximum noise level corresponding to the recognized perturbed images is less than the preset noise level.
  14. The apparatus according to claim 12 or 13, characterized in that there are at least two reference images;
    when the computer program stored in the memory is executed by the processor, the vehicle driving apparatus is specifically caused to:
    determine, according to the environment recognition model, a recognition accuracy of the perturbed images corresponding to each reference image relative to that reference image;
    for each piece of noise information among the pieces of noise information: determine a recognition accuracy corresponding to the noise information according to the recognition accuracies, relative to their respective reference images, of at least two perturbed images to which the noise information was added; and if the recognition accuracy is not below a preset accuracy threshold, determine that the at least two perturbed images are perturbed images that the environment recognition model can accurately recognize.
  15. The apparatus according to claim 10 or 12, characterized in that the target environmental noise type comprises any one of the following:
    the target environmental noise type obtained by classifying the environment image using a noise classification model;
    the target environmental noise type obtained by parsing weather forecast information of the current traffic area;
    the target environmental noise type obtained from a dynamic layer of a navigation map;
    the target environmental noise type requested from a server.
  16. The apparatus according to any one of claims 9 to 15, characterized in that, when the computer program stored in the memory is executed by the processor, the vehicle driving apparatus adjusts the current driving strategy by any one of the following:
    denoising the environment image before inputting it into the environment recognition model;
    completing the current driving strategy using another environment recognition model different from the environment recognition model under the current driving strategy;
    switching to a manual driving strategy;
    taking emergency measures.
  17. The apparatus according to any one of claims 9 to 16, characterized in that the apparatus is a vehicle or a server.
  18. A vehicle driving apparatus, characterized by comprising modules/units for executing the method of any one of claims 1 to 8.
  19. An Internet of Vehicles system, characterized by comprising a vehicle and a server;
    the vehicle being configured to photograph a current traffic area to obtain an environment image, and send the environment image to the server;
    the server being configured to use the environment image to evaluate model robustness of an environment recognition model under a current driving strategy, and, when the model robustness is below a preset robustness threshold, send a driving-strategy adjustment indication to the vehicle;
    the vehicle being further configured to adjust the current driving strategy according to the driving-strategy adjustment indication;
    wherein the environment recognition model is used to perceive the surrounding environment in the current traffic area under the current driving strategy, and a perception result of the perceiving is used to guide the vehicle through the current traffic area.
  20. A chip, characterized by comprising a processor and an interface, the processor being configured to read instructions through the interface to execute the method of any one of claims 1 to 8.
  21. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when run, implements the method of any one of claims 1 to 8.
  22. A computer program product, characterized in that the computer program product comprises a computer program which, when run on a computer, implements the method of any one of claims 1 to 8.
  23. A vehicle, characterized in that the vehicle comprises the apparatus of any one of claims 9 to 16.
  24. A server, characterized in that the server comprises the apparatus of any one of claims 9 to 16.
  25. An Internet of Vehicles system, characterized by comprising a vehicle driving apparatus and a server;
    the vehicle driving apparatus being configured to photograph a current traffic area to obtain an environment image, and send the environment image to the server;
    the server being configured to use the environment image to evaluate model robustness of an environment recognition model under a current driving strategy, and, when the model robustness is below a preset robustness threshold, send a driving-strategy adjustment indication to the vehicle driving apparatus;
    the vehicle driving apparatus being further configured to adjust the current driving strategy according to the driving-strategy adjustment indication;
    wherein the environment recognition model is used by the vehicle driving apparatus to perceive the surrounding environment in the current traffic area under the current driving strategy, and a perception result of the perceiving is used to guide a vehicle through the current traffic area.
  26. The system according to claim 25, characterized in that, before using the environment image to evaluate the model robustness of the environment recognition model under the current driving strategy, the server is further configured to:
    determine a target environmental noise type present in the current traffic area;
    determine that the current traffic area is subject to a noise attack corresponding to the target environmental noise type.
  27. The system according to claim 25 or 26, characterized in that the server is specifically configured to:
    use the environment image as a reference image; or, obtain from a preset image library a target preset image whose similarity to the environment image is not less than a preset similarity threshold, and use the target preset image, or the target preset image and the environment image, as the reference image;
    use the reference image to evaluate the model robustness of the environment recognition model.
  28. The system according to claim 27, characterized in that the server is specifically configured to:
    determine a target environmental noise type present in the current traffic area;
    generate pieces of noise information conforming to the target environmental noise type, the pieces of noise information corresponding to different noise levels;
    repeat the following operation until every piece of noise information has been traversed, to obtain perturbed images corresponding respectively to the pieces of noise information: traversing a not-yet-traversed piece of noise information among the pieces of noise information, and adding the traversed noise information to the reference image to obtain a corresponding perturbed image;
    recognize the reference image and the perturbed images according to the environment recognition model, and determine perturbed images that the environment recognition model can accurately recognize;
    determine a maximum noise level corresponding to the recognized perturbed images.
  29. The system according to claim 28, characterized in that the server determines that the model robustness is below the preset robustness threshold in the following way:
    obtaining a preset noise level, the preset noise level indicating a maximum noise level of perturbed images that the environment recognition model is required to recognize accurately;
    determining that the maximum noise level corresponding to the recognized perturbed images is less than the preset noise level.
  30. The system according to claim 28 or 29, characterized in that there are at least two reference images;
    the server being specifically configured to:
    determine, according to the environment recognition model, a recognition accuracy of the perturbed images corresponding to each reference image relative to that reference image;
    for each piece of noise information among the pieces of noise information: determine a recognition accuracy corresponding to the noise information according to the recognition accuracies, relative to their respective reference images, of at least two perturbed images to which the noise information was added; and if the recognition accuracy is not below a preset accuracy threshold, determine that the at least two perturbed images are perturbed images that the environment recognition model can accurately recognize.
  31. The system according to claim 26 or 28, characterized in that the target environmental noise type comprises any one of the following:
    the target environmental noise type obtained by classifying the environment image using a noise classification model;
    the target environmental noise type obtained by parsing weather forecast information of the current traffic area;
    the target environmental noise type obtained from a dynamic layer of a navigation map;
    the target environmental noise type requested from a server.
  32. The system according to any one of claims 25 to 31, characterized in that the vehicle driving apparatus is specifically configured to:
    denoise the environment image before inputting it into the environment recognition model;
    complete the current driving strategy using another environment recognition model different from the environment recognition model under the current driving strategy;
    switch to a manual driving strategy;
    or,
    take emergency measures.
  33. The system according to any one of claims 25 to 32, characterized in that the vehicle driving apparatus is included in a vehicle.
PCT/CN2022/088025 2021-04-26 2022-04-20 Vehicle driving method, apparatus, and system WO2022228251A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110451942.1A CN115320627A (zh) 2021-04-26 2021-04-26 Vehicle driving method, apparatus, and system
CN202110451942.1 2021-04-26

Publications (1)

Publication Number Publication Date
WO2022228251A1 true WO2022228251A1 (zh) 2022-11-03

Family

ID=83847808

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/088025 WO2022228251A1 (zh) 2021-04-26 2022-04-20 一种车辆驾驶方法、装置及系统

Country Status (2)

Country Link
CN (1) CN115320627A (zh)
WO (1) WO2022228251A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116279554A (zh) * 2023-01-15 2023-06-23 润芯微科技(江苏)有限公司 System and method for adjusting driving strategy based on image recognition and mobile location services

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117034732A (zh) * 2023-04-14 2023-11-10 北京百度网讯科技有限公司 Autonomous driving model training method based on adversarial learning between real and simulated data

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110007983A1 (en) * 2009-07-12 2011-01-13 Electronics And Telecommunications Research Institute Method and apparatus of detecting image objects
US20200074234A1 (en) * 2018-09-05 2020-03-05 Vanderbilt University Noise-robust neural networks and methods thereof
US20200226430A1 (en) * 2020-03-26 2020-07-16 Intel Corporation Devices and methods for accurately identifying objects in a vehicle's environment
CN111523515A (zh) * 2020-05-13 2020-08-11 北京百度网讯科技有限公司 Method, device, and storage medium for evaluating the environment cognition capability of an autonomous vehicle
US20200265590A1 (en) * 2019-02-19 2020-08-20 The Trustees Of The University Of Pennsylvania Methods, systems, and computer readable media for estimation of optical flow, depth, and egomotion using neural network trained using event-based learning
CN112346450A (zh) * 2019-07-22 2021-02-09 沃尔沃汽车公司 Robust autonomous driving design
CN112541520A (zh) * 2019-09-20 2021-03-23 罗伯特·博世有限公司 Device and method for generating counterfactual data samples for a neural network
CN112699765A (zh) * 2020-12-25 2021-04-23 北京百度网讯科技有限公司 Method, apparatus, electronic device, and storage medium for evaluating a visual localization algorithm

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116279554A (zh) * 2023-01-15 2023-06-23 润芯微科技(江苏)有限公司 System and method for adjusting driving strategy based on image recognition and mobile location services
CN116279554B (zh) * 2023-01-15 2024-02-13 润芯微科技(江苏)有限公司 System and method for adjusting driving strategy based on image recognition and mobile location services

Also Published As

Publication number Publication date
CN115320627A (zh) 2022-11-11


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22794722

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22794722

Country of ref document: EP

Kind code of ref document: A1