WO2022133939A1 - Driving control method and device, automobile, and computer-readable storage medium - Google Patents

Driving control method and device, automobile, and computer-readable storage medium

Info

Publication number
WO2022133939A1
WO2022133939A1 (PCT/CN2020/139135; CN2020139135W)
Authority
WO
WIPO (PCT)
Prior art keywords
target
preset
scene factor
classification result
sensing data
Prior art date
Application number
PCT/CN2020/139135
Other languages
English (en)
Chinese (zh)
Inventor
高杨
任卫红
徐斌
Original Assignee
深圳市大疆创新科技有限公司 (SZ DJI Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SZ DJI Technology Co., Ltd. (深圳市大疆创新科技有限公司)
Priority to CN202080075421.4A (published as CN114641419A)
Priority to PCT/CN2020/139135 (published as WO2022133939A1)
Publication of WO2022133939A1

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces

Definitions

  • the present application relates to the technical field of intelligent driving, and in particular, to a driving control method, device, automobile, and computer-readable storage medium.
  • In the field of intelligent driving, an intelligent driving car mainly perceives information about its surrounding environment through sensors and plans its own driving according to that environmental information.
  • However, the environment in which the intelligent driving car is located is complex and changeable. In difficult scenes such as bad weather, poor road conditions, and poor lighting, the intelligent driving car is prone to making wrong decisions, which can result in accidents, so its safe driving cannot be guaranteed.
  • embodiments of the present application provide a driving control method, a device, a vehicle, and a computer-readable storage medium, which aim to improve the driving safety of an intelligent driving vehicle.
  • an embodiment of the present application provides a driving control method, including:
  • Each of the preset controlled components is controlled to operate according to the target control strategy of each of the preset controlled components.
  • an embodiment of the present application further provides a driving control device, the driving control device includes a memory and a processor; the memory is used for storing a computer program;
  • the processor is configured to execute the computer program and implement the following steps when executing the computer program:
  • Each of the preset controlled components is controlled to operate according to the target control strategy of each of the preset controlled components.
  • the embodiments of the present application also provide an intelligent driving vehicle, the intelligent driving vehicle comprising:
  • a power system, which is arranged in the vehicle body and is used to provide motive power for the intelligent driving vehicle;
  • a sensor arranged on the vehicle body, for collecting sensing data of the environment where the intelligent driving vehicle is located;
  • the driving control device as described above is provided in the vehicle body and is used to control the intelligent driving vehicle.
  • an embodiment of the present application further provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the processor implements the steps of the above-mentioned driving control method.
  • the embodiments of the present application provide a driving control method, a device, a vehicle, and a computer-readable storage medium.
  • in the driving control method, sensing data collected by a sensor is acquired and input into a preset scene factor classification model to obtain a first scene factor classification result; a target control strategy of at least one preset controlled component in the intelligent driving vehicle is then determined according to the first scene factor classification result; finally, the at least one preset controlled component is controlled to operate according to its target control strategy. This enables the intelligent driving vehicle to adapt its control strategies for the controlled components to the scene, which can greatly improve its driving safety.
  • FIG. 1 is a schematic diagram of a scenario for implementing the driving control method provided by the embodiment of the present application
  • FIG. 2 is a schematic flowchart of steps of a driving control method provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of steps of another driving control method provided by an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a scene in which an intelligent driving vehicle and a target object travel at a constant distance in an embodiment of the present application;
  • FIG. 5 is a schematic block diagram of the structure of a driving control device provided by an embodiment of the present application.
  • FIG. 6 is a schematic structural block diagram of an intelligent driving vehicle provided by an embodiment of the present application.
  • In the field of intelligent driving, an intelligent driving car mainly perceives information about its surrounding environment through sensors and plans its own driving according to that environmental information.
  • However, the environment in which the intelligent driving car is located is complex and changeable. In difficult scenes such as bad weather, poor road conditions, and poor lighting, the intelligent driving car is prone to making wrong decisions, which can result in accidents, so its safe driving cannot be guaranteed.
  • the embodiments of the present application provide a driving control method, a device, a vehicle, and a computer-readable storage medium: sensing data collected by sensors is acquired and input into a preset scene factor classification model to obtain a first scene factor classification result; a target control strategy of at least one preset controlled component in the intelligent driving vehicle is then determined according to the first scene factor classification result; finally, the operation of the at least one preset controlled component is controlled according to that target control strategy. This enables the intelligent driving vehicle to adapt its control strategies for the controlled components to the scene, which can greatly improve its driving safety.
  • FIG. 1 is a schematic diagram of a scene for implementing the driving control method provided by the embodiment of the present application.
  • the intelligent driving car 100 includes a car body 110, a sensor 120 disposed on the car body 110, and a power system 130 disposed on the car body 110; the sensor 120 is used to collect sensing data, and the power system 130 is used to provide motive power for the intelligent driving vehicle 100.
  • the sensor 120 includes a visual sensor, a radar device, an inertial measurement unit, and an odometer; the radar device may include a lidar and a millimeter-wave radar.
  • the intelligent driving vehicle 100 may include one or more radar devices.
  • lidar can obtain laser point clouds by emitting laser beams, thereby detecting the position, speed, and other information of objects in the environment.
  • lidar can transmit detection signals into the environment containing the target object, receive the signal reflected from the target object, and obtain the laser point cloud from the transmitted detection signal, the received reflected signal, and data parameters such as the interval between transmission and reception.
  • a laser point cloud can include N points, and each point can include parameters such as x, y, z coordinates and intensity (reflectivity).
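The point cloud structure just described (N points, each with x, y, z coordinates and an intensity value) can be represented minimally as follows; the dataclass layout and the helper function are illustrative assumptions, not the patent's data format:

```python
# Minimal representation of a laser point cloud as described above: N points,
# each carrying x, y, z coordinates and an intensity (reflectivity) value.
from dataclasses import dataclass

@dataclass
class LidarPoint:
    x: float
    y: float
    z: float
    intensity: float  # reflectivity; assumed normalized to [0, 1]

def point_cloud_extent(points):
    """Axis-aligned bounding box of the cloud:
    ((min_x, max_x), (min_y, max_y), (min_z, max_z))."""
    xs = [p.x for p in points]
    ys = [p.y for p in points]
    zs = [p.z for p in points]
    return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

# A two-point toy cloud.
cloud = [LidarPoint(0.0, 1.0, 0.5, 0.8), LidarPoint(3.0, -1.0, 0.2, 0.4)]
extent = point_cloud_extent(cloud)
```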
  • the intelligent driving car 100 may further include a driving control device (not shown in FIG. 1). The driving control device is used to: acquire the sensing data collected by the sensor 120 and input the sensing data into a preset scene factor classification model to obtain a first scene factor classification result; determine a target control strategy of at least one preset controlled component in the intelligent driving vehicle 100 according to the first scene factor classification result; and control the operation of the at least one preset controlled component according to its target control strategy. In this way, the intelligent driving vehicle 100 can adapt its control strategies for the controlled components to the scene, which can greatly improve its driving safety.
  • the intelligent driving vehicle in FIG. 1 and the above naming of the components of the intelligent driving vehicle are only for the purpose of identification, and therefore do not limit the embodiments of the present application.
  • the driving control method provided by the embodiments of the present application will be described in detail with reference to the scene in FIG. 1 . It should be noted that the scene in FIG. 1 is only used to explain the driving control method provided by the embodiment of the present application, but does not constitute a limitation on the application scene of the driving control method provided by the embodiment of the present application.
  • FIG. 2 is a schematic flowchart of steps of a driving control method provided by an embodiment of the present application.
  • the driving control method can be applied to an intelligent driving car to improve the driving safety of the intelligent driving car.
  • the driving control method includes steps S101 to S103 .
  • the sensors in the intelligent driving car include visual sensors and radar devices.
  • the visual sensor can be a monocular vision sensor or a binocular vision sensor.
  • the radar device can include lidar and millimeter-wave radar.
  • the sensing data may be image data, may be point cloud data, or may include both image data and point cloud data.
  • the scene factor classification model may be a pre-trained convolutional neural network (CNN) model. The training method may be: acquiring a plurality of first sample data, where each first sample datum includes sensing data collected by the sensor and a labeled first scene factor classification result; and iteratively training a preset convolutional neural network model on the plurality of first sample data until the iteratively trained model converges, thereby obtaining the scene factor classification model.
  • the scene factor classification model can thus accurately determine the scene factor classification results for the environment in which the intelligent driving vehicle is located, which facilitates subsequent accurate control of the vehicle's own driving based on those results.
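The train-until-convergence procedure described above can be sketched as follows. The "model" here is a toy stand-in (a single brightness threshold separating "day" samples from "night" samples), since the patent's actual CNN architecture is not specified in this text:

```python
# Toy sketch of iterative training on labeled sample data until convergence.
# The model, loss, and update step are illustrative, not the patent's CNN.

def train_until_convergence(samples, step, loss_fn, params, tol=1e-4, max_iters=1000):
    """Iteratively update `params` on labeled samples until the average loss
    drops below `tol` (the convergence criterion) or `max_iters` is reached."""
    loss = float("inf")
    for _ in range(max_iters):
        loss = sum(loss_fn(params, x, y) for x, y in samples) / len(samples)
        if loss <= tol:  # converged
            break
        params = step(params, samples)
    return params, loss

# Labeled "first sample data": (brightness reading, scene label 1=day, 0=night).
samples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]

def loss_fn(theta, x, y):
    return (int(x > theta) - y) ** 2  # squared classification error

def step(theta, samples):
    # Crude coordinate search: move the threshold toward lower total loss.
    candidates = [theta + 0.05, theta, theta - 0.05]
    return min(candidates, key=lambda t: sum(loss_fn(t, x, y) for x, y in samples))

theta, final_loss = train_until_convergence(samples, step, loss_fn, params=0.0)
```

In a real implementation, `step` would be a gradient-based optimizer update and `loss_fn` a cross-entropy over the CNN's class scores; the loop structure is the same.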
  • the first scene factor classification result includes the classification result of at least one scene factor among time, weather, illumination, road surface state, road shape, road area, and image quality; the time classification results include day, night, evening, etc., and the weather classification results include sunny, cloudy, rainy, snowy, foggy, etc.
  • the road surface state classification results include normal road, wet road, water on the road, snow on the road, etc.
  • the road shape classification results include straight road, curved road, downhill, uphill, etc.
  • the road area classification results include expressways, highways, tunnels, intersections, commercial areas, residential areas, etc.
  • the image quality classification results include normal, overexposed, reflective, glaring, blurred, and too dark.
  • the illumination classification result includes: the illumination intensity being between the first preset illumination intensity and the second preset illumination intensity (normal illumination); the illumination intensity being less than the first preset illumination intensity (illumination too dark); and the illumination intensity being greater than the second preset illumination intensity (illumination too bright). The first preset illumination intensity is smaller than the second preset illumination intensity.
  • the first preset illumination intensity and the second preset illumination intensity may be set based on actual conditions, which are not specifically limited in this embodiment of the present application.
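As a concrete illustration of the three illumination classes above, a minimal helper might look like this; the two threshold values are assumptions, since the patent deliberately leaves them implementation-defined:

```python
# Hypothetical thresholds (in lux) for the first and second preset
# illumination intensities; the patent does not fix these values.
LOW_LUX = 50.0       # first preset illumination intensity (assumed)
HIGH_LUX = 10000.0   # second preset illumination intensity (assumed)

def classify_illumination(lux, low=LOW_LUX, high=HIGH_LUX):
    """Map a measured illumination intensity to one of the three classes
    described in the text: too dark, normal, or too bright."""
    if lux < low:
        return "too dark"
    if lux > high:
        return "too bright"
    return "normal"
```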
  • the at least one preset controlled component may be each or a part of all the preset controlled components.
  • the preset controlled components include but are not limited to the lights, wipers, engine, and visual sensors of the intelligent driving car. The first preset relationship table among scene factors, scene factor classification results, and control strategies of the controlled components can be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • the first preset relationship table among the scene factor, the classification result of the scene factor, and the control strategy of the controlled component may be as shown in Table 1.
  • the control strategy of the controlled components is: turn on the lights; adjust the visual sensor's exposure parameter to the second exposure parameter; turn on the wipers and adjust their gear according to the amount of rain; and slow down, driving below the maximum speed of the current road.
  • the target control strategy of the preset controlled components includes: turning on the lights, adjusting the visual sensor's exposure parameter to the second exposure parameter, turning on the wipers, adjusting the wiper gear according to the amount of rain, and slowing down. Accordingly, the intelligent driving car's lights are turned on, the visual sensor's exposure parameter is adjusted to the second exposure parameter, the wiper gear is determined according to the amount of rain and the wiper is operated in that gear, and the engine speed is reduced to lower the driving speed.
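The first preset relationship table (Table 1 is referenced but not reproduced in this text) can be modeled as a lookup from (scene factor, classification result) to a list of control actions. The entries below are illustrative assumptions based on the rainy and foggy examples in the surrounding paragraphs:

```python
# Sketch of the first preset relationship table; entries are assumptions
# reconstructed from the examples in the text, not the patent's Table 1.
CONTROL_STRATEGIES = {
    ("weather", "rainy"): [
        "turn on lights",
        "set vision sensor exposure to second exposure parameter",
        "turn on wipers and set gear by rainfall amount",
        "reduce engine speed to slow down",
    ],
    ("weather", "foggy"): [
        "turn on lights",
        "adapt driving speed to visibility",
    ],
    ("time", "night"): [
        "turn on lights",
        "set vision sensor exposure to second exposure parameter",
    ],
}

def target_control_strategy(scene_factor, classification_result):
    """Look up the target control strategy for one scene-factor classification;
    an unknown combination yields no actions."""
    return CONTROL_STRATEGIES.get((scene_factor, classification_result), [])
```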
  • in another example, the target control strategy of the preset controlled components includes: turning on the lights and adaptively adjusting the driving speed according to visibility. Accordingly, the lights of the intelligent driving car are turned on, a driving speed is determined according to the visibility, and the car is driven at that speed. The higher the visibility, the faster the driving speed (without exceeding the maximum speed of the current road); the lower the visibility, the slower the driving speed.
  • the driving control method provided by the above embodiment acquires the sensing data collected by the sensor and inputs it into the preset scene factor classification model to obtain the first scene factor classification result; determines, according to the first scene factor classification result, the target control strategy of at least one preset controlled component in the intelligent driving car; and finally controls the operation of the at least one preset controlled component according to its target control strategy. The intelligent driving car can thus adapt its control strategies for the controlled components to the scene, which can greatly improve its driving safety.
  • FIG. 3 is a schematic flowchart of steps of another driving control method provided by an embodiment of the present application.
  • the driving control method includes steps S201 to S204.
  • the sensors in the intelligent driving car include visual sensors and radar devices.
  • the visual sensor can be a monocular vision sensor or a binocular vision sensor.
  • the radar device can include lidar and millimeter-wave radar.
  • the sensing data may be image data, may be point cloud data, or may include both image data and point cloud data.
  • when the sensing data includes image data, the target processing strategy includes at least one of the following: adjusting the brightness of the image data, adjusting the contrast of the image data, increasing the clarity of the image data, and adjusting the sharpening parameters of the image data.
  • the target scene factor includes at least one of time, illumination, and image quality. The second preset relationship table among the target scene factor, the classification result of the target scene factor, and the processing strategy of the sensing data can be set based on the actual situation, which is not specifically limited in this embodiment of the present application.
  • the second preset relationship table between the target scene factor, the classification result of the target scene factor, and the processing strategy of the sensing data may be as shown in Table 2.
  • in one example, the target processing strategy for the sensing data is to increase the brightness of the image data; in another example, the target processing strategy is to adjust the sharpening parameters of the image data.
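Two of the target processing strategies listed above (brightness and contrast adjustment) can be sketched minimally on a grayscale image represented as a list of rows of 0-255 pixel values; the offset and gain values, and the class-to-strategy mapping, are illustrative assumptions:

```python
# Minimal image-processing sketch for the target processing strategies above.
# Offset/gain values and the scene-to-strategy mapping are assumptions.

def adjust_brightness(img, offset):
    """Add a constant offset to every pixel, clamped to [0, 255]."""
    return [[max(0, min(255, p + offset)) for p in row] for row in img]

def adjust_contrast(img, gain, pivot=128):
    """Scale pixel deviations from a pivot value, clamped to [0, 255]."""
    return [[max(0, min(255, int(pivot + gain * (p - pivot)))) for p in row]
            for row in img]

def process_for_scene(img, classification):
    """Pick a processing strategy from the scene-factor classification."""
    if classification == "too dark":    # e.g. night / under-exposed image
        return adjust_brightness(img, 40)
    if classification == "too bright":  # e.g. glare / over-exposed image
        return adjust_brightness(img, -40)
    return img
```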
  • the target object includes vehicles, pedestrians, traffic signs, lane lines, etc.
  • the target detection information includes the category, position coordinates, length, height, width, etc. of the target object.
  • the target detection model is a pre-trained neural network model.
  • the training method may be: acquiring a plurality of second sample data, where each second sample datum includes sensing data collected by sensors and labeled target detection information; and iteratively training a preset neural network model on the plurality of second sample data until the iteratively trained model converges, thereby obtaining the target detection model.
  • the sensing data is input into a preset target detection model to obtain first target detection information of the target object in the environment where the intelligent driving car is located; the target sensing data is input into the preset target detection model to obtain second target detection information of the target object; the target detection information of the target object is then determined from the first and second target detection information.
  • determining the target detection information from both the raw sensing data and the pre-processed target sensing data can further improve the accuracy of the target detection information.
  • the method of determining the target detection information of the target object may be: determining the first product of the first target detection information and the first preset coefficient, and determining The second product of the second target detection information and the second preset coefficient; determining the sum of the first product and the second product, and determining the sum of the first product and the second product as the target detection information of the target object.
  • the sum of the first preset coefficient and the second preset coefficient is equal to 1, and the first preset coefficient is smaller than the second preset coefficient.
  • the first preset coefficient and the second preset coefficient can be set based on the actual situation, which is not specifically limited in the embodiments of the present application. For example, the first preset coefficient is 0.4 and the second preset coefficient is 0.6; in another example, the first preset coefficient is 0.45 and the second preset coefficient is 0.55.
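The weighted fusion described above can be sketched as follows, using the example coefficients 0.4 and 0.6; the dictionary-of-fields representation of detection information is an assumption for illustration:

```python
# Sketch of the weighted fusion described above: the final detection value is
# w1 * first_detection + w2 * second_detection, with w1 + w2 == 1 and w1 < w2
# so the output from the pre-processed (target) sensing data dominates.

def fuse_detections(first, second, w1=0.4, w2=0.6):
    """Fuse per-field detection values (e.g. position coordinates, width) from
    the raw-data model output and the pre-processed-data model output."""
    assert abs(w1 + w2 - 1.0) < 1e-9 and w1 < w2  # constraints from the text
    return {k: w1 * first[k] + w2 * second[k] for k in first}

# Example: fusing the estimated x coordinate and width of a detected vehicle.
raw = {"x": 10.0, "width": 2.0}
processed = {"x": 12.0, "width": 1.8}
fused = fuse_detections(raw, processed)
```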
  • the target trajectory of the intelligent driving vehicle itself can be generated, and the target trajectory can be presented on the visual interface, which is convenient for the user to read.
  • the planning includes at least one of the following: traveling with a constant distance from the target object, stopping traveling, and traveling in a detour.
  • as shown in FIG. 4, the intelligent driving car 11 is driving on a road and can obtain the target detection information of vehicle 12 and vehicle 13, so that the intelligent driving car 11 can travel at a constant distance from vehicle 12 based on vehicle 12's target detection information.
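As an illustrative sketch (not the patent's planner) of "traveling with a constant distance from the target object", a simple proportional controller can nudge the ego speed so the gap converges to the desired following distance; the gain, time step, and speed limit are assumptions:

```python
# Toy proportional gap controller: match the lead vehicle's speed while
# closing the gap error. Gain, limits, and time step are assumptions.

def follow_speed(lead_speed, gap, desired_gap, k=0.5, max_speed=33.0):
    """Return the next ego speed command (m/s): lead speed plus a correction
    proportional to the gap error, clamped to [0, max_speed]."""
    cmd = lead_speed + k * (gap - desired_gap)
    return max(0.0, min(max_speed, cmd))

# Simulate a few control steps: the 50 m gap should converge toward 30 m
# while the ego speed settles at the lead vehicle's speed.
gap, lead, dt = 50.0, 20.0, 0.5
for _ in range(40):
    ego = follow_speed(lead, gap, desired_gap=30.0)
    gap += (lead - ego) * dt  # gap shrinks while ego is faster than lead
```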
  • the sensing data collected by the sensor is acquired and input into a preset scene factor classification model to obtain a first scene factor classification result; a target processing strategy for the sensing data is determined according to the first scene factor classification result, and the sensing data is processed according to the target processing strategy to obtain the target sensing data; the target sensing data is input into the preset scene factor classification model to obtain a second scene factor classification result; a target control strategy of at least one preset controlled component in the intelligent driving vehicle is determined according to the second scene factor classification result; and each preset controlled component is controlled to operate according to its target control strategy.
  • adaptively processing the sensing data based on the first scene factor classification result can improve the accuracy of the sensing data, and determining the second scene factor classification result from the processed sensing data can improve the accuracy of the scene factor classification result. This in turn improves the accuracy of the target control strategy of the controlled components, so that controlling the controlled components with an accurate control strategy helps guarantee the safety of the intelligent driving vehicle.
  • the driving control method provided by the above embodiment acquires the sensing data collected by the sensor and inputs it into the preset scene factor classification model to obtain the first scene factor classification result; determines the target processing strategy of the sensing data according to the first scene factor classification result and processes the sensing data accordingly to obtain the target sensing data; inputs the target sensing data into the preset target detection model to obtain the target detection information of the target object in the environment where the intelligent driving vehicle is located; and plans the intelligent driving car's own driving according to that target detection information.
  • adaptively processing the sensing data based on the scene factor classification result can improve the accuracy of the sensing data and facilitate subsequent target detection, thereby improving the accuracy of the target detection results, ensuring accurate planning of the vehicle's own driving based on those results, and improving driving safety.
  • FIG. 5 is a schematic structural block diagram of a driving control device provided by an embodiment of the present application.
  • the driving control device 300 includes a processor 301 and a memory 302; the processor 301 and the memory 302 are connected through a bus 303, such as an I2C (Inter-Integrated Circuit) bus.
  • the processor 301 may be a micro-controller unit (MCU), a central processing unit (CPU), a digital signal processor (DSP), or the like.
  • the memory 302 may be a Flash chip, a read-only memory (ROM), a magnetic disk, an optical disk, a USB flash drive, a removable hard disk, or the like.
  • the processor 301 is used for running the computer program stored in the memory 302, and implements the following steps when executing the computer program:
  • Each of the preset controlled components is controlled to operate according to the target control strategy of each of the preset controlled components.
  • the first scene factor classification result includes a plurality of scene factor classification results
  • the processor determines the target control strategy of at least one preset controlled component in the intelligent driving vehicle according to the first scene factor classification result.
  • the scene factor classification model is a pre-trained convolutional neural network model
  • the processor is further configured to implement the following steps:
  • the first sample data includes the sensing data collected by the sensor and the labeled first scene factor classification result
  • the preset convolutional neural network model is iteratively trained according to the plurality of first sample data, until the iteratively trained convolutional neural network model converges, and the scene factor classification model is obtained.
  • the processor is further configured to implement, after inputting the sensing data into a preset scene factor classification model and obtaining a first scene factor classification result:
  • when implementing the determining of the target control strategy of at least one preset controlled component in the intelligent driving vehicle according to the first scene factor classification result, the processor is configured to implement:
  • a target control strategy of at least one preset controlled component in the intelligent driving vehicle is determined according to the second scene factor classification result.
  • the processor is further configured to implement, after inputting the sensing data into a preset scene factor classification model and obtaining a first scene factor classification result:
  • the driving of the intelligent driving vehicle itself is planned.
  • the processor when implementing the target processing strategy for determining the sensing data according to the first scene factor classification result, is configured to implement:
  • a target processing strategy for the sensing data is determined.
  • the target scene factor includes at least one of time, lighting, and image quality.
  • the sensory data includes image data
  • the target processing strategy includes at least one of the following:
  • the planning includes at least one of the following: traveling with a constant distance from the target object, stopping traveling, and traveling in a detour.
  • when the processor inputs the target sensing data into a preset target detection model to obtain the target detection information of the target object in the environment where the intelligent driving vehicle is located, the processor is configured to implement:
  • target detection information of the target object is determined.
  • the target detection model is a pre-trained neural network model
  • the processor is further configured to implement the following steps:
  • the second sample data includes sensing data collected by the sensor and marked target detection information
  • the preset neural network model is iteratively trained according to the plurality of second sample data, until the neural network model after the iterative training converges, and the target detection model is obtained.
  • the first scene factor classification result includes a classification result of at least one scene factor among time, weather, illumination, road surface state, road shape, road area, and image quality.
  • the illumination classification result includes any one of: the illumination intensity being between the first preset illumination intensity and the second preset illumination intensity, the illumination intensity being smaller than the first preset illumination intensity, and the illumination intensity being greater than the second preset illumination intensity; the first preset illumination intensity is smaller than the second preset illumination intensity.
  • the preset controlled components include lights, wipers, engines and visual sensors of the intelligent driving car.
  • FIG. 6 is a schematic structural block diagram of an intelligent driving vehicle provided by an embodiment of the present application.
  • the intelligent driving car 400 includes a car body 410 , a power system 420 , a sensor 430 and a driving control device 440 .
  • the power system 420 , the sensor 430 and the driving control device 440 are provided on the car body 410 , and the power system 420 is used for To provide moving power for the intelligent driving car 400 , the sensor 430 is used to collect sensor data, and the driving control device 440 is used to control the intelligent driving car 400 .
  • Embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored in the computer-readable storage medium; the computer program includes program instructions, and a processor executes the program instructions to implement the steps of the driving control method provided by the above embodiments.
  • the computer-readable storage medium may be an internal storage unit of the intelligent driving vehicle described in any of the foregoing embodiments, such as a hard disk or a memory of the intelligent driving vehicle.
  • the computer-readable storage medium may also be an external storage device of the intelligent driving car, such as a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, or a flash memory card (Flash Card) equipped on the intelligent driving car.
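The three-way illumination classification described above can be sketched as a simple threshold rule. This is an illustrative sketch only, not the patent's implementation: the class labels and the lux values for the two preset intensities are assumptions, since the description only requires that the first preset illumination intensity be smaller than the second.

```python
def classify_illumination(intensity: float,
                          first_preset: float = 50.0,
                          second_preset: float = 500.0) -> str:
    """Map a measured illumination intensity to one of the three classes.

    first_preset and second_preset are hypothetical lux values; the
    description only constrains them by first_preset < second_preset.
    """
    if first_preset >= second_preset:
        raise ValueError("the first preset intensity must be smaller than the second")
    if intensity < first_preset:
        return "below_first_preset"    # e.g. night: a policy might turn on the lights
    if intensity > second_preset:
        return "above_second_preset"   # e.g. glare: a policy might adjust the visual sensor
    return "between_presets"           # ordinary daytime illumination
```

How the boundary cases (intensity exactly equal to a preset) are classified is not specified in the description; the sketch assigns them to the middle class.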

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

Disclosed are a driving control method and device, an automobile, and a computer-readable storage medium, the method comprising: acquiring sensing data collected by a sensor, and inputting the sensing data into a preset scene factor classification model to obtain a first scene factor classification result (S101); determining, according to the first scene factor classification result, a target control policy of at least one preset controlled component in an intelligent driving vehicle (S102); and controlling, according to the target control policy of each preset controlled component, the operation of each preset controlled component (S103). The method can improve the driving safety of intelligent driving vehicles.
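The three steps S101–S103 in the abstract can be sketched as a small control loop. The class name, the policy-table layout, and the component names are assumptions made for illustration; the actual scene factor classification model and control policies are defined by the embodiments.

```python
from typing import Callable, Dict


class DrivingControlSketch:
    """Minimal sketch of the S101-S103 loop (all names are illustrative)."""

    def __init__(self,
                 scene_model: Callable[[object], str],
                 policy_table: Dict[str, Dict[str, str]]) -> None:
        self.scene_model = scene_model    # preset scene factor classification model
        self.policy_table = policy_table  # classification result -> per-component policy

    def step(self,
             sensing_data: object,
             components: Dict[str, Callable[[str], None]]) -> Dict[str, str]:
        # S101: input the collected sensing data into the classification model
        scene = self.scene_model(sensing_data)
        # S102: determine the target control policy of each preset controlled component
        policies = self.policy_table.get(scene, {})
        # S103: control each preset controlled component to operate per its policy
        for name, policy in policies.items():
            if name in components:
                components[name](policy)
        return policies
```

For example, a "rain" classification result might map to a policy table entry such as `{"wipers": "on", "lights": "low_beam"}`, which the loop then applies to the corresponding controlled components.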
PCT/CN2020/139135 2020-12-24 2020-12-24 Procédé et dispositif de commande de conduite, automobile, et support d'enregistrement lisible par ordinateur WO2022133939A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202080075421.4A CN114641419A (zh) 2020-12-24 2020-12-24 Driving control method and device, automobile, and computer-readable storage medium
PCT/CN2020/139135 WO2022133939A1 (fr) 2020-12-24 2020-12-24 Procédé et dispositif de commande de conduite, automobile, et support d'enregistrement lisible par ordinateur

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/139135 WO2022133939A1 (fr) 2020-12-24 2020-12-24 Procédé et dispositif de commande de conduite, automobile, et support d'enregistrement lisible par ordinateur

Publications (1)

Publication Number Publication Date
WO2022133939A1 (fr) 2022-06-30

Family

ID=81944531

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/139135 WO2022133939A1 (fr) 2020-12-24 2020-12-24 Procédé et dispositif de commande de conduite, automobile, et support d'enregistrement lisible par ordinateur

Country Status (2)

Country Link
CN (1) CN114641419A (fr)
WO (1) WO2022133939A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376092A (zh) * 2022-10-21 2022-11-22 广州万协通信息技术有限公司 Image recognition method and device in assisted driving

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024113265A1 (fr) * 2022-11-30 2024-06-06 华为技术有限公司 Data processing method and apparatus, and intelligent driving device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200089969A1 (en) * 2018-09-14 2020-03-19 Honda Motor Co., Ltd. Scene classification
CN110979202A (zh) * 2019-11-08 2020-04-10 华为技术有限公司 Method, apparatus and system for changing vehicle style
CN111002993A (zh) * 2019-12-23 2020-04-14 苏州智加科技有限公司 Scene recognition-based low-fuel-consumption motion planning method and system for autonomous driving
CN111434554A (zh) * 2019-01-15 2020-07-21 通用汽车环球科技运作有限责任公司 Controlling an autonomous vehicle based on passenger- and context-aware driving style profiles
CN112092813A (zh) * 2020-09-25 2020-12-18 北京百度网讯科技有限公司 Vehicle control method and apparatus, electronic device and storage medium

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115376092A (zh) * 2022-10-21 2022-11-22 广州万协通信息技术有限公司 Image recognition method and device in assisted driving
CN115376092B (zh) * 2022-10-21 2023-02-28 广州万协通信息技术有限公司 Image recognition method and device in assisted driving

Also Published As

Publication number Publication date
CN114641419A (zh) 2022-06-17

Similar Documents

Publication Publication Date Title
US10377301B2 (en) Lamp light control method and apparatus, computer storage medium and in-vehicle device
WO2022133939A1 (fr) Driving control method and device, automobile, and computer-readable storage medium
JP5892876B2 (ja) In-vehicle environment recognition device
CN103874931B (zh) Method and device for determining the position of an object in the environment of a vehicle
US11472417B2 (en) Method of adapting tuning parameter settings of a system functionality for road vehicle speed adjustment control
JP2021169235A (ja) Vehicle travel support device
JP2024503671A (ja) System and method for combined processing of visible-light camera information and thermal camera information
EP4141816A1 (fr) Scene safety level determination method, device and storage medium
WO2022244356A1 (fr) Light interference detection during vehicle driving
CN116142186A (zh) Method, apparatus, medium and device for early warning of safe vehicle driving in adverse environments
CN110588499A (zh) Machine vision-based adaptive headlamp control system and method
CN113870246A (zh) Deep learning-based obstacle detection and recognition method
CN111301348B (zh) Electronic horizon-based windscreen wiper control method, terminal device and storage medium
CN115593312B (zh) Electronic rearview mirror mode switching method based on environmental monitoring and analysis
JP6929481B1 (ja) Light distribution control device, light distribution control method and light distribution control program
Nayak et al. Reference Test System for Machine Vision Used for ADAS Functions
WO2022011773A1 (fr) Adaptive headlamp control method, terminal device and storage medium
CN115278095A (zh) Fusion perception-based vehicle-mounted camera control method and apparatus
US20240010228A1 (en) Method for operating a transportation vehicle and a transportation vehicle
JP2022142826A (ja) Ego-vehicle position estimation device
WO2023236738A1 (fr) Autonomous driving control method and apparatus for a vehicle, and computer storage medium
WO2022193154A1 (fr) Windscreen wiper control method, automobile, and computer-readable storage medium
CN117901756B (zh) Vehicle lighting lamp control system
CN116279554B (zh) System and method for adjusting driving strategy based on image recognition and mobile location services
JP7378673B2 (ja) Headlamp control device, headlamp control system, and headlamp control method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20966511

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20966511

Country of ref document: EP

Kind code of ref document: A1