CN115580970A - Car lamp control method based on multi-sensor fusion - Google Patents

Car lamp control method based on multi-sensor fusion

Info

Publication number: CN115580970A
Application number: CN202211272471.9A
Authority: CN (China)
Prior art keywords: vehicle, data, driving, lamp control, positioning
Legal status: Pending
Inventor: 徐健 (Xu Jian)
Applicant and current assignee: Changzhou Xingyu Automotive Lighting Systems Co., Ltd.
Filing date: 2022-10-18
Publication date: 2023-01-06

Classifications

    • H05B47/165 Controlling the light source following a pre-assigned programmed sequence; Logic control [LC]
    • G06F16/29 Geographical information databases
    • G06N20/00 Machine learning
    • H05B47/11 Controlling the light source in response to determined parameters by determining the brightness or colour temperature of ambient light
    • H05B47/115 Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings

Abstract

The invention discloses a vehicle lamp control method based on multi-sensor fusion, which comprises the following steps: S1, carrying out vehicle matching and positioning through an inertial navigation system to obtain vehicle self-positioning data; S2, collecting the environment of the road ahead of the vehicle through a camera to obtain the road environment information; and S3, building a model from the road environment information and matching it against the vehicle self-positioning data, so that real-time positioning of the vehicle on a high-precision map is completed. The invention senses road information through a visual detection algorithm and fuses it with positioning-system data to locate the vehicle in real time on a high-precision map. It also provides a self-learning algorithm model that establishes a common historical route database, which effectively reduces the repeated computation load on the AI chip, lowers energy consumption, enables precise control of the vehicle lights on frequently travelled road sections, and effectively reduces the energy consumption of the whole vehicle.

Description

Car lamp control method based on multi-sensor fusion
Technical Field
The invention relates to a vehicle lamp control method based on multi-sensor fusion.
Background
At present, as automobiles accelerate toward electrification, intelligence, connectivity and sharing, the high-precision maps closely tied to them are becoming increasingly important. Self-position estimation and perception of the surrounding environment ahead of time based on high-precision maps are the foundation of safe driving for autonomous vehicles; in perceiving the road traffic environment, the map is as indispensable as the sensors themselves. This is why more and more companies are beginning to invest in high-precision maps and the industry continues to heat up.
Meanwhile, the current national standard clearly indicates that the energy consumption requirement for new energy vehicles in the second stage is more than 14%. Although most existing L3+ intelligent vehicles already achieve point-to-point automated driving in specific scenarios with the help of high-precision maps, they do not establish a database of frequently used roads to reduce the computational load of the algorithms and thereby further lower vehicle energy consumption. At the same time, although correct light control can effectively reduce the energy consumption of the whole vehicle, the perception results for the road ahead are mostly used for vehicle control and are not extended to control of the vehicle lamps, so to a certain extent the data are not reused.
Disclosure of Invention
The technical problem to be solved by the invention is to overcome the shortcomings of the prior art by providing a vehicle lamp control method based on multi-sensor fusion. The method senses road information through a visual detection algorithm and fuses it with positioning-system data to achieve real-time positioning of the vehicle on a high-precision map. It also provides a self-learning algorithm model that builds a common historical route database, which effectively reduces the repeated computation load on the AI chip, lowers energy consumption, allows precise control of the vehicle lights on frequently travelled road sections, and effectively reduces the energy consumption of the whole vehicle.
In order to solve the above technical problems, the technical solution of the invention is as follows:
A vehicle lamp control method based on multi-sensor fusion comprises the following steps:
S1, carrying out vehicle matching and positioning through an inertial navigation system to obtain vehicle self-positioning data;
S2, collecting the environment of the road ahead of the vehicle through a camera to obtain the road environment information;
S3, building a model from the road environment information and matching it against the vehicle self-positioning data, so that real-time positioning of the vehicle on a high-precision map is completed;
S4, acquiring historical driving data from the user's previous journeys and real-time driving data of the current journey while the user is driving the vehicle;
S5, removing abnormal data from the acquired historical driving data and real-time driving data to form a new training data set;
S6, carrying out deep learning on the data to be processed in the new training data set, and establishing a common historical route database from the learning results;
S7, when the vehicle is started, judging, based on the real-time positioning of the vehicle on the high-precision map, whether the current real-time path is a path in the common historical route database;
if so, directly calling the vehicle lighting control strategy stored in the common historical route database to execute the relevant operations;
if not, recording the route according to the driver's wishes by combining the environment perception and decision control results, and adding the new route to the common historical route database.
Further, the step S1 specifically includes the following steps:
a GNSS + IMU combined positioning mode is adopted, with the GNSS sensor providing absolute positioning; the position of the GNSS sensor is set as (x_i, y_i, z_i) and the position of the positioning satellite as (x_p, y_p, z_p), and the pseudo-range S_i constitutes the following positioning model:
S_i = sqrt((x_p - x_i)^2 + (y_p - y_i)^2 + (z_p - z_i)^2) + c(t_p - t_i)
where t_p is the time of the GNSS sensor clock module, t_i is the time of the satellite clock module, and c is the speed of light;
by receiving data from several positioning satellites, a set of pseudo-range observation equations is established; the equations are then linearized and solved by least squares for the coordinates of the positioning point, yielding the vehicle self-positioning data.
Further, the driving data include the vehicle running speed, engine speed, distance to the vehicle ahead, accelerator, brake and steering wheel data, vehicle lamp state, seat belt state, airbag state and driving environment information.
Furthermore, the driving environment information is detected by a forward camera and a surround-view camera and includes traffic light information, zebra-crossing information and congestion information for the road sections travelled, and information on the vehicle ahead.
Further, the step S5 specifically includes the following steps:
abnormal data are removed from the acquired historical driving data and real-time driving data, namely driving data outside the designed operating domain, driving data from abnormal driving scenes and driving data from accident scenes, and the remaining normal driving data form a new training data set.
Further, the step S6 specifically includes the following steps:
the driving parameters to be learned are selected from the data to be processed in the new training data set; the data are processed according to the calculation rules corresponding to each driving parameter to obtain its learning result; the automatic driving system is instructed to set the driving parameters according to these learning results during automated driving; and a common historical route database is finally formed.
Further, the vehicle lighting control strategy comprises a vehicle light control strategy for the current time period, which is as follows:
when natural light is sufficient in the daytime, the headlamp does not need to be turned on;
when the ambient illumination changes gradually in the early morning and the evening, the light intensity varies inversely with the ambient illumination;
when driving at night, the light intensity is turned on to 100%.
Further, the vehicle lighting control strategy includes a vehicle light control strategy for the current road conditions, which is as follows:
when the current road section is an urban section with good lighting conditions, the light intensity is turned on to 50%;
when the current road section is a rural section with poor lighting conditions, the light intensity is set to 100%.
Further, the vehicle lighting control strategy comprises a vehicle light control strategy for the current weather conditions, which is as follows:
when the weather is clear and visibility is high, the light intensity is turned on to 70%;
in rain, fog or extreme weather, the light intensity is turned on to 100%.
Further, the vehicle lighting control strategy includes a vehicle light control strategy for the current driving condition, which is as follows:
whether a vehicle is travelling within the corresponding distance ahead is judged according to the current speed; if so, the current condition is judged to be a following condition, and the illumination distance of the headlamp is adjusted according to the actual distance to the vehicle ahead.
By adopting the technical scheme, the invention has the following beneficial effects:
1. By establishing the common historical route database, the road-perception computation on frequently travelled road sections is greatly reduced, which lowers the energy consumption caused by repeated calculation and achieves energy saving and emission reduction.
2. By establishing the common historical route database, the lights on frequently travelled road sections can be regulated precisely, minimizing the energy wasted by improper use of the lights and achieving energy saving and emission reduction.
3. The sensors used are already standard equipment on intelligent vehicles, so the method is simple to deploy and easy to popularize.
4. The invention adopts a self-learning algorithm that can be optimized with the user's individual driving data during driving, so that the performance parameters match the user's driving habits.
Drawings
FIG. 1 is a flow chart of a vehicle light control method based on multi-sensor fusion according to the present invention;
FIG. 2 is a flow chart of the creation of a historical route database of the present invention;
FIG. 3 is a flow chart of data processing of the present invention;
FIG. 4 is a flow chart of the multi-sensor fusion of the present invention;
FIG. 5 is an exemplary diagram of the common historical route database of the present invention.
Detailed Description
In order that the present invention may be more readily and clearly understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments that are illustrated in the appended drawings.
As shown in fig. 1, the present embodiment provides a method for controlling a vehicle lamp based on multi-sensor fusion, which includes:
the method comprises the following steps that S1, vehicle matching positioning is carried out through an inertial navigation system, and vehicle self-positioning data are obtained, and the method specifically comprises the following steps:
regarding the self-positioning of the vehicle, the present embodiment adopts a GNSS + IMU combined positioning manner, the GNSS sensor provides an absolute positioning, and the position of the GNSS sensor is set as (x) i ,y i ,z i ) Positioning the satellite position as (x) p ,y p ,z p ) The pseudo-range Si constitutes a positioning model as follows:
Figure BDA0003895252710000041
wherein, t p Time, t, of the GNSS sensor clock module i Time of the satellite clock module;
by receiving data information of a plurality of positioning satellites, a plurality of observation equation sets of pseudo distances are established, then a least square linearized equation is established to solve coordinates of positioning points, and vehicle self-positioning data are obtained.
S2, the environment of the road ahead of the vehicle is collected through the camera to obtain the road environment information.
S3, a model is built from the road environment information and matched against the vehicle self-positioning data, completing real-time positioning of the vehicle on the high-precision map. Because positioning techniques suffer from accumulated errors, step S2 of this embodiment also uses a visual SLAM technique in order to achieve high-precision positioning: the camera collects the road environment ahead and a model is built from it, and the model is then matched with the self-positioning data of the vehicle body to complete high-precision positioning of the vehicle.
S4, historical driving data from the user's previous journeys and real-time driving data of the current journey are acquired while the user drives the vehicle. The driving data include the vehicle speed, engine speed, distance to the vehicle ahead, accelerator, brake and steering wheel data, vehicle lamp state, seat belt state, airbag state and driving environment information. The driving environment information is detected by the forward camera and the surround-view camera and includes traffic light information, zebra-crossing information and congestion information for the road sections travelled, and information on the vehicle ahead.
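The driving-data fields listed above can be grouped into one record per sample; the field names below are illustrative choices for this sketch rather than identifiers from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class DrivingRecord:
    """One sample of the driving data enumerated above (illustrative field names)."""
    timestamp: float                    # seconds since start of trip
    speed_kmh: float                    # vehicle running speed
    engine_rpm: float                   # engine speed
    lead_distance_m: float              # distance to the vehicle ahead
    throttle: float                     # accelerator position, 0..1
    brake: float                        # brake position, 0..1
    steering_angle_deg: float
    lamp_state: str                     # e.g. "off", "low_beam", "high_beam"
    seatbelt_fastened: bool
    airbag_deployed: bool
    environment: dict = field(default_factory=dict)  # traffic lights, zebra crossings, congestion, lead vehicle
```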
S5, abnormal data are removed from the acquired historical driving data and real-time driving data to form a new training data set, specifically as follows:
As shown in FIG. 3, abnormal data are removed from the acquired historical and real-time driving data, namely driving data outside the designed operating domain, driving data from abnormal driving scenes and driving data from accident scenes; the remaining normal driving data form the new training data set. Such abnormal data do not meet safe-driving requirements or traffic rules, so they must be continuously removed from the driving data to ensure that the system parameters self-learned by the automatic driving system keep driving safe. This embodiment removes abnormal data in real time to reduce the amount of stored data and the occupation of system storage space.
S6, deep learning is carried out on the data to be processed in the new training data set, and a common historical route database is established from the learning results, specifically as follows:
The driving parameters to be learned are selected from the data to be processed in the new training data set; the data are processed according to the calculation rules corresponding to each driving parameter to obtain its learning result; the automatic driving system is instructed to set the driving parameters according to these learning results during automated driving; and a common historical route database is finally formed.
S7, as shown in FIG. 1, when the vehicle is started, it is judged, based on the real-time positioning of the vehicle on the high-precision map, whether the current real-time path is a path in the common historical route database.
If so, the vehicle lighting control strategy stored in the common historical route database is called directly to execute the relevant operations, which avoids repeated information collection and computation by the various sensors and controllers and thus further reduces the power consumption of the system.
If not, the route is recorded according to the driver's wishes by combining the environment perception and decision control results, and the new route is added to the common historical route database.
The process is analysed below taking a new route as an example:
After the vehicle starts to run, the current vehicle position is obtained from the GPS signal combined with the high-precision map, and it is judged whether the current path is a frequently used route. If not, a pop-up window appears on the human-machine interface asking the driver whether to record the trip data; if the driver agrees, a new route data file is created, and the driver's operations and the vehicle lighting control strategy are recorded into the current path data so that they can be called directly next time.
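The start-up decision of step S7 and the new-route branch described above can be sketched as a simple lookup; the function name, the returned action dictionary and the recording flow are illustrative assumptions.

```python
def on_vehicle_start(route_id, database, driver_agrees_to_record):
    """Replay a stored lighting strategy for a known route, or start recording a new one."""
    entry = database.get(route_id)
    if entry is not None:
        # Known route: reuse the stored lighting control strategy directly,
        # avoiding repeated perception and computation on this section.
        return {"action": "replay_strategy", "strategy": entry}
    if driver_agrees_to_record:
        # New route: create a route data file; the driver's operations and the
        # lighting control decisions of this trip are recorded for next time.
        database[route_id] = {"journey_count": 0, "recording": True}
        return {"action": "record_new_route"}
    return {"action": "default_control"}
```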
The vehicle lighting control strategy of the present embodiment is as follows (a combined decision sketch is given after the four strategies):
The light intensity of the LED lamps in the vehicle headlamp is defined as 100% when fully on.
1. The vehicle light control strategy for the current time period is as follows:
when natural light is sufficient in the daytime, the headlamp does not need to be turned on;
when the ambient illumination changes gradually in the early morning and the evening, the light intensity varies inversely with the ambient illumination;
when driving at night, the light intensity is turned on to 100%, with the exact time periods adjusted adaptively according to the measurements of the light intensity sensor.
2. The vehicle light control strategy for the current road conditions is as follows:
when the current road section is an urban section with good lighting conditions, the light intensity is turned on to 50%;
when the current road section is a rural section with poor lighting conditions, the light intensity is set to 100%; the intensity is adjusted adaptively according to the environment perception results and the measurements of the illumination intensity sensor.
3. The vehicle light control strategy for the current weather conditions is as follows:
when the weather is clear and visibility is high, the light intensity is turned on to 70%;
in rain, fog or extreme weather, the light intensity is turned on to 100%.
4. The vehicle light control strategy for the current driving condition is as follows:
whether a vehicle is travelling within the corresponding distance ahead is judged according to the current vehicle speed; if so, the current condition is judged to be a following condition, and the illumination distance of the headlamp is adjusted according to the actual distance to the vehicle ahead.
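Combining the four strategies above into a single intensity decision can be sketched as follows. The percentage levels come from the text, while the way the factors are combined (road condition first, weather override, then a twilight cap) is an assumption loosely guided by the FIG. 5 example, and the beam-range adjustment for the following condition is not modelled here.

```python
def headlamp_intensity(period, road, weather):
    """Headlamp intensity as a percentage of full LED output (illustrative sketch).

    period:  "day", "twilight" or "night"
    road:    "urban_lit" or "rural_unlit"
    weather: "clear", "rain_fog" or "extreme"
    """
    if period == "day":
        return 0.0                              # natural light is sufficient, lamps stay off

    # Road condition: well-lit urban sections need less output than unlit rural ones.
    intensity = 50.0 if road == "urban_lit" else 100.0

    # Weather: rain, fog or extreme weather forces full output; in clear weather
    # the text uses 70% as a general level on sections without good street lighting.
    if weather in ("rain_fog", "extreme"):
        intensity = 100.0
    elif road != "urban_lit":
        intensity = max(intensity, 70.0)

    if period == "twilight":
        # The text varies intensity inversely with ambient light at dawn and dusk;
        # an ambient-light sensor reading would drive this, here simply capped.
        intensity = min(intensity, 70.0)

    return intensity
```

On the FIG. 5 example below (a clear summer night on well-lit urban sections) this returns 50%, matching the stored strategy; the roughly 90% level on the poorly lit section 4 would come from the adaptive adjustment that the text leaves to the illumination intensity sensor.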
The lighting control strategy is analysed below with a following-distance example:
As shown in FIG. 5, at 20:30 on a clear summer night the automobile drives from the east gate of the satellite center to gate No. 2 of the new city gymnasium. GNSS positioning is carried out, the perception system identifies the lane lines and whether the vehicle is following another vehicle, the illumination intensity sensor feeds back the street-lighting conditions, and the light intensity is output after comprehensive analysis. As can be seen from FIG. 5, road sections 3, 5 and 6 have good street lighting, so the lights are turned on to 50% whether or not the vehicle is following another vehicle, while on road section 4, where the lighting conditions are poor, the lights are turned on to about 90%.
In addition, the vehicle end in this embodiment includes an automatic-driving domain controller, a communication module, a central control screen, a switch and other systems. The automatic-driving domain controller is the control module that implements the driving-assistance or automatic-driving functions of the vehicle; the communication module handles communication between the vehicle end and the cloud; the central control screen is the human-machine interface; and the switch is a physical or simulated switch at the vehicle end used to activate the self-learning function of the automatic driving system. As shown in FIG. 4, in the technical solution of this embodiment the sensors and modules of the vehicle first collect the user's driving data and upload them from the vehicle end to the cloud server; model optimization, learning and training are completed on the server, the updated performance parameters are returned to the vehicle end, and the vehicle is controlled to realize the related automatic-driving functions, optimizing the user experience.
The above embodiments further explain in detail the technical problems addressed, the technical solutions and the beneficial effects of the present invention. It should be understood that the above are only specific embodiments of the present invention and are not intended to limit it; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within its protection scope.

Claims (10)

1. A car light control method based on multi-sensor fusion, characterized by comprising the following steps:
S1, carrying out vehicle matching and positioning through an inertial navigation system to obtain vehicle self-positioning data;
S2, collecting the environment of the road ahead of the vehicle through a camera to obtain the road environment information;
S3, building a model from the road environment information and matching it against the vehicle self-positioning data, so that real-time positioning of the vehicle on a high-precision map is completed;
S4, acquiring historical driving data from the user's previous journeys and real-time driving data of the current journey while the user is driving the vehicle;
S5, removing abnormal data from the acquired historical driving data and real-time driving data to form a new training data set;
S6, carrying out deep learning on the data to be processed in the new training data set, and establishing a common historical route database from the learning results;
S7, when the vehicle is started, judging, based on the real-time positioning of the vehicle on the high-precision map, whether the current real-time path is a path in the common historical route database;
if so, directly calling the vehicle lighting control strategy stored in the common historical route database to execute the relevant operations;
if not, recording the route according to the driver's wishes by combining the environment perception and decision control results, and adding the new route to the common historical route database.
2. The multi-sensor fusion-based vehicle lamp control method according to claim 1, characterized in that the step S1 specifically includes the following steps:
a GNSS + IMU combined positioning mode is adopted, with the GNSS sensor providing absolute positioning; the position of the GNSS sensor is set as (x_i, y_i, z_i) and the position of the positioning satellite as (x_p, y_p, z_p), and the pseudo-range S_i constitutes the following positioning model:
S_i = sqrt((x_p - x_i)^2 + (y_p - y_i)^2 + (z_p - z_i)^2) + c(t_p - t_i)
where t_p is the time of the GNSS sensor clock module, t_i is the time of the satellite clock module, and c is the speed of light;
a set of pseudo-range observation equations is established by receiving data from several positioning satellites, and a linearized least-squares equation is then solved for the coordinates of the positioning point to obtain the vehicle self-positioning data.
3. The multi-sensor fusion-based vehicle lamp control method according to claim 1, characterized in that: the driving data comprise the vehicle running speed, engine speed, distance to the vehicle ahead, accelerator, brake and steering wheel data, vehicle lamp state, seat belt state, airbag state and driving environment information.
4. The multi-sensor fusion-based vehicle lamp control method according to claim 1, characterized in that: the driving environment information is detected by a forward camera and a surround-view camera and comprises traffic light information, zebra-crossing information and congestion information for the road sections travelled, and information on the vehicle ahead.
5. The vehicle lamp control method based on multi-sensor fusion as claimed in claim 1, wherein the step S5 specifically comprises the following steps:
abnormal data are removed from the acquired historical driving data and real-time driving data, namely driving data outside the designed operating domain, driving data from abnormal driving scenes and driving data from accident scenes, and the remaining normal driving data form the new training data set.
6. The vehicle lamp control method based on multi-sensor fusion of claim 1, wherein the step S6 specifically comprises the following steps:
the driving parameters to be learned are selected from the data to be processed in the new training data set; the data are processed according to the calculation rules corresponding to each driving parameter to obtain its learning result; the automatic driving system is instructed to set the driving parameters according to these learning results during automated driving; and a common historical route database is finally formed.
7. The multi-sensor fusion-based vehicle lamp control method according to claim 1, characterized in that: the vehicle lighting control strategy comprises a vehicle light control strategy for the current time period, which is as follows:
when natural light is sufficient in the daytime, the headlamp does not need to be turned on;
when the ambient illumination changes gradually in the early morning and the evening, the light intensity varies inversely with the ambient illumination;
when driving at night, the light intensity is turned on to 100%.
8. The multi-sensor fusion-based vehicle lamp control method according to claim 1, characterized in that: the vehicle lighting control strategy comprises a vehicle light control strategy for the current road conditions, which is as follows:
when the current road section is an urban section with good lighting conditions, the light intensity is turned on to 50%;
when the current road section is a rural section with poor lighting conditions, the light intensity is set to 100%.
9. The multi-sensor fusion-based vehicle lamp control method according to claim 1, characterized in that: the vehicle lighting control strategy comprises a vehicle light control strategy for the current weather conditions, which is as follows:
when the weather is clear and visibility is high, the light intensity is turned on to 70%;
in rain, fog or extreme weather, the light intensity is turned on to 100%.
10. The multi-sensor fusion-based vehicle lamp control method according to claim 1, characterized in that: the vehicle lighting control strategy comprises a vehicle light control strategy for the current driving condition, which is as follows:
whether a vehicle is travelling within the corresponding distance ahead is judged according to the current speed; if so, the current condition is judged to be a following condition, and the illumination distance of the headlamp is adjusted according to the actual distance to the vehicle ahead.
Application CN202211272471.9A, filed 2022-10-18 (priority date 2022-10-18): Car lamp control method based on multi-sensor fusion. Publication CN115580970A (pending).

Priority Applications (1)

CN202211272471.9A (priority date 2022-10-18, filing date 2022-10-18): Car lamp control method based on multi-sensor fusion

Applications Claiming Priority (1)

CN202211272471.9A (priority date 2022-10-18, filing date 2022-10-18): Car lamp control method based on multi-sensor fusion

Publications (1)

CN115580970A, published 2023-01-06

Family ID: 84585845

Family Applications (1)

CN202211272471.9A (priority date 2022-10-18, filed 2022-10-18), pending: Car lamp control method based on multi-sensor fusion (CN115580970A)

Country Status (1)

CN: CN115580970A

Cited By (1)

* Cited by examiner, † Cited by third party
CN115837876A * (priority date 2023-02-17, published 2023-03-24), 徐州昊德照明有限公司 (Xuzhou Haode Lighting Co., Ltd.): Vehicle lamp control system



Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination