CN114228491A - Head-up display system and method with night vision enhanced virtual reality - Google Patents


Info

Publication number
CN114228491A
CN114228491A (application CN202111644173.3A)
Authority
CN
China
Prior art keywords
vehicle
environment
information processing
processing unit
virtual reality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111644173.3A
Other languages
Chinese (zh)
Other versions
CN114228491B (en)
Inventor
吴仁钢
颜长深
杜先起
谭皓月
张宗全
赵蕾
刘大全
陈斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202111644173.3A priority Critical patent/CN114228491B/en
Publication of CN114228491A publication Critical patent/CN114228491A/en
Application granted granted Critical
Publication of CN114228491B publication Critical patent/CN114228491B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B — PERFORMING OPERATIONS; TRANSPORTING
    • B60 — VEHICLES IN GENERAL
    • B60K — ARRANGEMENT OR MOUNTING OF PROPULSION UNITS OR OF TRANSMISSIONS IN VEHICLES; ARRANGEMENT OR MOUNTING OF PLURAL DIVERSE PRIME-MOVERS IN VEHICLES; AUXILIARY DRIVES FOR VEHICLES; INSTRUMENTATION OR DASHBOARDS FOR VEHICLES; ARRANGEMENTS IN CONNECTION WITH COOLING, AIR INTAKE, GAS EXHAUST OR FUEL SUPPLY OF PROPULSION UNITS IN VEHICLES
    • B60K35/00 — Instruments specially adapted for vehicles; Arrangement of instruments in or on vehicles
    • B60K35/20 — Output arrangements, i.e. from vehicle to user, associated with vehicle functions or specially adapted therefor
    • B60K35/21 — Output arrangements using visual output, e.g. blinking lights or matrix displays
    • B60K35/23 — Head-up displays [HUD]
    • B60K35/28 — Output arrangements characterised by the type of the output information, e.g. video entertainment or vehicle dynamics information; characterised by the purpose of the output information, e.g. for attracting the attention of the driver
    • B60Q — ARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00 — Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • B60Q9/008 — Arrangement or adaptation of signal devices for anti-collision purposes
    • B60R — VEHICLES, VEHICLE FITTINGS, OR VEHICLE PARTS, NOT OTHERWISE PROVIDED FOR
    • B60R1/00 — Optical viewing arrangements; Real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • G — PHYSICS
    • G02 — OPTICS
    • G02B — OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00 — Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01 — Head-up displays
    • B60K2360/00 — Indexing scheme associated with groups B60K35/00 or B60K37/00 relating to details of instruments or dashboards
    • B60K2360/16 — Type of output information
    • B60K2360/176 — Camera images
    • B60K2360/179 — Distances to obstacles or vehicles
    • B60R2300/00 — Details of viewing arrangements using cameras and displays, specially adapted for use in a vehicle
    • B60R2300/10 — Viewing arrangements characterised by the type of camera system used
    • B60R2300/102 — Viewing arrangements using a 360 degree surveillance camera system
    • B60R2300/20 — Viewing arrangements characterised by the type of display used
    • B60R2300/207 — Viewing arrangements using multi-purpose displays, e.g. camera image and navigation or video on same display
    • B60R2300/80 — Viewing arrangements characterised by the intended use of the viewing arrangement
    • B60R2300/8093 — Viewing arrangements for obstacle warning
    • G02B27/0101 — Head-up displays characterised by optical features
    • G02B2027/0138 — Head-up displays comprising image capture systems, e.g. camera
    • G02B2027/014 — Head-up displays comprising information/image processing systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a night-vision-enhanced virtual reality head-up display system and method. The system comprises a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle-end information processing unit, a cloud information processing unit, an experience driving computer, and virtual reality head-up display equipment. The vehicle-end unit constructs a 3D model of the vehicle's surroundings by aggregating information from environment sensing equipment, sensors, and positioning devices installed on the vehicle, so that the vehicle perceives the environment in multiple dimensions. Autonomous communication between vehicles, and between each vehicle and the cloud information processing unit, enables sensor data sharing among vehicle information nodes on the same road section, further improving road traffic safety. By adding a night vision sensor, the system improves intelligent driving assistance and the driver's ability to recognize the environment in adverse conditions such as low light, glare, rain, fog, smoke, haze, and dust, enhancing driving safety.

Description

Head-up display system and method with night vision enhanced virtual reality
Technical Field
The invention relates to automobile driving systems, and in particular to a night-vision-enhanced virtual reality head-up display system and method.
Background
While an automobile is being driven, a night vision function needs to be added to the driving assistance system in order to improve driving safety in severe or sudden conditions such as low light, glare, rain, fog, smoke, haze, and dust: a three-dimensional model of the current road is displayed in the driver's field of view through a virtual reality head-up display and registered to the road, improving the driver's ability to resolve the environment. Chinese patent application No. 2019101242782 discloses a night vision head-up display device with an eyeball tracking function, comprising a head-up display base placed in the automobile cab, a display screen, and an infrared camera module installed at the front of the automobile. The infrared camera module and the display screen are both connected to the head-up display base, which comprises a human-eye detection camera, a thermal image processing module, a human-eye tracking processing module, and a projection module. By combining a head-up display with infrared night vision, the driver can observe an infrared image of the environment ahead in darkness without looking away from the driving direction. By detecting changes in the driver's line of sight and finely adjusting the display screen, picture drift is reduced, helping the observer see a complete mid-infrared image.
In the system disclosed above, the road environment and its infrared characteristics cannot be fed back through a cloud network to following vehicles, whether or not those vehicles have an intelligent driving assistance system; the system cannot provide technical support for intelligent road traffic safety in severe environments, and no method is described for identifying environmental factors for automatic control.
Disclosure of Invention
In view of the deficiencies of the prior art, the technical problem to be solved by the invention is: how to provide a system and method that reconstruct the scene around a vehicle, so that the driver obtains an enhanced field of view in a severe or dangerous environment, and the vehicle can feed information back to following vehicles or reconstruct a road model after receiving road information from other vehicles, thereby improving driving safety.
In order to solve the technical problems, the invention adopts the following technical scheme:
a head-up display system with night vision augmented virtual reality is characterized by comprising a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle end information processing unit, a cloud end information processing unit, an experience driving computer and virtual reality head-up display equipment;
the vehicle driving environment data collection unit comprises a laser radar and/or an imaging radar for sensing the position and angle, relative to the vehicle, of static and moving objects around it, and a camera and/or radar for acquiring the road environment around the vehicle and detecting surrounding obstacles;
the vehicle calibration data collection unit comprises a driver monitoring camera, a rain/ambient-light sensor, and a vehicle size and body height data module; the driver monitoring camera detects the driver's eye movements and gaze point, and the rain/ambient-light sensor measures the current ambient illumination intensity and rainfall;
the vehicle-end information processing unit receives signals from the vehicle driving environment data collection unit and the vehicle calibration data collection unit, reconstructs the vehicle's surroundings from the received signals through iteration of an artificial intelligence algorithm, and displays the reconstructed scene, from the driver's viewing angle, through the virtual reality head-up display equipment in the air on the line connecting the driver's eyes and the road environment elements;
the cloud information processing unit receives information from the vehicle driving environment data collection unit, compares the received environment information for the corresponding road section with historical data according to the vehicle's position, updates the road-section environment information in real time, and sends it over a low-latency mobile network to non-intelligent in-vehicle navigation software users and mobile-terminal navigation software users behind the vehicle;
the experience driving computer receives information from the vehicle-end information processing unit and the cloud information processing unit, connects to the virtual reality head-up display equipment and the on-board display screen, and outputs audio and video information. In this way, important environment information is transmitted through the cloud information processing unit to in-vehicle or mobile-terminal navigation programs using networked navigation, improving the ability of surrounding drivers to spot safety risks and avoid accidents. Using the cloud information processing unit together with the vehicle-end information processing unit shortens information acquisition time, so the user can be alerted promptly on screen. The system takes the environmental data collected by the infrared imaging camera, visible-light cameras, millimeter-wave radar, 4D imaging radar, and laser radar, combines them with calibration data such as vehicle size, body height, driver eye position, and sensor angles, and iterates an artificial intelligence algorithm to reconstruct the scene around the vehicle, displaying the reconstruction from the driver's viewing angle on the virtual reality head-up display equipment. In particular, the infrared imaging camera compensates for the laser radar's susceptibility to rain, fog, smoke, haze, and dust, strengthening the vehicle's adaptability in complex and changeable environments.
Through the vehicle-end and cloud information processing units, functions such as anonymous road-environment information sharing among vehicles travelling on the same road section are realized. The cloud information processing unit provides, based on vehicle position, the forward infrared camera image and the environment reconstruction model, meeting the needs of vehicles not equipped with an infrared imaging camera or laser radar for safe driving at night and in glare, rain, fog, smoke, haze, and dust. The system can also serve a vehicle's intelligent driving assistance system, providing the user with real-time forward infrared imaging video and a reconstructed road scene at night and in glare, rain, fog, smoke, haze, and dust, offering reference suggestions for driving and improving the ability of surrounding drivers to spot safety risks and avoid accidents.
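The cloud-side update-and-share behaviour described above can be sketched as follows. `SegmentReport`, `CloudProcessor`, and the callback-based push to following vehicles are illustrative names and mechanisms, not structures from the patent:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class SegmentReport:
    segment_id: str      # road-section key derived from the vehicle position
    timestamp: float     # epoch seconds of the observation
    hazards: List[str]   # e.g. ["fog", "pedestrian"]


class CloudProcessor:
    """Minimal sketch of the cloud information processing unit: keep the
    newest report per road section and push it to subscribed clients
    (following vehicles, navigation apps)."""

    def __init__(self) -> None:
        self.history: Dict[str, SegmentReport] = {}
        self.subscribers: Dict[str, List[Callable]] = {}

    def subscribe(self, segment_id: str, callback: Callable) -> None:
        self.subscribers.setdefault(segment_id, []).append(callback)

    def ingest(self, report: SegmentReport) -> None:
        prev = self.history.get(report.segment_id)
        # update and broadcast only when the report is newer than history
        if prev is None or report.timestamp > prev.timestamp:
            self.history[report.segment_id] = report
            for cb in self.subscribers.get(report.segment_id, []):
                cb(report)
```

A stale report (older timestamp than the stored one) is silently dropped, which mirrors the "compare with historical data, update in real time" step.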
The system further comprises an information transmission unit for transmitting data between the vehicle driving environment data collection unit and the cloud information processing unit, surrounding mobile terminals, or other vehicles. The information transmission unit thus provides data and images to the vehicle-end and cloud information processing units, realizing communication and interaction.
Furthermore, the vehicle driving environment data collection unit uses a laser radar and a 4D imaging millimeter-wave radar to sense the positions and angles of static and moving objects around the vehicle relative to it; a rear-facing radar to detect obstacles behind the vehicle; corner radars to detect obstacles at the four corners of the vehicle; and ultrasonic radar to detect obstacles within about 5 meters and estimate their distance. Medium- and long-range forward-looking cameras collect environment images at different distances for the intelligent driving assistance system to identify objects around the vehicle; a panoramic (surround-view) camera collects images of the road and environment around the vehicle; and an infrared imaging camera acquires infrared images of the surroundings. The laser radar and 4D imaging radar are mounted on the front bumper or roof, emitting electromagnetic waves and receiving reflected echoes to generate environment point cloud data. The corner radars are installed around the vehicle, the rear radar at the rear, the forward-looking camera behind the windshield in the wiper-cleaned area, the panoramic cameras around the vehicle, and the infrared imaging camera on the front bumper or grille. The laser radar and imaging radar thus complement each other, providing more comprehensive object position and angle information; if one fails, the other can still supply it.
The corner radars and the reversing radar detect whether obstacles exist around the vehicle and supplement each other. The forward-looking, panoramic, and infrared imaging cameras provide the intelligent driving assistance system with images of the vehicle's surroundings, cover a wider angle than the driver's own view, and can warn the driver of dangers in blind spots. The ultrasonic radar detects obstacles within a few meters of the vehicle by transmitting ultrasonic waves and receiving the reflected echoes; linked with the information acquired by the cameras, the resulting picture is more accurate and comprehensive.
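The ultrasonic time-of-flight ranging just described reduces to a one-line formula: distance is the speed of sound times half the echo's round-trip time. A minimal sketch (the function names and the 20 °C sound speed are assumptions, not values from the patent):

```python
SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 °C; varies with temperature


def echo_distance_m(round_trip_s: float) -> float:
    """Distance to an obstacle from the ultrasonic echo round-trip time.
    The pulse travels to the obstacle and back, hence the division by 2."""
    return SPEED_OF_SOUND_M_S * round_trip_s / 2.0


def within_parking_range(round_trip_s: float, limit_m: float = 5.0) -> bool:
    # the patent describes ultrasonic detection within roughly 5 m of the car
    return echo_distance_m(round_trip_s) <= limit_m
```

For example, a 10 ms round trip corresponds to an obstacle about 1.7 m away, well inside the 5 m detection band.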
Furthermore, the vehicle calibration data collection unit also comprises a chassis height sensor for measuring the distance between the vehicle body and the ground, and exchanges information with the vehicle-end information processing unit through a zone controller. The chassis height sensor thus provides data support for calculating the straight-line geometric relationship between the driver's eyes, the target road element, and the virtual image.
A vehicle environment visual-field enhancement method based on the above night-vision-enhanced virtual reality head-up display system, characterized by comprising the following steps:
S1, after the vehicle is started, synchronize time with the mobile network server and judge whether it is currently night;
S2, provide the environment point cloud data, images, and detected obstacle distances and angles collected by the sensors to the vehicle-end information processing unit, and provide the rain/ambient-light sensor reading to the zone controller;
S3, use a deep-learning neural network to compare whether the current images and readings match a scene that impairs the driving field of view; if such a condition exists, execute S4; if not, execute S5;
S4, reconstruct the environment elements around the vehicle in real time from the environment data collected by the vehicle-end and/or cloud information processing units, the vehicle's calibration data, and iteration of the artificial intelligence algorithm, and determine which environment elements to display with enhancement; while reconstructing, synchronously detect the driver's line of sight, determine the display position of the reconstructed road environment elements accordingly, and then display the reconstructed image of the vehicle's surroundings on the virtual reality head-up display equipment or the vehicle display screen at the determined position;
S5, end.
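Steps S1-S5 amount to a gated pipeline: classify the scene first, and reconstruct and display only when the field of view is impaired. A minimal sketch with illustrative callable parameters (none of these names come from the patent):

```python
def hud_pipeline(is_night, sensor_frames, classify_impaired, reconstruct, display):
    """Gate of steps S1-S5: S3 classifies the scene, S4 reconstructs and
    displays only when the driving field of view is impaired, S5 ends.
    `classify_impaired`, `reconstruct`, and `display` stand in for the
    deep-learning classifier, the scene reconstruction, and the HUD output."""
    if not classify_impaired(sensor_frames):        # S3: scene check
        return None                                 # S5: nothing to enhance
    scene = reconstruct(sensor_frames, night=is_night)  # S4: rebuild scene
    display(scene)                                  # S4: project at driver view
    return scene
```

The `is_night` flag corresponds to the time synchronization in S1; passing it into reconstruction lets the night-vision path be selected there.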
In this way, vehicles on the road that have infrared imaging and automated driving functions can perceive the surrounding road environment at night and in glare, rain, fog, smoke, haze, and dust; the environment is remodelled from the automated vehicle's sensor data, and the model is displayed through the virtual reality head-up display equipment at the position corresponding to the driver's viewing angle, increasing the vehicle's usability in adverse conditions and preventing suddenly appearing environmental factors from endangering driving safety.
Further, if the current vehicle is a non-intelligent vehicle and the navigation software runs on an in-vehicle or mobile terminal, the surrounding environment image or environment element information is acquired from the cloud information processing unit according to the terminal's real-time position, and the reconstructed image of the vehicle's surroundings and the environment element information are displayed on the terminal's display device. If the current vehicle runs intelligent in-vehicle navigation software, its perception of the environment can likewise be enhanced by acquiring surrounding environment images or environment element information from the cloud information processing unit. Vehicles without an infrared imaging function can thus obtain, through the cloud, traffic and road information uploaded by vehicles that have it, receiving prompts and driving suggestions in some emergency situations.
Further, in S3, the scenes that impair driving vision include night, glare, rain, fog, smoke, haze, and dust, as well as pedestrian, non-motor vehicle, animal, obstacle, and road-damage scenes. If pedestrians, non-motor vehicles, animals, obstacles, or road damage are present around the vehicle, the objects to be display-enhanced are determined as follows:
step a, retrieve the images collected by the vehicle-end information processing unit, superimpose them on the virtual reality head-up display scene, and display the vehicle's real-time environment image accordingly;
step b, judge from the displayed real-time environment image whether obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, deliberately placed roadblocks, or road damage exist around the vehicle; if so, execute step c; if not, execute step e;
step c, judge whether the vehicle's current trajectory may intersect the obstacle; if so, execute step d; if not, execute step e;
step d, display the current obstacle prompt on the virtual reality head-up display equipment and, according to the user's current alert-tone setting, either sound an alert tone or proceed directly to step e;
step e, end.
In this way, after starting, the vehicle synchronizes time with the navigation system and the mobile network server, judges whether it is currently night, verifies the environmental conditions against the camera and rain/ambient-light sensing data, and controls the final display effect according to how severely the recognized environment impairs the driving field of view and the display parameters of the superimposed scene.
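Steps a-e above can be sketched as a simple decision loop. The callables for trajectory intersection, HUD display, and the alert tone are placeholder hooks for the patent's modules, not names from the patent:

```python
def obstacle_alert(obstacles, trajectory_intersects, chime_enabled, hud_show, chime):
    """Steps a-e: for each detected obstacle whose path may intersect the
    vehicle's trajectory, enhance its display on the HUD and, depending on
    the user's alert-tone setting, sound a chime."""
    for ob in obstacles:                 # step b: obstacles present?
        if trajectory_intersects(ob):    # step c: trajectory may intersect?
            hud_show(ob)                 # step d: enhanced HUD prompt
            if chime_enabled:            # step d: per user tone setting
                chime(ob)
    # step e: end
```

Obstacles off the vehicle's path are neither displayed with enhancement nor chimed, which keeps the HUD free of irrelevant alerts.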
When an obstacle is detected, image feature recognition classifies it into a category such as person, wild animal, vehicle, or stone, and its contour is displayed with enhancement on the virtual reality head-up display equipment; if the obstacle strongly affects driving, an alert tone is sounded according to the settings, improving the driver's ability to respond to emergencies. When driving at night on country or mountain roads without street lamps, poor light leaves the driver unable to judge the surroundings clearly or notice pedestrians and animals in time. If the user has enabled the automatic night infrared imaging enhancement function, the system automatically starts infrared imaging at night or when light is insufficient, collects infrared images and video of the surroundings, identifies the road environment elements, structures, materials, and object contours in the images, combines the data from all vehicle sensors to construct the scene around the vehicle, and displays the reconstructed road scene ahead on the virtual reality head-up display equipment, enhancing recognition of the road environment and thereby the driver's confidence to continue driving safely.
Further, in S4, the display position of the reconstructed vehicle environment audio-video is determined as follows:
step I, acquire an image through the driver monitoring camera and output the position of the driver's eyes in the image;
step II, acquire the distance and angle of the obstacle relative to the vehicle through the laser radar and millimeter-wave radar;
step III, extract the obstacle's contour feature lines from the forward-looking camera and infrared camera images;
step IV, identify the obstacle type through a visual deep-learning neural network model;
step V, acquire the vehicle height value through the vehicle height sensor;
step VI, calculate, by optical geometry, the imaging angle and position at which the virtual reality head-up display places the image on the straight line through the driver's eyes and the obstacle, and scale the object contour according to distance;
step VII, display the obstacle contour on the virtual reality head-up display equipment and prompt the obstacle type and distance.
In this way, when the driver's eyes move within the eye box of the virtual reality head-up display equipment, the driver monitoring camera determines the eye position and the projection position is adjusted in real time, so that the obstacle is displayed at the correct angle, ensuring driving safety. The laser radar and millimeter-wave radar acquire the obstacle's external features and contour, prompting the driver, improving driving safety, and improving the driver's ability to handle emergencies caused by fatigue or by situations that cannot be distinguished due to blind spots.
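Step VI's optical geometry is a similar-triangles construction: the marker is placed where the eye-to-obstacle line crosses the HUD's virtual image plane, and the contour is scaled by the ratio of the virtual image distance to the obstacle distance. A sketch under the assumption of a shared vehicle-frame coordinate system in metres (the frame and the virtual-image-distance parameter are assumptions, not values from the patent):

```python
import math


def hud_marker(eye_xyz, obstacle_xyz, virtual_image_dist_m, contour_w_m):
    """Place the HUD marker on the straight line from the driver's eye to
    the obstacle (step VI) and scale the contour width by distance.
    Returns (marker position, displayed contour width)."""
    ex, ey, ez = eye_xyz
    ox, oy, oz = obstacle_xyz
    dx, dy, dz = ox - ex, oy - ey, oz - ez
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)   # eye-to-obstacle range
    t = virtual_image_dist_m / dist                 # fraction along the ray
    marker_xyz = (ex + t * dx, ey + t * dy, ez + t * dz)
    scale = virtual_image_dist_m / dist             # similar-triangles ratio
    return marker_xyz, contour_w_m * scale
```

For an eye at 1.2 m height and an obstacle 30 m ahead, a 3 m virtual image plane puts the marker 3 m out along the sight line with the contour drawn at one tenth of its true width; re-running this whenever the monitored eye position moves is what keeps the overlay registered to the road.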
Further, in S1, when the vehicle is judged from the current time to be driving at night, the night vision function is turned on automatically as follows:
step i, collect point cloud data through the laser radar and 4D imaging radar;
step ii, provide the images collected by the forward-looking and panoramic cameras to the vehicle-end information processing unit, with the rain/ambient-light sensor feeding its measurement back to the zone controller;
step iii, build an image-based 3D model and a radar-based 3D model;
step iv, compare whether each object in the radar-identified 3D model can be found in the image-based 3D model; if so, go to step v; if not, execute step vii;
step v, judge whether the object's distance in the 3D model is smaller than the maximum and larger than the minimum of the visual recognition capability; if so, go to step vi; if not, execute step vii;
step vi, start the infrared camera and enable the night vision enhancement function of the virtual reality head-up display equipment.
In this way, when the environment changes suddenly and visual resolution falls below that of the laser radar and millimeter-wave radar, the influence on the driver's field of view and the need for enhanced environment perception are determined automatically, and the night vision function is then started automatically. In this mode the vehicle has a certain environmental cognition capability and automatically provides the user with the visual-field enhancement service required in the current environment. Unlike a display screen, it lets the user keep track of the road environment obscured by rain, fog, and similar factors without looking away from the road.
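The translated branch directions in steps iv-vi are ambiguous, but the surrounding explanation states the intent: enable infrared enhancement when the radar model contains an object, within the visually resolvable range, that the camera-based model misses. A sketch of that check (the dictionary keys and range thresholds are assumptions):

```python
def should_enable_night_vision(radar_objects, image_objects, vis_min_m, vis_max_m):
    """Enable the infrared/night-vision enhancement when the radar-based 3D
    model detects an object inside the visually resolvable range that is
    absent from the camera-based 3D model, i.e. camera vision is degraded."""
    image_ids = {obj["id"] for obj in image_objects}
    for obj in radar_objects:
        in_visual_range = vis_min_m < obj["dist_m"] < vis_max_m
        if in_visual_range and obj["id"] not in image_ids:
            return True          # radar sees it, the camera does not
    return False
```

Objects outside the visual range are ignored: too close or too far, a camera miss says nothing about degraded vision, matching the distance check in step v.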
Compared with the prior art, the night vision augmented virtual reality head-up display system and method provided by the invention have the following advantages:
1. Visual field enhancement: the sensors detect environment information to reconstruct the road environment, and the virtual reality head-up display device projects it onto the windshield, enhancing recognition of road obstacles within the driver's visual field. Even when the vehicle is in a weak-light, strong-light, rain, fog, smoke, haze or dust environment, a usable driving view can still be obtained.
2. Virtual reality enhancement: because the virtual reality head-up display device uses a projected display rather than a display screen, the user can grasp the road environment hidden by environmental factors such as rain and fog without taking their eyes off the road.
3. Information transmission: the obstacle position data and infrared imaging features are uploaded to the cloud server, which can provide data support for the driving assistance functions of other vehicles passing shortly afterwards, and can also provide the visual field enhancement service to users of non-intelligent vehicle-mounted navigation software and mobile terminal navigation software, improving driving advice and enhancing driving safety.
4. Adaptation to the driver's position: when the driver's eye position moves within the eye box of the virtual reality head-up display device, the driver monitoring camera determines the eye position and the projection position of the device is adjusted in real time, so that the obstacle is displayed at the correct angle and driving safety is ensured.
5. Blind area enhancement: the panoramic camera views the environment all around the vehicle, far wider than the angle the driver can see, and detects continuously, so it can warn of dangers in the driver's blind areas or outside the current visual field.
6. Automatic night vision activation and proactive service: when the environment changes and visual resolution becomes inferior to that of the laser radar and the millimeter wave radar, the system automatically determines that the driver's visual field is affected and environment perception must be enhanced, and then starts the night vision function automatically. The algorithm comparing visual data with radar data gives the vehicle a certain environment cognition capability, so the visual field enhancement service required in the current environment is provided to the user automatically.
Drawings
FIG. 1 is a schematic diagram of a hardware configuration of an embodiment of a head-up display system with night vision augmented virtual reality;
FIG. 2 is a flowchart of determining an enhanced display object according to an embodiment;
FIG. 3 is a flowchart of determining a display position of a virtual reality heads-up display device according to an embodiment;
FIG. 4 is a flowchart illustrating automatic turning on of night vision function of the virtual reality heads-up display device under night vision conditions in the embodiment.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
Embodiment:
The system comprises a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle-end information processing unit, a cloud information processing unit, a navigation device, an experience driving computer and a virtual reality head-up display device;
the vehicle driving environment data collection unit comprises a laser radar and/or an imaging radar for sensing the positions and angles of static and moving objects around the vehicle relative to the vehicle, and a camera and/or a radar for collecting and detecting the road environment and obstacles around the vehicle;
the vehicle calibration data collection unit comprises a driver monitoring camera, a rainfall environment light sensor and a vehicle size and body height data module, wherein the driver monitoring camera is used for detecting and acquiring eye actions and sight line attention points of a driver, and the rainfall environment light sensor is used for measuring the current environment illumination intensity and rainfall;
the vehicle-end information processing unit is used for receiving the signals of the vehicle driving environment data unit and the vehicle calibration data collection unit, reconstructing the surroundings of the vehicle from the received signals through iteration of an artificial intelligence algorithm, and displaying the reconstructed scene, from the driver's viewing angle, through the virtual reality head-up display device in the air on the line connecting the driver's eyes with the road environment elements;
the cloud information processing unit is used for receiving the information of the vehicle driving environment data unit, comparing the received environment information of the corresponding road section with historical data according to the vehicle's driving position, updating the road section environment information in real time, and sending the information through a low-latency mobile network to users of non-intelligent vehicle-mounted navigation software and mobile terminal navigation software behind the vehicle; the cloud information processing unit is further used for sending prompt information through the low-latency mobile network to those users behind the vehicle whenever it receives a road environment infrared image that affects driving, and if the environmental factor affecting driving persists on a certain road section, the infrared image continues to be sent to users approaching that section until the environmental condition improves.
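The update-and-push behaviour of the cloud information processing unit described above can be condensed into a small sketch. This is illustrative only; the function name `update_and_push`, the `section_db` mapping and the report structure are hypothetical names, not part of the patent:

```python
def update_and_push(section_db, section_id, report, trailing_users, push):
    """Store the latest environment report for a road section, compare it
    with the previous one, and push it to trailing navigation users while
    a driving-affecting factor (e.g. fog) persists."""
    previous = section_db.get(section_id)
    section_db[section_id] = report          # real-time update of section info
    if report.get("hazard"):                 # factor affecting driving persists
        for user in trailing_users:          # navigation users behind the vehicle
            push(user, section_id, report)
    return previous != report                # True when the section info changed
```

Each new infrared image or environment report would re-trigger the push until `report["hazard"]` clears, matching the "send until the environmental condition improves" behaviour.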
The navigation device is used for processing data of a navigation satellite, an inertial navigation chip, wheel rotating speed signals and a high-precision map and accurately positioning the movement direction and position of the vehicle;
the experience driving computer is used for receiving information of the vehicle end information processing unit and the cloud end information processing unit, is connected with the virtual reality head-up display device and a display screen on the vehicle, and outputs audio and video information.
As shown in fig. 1, two vehicle-end information processing units are provided in this embodiment. They acquire and transmit information in the same way and can communicate with each other; one serves as a backup so that damage to either unit does not affect use of the system.
The vehicle driving environment data collection unit collects the vehicle driving environment data, and the cloud information processing unit processes them. The data include the images shot by the vehicle-mounted infrared camera together with the camera's angle and height, and the obstacle information detected by the vehicle panoramic camera, the laser radar and the millimeter wave radar. Specifically, when the vehicle is temporarily in a severe environment such as night, strong light, rain, fog, smoke, haze or dust, the infrared-imaging-enhanced head-up display system helps the user leave the dangerous road section; and information such as the vehicle position, the driving path and the collected infrared image data is transmitted to surrounding vehicles and the cloud server through the vehicle-end information transmission unit.
Furthermore, the vehicle driving environment data collection unit senses the positions and angles of static and moving objects around the vehicle relative to the vehicle using a laser radar and a 4D imaging radar, detects obstacles behind the vehicle using a reversing radar, detects obstacles at the four corners of the vehicle using corner radars, and detects obstacles within 5 meters of the vehicle and estimates their distance using ultrasonic radar; short-range, medium-range and long-range forward-looking cameras capture environment images at different distances for the intelligent driving assistance system to identify objects around the vehicle; a panoramic camera observes the roads and environment around the vehicle; and an infrared imaging camera senses an infrared image of the surroundings. The laser radar and the 4D imaging radar are installed on the front bumper or the roof and generate environment point cloud data by emitting electromagnetic waves and receiving the reflected echoes; the corner radars are installed around the vehicle, the rear radar at the vehicle tail, the forward-looking cameras behind the windshield within the area cleaned by the wipers, the panoramic cameras around the vehicle, and the infrared imaging camera on the front bumper or grille.
Furthermore, the vehicle calibration data collection unit also comprises a chassis height sensor for measuring the distance between the vehicle body and the ground, and the unit exchanges information with the vehicle-end information processing unit through the zone controller.
A vehicle environment visual field enhancement method based on the above night vision augmented virtual reality head-up display system comprises the following steps:
S1, after the vehicle is started, synchronizing time with the mobile network server and judging whether it is currently night;
S2, providing the environment point cloud data, images and detected obstacle distance and angle information collected by the sensors to the vehicle-end information processing unit, and providing the rainfall and ambient light sensor feedback value to the zone controller;
S3, judging through a deep learning neural network whether the current images and values match a scene that affects the driving visual field; if a condition affecting the driving visual field exists, executing S4, and if not, executing S5;
S4, reconstructing the environment elements around the vehicle in real time according to the environment data collected by the vehicle-end information processing unit and/or the cloud information processing unit, the calibration data of the vehicle and iteration of the artificial intelligence algorithm, and determining the environment elements to be enhanced and displayed; while the surrounding environment elements are reconstructed, synchronously detecting the driver's line of sight, determining the display position of the reconstructed road environment elements from it, and then displaying the reconstructed images of the vehicle's surroundings on the virtual reality head-up display device or the vehicle display screen at the determined position;
and S5, ending.
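The control flow of S1-S5 can be sketched as a single function. This is a minimal illustration: `scene_classifier` stands in for the deep learning neural network of S3, and the 19:00-06:00 night window is an assumption, not a value given in the patent:

```python
def run_field_of_view_enhancement(current_hour, scene_classifier, sensor_frame):
    """S1: decide day/night from the synchronized clock; S3: classify the
    sensor frame; S4: reconstruct and display only when a view-affecting
    scene exists; S5: otherwise end."""
    is_night = current_hour >= 19 or current_hour < 6   # assumed night window
    if not scene_classifier(sensor_frame):              # S3: no view-affecting scene
        return None                                     # S5: end
    # S4: environment reconstruction and HUD display would happen here
    return {"night_vision": is_night, "action": "reconstruct_and_display"}
```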
Further, if the current vehicle is a non-intelligent vehicle whose navigation software runs on a vehicle-mounted terminal or a mobile terminal, the surrounding environment images or environment element information are acquired from the cloud information processing unit according to the real-time position of that terminal, and the reconstructed images and element information are displayed through the terminal's display device; if the current vehicle is an intelligent vehicle using vehicle-mounted navigation software, acquiring the surrounding environment images or environment element information from the cloud information processing unit can likewise enhance its perception of the environment.
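A non-intelligent terminal's position-based query to the cloud unit might look like the following sketch. All names are hypothetical, and the planar 1 degree ≈ 111 km approximation stands in for whatever geodesic lookup the real service would use:

```python
import math

def fetch_environment(cloud_db, position, radius_m=200.0):
    """Return reconstructed environment elements for road sections near
    `position`; cloud_db maps section_id -> ((lat, lon), elements)."""
    lat, lon = position
    results = {}
    for section_id, ((s_lat, s_lon), elements) in cloud_db.items():
        # crude planar distance in metres (1 degree ~ 111 km)
        d = math.hypot((s_lat - lat) * 111_000.0, (s_lon - lon) * 111_000.0)
        if d <= radius_m:
            results[section_id] = elements
    return results
```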
Signal transmission between a vehicle with the infrared imaging and intelligent driving assistance functions and vehicles or road users without them can be realized through a 4G or 5G connection to the cloud information processing unit or the navigation software server, or the information can be transmitted directly to surrounding vehicles point-to-point over 5G, WiFi or Bluetooth.
Communication between a vehicle with the infrared imaging and intelligent driving assistance functions and the intelligent transportation infrastructure is established through V2X technology or the cloud information processing unit.
Further, in S3, the scenes affecting the driving visual field include weak-light, strong-light, rain, fog, smoke, haze and dust environments, and scenes with pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, intentionally placed roadblocks and road damage.
As shown in fig. 2, if obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, intentionally placed roadblocks or road damage exist around the vehicle, the obstacles and environment elements to be enhanced and displayed are determined as follows:
step a, calling the images collected by the vehicle-end information processing unit, superimposing the scene with the virtual reality head-up display device, and displaying the real-time environment image of the vehicle accordingly;
step b, judging from the displayed real-time environment image whether obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, intentionally placed roadblocks or road damage exist around the vehicle; if so, executing step c, and if not, executing step e;
step c, judging whether the current track of the vehicle may intersect the obstacle; if so, executing step d, and if not, executing step e;
step d, displaying the current obstacle category and spatial position prompt information on the virtual reality head-up display device, and, according to the current user prompt tone setting, sounding a prompt tone or directly executing step e;
and step e, ending.
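The decision in steps a-e reduces to a single function. This sketch uses hypothetical names and prompts only when a detected obstacle exists and may intersect the vehicle's track, which is the apparent intent of the flow:

```python
def obstacle_prompt(obstacle_present, trajectory_intersects, chime_enabled):
    """Steps b-e: end unless an obstacle exists (b) and the vehicle's
    current track may intersect it (c); step d chooses the prompt form."""
    if not obstacle_present or not trajectory_intersects:
        return "end"                                                  # step e
    return "prompt_with_chime" if chime_enabled else "prompt_silent"  # step d
```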
As shown in fig. 3, in S4, the display position of the reconstructed vehicle environment audio and video is determined as follows: step I, acquiring an image through the driver monitoring camera, and outputting the position of the driver's eyes in the image; step II, acquiring the distance and angle of the obstacle relative to the vehicle through the laser radar and the millimeter wave radar; step III, extracting contour characteristic lines of the obstacle from the images of the forward-looking camera and the infrared camera; step IV, identifying the type of the obstacle through a visual deep learning neural network model; step V, acquiring the vehicle height value through the vehicle height sensor; step VI, calculating by optical geometry the imaging angle and position at which the virtual reality head-up display device, the human eyes and the obstacle lie on one straight line, and calculating the scaled size of the object contour from the distance; and step VII, displaying the contour of the obstacle on the virtual reality head-up display device together with the obstacle type and distance information.
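The optical geometry of step VI amounts to intersecting the eye-obstacle sight line with the HUD's virtual image plane and scaling the contour by similar triangles. A minimal sketch follows, with coordinates in a vehicle frame whose x axis points forward; `plane_x`, the distance of the virtual image plane, is an assumed parameter, not a value given in the patent:

```python
def hud_projection(eye, obstacle, plane_x):
    """Return the point where the eye-obstacle line crosses the virtual
    image plane x = plane_x, plus the contour scale factor (similar
    triangles: apparent size = physical size * plane range / obstacle range)."""
    ex, ey, ez = eye
    ox, oy, oz = obstacle
    dx, dy, dz = ox - ex, oy - ey, oz - ez
    if dx <= 0 or plane_x <= ex:
        raise ValueError("obstacle and image plane must lie ahead of the eye")
    t = (plane_x - ex) / dx                          # fraction of the sight line
    point = (ex + t * dx, ey + t * dy, ez + t * dz)  # display point on the plane
    return point, t
```

Moving `eye` (tracked by the driver monitoring camera in step I) shifts `point`, which is why the projection position must be adjusted in real time as the driver's head moves within the eye box.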
As shown in fig. 4, when it is determined in S1 from the current time that the vehicle is driving at night, the night vision function is automatically turned on as follows: step i, collecting point cloud data through the laser radar and the 4D imaging radar; step ii, providing the images collected by the forward-looking camera and the panoramic camera to the vehicle-end information processing unit, with the rainfall and ambient light sensor feeding its measured value back to the zone controller; step iii, establishing a 3D model based on the images and a 3D model based on the radar; step iv, checking whether each object in the radar 3D model can be found in the image 3D model; if not, entering step v, and if so, executing step vii; step v, judging whether the distance of the object in the 3D model is smaller than the maximum and larger than the minimum of the visual recognition capability; if so, entering step vi, and if not, executing step vii; and step vi, starting the infrared camera and starting the night vision enhancement function of the virtual reality head-up display device.
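The comparison in steps iii-v can be sketched as follows, under the reading that night vision is enabled when the radar sees an object inside the range where vision should be able to see it but the image model does not (the "visual resolution inferior to radar" trigger described elsewhere in this document); all names are illustrative:

```python
def should_enable_night_vision(radar_objects, image_objects, vis_min, vis_max):
    """radar_objects / image_objects map object_id -> distance in metres.
    Enable enhancement when a radar-detected object within the visual
    recognition range (vis_min, vis_max) is missing from the image model."""
    for obj_id, distance in radar_objects.items():
        if vis_min < distance < vis_max and obj_id not in image_objects:
            return True   # vision is lagging the radar: turn on night vision
    return False
```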
Finally, it should be noted that the above embodiments are only used for illustrating the technical solutions of the present invention and not for limiting the technical solutions, and although the present invention has been described in detail by referring to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions to the technical solutions of the present invention can be made without departing from the spirit and scope of the technical solutions, and all the modifications and equivalent substitutions should be covered by the claims of the present invention.

Claims (10)

1. A head-up display system with night vision augmented virtual reality is characterized by comprising a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle end information processing unit, a cloud end information processing unit, an experience driving computer and virtual reality head-up display equipment;
the vehicle driving environment data collection unit comprises a laser radar and/or an imaging radar for sensing the position and the angle of a static object and a moving object around the vehicle relative to the vehicle, a camera and/or a radar for acquiring and detecting the road environment around the vehicle and the obstacles around the vehicle;
the vehicle calibration data collection unit comprises a driver monitoring camera, a rainfall environment light sensor and a vehicle size and body height data module, wherein the driver monitoring camera is used for detecting and acquiring eye actions and sight line attention points of a driver, and the rainfall environment light sensor is used for measuring the current environment illumination intensity and rainfall;
the vehicle-end information processing unit is used for receiving signals of the vehicle driving environment data unit and the vehicle calibration data collecting unit, reconstructing the surrounding environment of the vehicle according to the received signals and the iteration of an artificial intelligence algorithm, and displaying the reconstructed scene in the air of the connection line of the eyes of the driver and the road environment elements through virtual reality head-up display equipment at the view angle of the driver;
the cloud information processing unit is used for receiving information of the vehicle driving environment data unit, comparing the received corresponding road section environment information with historical data according to the vehicle driving position, updating the road section environment information in real time, and sending the information to a non-intelligent vehicle-mounted navigation software user and a mobile terminal navigation software user behind the vehicle through a low-delay mobile network;
the experience driving computer is used for receiving information of the vehicle end information processing unit and the cloud end information processing unit, is connected with the virtual reality head-up display device and a display screen on the vehicle, and outputs audio and video information.
2. The system of claim 1, further comprising an information transfer unit for transferring information between the vehicle driving environment data collection unit data and the cloud information processing unit, the surrounding mobile terminals or the vehicle.
3. The system as claimed in claim 1 or 2, wherein the vehicle driving environment data collection unit senses the positions and angles of static and moving objects around the vehicle relative to the vehicle using a laser radar and a 4D imaging millimeter wave radar, detects obstacles behind the vehicle using a reversing radar, detects obstacles at the four corners of the vehicle using corner radars, and detects obstacles within 5 meters and estimates their distance using ultrasonic radar; medium-range and long-range forward-looking cameras collect environment images at different distances for the intelligent driving assistance system to identify objects around the vehicle; a panoramic camera collects images of the roads and environment around the vehicle; an infrared imaging camera collects an infrared image of the surroundings; the laser radar and the 4D imaging radar are installed on the front bumper or the roof and generate environment point cloud data by emitting electromagnetic waves and receiving the reflected echoes; and the corner radars are installed around the vehicle, the rear radar at the vehicle tail, the forward-looking cameras behind the windshield within the area cleaned by the wipers, the panoramic cameras around the vehicle, and the infrared imaging camera on the front bumper or grille.
4. The system as claimed in claim 3, wherein the vehicle calibration data collecting unit further comprises a chassis height sensor for measuring the distance between the vehicle body and the ground, and the vehicle calibration data collecting unit is in communication with the vehicle-end information processing unit through the zone controller.
5. The method for enhancing the environmental visual field of the vehicle with the night vision augmented virtual reality head-up display system according to claim 4 is characterized by comprising the following steps:
S1, after the vehicle is started, synchronizing time with the mobile network server and judging whether it is currently night;
S2, providing the environment point cloud data, images and detected obstacle distance and angle information collected by the sensors to the vehicle-end information processing unit, and providing the rainfall and ambient light sensor feedback value to the zone controller;
S3, judging through a deep learning neural network whether the current images and values match a scene that affects the driving visual field; if a condition affecting the driving visual field exists, executing S4, and if not, executing S5;
S4, reconstructing the environment elements around the vehicle in real time according to the environment data collected by the vehicle-end information processing unit and/or the cloud information processing unit, the calibration data of the vehicle and iteration of the artificial intelligence algorithm, and determining the environment elements to be enhanced and displayed; while the surrounding environment elements are reconstructed, synchronously detecting the driver's line of sight, determining the display position of the reconstructed road environment elements from it, and then displaying the reconstructed images of the vehicle's surroundings on the virtual reality head-up display device or the vehicle display screen at the determined position;
and S5, ending.
6. The method for enhancing the visual field of the vehicle environment with the night vision augmented virtual reality head-up display system as claimed in claim 5, wherein if the current vehicle is a non-intelligent vehicle and the navigation software is run on the vehicle-mounted terminal or the mobile terminal, the surrounding environment image or the environment element information is obtained from the cloud information processing unit according to the real-time position of the current vehicle-mounted terminal or the mobile terminal, and the reconstructed vehicle surrounding environment image and the reconstructed environment element information are displayed through the display device of the current vehicle-mounted terminal or the mobile terminal; if the current vehicle is an intelligent vehicle-mounted navigation software user, the perception of the current vehicle to the environment can be enhanced by acquiring surrounding environment images or environment element information from the cloud information processing unit.
7. The method of claim 5, wherein in S3 the scenes affecting the driving visual field include weak-light, strong-light, rain, fog, smoke, haze and dust environments, and scenes with pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, intentionally placed roadblocks and road damage.
8. The method as claimed in claim 7, wherein if there are pedestrian, motor vehicle, non-motor vehicle, animal, nonstandard obstacle, intentionally set road block, road damage, and other obstacle scenes around the vehicle, the obstacle and environmental element to be enhanced are determined as follows:
step a, calling an image collected by a vehicle-end information processing unit, carrying out scene superposition with virtual reality head-up display equipment, and displaying a vehicle real-time environment image according to the scene superposition;
b, judging from the displayed real-time environment image whether obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, intentionally placed roadblocks or road damage exist around the vehicle; if so, executing step c, and if not, executing step e;
step c, judging whether the current track of the vehicle is possibly intersected with the obstacle, if so, executing the step d, and if not, executing the step e;
d, displaying the current obstacle category and spatial position prompt information on the virtual reality head-up display equipment, and sending out a prompt tone or directly executing the step e according to the current user prompt tone setting;
and e, ending the step.
9. The method for enhancing the visual field of the vehicle environment with the night vision augmented virtual reality head-up display system according to claim 5, wherein in S4, the reconstructed audio-video display position of the vehicle environment is determined by the following method: step I, acquiring an image through a driver monitoring camera, and outputting the position of the eyes of a driver in the image; step II, acquiring the distance and the angle of the obstacle relative to the vehicle through a laser radar and a millimeter wave radar; step III, extracting contour characteristic lines of the obstacles according to images of the front-looking camera and the infrared camera; step IV, identifying the type of the obstacle through a visual deep learning neural network model; v, acquiring a vehicle height value through a vehicle height sensor; step VI, calculating the imaging angle and position of the virtual reality head-up display equipment with the human eyes and the obstacles in a straight line according to the optical geometry, and calculating the scaling size of the object outline according to the distance; and step VII, displaying the outline of the obstacle on the virtual reality head-up display equipment, and prompting the type and distance information of the obstacle.
10. The method of claim 5, wherein in S1, when it is determined from the current time that the vehicle is in the night driving state, the night vision function is automatically turned on by: step i, collecting point cloud data through the laser radar and the 4D imaging radar; step ii, providing the images collected by the forward-looking camera and the panoramic camera to the vehicle-end information processing unit, with the rainfall and ambient light sensor feeding its measured value back to the zone controller; step iii, establishing a 3D model based on the images and a 3D model based on the radar; step iv, checking whether each object in the radar 3D model can be found in the image 3D model; if not, entering step v, and if so, executing step vii; step v, judging whether the distance of the object in the 3D model is smaller than the maximum and larger than the minimum of the visual recognition capability; if so, entering step vi, and if not, executing step vii; and step vi, starting the infrared camera and starting the night vision enhancement function of the virtual reality head-up display device.
CN202111644173.3A 2021-12-29 2021-12-29 System and method for enhancing virtual reality head-up display with night vision Active CN114228491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111644173.3A CN114228491B (en) 2021-12-29 2021-12-29 System and method for enhancing virtual reality head-up display with night vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111644173.3A CN114228491B (en) 2021-12-29 2021-12-29 System and method for enhancing virtual reality head-up display with night vision

Publications (2)

Publication Number Publication Date
CN114228491A true CN114228491A (en) 2022-03-25
CN114228491B CN114228491B (en) 2024-05-14

Family

ID=80744412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111644173.3A Active CN114228491B (en) 2021-12-29 2021-12-29 System and method for enhancing virtual reality head-up display with night vision

Country Status (1)

Country Link
CN (1) CN114228491B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115166978A (en) * 2022-07-21 2022-10-11 重庆长安汽车股份有限公司 Display lens, system, method and medium of head-up display system
CN115952570A (en) * 2023-02-07 2023-04-11 江苏泽景汽车电子股份有限公司 HUD simulation method and device and computer readable storage medium
CN116409331A (en) * 2023-04-14 2023-07-11 南京海汇装备科技有限公司 Data analysis processing system and method based on intelligent photoelectric sensing technology
CN116645830A (en) * 2022-09-26 2023-08-25 深圳海冰科技有限公司 Vision enhancement system for assisting vehicle in night curve
WO2024001177A1 (en) * 2022-06-29 2024-01-04 中兴通讯股份有限公司 Visual field enhancement method, electronic device and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654753A (en) * 2016-01-08 2016-06-08 北京乐驾科技有限公司 Intelligent vehicle-mounted safe driving assistance method and system
US20160280133A1 (en) * 2015-03-23 2016-09-29 Magna Electronics Inc. Vehicle vision system with thermal sensor
CN113525234A (en) * 2021-07-26 2021-10-22 北京计算机技术及应用研究所 Auxiliary driving system device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
邹鹏; 谌雨章; 蔡必汉: "Design of an Intelligent Vehicle Assisted Driving System Based on Deep Learning" (基于深度学习的智能车辆辅助驾驶系统设计), Information & Computer (Theory Edition), no. 11, 15 June 2019 (2019-06-15) *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024001177A1 (en) * 2022-06-29 2024-01-04 中兴通讯股份有限公司 Visual field enhancement method, electronic device and storage medium
CN115166978A (en) * 2022-07-21 2022-10-11 重庆长安汽车股份有限公司 Display lens, system, method and medium of head-up display system
CN115166978B (en) * 2022-07-21 2023-06-16 重庆长安汽车股份有限公司 Display lens, system, method and medium of head-up display system
CN116645830A (en) * 2022-09-26 2023-08-25 深圳海冰科技有限公司 Vision enhancement system for assisting vehicle in night curve
CN116645830B (en) * 2022-09-26 2024-02-13 深圳海冰科技有限公司 Vision enhancement system for assisting vehicle in night curve
CN115952570A (en) * 2023-02-07 2023-04-11 江苏泽景汽车电子股份有限公司 HUD simulation method and device and computer readable storage medium
CN116409331A (en) * 2023-04-14 2023-07-11 南京海汇装备科技有限公司 Data analysis processing system and method based on intelligent photoelectric sensing technology
CN116409331B (en) * 2023-04-14 2023-09-19 南京海汇装备科技有限公司 Data analysis processing system and method based on intelligent photoelectric sensing technology

Also Published As

Publication number Publication date
CN114228491B (en) 2024-05-14

Similar Documents

Publication Publication Date Title
CN114228491B (en) System and method for enhancing virtual reality head-up display with night vision
AU2021200258B2 (en) Multiple operating modes to expand dynamic range
US10595176B1 (en) Virtual lane lines for connected vehicles
CN111480130B (en) Method for solar-sensing vehicle route selection, vehicle and computing system
US9267808B2 (en) Visual guidance system
US9760782B2 (en) Method for representing objects surrounding a vehicle on the display of a display device
CN111527016B (en) Method and system for controlling the degree of light encountered by an image capture device of an autopilot vehicle
KR20080004835A (en) Apparatus and method for generating a auxiliary information of moving vehicles for driver
US20190135169A1 (en) Vehicle communication system using projected light
US20130021453A1 (en) Autostereoscopic rear-view display system for vehicles
CN111221342A (en) Environment sensing system for automatic driving automobile
CN114492679B (en) Vehicle data processing method and device, electronic equipment and medium
CN113706883B (en) Tunnel section safe driving system and method
CN113343738A (en) Detection method, device and storage medium
CN113884090A (en) Intelligent platform vehicle environment sensing system and data fusion method thereof
CN113126294B (en) Multi-layer imaging system
US11919451B2 (en) Vehicle data display system
CN113246859B (en) Electronic rearview mirror with driving auxiliary system warning function
CN114228705A (en) Electric vehicle early warning system and method
CN113232586A (en) Infrared pedestrian projection display method and system for driving at night
CN111845347A (en) Vehicle driving safety prompting method, vehicle and storage medium
CN218702988U (en) Automobile with a detachable front cover
US20240223882A1 (en) Multiple Operating Modes to Expand Dynamic Range
CN116256747A (en) Electric automobile environment sensing system and method thereof
CN117002507A (en) Vehicle lane change assisting reminding method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant