CN114228491B - System and method for enhancing virtual reality head-up display with night vision - Google Patents


Info

Publication number
CN114228491B
CN114228491B (application CN202111644173.3A)
Authority
CN
China
Prior art keywords
vehicle
environment
virtual reality
information processing
processing unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111644173.3A
Other languages
Chinese (zh)
Other versions
CN114228491A (en)
Inventor
吴仁钢
颜长深
杜先起
谭皓月
张宗全
赵蕾
刘大全
陈斌
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202111644173.3A priority Critical patent/CN114228491B/en
Publication of CN114228491A publication Critical patent/CN114228491A/en
Application granted granted Critical
Publication of CN114228491B publication Critical patent/CN114228491B/en

Classifications

    • B60K35/00 Instruments specially adapted for vehicles; arrangement of instruments in or on vehicles
    • B60K35/23 Head-up displays [HUD]
    • B60K35/28 Output arrangements characterised by the type or purpose of the output information, e.g. video entertainment or vehicle dynamics information
    • B60Q9/008 Signal devices for anti-collision purposes
    • B60R1/00 Optical viewing arrangements; real-time viewing arrangements for drivers or passengers using optical image capturing systems, e.g. cameras or video systems specially adapted for use in or on vehicles
    • G02B27/01 Head-up displays
    • B60K2360/176 Type of output information: camera images
    • B60K2360/179 Type of output information: distances to obstacles or vehicles
    • B60R2300/102 Type of camera system used: 360 degree surveillance camera system
    • B60R2300/207 Type of display used: multi-purpose displays, e.g. camera image and navigation or video on same display
    • B60R2300/8093 Intended use of the viewing arrangement: obstacle warning
    • G02B2027/0138 Head-up displays comprising image capture systems, e.g. camera
    • G02B2027/014 Head-up displays comprising information/image processing systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Transportation (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a night-vision-enhanced virtual reality head-up display system and method. The system comprises a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle-end information processing unit, a cloud information processing unit, a driving experience computer and a virtual reality head-up display device. The vehicle-mounted terminal builds a 3D model of the vehicle's surroundings by aggregating information from on-board environment sensing devices, sensors and positioning equipment, giving the vehicle multidimensional perception of its environment. Autonomous communication between the vehicle and the cloud information processing unit, and between vehicles, enables sensor data sharing among vehicle information nodes on the same road section, further improving road traffic safety. By adding a night vision sensor, the system improves the vehicle's intelligent driving assistance and the driver's environment recognition in adverse conditions such as weak light, strong light, rain, fog, smoke, haze and dust, enhancing driving safety.

Description

System and method for enhancing virtual reality head-up display with night vision
Technical Field
The invention relates to an automobile driving system, in particular to a night vision augmented virtual reality head-up display system and method.
Background
During driving, to improve safety in adverse or sudden conditions such as weak light, strong light, rain, fog, smoke, haze and flying dust, a night vision function needs to be added to the vehicle's driving assistance system: a three-dimensional model of the current road is displayed in the driver's field of view through a virtual reality head-up display, fitted to the road, improving the driver's ability to resolve the environment. Chinese patent application No. 2019101242782 discloses a night vision head-up display device with an eye tracking function. The device includes a head-up display base placed in the automobile cab, a display screen, and an infrared camera module mounted at the front of the vehicle; the infrared camera module and the display screen are each connected with the head-up display base, which comprises a human eye detection camera, a thermal image processing module, a human eye tracking processing module and a projection module. By combining the head-up display with the infrared night vision lens, a driver can observe an infrared image of the environment ahead in dim conditions without the line of sight leaving the driving direction. By detecting changes in the driver's gaze and fine-adjusting the display screen, picture drift is reduced, helping the observer see a complete, stable infrared image.
In the system disclosed above, the road environment and infrared characteristics cannot be fed back through a cloud network to vehicles travelling behind, whether or not they are equipped with an intelligent driving assistance system; no technical support is provided for intelligent road traffic safety in adverse conditions, and no method of automatic control by identifying environmental factors is described.
Disclosure of Invention
Aiming at the above defects in the prior art, the technical problem the invention addresses is: how to provide a system and method that reconstructs the scene around a vehicle, so that the driver obtains an enhanced view in adverse or dangerous conditions; that feeds road information back to vehicles behind, or receives it from other vehicles, to reconstruct the road model; and that thereby improves driving safety.
In order to solve the technical problems, the invention adopts the following technical scheme:
the night vision augmented virtual reality head-up display system is characterized by comprising a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle end information processing unit, a cloud information processing unit, an experience driving computer and virtual reality head-up display equipment;
The vehicle driving environment data collection unit comprises a laser radar and/or an imaging radar for sensing the azimuth and angle, relative to the vehicle, of static and moving objects around it, and a camera and/or radar for acquiring the road environment around the vehicle and detecting nearby obstacles;
The vehicle calibration data collection unit comprises a driver monitoring camera, a rainfall and ambient light sensor, and a vehicle size and body height data module; the driver monitoring camera detects the driver's eye movements and gaze point, and the rainfall and ambient light sensor measures the current ambient illumination intensity and rainfall;
The vehicle-end information processing unit receives signals from the vehicle driving environment data collection unit and the vehicle calibration data collection unit, iterates an artificial intelligence algorithm on the received signals to reconstruct the vehicle's surroundings, and displays the reconstructed scene, from the driver's viewing angle, in the air on the line between the driver's eyes and the road environment elements via the virtual reality head-up display device;
The cloud information processing unit receives information from the vehicle driving environment data collection unit, compares the received road-section environment information with historical data according to the vehicle's position, updates the road-section environment information in real time, and sends it over a low-latency mobile network to navigation software users in non-intelligent vehicles and mobile-terminal navigation software users located behind the vehicle;
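The cloud unit's compare-with-history-and-push behaviour can be sketched as follows; `CloudProcessor`, `RoadSegmentRecord` and the callback-based delivery are illustrative assumptions, not the patent's implementation (which targets a low-latency mobile network):

```python
from dataclasses import dataclass
import time

@dataclass
class RoadSegmentRecord:
    segment_id: str
    hazards: set          # e.g. {"fog", "obstacle"}
    updated_at: float

class CloudProcessor:
    """Minimal sketch: keep the latest environment report per road
    segment and notify subscribers (rear vehicles, phone navigation)
    only when the situation differs from the stored history."""
    def __init__(self):
        self.history = {}        # segment_id -> RoadSegmentRecord
        self.subscribers = {}    # segment_id -> list of callbacks

    def subscribe(self, segment_id, callback):
        self.subscribers.setdefault(segment_id, []).append(callback)

    def report(self, segment_id, hazards):
        prev = self.history.get(segment_id)
        record = RoadSegmentRecord(segment_id, set(hazards), time.time())
        self.history[segment_id] = record
        # Push only on change relative to history (real-time update step)
        if prev is None or prev.hazards != record.hazards:
            for cb in self.subscribers.get(segment_id, []):
                cb(record)
```

A repeated identical report produces no push, which is one plausible way to keep the mobile-network traffic low.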
The driving experience computer receives information from the vehicle-end information processing unit and the cloud information processing unit, and connects to the virtual reality head-up display device and an on-board display screen to output audio and video. In this way, the cloud information processing unit transmits important environmental information to surrounding vehicle terminals or mobile-terminal navigation programs using networked navigation, improving the ability of surrounding drivers to spot safety risks and avoid accidents. Combining the cloud information processing unit with the vehicle-end information processing unit shortens information acquisition time, so the user can be alerted on screen promptly. The system combines environmental data from the infrared imaging camera, visible-light camera, millimetre-wave radar, 4D imaging radar and laser radar with calibration data such as vehicle size, body height, driver eye position and sensor angles, iterates an artificial intelligence algorithm, reconstructs the scene around the vehicle, and displays it in the virtual reality head-up display device from the driver's viewing angle. In particular, the infrared imaging camera compensates for the laser radar's susceptibility to rain, fog, smoke, haze and dust, strengthening the vehicle's adaptability in complex, changeable environments.
Road environment information sharing between anonymous vehicles on the same road section is achieved through the vehicle and the cloud information processing unit, which provides a forward infrared camera image and an environment reconstruction model based on vehicle position, so that vehicles without an infrared imaging camera or laser radar can still be driven safely at night or in strong light, rain, fog, smoke, haze or dust. The system can serve the vehicle's intelligent driving assistance system, providing the user with real-time forward infrared video and a reconstructed road scene in these conditions, offering reference advice for driving and improving surrounding drivers' ability to spot safety risks and avoid accidents.
Further, the system also comprises an information transfer unit for transferring data between the vehicle driving environment data collection unit and the cloud information processing unit, surrounding mobile terminals or other vehicles. The information transfer unit thus provides data and images to the vehicle-end and cloud information processing units for communication and interaction.
Further, the vehicle driving environment data collection unit uses a laser radar and a 4D imaging millimetre-wave radar to sense the azimuth and angle of static and moving objects relative to the vehicle; a rear radar to detect obstacles behind the vehicle; corner radars to detect obstacles at the four corners of the vehicle; and an ultrasonic radar to detect obstacles within 5 m of the vehicle and estimate their distance. Medium- and long-range forward-looking cameras collect environment images at different distances for the intelligent driving assistance system to identify objects around the vehicle; a surround-view camera collects road and environment images around the vehicle; and an infrared imaging camera collects infrared images of the surroundings. The laser radar and 4D imaging radar are mounted on the front bumper or roof and generate environmental point-cloud data by transmitting electromagnetic waves and receiving the reflected echoes. The corner radars are installed around the vehicle, the rear radar at the tail, the forward-looking camera behind the windshield in the wiper-cleanable area, the surround-view cameras around the vehicle, and the infrared imaging camera on the front bumper or grille. The laser radar and imaging radar thus complement each other: the azimuth and angle information they provide is more comprehensive, and the failure of one does not interrupt its provision.
The corner radars and the rear radar complement each other in detecting obstacles around the vehicle. The forward-looking, surround-view and infrared imaging cameras give the intelligent driving assistance system images of the vehicle's surroundings over a wider angle than the driver's own view, enabling danger warnings in the driver's blind spots. The ultrasonic radar detects obstacles within a few metres of the vehicle by transmitting ultrasonic waves and receiving the reflected echoes; linked with the information acquired by the cameras, the resulting information is more accurate and comprehensive.
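The cross-checking of ultrasonic ranges against camera detections described above might look like the following sketch; the dictionary fields and the agreement tolerance are assumptions for illustration:

```python
def fuse_near_obstacles(ultrasonic_hits, camera_detections, agree_tol_m=0.5):
    """Sketch of linking ultrasonic ranging (short-range, precise) with
    camera detections (classified, wide-angle): a hit confirmed by both
    sensors is reported with the camera's class label and the ultrasonic
    distance, which is assumed more precise at close range."""
    confirmed = []
    for u in ultrasonic_hits:
        for c in camera_detections:
            if abs(u["dist_m"] - c["dist_m"]) <= agree_tol_m:
                confirmed.append({"kind": c["kind"], "dist_m": u["dist_m"]})
                break   # one camera match per ultrasonic hit
    return confirmed
```

Unconfirmed hits are simply dropped here; a production system would more likely keep them with a lower confidence score.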
Further, the vehicle calibration data collection unit also comprises a chassis height sensor for measuring the distance between the vehicle body and the ground, and transmits information to the vehicle-end information processing unit through the zone controller. In this way, the chassis height sensor measures the body-to-ground distance, providing data support for calculating the relative straight-line position between the driver's eyes, the target road element and the virtual image.
A vehicle environment field-of-view enhancement method based on the night-vision-enhanced virtual reality head-up display system described above, comprising the steps of: S1, after the vehicle starts, synchronize time with the mobile network server and judge whether it is currently night; S2, provide the environmental point-cloud data, images and detected obstacle distance and angle information acquired by the sensors to the vehicle-end information processing unit, and provide the feedback value of the rainfall and ambient light sensor to the zone controller; S3, compare, through a deep-learning neural network, whether the current image and readings match a scene affecting the driving view; if the driving view is judged to be affected, execute S4; if not, execute S5; S4, iterate the artificial intelligence algorithm on the environment data and calibration data collected by the vehicle-end information processing unit and/or the cloud information processing unit, reconstruct the environment elements around the vehicle in real time, and determine which elements to display with enhancement; while reconstructing, synchronously detect the driver's line of sight, determine the display position of the reconstructed road elements accordingly, and then display the reconstructed surroundings on the virtual reality head-up display device or vehicle display screen at that position; S5, end.
Therefore, some vehicles on the road have infrared imaging and automated driving functions and can perceive the surrounding road environment at night or in strong light, rain, fog, smoke, haze and flying dust. The sensor data of such an automated vehicle are used to re-model the environment, and the built model is displayed, via the virtual reality head-up display device, in the environment corresponding to the driver's viewing angle, improving vehicle usability in adverse conditions and preventing suddenly appearing environmental factors from compromising driving safety.
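Steps S2-S5 can be summarized as one decision cycle; all callables here (`classifier`, `reconstruct`, `display`) are hypothetical stand-ins for the deep-learning scene check, the AI reconstruction, and the head-up display output:

```python
def run_enhancement_cycle(frames, classifier, reconstruct, display):
    """One cycle of steps S2-S5: feed the collected sensor data to the
    scene classifier (S3); if the scene impairs the driving view,
    reconstruct the surroundings and display them (S4); otherwise end
    without enhancement (S5). Returns whether enhancement ran."""
    impaired = classifier(frames)     # S3: deep-learning scene check
    if impaired:                      # S4: reconstruct and display
        scene = reconstruct(frames)
        display(scene)
        return True
    return False                      # S5: end
```

In the patent, S1 (night check via time synchronization) happens once at start-up, so it is left outside this per-frame loop.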
Further, if the current vehicle is a non-intelligent vehicle, navigation software runs on the vehicle-mounted or mobile terminal, obtains surrounding environment images or environment element information from the cloud information processing unit according to the terminal's real-time position, and displays the reconstructed surroundings and element information on the terminal's display device. If the current vehicle is an intelligent vehicle running navigation software, acquiring surrounding images or element information from the cloud information processing unit likewise enhances its perception of the environment. In this way, vehicles without an infrared imaging function can obtain, through the cloud, the traffic and road information uploaded by nearby vehicles that have one, receiving prompts and driving suggestions in some emergencies.
Further, in S3, scenes affecting the driving field of view include night, strong light, rain, fog, smoke, haze and dust environments, as well as pedestrian, non-motor-vehicle, animal, obstacle and road-damage scenes. If pedestrians, non-motor vehicles, animals, obstacles or road damage are present around the vehicle, the objects to display with enhancement are determined as follows: step a, call the image acquired by the vehicle-end information processing unit, superimpose it on the virtual reality head-up display scene, and display the vehicle's real-time environment image accordingly; step b, judge from the displayed real-time image whether obstacles such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, deliberately placed roadblocks or road damage are present around the vehicle; if so, execute step c, and if not, step e; step c, judge whether the vehicle's current track may intersect the obstacle; if so, execute step d, and if not, step e; step d, display the obstacle prompt on the virtual reality head-up display device and, depending on the user's current prompt-tone setting, either sound a prompt tone or proceed directly to step e; step e, end. In this way, the vehicle's time after start-up is synchronized with the navigation system and mobile network server to judge whether it is night, the environment conditions are cross-verified between the camera and the rainfall and ambient light sensor data, and the final display effect is controlled according to the degree to which the recognized environment affects the driving view and the display parameters of the superimposed scene.
When an obstacle is detected, image features are used to identify categories such as people, wild animals, vehicles and stones, and the obstacle's outline is displayed with enhancement on the virtual reality head-up display device; if the effect on driving is large, a prompt tone is sounded according to the settings, improving the driver's ability to respond to emergencies. When the vehicle runs at night on a rural or mountain road without street lamps, poor light prevents the driver from clearly judging the surroundings and from spotting pedestrians and animals in time. When the user has enabled the automatic night infrared imaging enhancement function, the system automatically activates infrared imaging at night or when light is insufficient, acquires infrared pictures and videos of the surroundings, identifies road environment elements, structures, materials and object outlines in the images, constructs the scene around the vehicle in combination with the whole-vehicle sensor data, and displays the reconstructed road scene ahead on the virtual reality head-up display device, improving road environment recognition and thus the driver's confidence to continue driving safely.
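The step b-e decision above, warning only when the vehicle's track may intersect an obstacle, can be sketched as follows; the corridor-based intersection test and the field names are simplifying assumptions:

```python
def obstacle_prompts(obstacles, corridor_halfwidth_m, chime_enabled):
    """Sketch of steps b-e: an obstacle produces a HUD prompt only when
    the vehicle's predicted track may intersect it (step c), modelled
    crudely as the obstacle lying ahead inside a straight swept corridor
    of the given half-width."""
    prompts = []
    for obs in obstacles:
        may_intersect = (obs["dist_m"] > 0
                         and abs(obs["lateral_m"]) <= corridor_halfwidth_m)
        if may_intersect:                                        # step d
            prompts.append({"text": f"{obs['kind']} at {obs['dist_m']:.0f} m",
                            "chime": chime_enabled})
    return prompts                                               # step e
```

A real system would use the planned trajectory rather than a straight corridor; this only illustrates the branch structure of steps c and d.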
Further, in S4, the display position of the reconstructed vehicle environment audio/video is determined as follows: step I, acquire an image through the driver monitoring camera and output the position of the driver's eyes in it; step II, obtain the obstacle's distance and angle relative to the vehicle through the laser radar and millimetre-wave radar; step III, extract the obstacle's outline feature lines from the forward-looking and infrared camera images; step IV, identify the obstacle type through a visual deep-learning neural network model; step V, acquire the vehicle height through the vehicle height sensor; step VI, calculate by optical geometry the imaging angle and position at which the virtual reality head-up display device places the image on the straight line between the human eyes and the obstacle, and scale the object outline down or up according to distance; step VII, display the obstacle outline on the virtual reality head-up display device and prompt its type and distance. In this way, when the driver's eye position moves within the eye box of the virtual reality head-up display device, the driver monitoring camera determines the eye position and adjusts the projection position in real time, keeping the obstacle displayed at the correct angle and ensuring driving safety. The laser radar and millimetre-wave radar provide the obstacle's external features and outline, giving the driver prompts, improving driving safety, and strengthening the driver's ability to handle emergencies missed through fatigue or viewing-angle blind spots.
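The optical geometry of step VI reduces to intersecting the eye-obstacle line with the virtual image plane; this sketch assumes a vertical image plane a fixed distance ahead in the vehicle frame, which is a simplification of a real HUD's optics:

```python
def hud_projection(eye, obstacle, image_plane_x):
    """Step VI sketch: intersect the straight line from the driver's eyes
    to the obstacle with the HUD virtual image plane, assumed vertical at
    x = image_plane_x in a vehicle frame (x forward, y left, z up).
    Returns the draw position on that plane and the similar-triangles
    factor t by which the real obstacle outline is scaled down."""
    ex, ey, ez = eye
    ox, oy, oz = obstacle
    t = (image_plane_x - ex) / (ox - ex)   # fraction of the way to the obstacle
    draw_pos = (image_plane_x, ey + t * (oy - ey), ez + t * (oz - ez))
    return draw_pos, t                      # t is also the outline scale factor
```

For an eye 20 m from the obstacle and an image plane 2.5 m ahead, t = 0.125, i.e. the displayed outline is one-eighth of the obstacle's real angular extent at the plane.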
Further, in S1, when the vehicle is judged from the current time to be driving at night, the night vision function is turned on automatically as follows: step i, acquire point-cloud data through the laser radar and 4D imaging radar; step ii, the forward-looking and surround-view cameras acquire images and provide them to the vehicle-end information processing unit, and the rainfall and ambient light sensor feeds its measured values back to the zone controller; step iii, build an image-based 3D model and a radar-based 3D model; step iv, compare whether each object in the radar-identified 3D model can be found in the image-based 3D model; if so, enter step v, and if not, execute step vii; step v, judge whether the object's distance in the 3D model is smaller than the maximum and larger than the minimum of the visual recognition range; if so, enter step vi, and if not, execute step vii; step vi, start the infrared camera and enable the night vision enhancement function of the virtual reality head-up display device; step vii, end. Thus, when the environment changes suddenly and the camera's resolving power falls below that of the laser radar and millimetre-wave radar, the system can automatically determine that the driver's view is affected and the environment perception needs enhancement, and then turn on the night vision function automatically. In this way the vehicle has a degree of environmental cognition and automatically provides the user with the vision enhancement the current environment requires. Unlike a display screen, the user's line of sight need not leave the road, and the road environment can be grasped despite occlusion by environmental factors such as rain and fog.
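The translated step listing (iv-vi) is ambiguous about which branch enables night vision; following the stated rationale, that night vision is needed when the camera resolves less than the radar, a sketch could be:

```python
def should_enable_night_vision(radar_objs, image_objs,
                               vis_min_m, vis_max_m, match_tol_m=1.0):
    """Sketch of steps iv-vi under the stated rationale: enable the
    infrared camera when the radar sees an object inside the distance
    band where the camera ought to see it too, but the image-based 3D
    model has no matching object (vision underperforming the radar).
    Matching by range alone is a deliberate simplification."""
    for r in radar_objs:
        matched = any(abs(r["dist_m"] - c["dist_m"]) <= match_tol_m
                      for c in image_objs)                     # step iv
        in_visual_band = vis_min_m < r["dist_m"] < vis_max_m   # step v
        if in_visual_band and not matched:
            return True                                        # step vi
    return False                                               # step vii
```

A radar object beyond the visual recognition range never triggers the function, since the camera could not be expected to see it anyway.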
Compared with the prior art, the night vision augmented virtual reality head-up display system and method provided by the invention have the following advantages:
1. Enhanced field of view: the road environment is reconstructed from the environmental information detected by the sensors and projected onto the windshield by the virtual reality head-up display device, enhancing the recognition of road obstacles in the driver's field of view. Even in weak light, strong light, rain, fog, smoke, haze, or dust, the driver can obtain the visual-field conditions needed for driving.
2. Augmented virtual reality: because the virtual reality head-up display device projects the display onto the windshield, unlike a display screen, the user's line of sight does not need to leave the road, and the road environment obscured by environmental factors such as rain and fog can still be grasped.
3. Information transfer: the obstacle position data and infrared imaging features are uploaded to the cloud server, providing data support for the driver-assistance functions of other vehicles about to pass the same location, as well as a visual-field enhancement service for navigation-software users in non-intelligent vehicles and on mobile terminals, refining driving suggestions and enhancing driving safety.
4. Adaptive driver position: when the eye position of the driver moves in the eye box of the virtual reality head-up display device, the eye position of the user can be determined by the driver monitoring camera, and the projection position of the virtual reality head-up display device is adjusted in real time, so that the display position of the obstacle is kept at a correct angle, and the driving safety is ensured.
5. Blind-spot enhancement: the surround-view cameras cover the environment all around the vehicle, far wider than the angle the driver can see, and with continuous detection capability they can provide danger prompts in the driver's blind spots or outside the current field of view.
6. Automatic night-vision activation and proactive service: when the environment changes and visual resolving power falls below that of the laser radar and millimeter-wave radar, the system automatically determines that the driver's field of view is affected and that environment-sensing capability needs to be enhanced, and then automatically starts the night vision function. The algorithm comparing visual data with radar data gives the vehicle a certain environment-cognition capability, automatically providing the user with the vision-enhancement service required in the current environment.
Drawings
FIG. 1 is a schematic diagram of the hardware configuration of the night vision augmented virtual reality head-up display system in the embodiment;
FIG. 2 is a flowchart of determining the objects to be enhanced for display in the embodiment;
FIG. 3 is a flowchart of determining the display position of the virtual reality head-up display device in the embodiment;
FIG. 4 is a flowchart of automatically turning on the night vision function of the virtual reality head-up display device at night in the embodiment.
Detailed Description
The invention will be further described with reference to the drawings and examples.
Embodiment:
The embodiment provides a night vision augmented virtual reality head-up display system, which comprises a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle end information processing unit, a cloud information processing unit, a navigation device, an experience driving computer and virtual reality head-up display equipment;
The vehicle driving environment data collection unit comprises a laser radar and/or an imaging radar for sensing the azimuth and the angle of static and moving objects around the vehicle relative to the vehicle, a camera and/or a radar for acquiring and detecting road environment around the vehicle and obstacles around the vehicle;
The vehicle calibration data collection unit comprises a driver monitoring camera, a rainfall environment light sensor and a vehicle size and body height data module, wherein the driver monitoring camera is used for detecting and acquiring eye actions and sight attention points of a driver, and the rainfall environment light sensor is used for measuring current environment illumination intensity and rainfall;
The vehicle-end information processing unit is used for receiving signals from the vehicle driving environment data collection unit and the vehicle calibration data collection unit, iterating an artificial intelligence algorithm on the received signals, reconstructing the surrounding environment of the vehicle, and displaying the reconstructed scene, from the driver's viewing angle, through the virtual reality head-up display device in the air along the line connecting the driver's eyes and the road environment elements;
The cloud information processing unit is used for receiving information from the vehicle driving environment data collection unit, comparing the received environment information for the corresponding road section with historical data according to the vehicle's driving position, updating the road-section environment information in real time, and sending it through a low-latency mobile network to navigation-software users in non-intelligent vehicles and on mobile terminals located behind the vehicle. The cloud information processing unit is further used, on receiving an infrared image of a road environment that affects driving, for sending prompt information through the low-latency mobile network to those same users; if an environmental factor affecting driving persists at a given road location, the infrared image is sent continuously to approaching users until the environmental condition improves.
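The "keep sending until the environmental condition improves" behaviour of the cloud information processing unit can be sketched as a simple polling loop (all callback names here are hypothetical stand-ins for the real cloud services):

```python
import time

def relay_road_hazard(hazard_persists, approaching_users, send, poll_s=1.0):
    """Cloud-side sketch: while an environmental factor that affects driving
    persists at a road location, keep pushing the infrared image / prompt
    to navigation-software users approaching that location."""
    while hazard_persists():
        for user in approaching_users():
            send(user)               # push prompt + infrared image
        if poll_s:
            time.sleep(poll_s)       # re-check the road condition
```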
The navigation device is used for processing data of navigation satellites, inertial navigation chips, wheel rotating speed signals and high-precision maps and accurately positioning the movement direction and the position of the vehicle;
the experience driving computer is used for receiving information of the vehicle-end information processing unit and the cloud information processing unit, and is connected with the virtual reality head-up display device and a display screen on the vehicle to output audio and video information.
As shown in fig. 1, two vehicle-end information processing units are provided in this embodiment. Both acquire and transmit information in the same way and can communicate with each other, with one serving as a standby so that use of the system is not affected if the other is damaged.
Further, the system also comprises an information transfer unit for transferring information between the vehicle driving environment data collection unit and the cloud information processing unit. The transferred information includes: pictures captured by the vehicle-mounted infrared camera, the angle and height of the camera, and obstacle information detected by the vehicle-mounted surround-view cameras, laser radar, and millimeter-wave radar. Specifically, when the vehicle is temporarily in a severe environment such as night, strong light, rain, fog, smoke, haze, or dust, the user can leave the dangerous road section with the help of the infrared-imaging-enhanced head-up display system, and the vehicle-end information transfer unit transmits information such as the vehicle position, the driving path, and the collected infrared image data to surrounding vehicles and the cloud server.
Further, the vehicle driving environment data collection unit uses a laser radar and a 4D imaging radar to sense the bearing and angle of static and moving objects around the vehicle relative to the vehicle, a rear radar to detect whether there is an obstacle behind the vehicle, corner radars to detect whether there are obstacles at the four corners of the vehicle, and ultrasonic radars to detect whether there is an obstacle within 5 meters and estimate its distance; near-, medium-, and long-range front-view cameras capture environment images at different distances, used by the intelligent driving assistance system to identify objects around the vehicle; surround-view cameras observe the road and environment around the vehicle; and an infrared imaging camera senses an infrared image of the surrounding environment. The laser radar and the 4D imaging radar are mounted on the front bumper or the roof and generate environmental point cloud data by transmitting electromagnetic waves and receiving the reflected echoes; the corner radars are mounted around the vehicle, the rear radar at the tail of the vehicle, the front-view cameras behind the windshield in the area cleaned by the wipers, the surround-view cameras around the vehicle, and the infrared imaging camera on the front bumper or grille.
Further, the vehicle calibration data collection unit further comprises a chassis height sensor for measuring the distance between the vehicle body and the ground, and the vehicle calibration data collection unit is in information transmission with the vehicle end information processing unit through the area controller.
A vehicle environmental field of view enhancement method based on a night vision augmented virtual reality head-up display system as described above, comprising the steps of:
S1, after a vehicle is started, synchronizing time with a mobile network server, and judging whether the vehicle is currently at night;
S2, providing the environmental point cloud data, the images, the detected obstacle distance and the angle information acquired by the sensors to a vehicle-end information processing unit, and providing the feedback value of a rainfall environmental light sensor to a regional controller;
S3, determining through a deep learning neural network whether the current images and measured values correspond to a scene affecting the driving field of view; if it is determined that driving visibility is affected, executing S4; if not, executing S5;
S4, iterating an artificial intelligence algorithm on the environment data and calibration data collected by the vehicle-end information processing unit and/or the cloud information processing unit, reconstructing the elements of the vehicle's surrounding environment in real time, and determining the environment elements to be enhanced for display; while reconstructing the surrounding elements, synchronously detecting the driver's line of sight, determining the display position of the reconstructed road-surroundings elements according to that line of sight, and then displaying the reconstructed image of the vehicle's surroundings on the virtual reality head-up display device or the vehicle display screen at the determined position;
S5, ending.
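One pass of S1–S5 can be summarised as a single decision function (the arguments below stand in for the real sensor units and models; all names are illustrative):

```python
def enhancement_pass(frames, radar_data, rain_light, affects_view,
                     reconstruct, driver_gaze, display):
    """S3: a deep-learning classifier decides whether the current images
    and sensor values correspond to a scene that affects the driving view.
    S4: if so, the surrounding environment is reconstructed and displayed
    at a position determined by the driver's line of sight.
    S5: otherwise, end without enhancement."""
    if not affects_view(frames, rain_light):     # S3
        return None                              # S5: nothing to enhance
    elements = reconstruct(frames, radar_data)   # S4: rebuild environment
    return display(elements, driver_gaze())      # S4: position by gaze
```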
Further, if the current vehicle is a non-intelligent vehicle, the navigation software runs on the vehicle-mounted terminal or mobile terminal, obtains the surrounding environment image or environment element information from the cloud information processing unit according to the terminal's real-time position, and displays the reconstructed image of the vehicle's surroundings and the environment element information on the terminal's display device; if the current vehicle is an intelligent vehicle running the navigation software, its perception of the environment can likewise be enhanced by acquiring surrounding environment images or environment element information from the cloud information processing unit.
A vehicle with infrared imaging and intelligent driving assistance functions can exchange information with vehicles or road users lacking those functions either through the cloud information processing unit or the navigation software server connected over 4G or 5G, or by directly establishing point-to-point communication with surrounding vehicles via 5G, Wi-Fi, or Bluetooth.
Communication between a vehicle with infrared imaging and intelligent driving assistance functions and an intelligent traffic infrastructure is established through a V2X technology or a cloud information processing unit.
Further, in S3, the scenes affecting the driving field of view include weak light, strong light, rain, fog, smoke, haze, and dust environments, as well as pedestrian, motor vehicle, non-motor vehicle, animal, non-standard obstacle, deliberately placed roadblock, and road damage scenes.
As shown in fig. 2, if obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, deliberately placed roadblocks, or road damage exist around the vehicle, the obstacle and environment elements to be enhanced for display are determined as follows:
Step a, calling an image acquired by a vehicle end information processing unit, performing scene superposition with a virtual reality head-up display device, and displaying a real-time environment image of a vehicle according to the scene superposition;
step b, judging from the displayed real-time environment image whether the surroundings of the vehicle are free of obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, deliberately placed roadblocks, or road damage; if so, executing step e; if not, executing step c;
step c, judging whether the current track of the vehicle possibly intersects with the obstacle, if so, executing step d, and if not, executing step e;
Step d, displaying the current obstacle category and spatial-position prompt information on the virtual reality head-up display device, and, according to the current user prompt-tone setting, either sounding a prompt tone or directly executing step e;
And e, ending.
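Following the flow of Fig. 2, steps b–e form a short decision chain: an enhancement is shown only when an obstacle exists and the vehicle's current trajectory may intersect it. A sketch of that chain (the return values are illustrative labels, not part of the patent):

```python
def obstacle_prompt(obstacle_present, trajectory_may_intersect, chime_on):
    """Steps b-e: no obstacle, or no possible intersection with the
    vehicle's current trajectory, ends the flow (step e); otherwise the
    obstacle category and spatial position are displayed on the HUD,
    with or without a prompt tone depending on the user setting (step d)."""
    if not obstacle_present:          # step b -> step e
        return "end"
    if not trajectory_may_intersect:  # step c -> step e
        return "end"
    return "display+chime" if chime_on else "display"  # step d
```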
As shown in fig. 3, in S4, the display position of the reconstructed vehicle-environment imagery is determined as follows: step I, acquiring an image through the driver monitoring camera and outputting the position of the driver's eyes in the image; step II, obtaining the distance and angle of the obstacle relative to the vehicle through the laser radar and millimeter-wave radar; step III, extracting contour feature lines of the obstacle from the images of the front-view camera and the infrared camera; step IV, identifying the type of the obstacle through a visual deep-learning neural network model; step V, acquiring the vehicle height value through the vehicle height sensor; step VI, calculating, according to optical geometry, the imaging angle and position at which the virtual reality head-up display device places the image on the straight line between the human eyes and the obstacle, and calculating the reduction or enlargement of the object contour according to the distance; and step VII, displaying the contour of the obstacle on the virtual reality head-up display device and prompting the obstacle's type and distance.
As shown in fig. 4, in S1, when it is determined that the vehicle is in a night driving state according to the current time, the night vision function is automatically turned on by: step i, acquiring point cloud data through a laser radar and a 4D imaging radar; step ii, the front view camera and the peripheral view camera acquire images and provide the images for the vehicle-end information processing unit, and the rainfall ambient light sensor feeds back measured values to the area controller; step iii, establishing an image-based 3D model and a radar-based 3D model; step iv, comparing whether each object in the radar-identified 3D model can be found in the 3D model of the image, if so, entering step v, and if not, executing step vii; step v, judging whether the distance of the object in the 3D model is smaller than the maximum value of the visual recognition capability and larger than the minimum value of the visual recognition capability, if so, entering a step vi, and if not, executing a step vii; and vi, starting an infrared camera and starting a night vision enhancement function of the virtual reality head-up display device.
Finally, it should be noted that the above embodiments are only for illustrating the technical solution of the present invention and not for limiting the technical solution, and although the applicant has described the present invention in detail with reference to the preferred embodiments, it should be understood by those skilled in the art that modifications and equivalents of the technical solution of the present invention can be made without departing from the spirit and scope of the technical solution, and all such modifications and equivalents are intended to be encompassed in the scope of the claims of the present invention.

Claims (10)

1. The night vision augmented virtual reality head-up display system is characterized by comprising a vehicle driving environment data collection unit, a vehicle calibration data collection unit, a vehicle end information processing unit, a cloud information processing unit, an experience driving computer and virtual reality head-up display equipment;
The vehicle driving environment data collection unit comprises a laser radar and/or an imaging radar for sensing the azimuth and the angle of static and moving objects around the vehicle relative to the vehicle, a camera and/or a radar for acquiring and detecting road environment around the vehicle and obstacles around the vehicle;
The vehicle calibration data collection unit comprises a driver monitoring camera, a rainfall environment light sensor and a vehicle size and body height data module, wherein the driver monitoring camera is used for detecting and acquiring eye actions and sight attention points of a driver, and the rainfall environment light sensor is used for measuring current environment illumination intensity and rainfall;
The vehicle end information processing unit is used for receiving signals from the vehicle driving environment data collection unit and the vehicle calibration data collection unit, iterating an artificial intelligence algorithm on the received signals, reconstructing the surrounding environment of the vehicle, and displaying the reconstructed scene, from the driver's viewing angle, through the virtual reality head-up display device in the air along the line connecting the driver's eyes and the road environment elements;
The cloud information processing unit is used for receiving information of the vehicle driving environment data unit, comparing the received corresponding road section environment information with historical data according to the vehicle driving position, updating the road section environment information in real time, and sending the information to a non-intelligent vehicle navigation software user and a mobile terminal navigation software user positioned behind the vehicle through a low-delay mobile network;
the experience driving computer is used for receiving information of the vehicle-end information processing unit and the cloud information processing unit, and is connected with the virtual reality head-up display device and a display screen on the vehicle to output audio and video information.
2. The head-up display system with night vision augmented reality according to claim 1, further comprising an information transfer unit for transferring information between the vehicle driving environment data collection unit data and the cloud information processing unit, the surrounding mobile terminal or the vehicle.
3. The night vision augmented virtual reality head-up display system according to claim 1 or 2, wherein the vehicle driving environment data collection unit uses a laser radar and a 4D imaging millimeter-wave radar to sense the bearing and angle of static and moving objects around the vehicle relative to the vehicle, a rear radar to detect whether there is an obstacle behind the vehicle, corner radars to detect whether there are obstacles at the four corners of the vehicle, and ultrasonic radars to detect whether there is an obstacle within 5 meters and estimate the distance of nearby obstacles; medium- and long-range front-view cameras collect environment images at different distances, used by the intelligent driving assistance system to identify objects around the vehicle; surround-view cameras collect images of the road and environment around the vehicle; an infrared imaging camera collects infrared images of the surrounding environment; the laser radar and the 4D imaging radar are mounted on the front bumper or the roof and generate environmental point cloud data by transmitting electromagnetic waves and receiving the reflected echoes; and the corner radars are mounted around the vehicle, the rear radar at the tail of the vehicle, the front-view cameras behind the windshield in the area cleaned by the wipers, the surround-view cameras around the vehicle, and the infrared imaging camera on the front bumper or grille.
4. The head-up display system with night vision augmented virtual reality according to claim 3, wherein the vehicle calibration data collection unit further comprises a chassis height sensor for measuring a distance between a vehicle body and a ground, and the vehicle calibration data collection unit is in information communication with the vehicle end information processing unit through the zone controller.
5. A vehicle environmental field of view enhancement method with night vision augmented virtual reality head-up display system according to claim 4, comprising the steps of:
S1, after a vehicle is started, synchronizing time with a mobile network server, and judging whether the vehicle is currently at night;
S2, providing the environmental point cloud data, the images, the detected obstacle distance and the angle information acquired by the sensors to a vehicle-end information processing unit, and providing the feedback value of a rainfall environmental light sensor to a regional controller;
s3, comparing whether the current image and the numerical value accord with the scene affecting the driving vision or not through a deep learning neural network; if it is determined that the driving view condition is affected, S4 is executed, and if it is determined that the driving view environment is not affected, S5 is executed;
S4, iterating an artificial intelligence algorithm on the environment data and calibration data collected by the vehicle-end information processing unit and/or the cloud information processing unit, reconstructing the elements of the vehicle's surrounding environment in real time, and determining the environment elements to be enhanced for display; while reconstructing the surrounding elements, synchronously detecting the driver's line of sight, determining the display position of the reconstructed road-surroundings elements according to that line of sight, and then displaying the reconstructed image of the vehicle's surroundings on the virtual reality head-up display device or the vehicle display screen at the determined position;
S5, ending.
6. The vehicle environment visual field enhancement method with the night vision augmented virtual reality head-up display system according to claim 5, wherein if the current vehicle is a non-intelligent vehicle, navigation software is run on the vehicle-mounted terminal or the mobile terminal, the surrounding environment image or the environment element information is obtained from the cloud information processing unit according to the real-time position of the current vehicle-mounted terminal or the mobile terminal, and the reconstructed vehicle surrounding environment image and the reconstructed environment element information are displayed through the display device of the current vehicle-mounted terminal or the display device of the mobile terminal; if the current vehicle is an intelligent vehicle navigation software user, the perception of the current vehicle to the environment can be enhanced by acquiring surrounding environment images or environment element information from the cloud information processing unit.
7. The method for enhancing the visual field of a vehicle environment with a night vision augmented virtual reality head-up display system according to claim 5, wherein in S3, the scenes affecting the driving field of view include weak light, strong light, rain, fog, smoke, haze, and dust environments, as well as pedestrian, motor vehicle, non-motor vehicle, animal, non-standard obstacle, deliberately placed roadblock, and road damage scenes.
8. The method for enhancing the field of view of a vehicle environment with a night vision augmented virtual reality head-up display system according to claim 7, wherein if obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, deliberately placed roadblocks, or road damage exist around the vehicle, the obstacle and environment elements to be enhanced for display are determined by:
Step a, calling an image acquired by a vehicle end information processing unit, performing scene superposition with a virtual reality head-up display device, and displaying a real-time environment image of a vehicle according to the scene superposition;
step b, judging from the displayed real-time environment image whether the surroundings of the vehicle are free of obstacle scenes such as pedestrians, motor vehicles, non-motor vehicles, animals, non-standard obstacles, deliberately placed roadblocks, or road damage; if so, executing step e; if not, executing step c;
step c, judging whether the current track of the vehicle possibly intersects with the obstacle, if so, executing step d, and if not, executing step e;
Step d, displaying the current obstacle category and spatial-position prompt information on the virtual reality head-up display device, and, according to the current user prompt-tone setting, either sounding a prompt tone or directly executing step e;
And e, ending.
9. The method for vehicle environment view enhancement with night vision augmented virtual reality head-up display system according to claim 5, wherein in S4, the display position of the reconstructed vehicle-environment imagery is determined by: step I, acquiring an image through the driver monitoring camera and outputting the position of the driver's eyes in the image; step II, obtaining the distance and angle of the obstacle relative to the vehicle through the laser radar and millimeter-wave radar; step III, extracting contour feature lines of the obstacle from the images of the front-view camera and the infrared camera; step IV, identifying the type of the obstacle through a visual deep-learning neural network model; step V, acquiring the vehicle height value through the vehicle height sensor; step VI, calculating, according to optical geometry, the imaging angle and position at which the virtual reality head-up display device places the image on the straight line between the human eyes and the obstacle, and calculating the reduction or enlargement of the object contour according to the distance; and step VII, displaying the contour of the obstacle on the virtual reality head-up display device and prompting the obstacle's type and distance.
10. The method for enhancing the visual field of a vehicle environment with a night vision augmented virtual reality head-up display system according to claim 5, wherein in S1, when it is determined that the vehicle is in a night driving state according to the current time, the night vision function is automatically turned on by: step i, acquiring point cloud data through a laser radar and a 4D imaging radar; step ii, the front view camera and the peripheral view camera acquire images and provide the images for the vehicle-end information processing unit, and the rainfall ambient light sensor feeds back measured values to the area controller; step iii, establishing an image-based 3D model and a radar-based 3D model; step iv, comparing whether each object in the radar-identified 3D model can be found in the 3D model of the image, if so, entering step v, and if not, executing step vii; step v, judging whether the distance of the object in the 3D model is smaller than the maximum value of the visual recognition capability and larger than the minimum value of the visual recognition capability, if so, entering a step vi, and if not, executing a step vii; and vi, starting an infrared camera and starting a night vision enhancement function of the virtual reality head-up display device.
CN202111644173.3A 2021-12-29 2021-12-29 System and method for enhancing virtual reality head-up display with night vision Active CN114228491B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111644173.3A CN114228491B (en) 2021-12-29 2021-12-29 System and method for enhancing virtual reality head-up display with night vision


Publications (2)

Publication Number Publication Date
CN114228491A CN114228491A (en) 2022-03-25
CN114228491B true CN114228491B (en) 2024-05-14

Family

ID=80744412

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111644173.3A Active CN114228491B (en) 2021-12-29 2021-12-29 System and method for enhancing virtual reality head-up display with night vision

Country Status (1)

Country Link
CN (1) CN114228491B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117351756A (en) * 2022-06-29 2024-01-05 中兴通讯股份有限公司 Visual field enhancement method, electronic device, and storage medium
CN115166978B (en) * 2022-07-21 2023-06-16 重庆长安汽车股份有限公司 Display lens, system, method and medium of head-up display system
CN116645830B (en) * 2022-09-26 2024-02-13 深圳海冰科技有限公司 Vision enhancement system for assisting vehicle in night curve
CN115952570A (en) * 2023-02-07 2023-04-11 江苏泽景汽车电子股份有限公司 HUD simulation method and device and computer readable storage medium
CN116409331B (en) * 2023-04-14 2023-09-19 南京海汇装备科技有限公司 Data analysis processing system and method based on intelligent photoelectric sensing technology

Citations (2)

Publication number Priority date Publication date Assignee Title
CN105654753A (*) 2016-01-08 2016-06-08 Beijing Lejia Technology Co Ltd Intelligent vehicle-mounted safe driving assistance method and system
CN113525234A (*) 2021-07-26 2021-10-22 Beijing Institute of Computer Technology and Application Auxiliary driving system device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10023118B2 (en) * 2015-03-23 2018-07-17 Magna Electronics Inc. Vehicle vision system with thermal sensor

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105654753A (*) 2016-01-08 2016-06-08 Beijing Lejia Technology Co Ltd Intelligent vehicle-mounted safe driving assistance method and system
CN113525234A (*) 2021-07-26 2021-10-22 Beijing Institute of Computer Technology and Application Auxiliary driving system device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design of an intelligent vehicle driver-assistance system based on deep learning; Zou Peng; Chen Yuzhang; Cai Bihan; Information & Computer (Theoretical Edition); 2019-06-15 (11); full text *

Also Published As

Publication number Publication date
CN114228491A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN114228491B (en) System and method for enhancing virtual reality head-up display with night vision
AU2021200258B2 (en) Multiple operating modes to expand dynamic range
US10782405B2 (en) Radar for vehicle and vehicle provided therewith
JP6819680B2 (en) Imaging control devices and methods, and vehicles
KR101949358B1 (en) Apparatus for providing around view and Vehicle including the same
EP1961613B1 (en) Driving support method and driving support device
KR102043060B1 (en) Autonomous drive apparatus and vehicle including the same
US9267808B2 (en) Visual guidance system
US20130021453A1 (en) Autostereoscopic rear-view display system for vehicles
US20190135169A1 (en) Vehicle communication system using projected light
CN111221342A (en) Environment sensing system for automatic driving automobile
CN113085896B (en) Auxiliary automatic driving system and method for modern rail cleaning vehicle
CN113706883B (en) Tunnel section safe driving system and method
CN114492679B (en) Vehicle data processing method and device, electronic equipment and medium
CN111086451B (en) Head-up display system, display method and automobile
CN116256747A (en) Electric automobile environment sensing system and method thereof
CN113246859B (en) Electronic rearview mirror with driving auxiliary system warning function
Wu et al. A vision-based collision warning system by surrounding vehicles detection
KR101872477B1 (en) Vehicle
CN209833499U (en) 360-degree panoramic auxiliary visual system suitable for special vehicle
CN113232586A (en) Infrared pedestrian projection display method and system for driving at night
CN211032395U (en) Autonomous vehicle
CN117714654A (en) Projection system based on vehicle and vehicle
CN115489514A (en) Method and system for improving parking space recognition rate and parking capacity in dark light environment
KR20160144645A (en) Apparatus for prividing around view and vehicle including the same

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant