CN116068765A - Visual field blind area display method, device, equipment, vehicle and storage medium - Google Patents

Visual field blind area display method, device, equipment, vehicle and storage medium Download PDF

Info

Publication number
CN116068765A
Authority
CN
China
Prior art keywords
pose
preset
blind area
visual field
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211515488.2A
Other languages
Chinese (zh)
Inventor
郭芷铭
施喆晗
马然
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ningbo Geely Automobile Research and Development Co Ltd
Original Assignee
Ningbo Geely Automobile Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ningbo Geely Automobile Research and Development Co Ltd filed Critical Ningbo Geely Automobile Research and Development Co Ltd
Priority to CN202211515488.2A priority Critical patent/CN116068765A/en
Publication of CN116068765A publication Critical patent/CN116068765A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0123Head-up displays characterised by optical features comprising devices increasing the field of view
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/014Head-up displays characterised by optical features comprising information/image processing systems
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0141Head-up displays characterised by optical features characterised by the informative content of the display
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • G02B2027/0178Eyeglass type

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Fittings On The Vehicle Exterior For Carrying Loads, And Devices For Holding Or Mounting Articles (AREA)

Abstract

The invention discloses a visual field blind area display method, device, equipment, vehicle and storage medium. First, the current pose of a virtual display device worn by the current driving object in the cabin is determined. Second, pose change data of the virtual display device between the current pose and a preset reference pose is determined, and the reference visual field blind area is adjusted using the pose change data to obtain a target visual field blind area corresponding to the current pose. Finally, the live-action data in the target visual field blind area corresponding to the current pose is displayed on the virtual display device. With this visual field blind area display method, the live-action data in the target visual field blind area can be displayed on the virtual display device, so that the driving object can view the external live-action data hidden by the visual field blind area while driving, without needing to look down, improving the safety of the vehicle while driving.

Description

Visual field blind area display method, device, equipment, vehicle and storage medium
Technical Field
The invention relates to the technical field of virtual reality display, and in particular to a visual field blind area display method, device, equipment, vehicle and storage medium.
Background
As the number of vehicles increases, driving road conditions become more and more complex. The driving object needs more information about the vehicle's surroundings to reduce the probability of traffic accidents, but blind zones of the vehicle occlude part of the driving object's observation area, forming a visual field blind area; the live-action data of the occluded area therefore needs to be displayed to ensure driving safety.
In the related art, live-action data of the environment around the vehicle is captured by a 360-degree panoramic camera and displayed on the central control screen. However, displaying the captured live-action data on the central control screen requires the driving object to look down, which poses a certain safety hazard, so the visual field blind area display methods in the related art need improvement.
Disclosure of Invention
The embodiments of the present specification aim to solve, at least to some extent, one of the technical problems in the related art. Therefore, a first object of the embodiments of the present specification is to provide a visual field blind area display method, apparatus, device, vehicle and storage medium.
The embodiment of the specification provides a method for displaying a blind area of a visual field, which comprises the following steps:
determining the current pose of a virtual display device worn by the current driving object in the cabin, wherein pose change data of the virtual display device exists between the current pose and a preset reference pose; and
displaying live-action data in a target visual field blind area corresponding to the current pose on the virtual display device, wherein the target visual field blind area is obtained by adjusting a reference visual field blind area according to the pose change data; the reference visual field blind area is the visual field blind area generated when a driving object in the cabin observes the environment outside the cabin in a preset sitting posture; and the virtual display device worn by the driving object in the preset sitting posture has the preset reference pose.
An embodiment of the present specification provides a visual field blind area display device, including:
a pose data determining module, used for determining the current pose of a virtual display device worn by the current driving object in the cabin, wherein pose change data of the virtual display device exists between the current pose and a preset reference pose; and
a visual field blind area display module, used for displaying live-action data in a target visual field blind area corresponding to the current pose on the virtual display device, wherein the target visual field blind area is obtained by adjusting a reference visual field blind area according to the pose change data; the reference visual field blind area is the visual field blind area generated when a driving object in the cabin observes the environment outside the cabin in a preset sitting posture; and the virtual display device worn by the driving object in the preset sitting posture has the preset reference pose.
An embodiment of the present specification provides a visual field blind area display apparatus, including: a memory, and one or more processors communicatively coupled to the memory; the memory stores instructions executable by the one or more processors to cause the one or more processors to implement the steps of the method of any of the embodiments described above.
An embodiment of the present specification provides a computer-readable storage medium on which a computer program is stored which, when executed by a processor, implements the steps of the method according to any of the above embodiments.
An embodiment of the present specification provides a computer program product comprising instructions which, when executed by a processor of a computer device, enable the computer device to perform the steps of the method of any one of the embodiments described above.
In the above-described embodiments, first, the current pose of the virtual display device worn by the current driving object in the cabin is determined; second, pose change data of the virtual display device between the current pose and a preset reference pose is determined, and the reference visual field blind area is adjusted using the pose change data to obtain a target visual field blind area corresponding to the current pose; finally, the live-action data in the target visual field blind area corresponding to the current pose is displayed on the virtual display device. With this visual field blind area display method, the driving object can view the external live-action data occluded by the visual field blind area while driving, without needing to look down, improving the safety of the vehicle while driving.
Drawings
Fig. 1a is a schematic diagram of an application scenario of a view blind area display method in an embodiment of the present disclosure;
fig. 1b is a schematic flow chart of a visual field blind area display method in an embodiment of the present disclosure;
FIG. 1c is a schematic diagram of an occluding object that at least partially covers the visual field in an embodiment of the present disclosure;
FIG. 1d is a schematic diagram of making the occluding object transparent in an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of determining a target field of view blind area in an embodiment of the present disclosure;
FIG. 3 is a schematic flow chart of determining relationship data between pose changes and visual field blind area differences in an embodiment of the present disclosure;
fig. 4 is a schematic flow chart of determining a reference field of view blind area in the embodiment of the present specification;
fig. 5 is a schematic flow chart of determining a preset reference pose in the embodiment of the present disclosure;
FIG. 6a is a schematic flow chart of determining relationship data between the eye pose and the visual field blind area in an embodiment of the present disclosure;
fig. 6b is a schematic diagram of an eye pose and a blind area of a field of view corresponding to the eye pose in the embodiment of the present disclosure;
FIG. 7a is a schematic flow chart of determining an initial blind area in an embodiment of the present disclosure;
fig. 7b is a schematic diagram of determining a shooting range in the embodiment of the present specification;
FIG. 7c is a schematic diagram of determining an initial blind spot in an embodiment of the present disclosure;
FIG. 8a is a schematic diagram of hardware in an embodiment of the present disclosure;
fig. 8b is a schematic flow chart of a method for displaying a blind spot in a view according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of a blind spot display device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are illustrative and intended to explain the present invention and should not be construed as limiting the invention.
Since a visual field blind area generally increases the probability of traffic accidents, it is necessary to display the live-action data in the visual field blind area. In the related art, the live-action data of the environment outside the vehicle may be captured by a 360-degree panoramic camera and displayed on the central control screen; a display screen may be set at a specified position of the vehicle using flexible screen technology to display the live-action data captured by the 360-degree panoramic camera; or the light path may be changed through the optical reflection principle of an optical waveguide, so that the blind zone of the vehicle appears transparent.
However, in the related art, the 360-degree panoramic camera approach cannot make the live-action data occluded behind the blind zone of the vehicle show through it, and displaying the captured live-action data on the central control screen requires the driving object to look down, which easily distracts the user and prevents watching the situation in front of the vehicle at the same time. Adding a flexible display screen to the vehicle body damages the body structure, affects its sturdiness, and also affects the crash safety factor of the vehicle. The approach of changing the light path through the optical reflection principle of an optical waveguide is immature and has no obvious effect. In addition, both adding a flexible display screen to the vehicle body and changing the light path based on the optical reflection principle of an optical waveguide are costly and cannot be mass-produced and installed in vehicles.
In addition, as AR glasses become popular as a wearable smart device, more and more devices can interconnect with them. As the weight, battery life and performance of AR glasses improve, they can judge the state of the user by tracking the eye gaze trajectory, and can show the real environment and a virtual image at the same time. AR glasses are beginning to be applied in vehicles, where they can be used to display live-action video of the visual field blind area. Through analysis, the inventors found that, on the one hand, displaying the live scene of the visual field blind area with AR glasses in the related art places a high demand on chip computing power; on the other hand, the driver's head moves during driving, and when it moves the driver's visual field blind area changes, whereas the live-action area displayed by the AR glasses in the related art is fixed and does not change with the movement of the driver's head, so the fixed live-action area obviously does not match the actual situation. Therefore, the embodiments of the present specification provide a visual field blind area display method which not only reduces the demand on chip computing power, but also updates the live-action area displayed by the AR glasses as the driver's head moves or even as the eyeballs rotate.
Specifically, the embodiments of the present specification provide a visual field blind area display method: first, the current pose of a virtual display device worn by the current driving object in the cabin is determined; second, pose change data of the virtual display device between the current pose and a preset reference pose is determined, and the reference visual field blind area is adjusted using the pose change data to obtain a target visual field blind area corresponding to the current pose; finally, the live-action data in the target visual field blind area corresponding to the current pose is displayed on the virtual display device. In this embodiment, since the reference visual field blind area is predetermined, only the current pose and the pose change data between the current pose and the preset reference pose need to be calculated, which reduces the demand on chip computing power. Further, the reference visual field blind area is adjusted using the pose change data, so that the live-action data displayed by the virtual display device can be updated as the driver's head moves or even as the eyeballs rotate, meeting the needs of actual application scenarios.
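The three-step flow described above can be sketched in simplified form. All names and numeric values below (the reference pose, the blind area modelled as a one-dimensional angular interval, the 1:1 mapping of lateral pose change to angular shift) are illustrative assumptions for the sketch, not data or APIs from the patent:

```python
# Hypothetical sketch of the three-step flow: pose -> pose change -> adjusted blind area.
REFERENCE_POSE = (0.0, 0.0, 0.0)          # preset reference pose (assumed to be the origin)
REFERENCE_BLIND_AREA = (-30.0, 30.0)      # reference blind area as an angular range in degrees (assumed)

def pose_change(current_pose, reference_pose=REFERENCE_POSE):
    """Step 2: pose change data = current pose minus the preset reference pose."""
    return tuple(c - r for c, r in zip(current_pose, reference_pose))

def adjust_blind_area(reference_area, delta):
    """Step 3: shift the reference blind area by a field-of-view difference.

    The blind area is modelled as a 1-D angular interval and the first
    pose-change component is mapped 1:1 to an angular shift -- a
    deliberately simplified assumption for illustration."""
    shift = delta[0]
    return (reference_area[0] + shift, reference_area[1] + shift)

current = (5.0, 0.0, 0.0)                 # step 1: measured device pose (example values)
target = adjust_blind_area(REFERENCE_BLIND_AREA, pose_change(current))
print(target)                             # target blind area for the current pose
```

Because only the pose difference is computed at run time, the expensive determination of the reference blind area stays offline, which is the source of the computing-power saving claimed above.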
Referring to fig. 1a, the visual field blind area display method provided in the embodiments of the present disclosure may be applied to the vehicle 110 in fig. 1a, or to other devices having a vehicle control function (such as the cloud server 120 or the mobile phone terminal 130 in fig. 1a). The blind zone of the vehicle may be the vehicle A-pillar 140 in fig. 1a, and the virtual display device may be the AR glasses 150 worn by the driver in fig. 1a. The vehicle may be an autonomous vehicle with partial or full autonomous driving functions; that is, with reference to the classification standard of the Society of Automotive Engineers (SAE), the level of autonomy of the vehicle may be no automation (L0), driver assistance (L1), partial automation (L2), conditional automation (L3), high automation (L4) or full automation (L5). The vehicle or other device may implement the visual field blind area display method through its components (including hardware and software).
It is understood that the vehicle may be a car, truck, motorcycle, bus, recreational vehicle, amusement park vehicle, construction equipment (such as an engineering vehicle), electric car, golf cart, train, trolley, etc., each of which has blind zones of its own.
Referring to fig. 1b, the embodiment of the present disclosure provides a method for displaying a blind area of a visual field, which may include the following steps:
s110, determining the current pose of the virtual display device worn by the current driving object in the cabin.
Pose change data of the virtual display device exists between the current pose and the preset reference pose. The pose change data may be understood as the pose difference data between the current pose of the virtual display device and its preset reference pose; in other words, taking the preset reference pose of the virtual display device as the basis, the pose change of the current pose relative to the preset reference pose is recorded as the pose change data. The driving object may be a driver, and wears the virtual display device upon entering the cabin. The current pose may be the position and attitude of the virtual display device worn by the driving object in the vehicle coordinate system. The virtual display device may be a smart wearable device such as a head-mounted or arm-worn device; for example, the virtual display device may be AR glasses.
Specifically, the current driving object enters the cabin and sits on the cabin seat, wearing a virtual display device. In some embodiments, the current pose of the virtual display device may be determined based on the pose of the eyes of the current driving object in the current sitting position, the relative position between the virtual display device and the eyes of the current driving object. In other embodiments, a TOF sensor may also be installed in the cabin, by which the current pose of the virtual display device is measured.
In some embodiments, the eye pose of the driving object in the current sitting posture can be obtained through an object monitoring device and denoted (X1, Y1, Z1). When the driving object wears the virtual display device, there is a relative position between the virtual display device and the eyes of the driving object; from this relative position data and the eye pose (X1, Y1, Z1), the pose of the virtual display device worn by the driving object can be determined, denoted (X2, Y2, Z2). Through three-dimensional coordinate conversion, (X2, Y2, Z2) is transformed into the vehicle coordinate system, and the converted pose is denoted (X3, Y3, Z3); (X3, Y3, Z3) is the current pose of the virtual display device worn by the current driving object. As an example, the vehicle coordinate system may take the upward direction of the front wheel hub as the Z axis, the leftward direction of the front wheel hub as the X axis, and the rearward direction of the front wheel hub as the Y axis.
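The coordinate chain described above can be illustrated with a minimal sketch. The wearer offset and the cabin-to-vehicle translation are assumed example calibration values (the patent does not specify them), and the two frames are assumed to share orientation so the conversion reduces to translations:

```python
# Sketch of: eye pose (X1, Y1, Z1) -> device pose (X2, Y2, Z2) -> vehicle frame (X3, Y3, Z3).

def add(a, b):
    """Component-wise addition of two 3-D points."""
    return tuple(x + y for x, y in zip(a, b))

eye_pose = (0.40, 0.10, 1.20)            # (X1, Y1, Z1): eyes in the cabin frame (example)
device_offset = (0.00, 0.05, 0.03)       # glasses sit slightly ahead of / above the eyes (assumed)

device_pose_cabin = add(eye_pose, device_offset)   # (X2, Y2, Z2)

# Conversion into the vehicle coordinate system; with shared orientation this
# is just a translation by the cabin origin expressed in the vehicle frame.
cabin_origin_in_vehicle = (1.50, 0.30, 0.00)       # assumed calibration data
device_pose_vehicle = add(device_pose_cabin, cabin_origin_in_vehicle)  # (X3, Y3, Z3)
print(device_pose_vehicle)
```

In a real system the conversion would be a full rigid transform (rotation plus translation); the pure-translation form here only keeps the sketch short.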
And S120, displaying the live-action data in the target visual field blind area corresponding to the current pose on the virtual display equipment.
The target visual field blind area is obtained by adjusting a reference visual field blind area according to pose change data, wherein the reference visual field blind area is a visual field blind area generated when a driving object in a cabin observes the external environment of the cabin in a preset sitting posture, and a virtual display device worn by the driving object in the preset sitting posture has a preset reference pose.
The reference visual field blind area is the visual field blind area generated when a driving object in the cabin observes the environment outside the cabin in a preset sitting posture, and the virtual display device worn by the driving object in the preset sitting posture has the preset reference pose. In this embodiment, the preset sitting posture may be a posture in which the driving object sits on the cabin seat in a specified posture with a specified line-of-sight direction. For example, the specified posture may be that the driving object's upper body is straight, or that the driving object's shoulders rest against the cabin seatback; the specified line-of-sight direction may be the driving object's eyes looking straight ahead, or looking rightward at the rear-view mirror. The visual field blind area generated when the driving object sits on the cabin seat in the preset sitting posture and observes the environment outside the cabin is the reference visual field blind area, and the pose of the virtual display device worn by the driving object in the preset sitting posture is recorded as the preset reference pose.
In some cases, as the pose of the driving object on the cabin seat changes, the pose of the virtual display device also changes, the visual field blind area generated when observing the environment outside the cabin changes accordingly, and so the live-action data to be presented on the virtual display device needs to change as well. Specifically, the reference visual field blind area is adjusted according to the pose change data of the virtual display device to obtain the target visual field blind area. An image capture device arranged outside the cabin can capture images of the target visual field blind area to obtain the live-action data within it, and the live-action data in the target visual field blind area is displayed on the virtual display device.
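Capturing only the target blind area rather than a full panorama can be sketched as a crop: given the target blind area as a horizontal angular interval, select the corresponding pixel-column range of a wide external camera frame. The camera field of view and image width below are assumed example values, not specifications from the patent:

```python
# Hedged sketch: map an angular target blind area onto pixel columns of the frame.
CAMERA_FOV = (-90.0, 90.0)   # degrees covered by the external camera (assumed)
IMAGE_WIDTH = 1800           # pixels across that field of view (assumed)

def crop_columns(blind_area, fov=CAMERA_FOV, width=IMAGE_WIDTH):
    """Return the (left, right) pixel-column indices holding the blind area."""
    span = fov[1] - fov[0]
    left = round((blind_area[0] - fov[0]) / span * width)
    right = round((blind_area[1] - fov[0]) / span * width)
    return left, right

print(crop_columns((-25.0, 35.0)))  # columns covering an example target blind area
```

Processing only this crop, instead of the full 360-degree stream, is what reduces the video data and computing load mentioned later in the description.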
In some implementations, the virtual display device may be AR glasses. Since the correspondence between the pose change data of the virtual display device and the visual field range difference data is preset, the visual field range difference data can be determined from the pose change data, and the reference visual field blind area can be adjusted using this difference to obtain the target visual field blind area. The live-action area occluded by the target visual field blind area is image-processed through a perception algorithm and imported into the vehicle spatial coordinate system, and then subjected to image-distortion correction; the live-action area of the target visual field blind area to be displayed on the AR glasses is then stitched with the environment outside the vehicle that is not occluded, where the live-action data in the visual field blind area is the virtual scene. Referring to fig. 1c, the object monitoring device tracks and identifies the pupils of the driving object, so that the connection area M between the eyes 160 of the driving object and the blind zone 170 of the vehicle can be determined. The area M can be regarded as an occluding object whose position can be obtained through the spatial coordinate system in the vehicle; the occluding object in front of the target visual field blind area can be blurred through a blur algorithm so that the area tends toward transparency. Referring to fig. 1d, the effect of making the occluding object transparent can thus be achieved: the area where the driving object's line of sight coincides with the blind zone of the vehicle displays the live action, while the rest of the area displays nothing, which is equivalent to the driving object wearing transparent glasses while driving the vehicle.
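The "make the occluder transparent" effect amounts, per pixel of region M, to mixing the occluder image with the captured live-action image. A minimal sketch, with invented pixel values and blend weight:

```python
# Sketch of alpha-blending an A-pillar pixel with the live-action pixel behind it.

def blend_pixel(occluder_px, live_px, alpha):
    """alpha = 0 keeps the occluder opaque; alpha = 1 shows pure live action."""
    return tuple(round((1 - alpha) * o + alpha * l) for o, l in zip(occluder_px, live_px))

pillar = (40, 40, 45)        # dark A-pillar pixel, RGB (example)
scene = (180, 200, 220)      # live-action pixel behind it (example)
print(blend_pixel(pillar, scene, 0.8))  # mostly transparent pillar
```

A blur algorithm as described above would additionally smooth the edge of region M so that the transition between real view and virtual overlay is not abrupt.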
In the above visual field blind area display method, the current pose of the virtual display device worn by the current driving object in the cabin is determined, and the live-action data in the target visual field blind area corresponding to the current pose is displayed on the virtual display device. Because the current driving object wears the virtual display device, the technical problem that the driving object must look down to observe video data of the visual field blind area is solved, improving the safety of the vehicle while driving. Further, on the one hand, the live-action data displayed by the virtual display device can be updated as the driver's head moves or even as the eyeballs rotate, meeting the needs of actual application scenarios; on the other hand, the virtual display device accurately displays the live-action data in the target visual field blind area corresponding to the current pose, rather than a 360-degree panoramic video of the outside of the cabin, reducing the computing resources consumed in processing video data.
In some embodiments, referring to fig. 2, the method for determining the target field of view blind area may include the following steps:
s210, determining visual field range difference data corresponding to the current pose according to the pose change data of the virtual display device.
And S220, adjusting the reference vision blind area according to the vision range difference data corresponding to the current pose to obtain the target vision blind area.
The visual field range difference data may be visual field blind area change data determined according to change data between a current pose of the virtual display device and a preset reference pose.
In some cases, the preset sitting posture of the driving object corresponds to a preset eye pose, and the current sitting posture of the current driving object corresponds to the current eye pose. The pose difference between the preset eye pose and the current eye pose yields the visual field range difference data. To reduce the computing resources occupied, the reference visual field blind area is determined in advance according to the preset sitting posture of the driving object; the reference visual field blind area is then adjusted using the visual field difference data to determine the target visual field blind area corresponding to the current eye pose of the current driving object.
Specifically, when the current eye pose of the current driving object changes (such as eye rotation and head rotation), the current pose of the virtual display device changes relative to the preset reference pose of the virtual display device. The visual field range difference data corresponding to the pose change data of different poses of the virtual display device relative to the preset reference pose can be predetermined through a blind area recognition technology. Accordingly, the visual field range difference data corresponding to the current pose can be determined according to the pose change data of the virtual display device. Further, the reference vision blind area is adjusted by utilizing vision field range difference data corresponding to the current pose, and the target vision blind area is obtained.
In some implementations, the current pose is denoted (X3, Y3, Z3), and the pose change data between the current pose (X3, Y3, Z3) and the preset reference pose is denoted (X0, Y0, Z0). The visual field blind area generated when the driving object observes the environment outside the cabin in the preset sitting posture is the reference visual field blind area, denoted R1. The preset sitting posture, the preset reference pose and the reference visual field blind area may be data set for the vehicle before it leaves the factory, or set according to data such as the height and facial features of the driving object. From the relationship between pose change data and blind area change data, it can be obtained that when the pose change data of the driving object is (X0, Y0, Z0), a blind area adjustment by the visual field range difference data Δβ is performed on the basis of the reference visual field blind area R1, and the blind area obtained after adjustment is the target visual field blind area.
In the above visual field blind area display method, the reference visual field blind area is adjusted using the visual field range difference data to obtain the target visual field blind area; the shooting range of the image capture device outside the cabin can be set precisely based on the adjusted blind area, so 360-degree shooting of the environment outside the cabin is not needed, reducing the amount of video data generated; and fewer computing resources are consumed when stitching and fusing the live-action data of the target visual field blind area to generate the display data of the virtual display device.
In some embodiments, determining, according to the pose change data of the virtual display device, the visual field range difference data corresponding to the current pose includes: matching the pose change data of the virtual display device against the relationship data between pose changes and visual field blind area differences to obtain the visual field range difference data corresponding to the current pose.
Specifically, each pose of the virtual display device has a corresponding visual field blind area. Because a pose change exists between a given pose of the virtual display device and the preset reference pose, a visual field blind area difference exists between the visual field blind area corresponding to that pose and the reference visual field blind area, from which relationship data between pose changes and visual field blind area differences can be generated. Further, taking the pose change data of the virtual display device as input data, matching can be performed against the relationship data between pose changes and visual field blind area differences, and the result obtained is the visual field range difference data corresponding to the current pose.
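The matching step can be sketched as a nearest-neighbor lookup in a precomputed table. The table entries below are invented for illustration; the patent does not specify the form of the relationship data, so treating it as a discrete table with nearest-entry matching is an assumption:

```python
# Hedged sketch of matching pose change data against relationship data.
import math

RELATION_TABLE = {
    (0.0, 0.0, 0.0): 0.0,    # at the reference pose: no difference
    (0.1, 0.0, 0.0): 2.5,    # small lateral head movement -> 2.5 deg shift (invented)
    (0.2, 0.0, 0.0): 5.0,
    (0.0, 0.1, 0.0): 1.0,
}

def match_fov_difference(pose_change):
    """Return the field-of-view difference of the closest tabulated pose change."""
    key = min(RELATION_TABLE, key=lambda k: math.dist(k, pose_change))
    return RELATION_TABLE[key]

print(match_fov_difference((0.12, 0.01, 0.0)))  # nearest tabulated entry is (0.1, 0, 0)
```

A denser table, or interpolation between neighboring entries, would trade memory for smoother blind-area updates as the head moves.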
In the above visual field blind area display method, the relationship data between the pose change and the visual field blind area difference is matched according to the pose change data of the virtual display device, so that the visual field range difference data corresponding to the current pose is obtained, providing a data basis for accurately acquiring the target visual field blind area.
In some embodiments, referring to fig. 3, the determining manner of the relationship data between the pose change and the blind field difference may include the following steps:
S310, acquiring a plurality of preset poses of the virtual display device, together with the preset visual field blind area corresponding to each preset pose.
S320, for each preset pose and its corresponding preset visual field blind area, determining the preset pose change data between that preset pose and the preset reference pose, and the preset visual field difference data between that preset visual field blind area and the reference visual field blind area.
S330, generating relation data between the pose change and the vision blind area difference based on the preset pose change data and the corresponding preset vision difference data.
When a driving object enters the cabin, the pose of the eyes of the driving object in the preset sitting posture is calibrated as the origin of a spatial coordinate system, and an in-vehicle spatial coordinate system is established to determine the current pose of the eyes of the driving object. When the driving object wears the virtual display device, there is relative position data between the virtual display device and the eyes of the driving object. The pose of the virtual display device can therefore be obtained from the pose of the eyes of the driving object and this relative position data. Through three-dimensional coordinate conversion, the pose of the virtual display device can be converted into the vehicle coordinate system; the pose of the virtual display device in the vehicle coordinate system is a preset pose of the virtual display device.
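The eye-pose-plus-offset computation and the three-dimensional coordinate conversion described above can be sketched as follows; this treats a pose as a 3-D position only (orientation omitted), and every function name and numeric value is an illustrative assumption rather than the patent's implementation.

```python
import numpy as np

def device_pose_in_vehicle(eye_pos_cabin: np.ndarray,
                           eye_to_device_offset: np.ndarray,
                           rotation: np.ndarray,
                           translation: np.ndarray) -> np.ndarray:
    """Device position = eye position + fixed wearer offset, followed by a
    rigid transform (rotation, translation) into the vehicle coordinate system."""
    device_pos_cabin = eye_pos_cabin + eye_to_device_offset
    return rotation @ device_pos_cabin + translation

# The calibrated eye pose is the origin of the in-cabin coordinate system.
eye = np.array([0.0, 0.0, 0.0])
offset = np.array([0.02, 0.0, 0.05])   # glasses sit slightly ahead of and above the eyes
rotation = np.eye(3)                   # cabin and vehicle frames aligned in this toy case
translation = np.array([1.5, 0.4, 1.1])
preset_pose = device_pose_in_vehicle(eye, offset, rotation, translation)
```

A full implementation would carry orientation as well, e.g. as a rotation matrix or quaternion per pose, but the chain "eye pose → device pose → vehicle frame" is the same.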
Specifically, when the preset pose of the virtual display device changes, the preset visual field blind area changes as well. Let G0 denote the preset reference pose of the virtual display device, whose corresponding reference visual field blind area is the region β0; let G1 denote a preset pose of the virtual display device, whose corresponding preset visual field blind area is the region β1; let G2 denote a further preset pose, whose corresponding preset visual field blind area is the region β2; and let G3 denote another preset pose, whose corresponding preset visual field blind area is the region β3. Then G1 and G0 have preset pose change data ΔG1 between them, G2 and G0 have preset pose change data ΔG2, and G3 and G0 have preset pose change data ΔG3; the region β1 and the reference visual field blind area β0 have preset visual field difference data Δβ1 between them, β2 and β0 have preset visual field difference data Δβ2, and β3 and β0 have preset visual field difference data Δβ3. Fitting is performed based on Δβ1, ΔG1, Δβ2, ΔG2, Δβ3, ΔG3 to generate the relationship data Δβ = f(ΔG) between the pose change and the visual field blind area difference.
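The fitting step over (ΔG1, Δβ1), (ΔG2, Δβ2), (ΔG3, Δβ3) can be sketched under the simplifying assumption that both the pose change and the blind area difference reduce to scalars and that f is linear; the sample values are invented for illustration.

```python
import numpy as np

# Calibration samples: scalar pose change ΔG (metres of head offset) against the
# measured blind-area shift Δβ (degrees). All values are invented.
delta_g = np.array([0.05, 0.10, 0.15])      # ΔG1, ΔG2, ΔG3
delta_beta = np.array([2.0, 4.0, 6.0])      # Δβ1, Δβ2, Δβ3

# Least-squares fit Δβ ≈ a·ΔG + b, standing in for the relation Δβ = f(ΔG).
a, b = np.polyfit(delta_g, delta_beta, deg=1)

def f(delta_g_query: float) -> float:
    """Match a new pose change against the fitted relation data."""
    return a * delta_g_query + b

predicted_shift = f(0.08)   # visual field range difference for ΔG = 0.08 m
```

With multi-dimensional pose changes, the same idea extends to a multivariate regression or an interpolation table; the patent leaves the form of f open.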
In the above visual field blind area display method, each preset pose of the virtual display device corresponds to one preset visual field blind area. By acquiring a plurality of preset poses and the preset visual field blind area corresponding to each, and determining the preset pose change data between each preset pose and the preset reference pose together with the preset visual field difference data between the corresponding preset visual field blind area and the reference visual field blind area, the relationship data between pose change and visual field blind area difference can finally be generated, providing a data basis for accurately acquiring the visual field range difference data.
In some embodiments, referring to fig. 4, the determining method of the reference field blind area may include the following steps:
S410, acquiring the preset eye pose of the driving object in the preset sitting posture.
Specifically, a visual tracking algorithm can be used to determine the eye pose of the driving object in the cabin. The algorithm can be deployed on the domain controller of the vehicle, on the object monitoring equipment of the vehicle, or on a cloud server in communication connection with the vehicle. If the algorithm is deployed on the cloud server, the vehicle uploads the acquired image to the cloud server, and the cloud server determines the pose of the eyes based on the received image.
For example, the eye pose of the driving object may be determined by the object monitoring device. The object monitoring device may employ a DMS (Driver Monitor System, a driver fatigue detection system), which may be used to implement tracking, expression recognition, gesture recognition, and dangerous motion recognition. The object monitoring device may also employ an OMS (Occupancy Monitoring System, an in-vehicle personnel monitoring system). It should be noted that whether a DMS or an OMS is used, it can be described as a system composed of a camera and a controller for monitoring the face of a driver in the cabin.
S420, performing matching processing in the relationship data between the eye pose and the visual field blind area based on the preset eye pose, to obtain the preset visual field blind area corresponding to the preset eye pose.
S430, taking a preset visual field blind area corresponding to the preset eye pose as the reference visual field blind area.
The reference visual field blind area may be stored in a storage unit of the vehicle before shipment. When the driving object enters the cabin, the sitting posture in which the body faces straight ahead can be used as the preset sitting posture, and the eye pose in this preset sitting posture can be obtained through the object monitoring device and used as the preset eye pose. Because the relationship data between the eye pose and the visual field blind area is prepared in advance, a matching calculation can be performed in this relationship data to obtain the preset visual field blind area for the preset eye pose, which can then be used as the reference visual field blind area. Before the vehicle leaves the factory, staff can convert the preset eye pose into the vehicle coordinate system through three-dimensional coordinate conversion in space and store it, together with the reference visual field blind area, in a storage unit of the vehicle. The reference visual field blind area and the preset eye pose can then remain fixed in subsequent data processing, which can be performed on the basis of this information.
In still other embodiments, when a driving object enters the cabin, the object monitoring device acquires the eye pose of the driving object and uses it to adjust the preset eye pose already stored in the vehicle. The adjusted eye pose is converted into the vehicle coordinate system through three-dimensional coordinate conversion in space and stored as the new preset eye pose; the corresponding visual field blind area is then obtained according to the new preset eye pose and stored in the vehicle as the new reference visual field blind area.
In the above visual field blind area display method, the preset eye pose of the driving object in the preset sitting posture is obtained first; the preset visual field blind area can then be obtained based on the preset eye pose and the relationship data between the eye pose and the visual field blind area, and this preset visual field blind area is used as the reference visual field blind area. With a reference visual field blind area tied to the preset eye pose, when the eyes or head of the driving object rotate, the reference visual field blind area can be adjusted to obtain a target visual field blind area consistent with the actual situation.
In some embodiments, referring to fig. 5, the eyes of the driving object have relative position data with the virtual display device worn by the same driving object; the method for determining the preset reference pose of the virtual display device in the preset sitting posture may include the following steps:
S510, determining the preset eye pose of the driving object in the preset sitting posture.
S520, determining the preset reference pose according to the preset eye pose and the relative position data.
Specifically, the preset eye pose B0 of the driving object in the preset sitting posture is obtained through the object monitoring device. There is relative position data ΔG0 between the eyes of the driving object and the virtual display device. According to the preset eye pose B0 and the relative position data ΔG0, the pose G of the virtual display device is obtained from the correspondence G = F(B0, ΔG0). The pose G of the virtual display device is then converted into the vehicle coordinate system through three-dimensional coordinate conversion in space; the pose of the virtual display device at this point is the preset reference pose.
In the visual field blind area display method, the preset reference pose of the virtual display device is determined through the preset eye pose and the relative position data between the eyes of the driving object and the virtual display device worn by the same driving object. By determining the preset reference pose, when the eyes of the driving object rotate or the head rotates, pose change data between the eyes and the preset reference pose can be obtained, and data support is provided for generating relation data between pose change and vision blind area difference.
In some embodiments, referring to fig. 6a, the method for determining the relationship data between the eye pose and the visual field blind area may include the following steps:
S610, acquiring a plurality of initial eye poses of the eyes of the driving object, together with the initial visual field blind area corresponding to each initial eye pose.
S620, generating the relationship data between the eye pose and the visual field blind area based on the initial eye poses and the corresponding initial visual field blind areas.
Specifically, referring to fig. 6b, B1, B2, B3, and B4 in fig. 6b can be characterized as initial eye poses of a driving object, whose corresponding initial visual field blind areas are β1, β2, β3, and β4 respectively. Based on B1, B2, B3, B4 and β1, β2, β3, β4, fitting may be performed to generate the relationship data β = f(B) between the eye pose and the visual field blind area.
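A minimal sketch of generating and querying the relation β = f(B): since the patent does not specify the form of f, this uses a nearest-neighbour lookup over the calibrated (B_i, β_i) pairs, with invented sample values.

```python
import numpy as np

# Calibrated pairs: initial eye poses B1..B4 (3-D positions) and labels for the
# initial visual field blind areas β1..β4 observed at each. Values are invented.
eye_poses = np.array([[0.00, 0.0, 0.0],    # B1
                      [0.10, 0.0, 0.0],    # B2
                      [0.00, 0.1, 0.0],    # B3
                      [0.10, 0.1, 0.0]])   # B4
blind_areas = ["beta1", "beta2", "beta3", "beta4"]

def match_blind_area(eye_pose: np.ndarray) -> str:
    """Nearest-neighbour match in the relation data β = f(B)."""
    distances = np.linalg.norm(eye_poses - eye_pose, axis=1)
    return blind_areas[int(np.argmin(distances))]

matched = match_blind_area(np.array([0.09, 0.01, 0.0]))   # closest to B2
```

A smoother alternative would interpolate between neighbouring calibration poses rather than snapping to the closest one.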
In the above visual field blind area display method, the initial eye poses of the driving object and the initial visual field blind areas corresponding to them are obtained first; the relationship data between the eye pose and the visual field blind area can then be generated from these groups of initial eye poses and corresponding initial visual field blind area data, providing a data basis for accurately acquiring the reference visual field blind area.
In some embodiments, referring to fig. 7a, the driving object has, within the cabin, a first boundary position and a second boundary position from which the environment outside the cabin can be observed. The step of obtaining a plurality of initial eye poses of the eyes of the driving object, together with the initial visual field blind area corresponding to each initial eye pose, may include:
S710, determining the shooting range of an image acquisition device arranged outside the cabin.
Referring to fig. 7b, the shooting range is located between a first line of sight 702 and a second line of sight 704. The first line of sight 702 is the line of sight of the preset driving object cast toward the shielding object at the first boundary position E1, on the side close to the second boundary position E2; the second line of sight 704 is the line of sight cast toward the shielding object at the second boundary position E2, on the side close to the first boundary position E1. The area enclosed by the first line of sight and the second line of sight is the shooting range of the image acquisition device outside the cabin.
Specifically, in the process of driving the vehicle, the preset driving object can obtain more information about the surroundings of the vehicle by moving position or by swinging the upper body. However, because part of the vehicle forms a shielding object, such as the A pillar, the region observed by the preset driving object is blocked by the shielding object, forming a visual field blind area for the preset driving object. When the preset driving object moves left in the cabin to the leftmost position it can reach, swings the upper body leftward to the maximum angle reachable by the leftward swing, and then rotates the head to the maximum angle reachable by the leftward rotation, the eye pose of the preset driving object is recorded as the first boundary position. When the preset driving object moves right in the cabin to the rightmost position it can reach, swings the upper body rightward to the maximum angle reachable by the rightward swing, and then rotates the head to the maximum angle reachable by the rightward rotation, the eye pose of the preset driving object is recorded as the second boundary position.
A person's visual field in the binocular viewing area is generally about 60 degrees to the left and right. Therefore, the line of sight cast toward the shielding object at the first boundary position, on the side close to the second boundary position, is denoted the first line of sight; the line of sight cast toward the shielding object at the second boundary position, on the side close to the first boundary position, is denoted the second line of sight; and the area enclosed by the first line of sight and the second line of sight is denoted α in fig. 7b. When the vehicle runs on the road, the surrounding scene can be shot through the image acquisition device arranged outside the cabin to obtain an image of the environment outside the vehicle; the blocked area α of this external environment image is the shooting range.
S720, determining a plurality of initial eye poses by moving the eyes of the driving object multiple times between the first boundary position and the second boundary position.
S730, determining an initial blind area of the field of view corresponding to the initial eye pose in the shooting range according to the initial eye pose.
Specifically, when the driving object moves the upper body in the cabin, the eyes of the driving object can move between the first boundary position and the second boundary position; at each position, the corresponding eye pose is acquired and recorded as an initial eye pose. Each initial eye pose forms a corresponding initial visual field blind area, which should be a subset of the area α.
Illustratively, referring to fig. 7c, a point offset 635 mm vertically upward from the R point is taken as the eye point. A horizontal plane is made through the eye point, and the section where this plane intersects the A pillar is determined. Lines are drawn from the eye point to the innermost and outermost opaque edges of that section, and the included angle between these two lines is the A-pillar obstruction angle. An initial visual field blind area can thus be determined.
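The A-pillar obstruction-angle construction described above can be sketched in the horizontal plane through the eye point; the coordinates below are illustrative, not measured values.

```python
import math

def obstruction_angle_deg(eye, inner_edge, outer_edge):
    """Included angle, in the horizontal plane through the eye point, between the
    sight lines to the innermost and outermost opaque edges of the A-pillar section."""
    def bearing(point):
        return math.atan2(point[1] - eye[1], point[0] - eye[0])
    return abs(math.degrees(bearing(outer_edge) - bearing(inner_edge)))

# 2-D coordinates (metres) in the horizontal plane; the eye point is the one
# offset 635 mm above the R point. Edge positions are invented.
eye = (0.0, 0.0)
inner = (1.0, 0.00)    # innermost opaque edge of the pillar cross-section
outer = (1.0, 0.12)    # outermost opaque edge
angle = obstruction_angle_deg(eye, inner, outer)   # roughly a 6.8° blind wedge
```

The wedge between those two sight lines, projected outward, is one initial visual field blind area.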
In the above visual field blind area display method, the shooting range of the image acquisition device installed outside the cabin is first determined based on the first line of sight and the second line of sight, and a plurality of initial eye poses are determined by moving the eyes of the driving object multiple times between the first boundary position and the second boundary position; the initial visual field blind area corresponding to each initial eye pose can then be determined within the shooting range. By determining the shooting range and the initial visual field blind areas, the live-action data area that the image acquisition device outside the cabin needs to shoot can be reduced: 360-degree panoramic data need not be captured, reducing both the generation of video data and the consumption of storage space.
In some embodiments, the shielding object is at least part of an A pillar of the vehicle.
For example, with continued reference to fig. 1c, the rectangular frame 170 in fig. 1c may be regarded as an A pillar of the vehicle. When the line of sight of the eye 160 of the driving object falls on the A pillar, the shielding object is the area M formed by the intersection of the line of sight with the rectangular frame in fig. 1c. Since the area M is part of the rectangular frame, the shielding object may be regarded as at least part of the A pillar of the vehicle.
In the above visual field blind area display method, the shielding object is at least part of the A pillar of the vehicle. By determining the shielding object, the live-action data to be displayed on the virtual display device can be determined, reducing the computing resources the chip consumes in processing the video data.
The embodiment of the present disclosure further provides a method for displaying a blind area of a visual field. Referring to fig. 8a, for example, the virtual display device may be AR glasses. The vehicle-mounted host and the AR glasses are connected through Bluetooth Low Energy (BLE), with the vehicle-mounted host as the master device and the AR glasses as the slave device; the domain controller and the AR glasses are directly connected through Wi-Fi Direct or a Wi-Fi AP mode to transmit streaming media information. After the communication connection is established between the AR glasses and the domain controller of the vehicle, the environment of the target visual field blind area can be presented on the AR glasses; the AR glasses, connected with the vehicle-mounted host and combined with a spatial algorithm, fix the screen within the displayable space of the AR glasses. The in-vehicle object monitoring device and the out-of-vehicle image acquisition device can be devices comprising fisheye cameras, and can transmit information to the vehicle-mounted host and the AR glasses.
Referring to fig. 8b, the method for displaying a blind field of view may include the following steps:
S802, determining a shooting range of an image acquisition device installed outside the cabin.
Wherein the driving object has, within the cabin, a first boundary position and a second boundary position from which the environment outside the cabin can be observed. The shooting range is located between a first line of sight and a second line of sight; the first line of sight is the line of sight of the preset driving object cast toward the shielding object at the first boundary position, on the side close to the second boundary position, and the second line of sight is the line of sight cast toward the shielding object at the second boundary position, on the side close to the first boundary position. In some embodiments, the shielding object is at least part of an A pillar of the vehicle.
S804, determining a plurality of initial eye poses by moving the eyes of the driving object multiple times between the first boundary position and the second boundary position.
S806, acquiring a plurality of initial eye poses of the eyes of the driving object, together with the initial visual field blind area corresponding to each initial eye pose.
S808, generating the relationship data between the eye pose and the visual field blind area based on the initial eye poses and the corresponding initial visual field blind areas.
S810, acquiring a preset eye pose of a driving object in the preset sitting posture.
S812, performing matching processing in the relationship data between the eye pose and the visual field blind area based on the preset eye pose, to obtain the preset visual field blind area corresponding to the preset eye pose.
S814, taking a preset visual field blind area corresponding to the preset eye pose as the reference visual field blind area.
Wherein, the eyes of the driving object and the virtual display device worn by the same driving object have relative position data.
S816, determining the preset eye pose of the driving object in the preset sitting posture.
S818, determining the preset reference pose according to the preset eye pose and the relative position data.
S820, acquiring a plurality of preset poses of the virtual display device, together with the preset visual field blind area corresponding to each preset pose.
S822, for each preset pose and its corresponding preset visual field blind area, determining the preset pose change data between that preset pose and the preset reference pose, and the preset visual field difference data between that preset visual field blind area and the reference visual field blind area.
S824, generating relation data between the pose change and the vision blind area difference based on the preset pose change data and the corresponding preset vision difference data.
S826, determining the current pose of the virtual display device worn by the current driving object in the cabin.
Wherein there is pose change data of the virtual display device between the current pose and the preset reference pose.

S828, matching the relationship data between the pose change and the visual field blind area difference according to the pose change data of the virtual display device, to obtain the visual field range difference data corresponding to the current pose.

S830, adjusting the reference visual field blind area according to the visual field range difference data corresponding to the current pose, to obtain the target visual field blind area.
S832, displaying live-action data in a target visual field blind area corresponding to the current pose on the virtual display equipment.
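Steps S826 to S832 can be condensed into one hedged end-to-end sketch; every helper and value here is an assumption standing in for data structures the patent leaves unspecified (poses as 3-D positions, blind areas as angular intervals).

```python
def display_blind_area(current_pose, reference_pose, reference_area, relation_f):
    """S826-S832 condensed: pose change -> matched visual field range
    difference -> adjusted target blind area to render on the glasses."""
    pose_change = tuple(c - r for c, r in zip(current_pose, reference_pose))   # S826
    delta_beta = relation_f(pose_change)                                       # S828
    target_area = (reference_area[0] + delta_beta,
                   reference_area[1] + delta_beta)                             # S830
    return target_area   # S832: the region whose live-action data is displayed

# Toy relation: blind-area shift proportional to lateral head offset.
relation = lambda dg: 40.0 * dg[0]
target = display_blind_area((0.05, 0.0, 0.0), (0.0, 0.0, 0.0), (30.0, 42.0), relation)
```

The returned interval is what the cropped camera stream would be stitched against before being rendered on the virtual display device.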
Referring to fig. 9, an embodiment of the present disclosure provides a visual field blind area display device 900. The visual field blind area display device 900 includes: a pose data determining module 910 and a visual field blind area display module 920.
The pose data determining module 910 is configured to determine the current pose of the virtual display device worn by the current driving object in the cabin; there is pose change data of the virtual display device between the current pose and a preset reference pose.

The visual field blind area display module 920 is configured to display, on the virtual display device, live-action data in the target visual field blind area corresponding to the current pose. The target visual field blind area is obtained by adjusting the reference visual field blind area according to the pose change data; the reference visual field blind area is the visual field blind area generated when a driving object in the cabin observes the environment outside the cabin in a preset sitting posture; and the virtual display device worn by the driving object in the preset sitting posture has the preset reference pose.
For a specific description of the blind spot display device, reference may be made to the description of the blind spot display method hereinabove, and the description thereof will not be repeated here.
The embodiment of the present specification provides a blind view area display device, including a memory storing a computer program and a processor implementing the steps of the method described in any one of the embodiments above when the processor executes the computer program.
The present description embodiment provides a computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the method described in any of the above embodiments.
An embodiment of the present specification provides a computer program product comprising instructions which, when executed by a processor of a computer device, enable the computer device to perform the steps of the method of any one of the embodiments above.
It should be noted that the logic and/or steps represented in the flowcharts or otherwise described herein may, for example, be considered an ordered listing of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute them. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). In addition, the computer-readable medium may even be paper or another suitable medium on which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.

Claims (13)

1. A method for displaying a blind area of a field of view, the method comprising:
determining the current pose of a virtual display device worn by a current driving object in a cabin; wherein there is pose change data of the virtual display device between the current pose and a preset reference pose;
displaying live-action data in a target visual field blind area corresponding to the current pose on the virtual display equipment; the target visual field blind area is obtained by adjusting the reference visual field blind area according to the pose change data; the reference visual field blind area is a visual field blind area generated when a driving object in the cabin observes the external environment of the cabin in a preset sitting posture; the virtual display device worn by the driving object in the preset sitting posture has the preset reference pose.
2. The method according to claim 1, wherein the determining the target field of view blind area includes:
determining visual field range difference data corresponding to the current pose according to pose change data of the virtual display equipment;
and adjusting the reference vision blind area according to the vision field range difference data corresponding to the current pose to obtain the target vision blind area.
3. The method according to claim 2, wherein the determining, according to the pose change data of the virtual display device, the field-of-view range difference data corresponding to the current pose includes:
and matching the relation data between the pose change and the vision blind area difference according to the pose change data of the virtual display equipment to obtain the vision range difference data corresponding to the current pose.
4. A method according to claim 3, wherein the determination of the relationship data between the pose change and the blind field difference comprises:
acquiring a plurality of preset poses of the virtual display device, together with the preset visual field blind area corresponding to each preset pose;
determining preset pose change data between the preset pose and a preset reference pose and preset visual field difference data between the preset visual field blind area and the reference visual field blind area according to the preset pose and the corresponding preset visual field blind area;
and generating relation data between the pose change and the vision blind area difference based on the preset pose change data and the corresponding preset vision difference data.
5. The method according to claim 1, wherein the determining means of the reference field of view blind area includes:
acquiring a preset eye pose of a driving object in the preset sitting posture;
performing matching processing in relation data between the eye pose and a visual field blind zone based on the preset eye pose to obtain a preset visual field blind zone corresponding to the preset eye pose;
and taking a preset visual field blind area corresponding to the preset eye pose as the reference visual field blind area.
6. The method of claim 1, wherein the eyes of the driving object and the virtual display device worn by the same driving object have relative position data therebetween; and under the preset sitting posture, determining the preset reference pose of the virtual display device comprises the following steps:
determining a preset eye pose of a driving object in the preset sitting posture;
and determining the preset reference pose according to the preset eye pose and the relative position data.
7. The method according to claim 5, wherein the determining of the relationship data between the eye pose and the visual field blind area comprises:
acquiring a plurality of initial eye poses of the eyes of the driving object, together with the initial visual field blind area corresponding to each initial eye pose;
and generating the relationship data between the eye pose and the visual field blind area based on the initial eye poses and the corresponding initial visual field blind areas.
8. The method of claim 7, wherein the driving object has, within the cabin, a first boundary position and a second boundary position from which the environment outside the cabin can be observed; and the obtaining a plurality of initial eye poses of the eyes of the driving object, together with the initial visual field blind area corresponding to each initial eye pose, includes:
determining a shooting range of an image acquisition device installed outside the cabin; the shooting range is located between a first sight line and a second sight line, the first sight line is a sight line of a preset driving object, which is projected to a shielding object at the first boundary position, and is close to the second boundary position, and the second sight line is a sight line of the preset driving object, which is projected to the shielding object at the second boundary position and is close to the first boundary position;
determining a plurality of initial eye positions by moving the eyes of the driving object a plurality of times between the first boundary position and the second boundary position;
and aiming at one initial eye pose, determining an initial blind area of a field of view corresponding to the one initial eye pose in the shooting range.
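The last step of claim 8 — finding the blind area one eye pose produces within the camera's shooting range — is an occlusion computation. A hypothetical 2D (top-view) sketch, assuming the shielding object is a segment between two pillar endpoints and the blind area is the angular interval it occludes, clipped to the shooting range (angles in radians; all names are illustrative):

```python
import math

def initial_blind_area(eye, pillar_near, pillar_far, range_lo, range_hi):
    """Angular interval occluded by the pillar segment, as seen from `eye`,
    clipped to the camera shooting range [range_lo, range_hi]."""
    # Bearings from the eye to the two pillar endpoints.
    a1 = math.atan2(pillar_near[1] - eye[1], pillar_near[0] - eye[0])
    a2 = math.atan2(pillar_far[1] - eye[1], pillar_far[0] - eye[0])
    lo, hi = min(a1, a2), max(a1, a2)
    # Clip the occluded interval to the shooting range.
    lo, hi = max(lo, range_lo), min(hi, range_hi)
    return (lo, hi) if lo < hi else None  # None: pillar outside the range
```

A production system would work with 3D frusta and handle angle wrap-around, but the clipping logic is the same in spirit.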
9. The method of claim 8, wherein the shielding object is at least part of an A-pillar of the vehicle.
10. A vision blind area display device, characterized in that the device comprises:
the pose data determining module is used for determining a current pose of the virtual display device worn by a current driving object in the cabin, wherein pose change data of the virtual display device exist between the current pose and a preset reference pose;
the visual field blind area display module is used for displaying, on the virtual display device, live-action data in a target visual field blind area corresponding to the current pose, wherein the target visual field blind area is obtained by adjusting a reference visual field blind area according to the pose change data; the reference visual field blind area is the visual field blind area generated when a driving object in the cabin observes the environment outside the cabin in a preset sitting posture; and the virtual display device worn by the driving object in the preset sitting posture has the preset reference pose.
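The adjustment the display module performs — deriving the target blind area from the reference blind area and the pose change data — can be sketched minimally. Assuming a translation-only pose change and a rectangular blind area in the display plane (rotation is ignored; names are illustrative, not from the patent):

```python
def target_blind_area(reference_area, current_pose, reference_pose):
    """Shift the reference blind area by the device's pose change.

    reference_area: (x0, y0, x1, y1) rectangle in the display plane.
    current_pose, reference_pose: (x, y) device positions; their
    difference is the pose change data.
    """
    dx = current_pose[0] - reference_pose[0]
    dy = current_pose[1] - reference_pose[1]
    x0, y0, x1, y1 = reference_area
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)
```

A real head-mounted display would apply the full 6-DoF pose delta through a projection, but the claim's structure — reference area plus pose change yields target area — is the same.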
11. A visual field blind area display device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 9.
13. A vehicle comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any one of claims 1 to 9 when executing the computer program.
CN202211515488.2A 2022-11-29 2022-11-29 Visual field blind area display method, device, equipment, vehicle and storage medium Pending CN116068765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211515488.2A CN116068765A (en) 2022-11-29 2022-11-29 Visual field blind area display method, device, equipment, vehicle and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211515488.2A CN116068765A (en) 2022-11-29 2022-11-29 Visual field blind area display method, device, equipment, vehicle and storage medium

Publications (1)

Publication Number Publication Date
CN116068765A true CN116068765A (en) 2023-05-05

Family

ID=86175885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211515488.2A Pending CN116068765A (en) 2022-11-29 2022-11-29 Visual field blind area display method, device, equipment, vehicle and storage medium

Country Status (1)

Country Link
CN (1) CN116068765A (en)

Similar Documents

Publication Publication Date Title
CN110187855B (en) Intelligent adjusting method for near-eye display equipment for avoiding blocking sight line by holographic image
US20160297362A1 (en) Vehicle exterior side-camera systems and methods
CN104883554B (en) The method and system of live video is shown by virtually having an X-rayed instrument cluster
JP5387763B2 (en) Video processing apparatus, video processing method, and video processing program
US8704882B2 (en) Simulated head mounted display system and method
WO2018066695A1 (en) In-vehicle display control device
CN107914707A (en) Anti-collision warning method, system, vehicular rear mirror and storage medium
JP5874920B2 (en) Monitoring device for vehicle surroundings
JP6410987B2 (en) Driving support device, driving support method, and driving support program
KR20160071070A (en) Wearable glass, control method thereof and vehicle control system
EP4339938A1 (en) Projection method and apparatus, and vehicle and ar-hud
US11626028B2 (en) System and method for providing vehicle function guidance and virtual test-driving experience based on augmented reality content
Badgujar et al. Driver gaze tracking and eyes off the road detection
US11507184B2 (en) Gaze tracking apparatus and systems
JP2017056909A (en) Vehicular image display device
US20210392318A1 (en) Gaze tracking apparatus and systems
CN111923835A (en) Vehicle-based rearview display method
CN116068765A (en) Visual field blind area display method, device, equipment, vehicle and storage medium
CN110796116A (en) Multi-panel display system, vehicle with multi-panel display system and display method
JP6234701B2 (en) Ambient monitoring device for vehicles
CN216034093U (en) Projection system and automobile
US11747897B2 (en) Data processing apparatus and method of using gaze data to generate images
CN115018942A (en) Method and apparatus for image display of vehicle
CN113507559A (en) Intelligent camera shooting method and system applied to vehicle and vehicle
EP3340013A1 (en) A gazed object identification module, a system for implementing gaze translucency, and a related method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination