CN117193530A - Intelligent cabin immersive user experience method and system based on virtual reality technology - Google Patents


Info

Publication number
CN117193530A
Authority
CN
China
Prior art keywords: real-time, pose, picture, cabin
Prior art date
Legal status: Pending (assumption, not a legal conclusion)
Application number
CN202311131303.2A
Other languages
Chinese (zh)
Inventor
唐平
张丽
伍文琴
Current Assignee
Shenzhen Douples Technology Co ltd
Original Assignee
Shenzhen Douples Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Douples Technology Co ltd
Priority to CN202311131303.2A
Publication of CN117193530A

Landscapes

  • Instrument Panels (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention provides an intelligent cabin immersive user experience method and system based on virtual reality technology, belonging to the technical field of intelligent vehicles. The method comprises the following steps: determining the real-time view field pose of the driver; generating an in-cabin head display basic picture at the current moment based on an in-cabin video of the intelligent cabin and the driver's real-time body pose and real-time view field pose; performing view-angle conversion and selection on a real-time vehicle exterior environment video of the current vehicle based on the driver's real-time view field pose, and generating a vehicle exterior head display basic picture at the current moment; superimposing the in-cabin head display basic picture and the vehicle exterior head display basic picture to obtain a virtual reality head display picture; and intelligently marking the virtual reality head display picture based on personalized driving control data in the intelligent cabin, obtaining a complete virtual reality head display picture and displaying it, so as to improve the immersive user experience of the intelligent cabin based on virtual reality technology.

Description

Intelligent cabin immersive user experience method and system based on virtual reality technology
Technical Field
The invention relates to the technical field of intelligent vehicles, in particular to an intelligent cabin immersive user experience method and system based on a virtual reality technology.
Background
At present, with the development of automobile intelligent control technology and virtual reality technology, many intelligent head display devices for use inside vehicle intelligent cabins have gradually appeared, and such products can greatly improve the driving experience and visual experience of the driver.
However, the existing virtual reality technology applied to the intelligent cabin merely pushes a prepared virtual display image to the head display device to assist the driver's driving process. As a result, the visual effect of the picture shown in the head display device restores the scene actually seen by the naked eye only poorly, and there is no intelligent combination between the driving control data in the intelligent cabin and the head display picture shown in the virtual reality device, so the visual experience of the intelligent cabin remains to be improved. For example, Chinese patent publication No. CN115195637A, entitled "An intelligent cabin system based on multimode interaction and virtual reality technology", discloses a system comprising a central processing unit, a voice and environment detection system, an in-vehicle state vision monitoring system, an in-vehicle hardware early-warning and optimizing system, an out-of-cabin sensing system and an in-vehicle data management system. These subsystems are electrically connected with the central processing unit through a CAN bus and communicate interactively with it based on the results analyzed by an artificial intelligence model.
However, that patent does not solve the problem that the visual effect of the picture displayed in the head display device restores the scene actually seen by the naked eye only poorly, nor does it provide any intelligent combination between the driving control data in the intelligent cabin and the head display picture displayed in the virtual reality device, so its visual experience likewise remains to be improved.
Therefore, the invention provides an intelligent cabin immersive user experience method and system based on virtual reality technology.
Disclosure of Invention
The invention provides an intelligent cabin immersive user experience method and system based on virtual reality technology. The real-time view field pose of the driver is reasonably determined from the driver's real-time body pose; the in-vehicle and out-of-vehicle scene models are accurately selected, and pictures are generated, based on that real-time view field pose, so that the in-cabin head display basic picture and the vehicle exterior head display basic picture are accurately generated. The virtual reality head display picture is generated by superimposing these two basic pictures, so that the picture displayed in the head display device restores the real visual effect more faithfully. The head display picture is further intelligently marked based on personalized driving control data, realizing the intelligent combination between the driving control data in the intelligent cabin and the head display picture displayed in the virtual reality device, and thereby further improving the driving experience and visual experience of the intelligent cabin.
The invention provides an intelligent cabin immersive user experience method based on a virtual reality technology, which comprises the following steps:
s1: determining the real-time visual field pose of the driver based on the real-time body pose of the driver in the intelligent cabin;
s2: generating an in-cabin head display basic picture at the current moment based on the in-cabin video of the intelligent cabin of the current vehicle and the real-time body pose and real-time view field pose of the driver in the intelligent cabin;
s3: based on the real-time visual field pose of a driver, performing visual angle conversion and selection on a real-time vehicle exterior environment video of a current vehicle, and generating an exterior head display basic picture at the current moment;
s4: superposing the cabin head display basic picture and the vehicle exterior head display basic picture to obtain a virtual reality head display picture;
s5: based on personalized driving control data in the intelligent cockpit, intelligently marking the virtual reality head display picture to obtain a complete virtual reality head display picture;
s6: and transmitting the complete picture of the virtual reality head display to the virtual reality head display equipment for display.
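The steps S1–S6 above can be sketched as a single per-frame composition function. Everything below — the dict-based data layout, the pixel-list stand-in for rendered pictures, and all function names — is an illustrative assumption, not taken from the patent.

```python
# Illustrative sketch of the S1-S6 per-frame pipeline. The data layout and
# all names are hypothetical placeholders, not from the patent.

def estimate_view_pose(body_pose):
    # S1: derive the view field pose from the body pose (stub: head + gaze)
    return {"origin": body_pose["head"], "direction": body_pose["gaze"]}

def compose_headset_frame(body_pose, cabin_pixels, exterior_pixels, marks):
    view_pose = estimate_view_pose(body_pose)                  # S1
    # S2/S3 (stubbed): the two base pictures are assumed already rendered
    # for this view pose as aligned pixel lists; None = transparent.
    # S4: in-cabin pixels occlude the exterior scene at the same position.
    base = [c if c is not None else e
            for c, e in zip(cabin_pixels, exterior_pixels)]
    # S5: attach the driving-control marks; S6 would send this to the headset.
    return {"view": view_pose, "pixels": base, "marks": list(marks)}

frame = compose_headset_frame(
    {"head": (0.0, 1.2, 0.0), "gaze": (0.0, 0.0, 1.0)},
    cabin_pixels=["dash", None, "wheel"],
    exterior_pixels=["sky", "road", "sky"],
    marks=["nav: turn left in 200 m"],
)
```

The occlusion rule in S4 (cabin over exterior) reflects the superposition order described in the abstract; a real implementation would composite rendered images rather than token lists.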
Preferably, S1: determining the real-time view field pose of the driver based on the real-time body pose of the driver in the intelligent cockpit comprises:
s101: extracting the real-time head pose and the real-time eye pose of the driver from the real-time body pose of the driver in the intelligent cabin;
s102: based on the real-time head pose and the real-time eye pose, a real-time field pose of the driver is determined.
Preferably, S102: determining the real-time view field pose of the driver based on the real-time head pose and the real-time eye pose comprises:
generating a real-time standard view field pose of the head pose based on the real-time head pose and the standard eye pose;
and carrying out azimuth correction on the real-time standard visual field pose based on the real-time eye pose to obtain the real-time visual field pose of the driver.
Preferably, S2: generating the in-cabin head display basic picture at the current moment based on the in-cabin video of the intelligent cabin of the current vehicle and the real-time body pose and real-time view field pose of the driver comprises the following steps:
building an in-cabin display model based on an in-cabin video of the intelligent cabin;
building a body display model of the driver based on the real-time body pose of the driver in the intelligent cabin; combining the in-cabin display model and the body display model to obtain an in-cabin panoramic display model;
selecting a model of the panoramic display model in the cabin based on the real-time visual field pose of the driver to obtain a model in the visual field range;
and generating and combining partition pictures based on the real-time visual field pose of the driver and the model within the visual field range to obtain the head display basic picture in the cabin.
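The model-selection step above — keeping only the part of the panoramic model that falls inside the driver's view field — can be approximated by a view-cone test on model points. The point-cloud representation, the cone half-angle, and the function names are assumptions for illustration.

```python
import math

def in_view_cone(point, origin, direction, half_angle_deg):
    """True if `point` lies inside the view cone defined by the view field
    pose (`origin`, unit `direction`) and a half-angle (assumed parameter)."""
    vx, vy, vz = (p - o for p, o in zip(point, origin))
    norm = math.sqrt(vx * vx + vy * vy + vz * vz) or 1e-9
    cos_angle = (vx * direction[0] + vy * direction[1] + vz * direction[2]) / norm
    return cos_angle >= math.cos(math.radians(half_angle_deg))

def select_visible(model_points, origin, direction, half_angle_deg=60):
    """Model selection: keep only the points within the view field range."""
    return [p for p in model_points
            if in_view_cone(p, origin, direction, half_angle_deg)]
```

A production renderer would cull triangles against a full view frustum rather than testing points against a cone, but the principle — discard geometry outside the real-time view field pose — is the same.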
Preferably, the generating and merging of the partition picture based on the real-time view pose of the driver and the model within the view range to obtain the basic picture of the head display in the cabin comprises the following steps:
extracting the real-time pupil pose of the driver from the real-time eye pose;
based on the real-time pupil pose, determining a real-time transverse comfortable view angle range, a real-time transverse edge view angle range, a real-time longitudinal comfortable view angle range, a real-time longitudinal edge view angle range, a real-time comfortable depth of field and a real-time edge depth of field of a driver;
based on a real-time transverse comfortable visual angle range, a real-time transverse edge visual angle range, a real-time longitudinal comfortable visual angle range, a real-time longitudinal edge visual angle range, a real-time comfortable depth of field and a real-time edge depth of field of a driver, carrying out detailed division marking on the real-time visual field pose, and obtaining the real-time comfortable visual field pose and the real-time edge visual field pose of the driver;
model partitioning is carried out on the model in the view field range based on the real-time comfortable view field pose and the real-time edge view field pose of the driver, and a model in the comfortable view field range and a model in the edge view field range are obtained;
and generating and combining partition pictures based on the model in the comfortable view field range and the model in the edge view field range to obtain the head display basic picture in the cabin.
Preferably, the generating and merging of the partition picture based on the model in the comfortable view field range and the model in the edge view field range to obtain the head display basic picture in the cabin comprises the following steps:
generating a non-perspective picture of the model in the comfortable view field range based on the real-time comfortable view field pose and the preset comfortable view field display parameters, and obtaining a picture in the comfortable view field;
generating a non-perspective picture of the internal model of the edge view field range based on the real-time edge view field pose and preset edge view field display parameters, and obtaining a picture in the edge view field;
and smoothly splicing the images in the comfortable view field and the images in the edge view field to obtain the basic images of the head display in the cabin.
Preferably, smoothly splicing the picture in the comfortable view field and the picture in the edge view field to obtain the in-cabin head display basic picture comprises the following steps:
determining the three color-attribute values (i.e., hue, saturation and brightness) of the outer-edge pixel points and of the next-outer-edge pixel points of the picture in the comfortable view field, and of the inner-edge pixel points and of the next-inner-edge pixel points of the picture in the edge view field;
calculating a spliced adjacent-gradient threshold for each color attribute based on the adjacent-gradient threshold of that color attribute in the picture in the comfortable view field and in the picture in the edge view field;
and resetting, based on the spliced adjacent-gradient threshold of each color attribute, the three color-attribute values of the outer-edge and next-outer-edge pixel points of the picture in the comfortable view field and of the inner-edge and next-inner-edge pixel points of the picture in the edge view field, to obtain the in-cabin head display basic picture.
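The pixel-resetting step can be read as enforcing a uniform per-channel gradient across the seam. The scheme below — keeping the next-ring rows as anchors and linearly interpolating the two seam rows between them — is one plausible reading of the "spliced adjacent-gradient threshold", not the patent's exact formula, and it uses RGB tuples for simplicity.

```python
def blend_seam_rows(comfort_sub_row, edge_sub_row):
    """Recompute the two seam rows (the comfortable picture's outer-edge row
    and the edge picture's inner-edge row) by per-channel linear interpolation
    between the adjacent anchor rows, so each color channel changes by a
    uniform step across the splice. Illustrative scheme only."""
    new_outer, new_inner = [], []
    for cs, es in zip(comfort_sub_row, edge_sub_row):
        # place the two seam pixels at 1/3 and 2/3 between the anchors
        new_outer.append(tuple(round(a + (b - a) / 3) for a, b in zip(cs, es)))
        new_inner.append(tuple(round(a + 2 * (b - a) / 3) for a, b in zip(cs, es)))
    return new_outer, new_inner
```

With anchor rows of (0, 0, 0) and (90, 90, 90), the recomputed seam rows step evenly through (30, 30, 30) and (60, 60, 60), hiding the hard boundary between the two rendered regions.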
Preferably, S3: performing view-angle conversion and selection on the real-time vehicle exterior environment video of the current vehicle based on the real-time view field pose of the driver to generate the vehicle exterior head display basic picture at the current moment comprises the following steps:
building a model based on a real-time vehicle external environment video of the current vehicle to obtain an external vehicle environment space model;
selecting a space model of an external vehicle environment model based on the real-time view field pose to obtain an internal vehicle external view field model;
and generating a non-perspective picture of the model in the external view field based on the real-time view pose and the preset view display parameters, and obtaining an external head display basic picture at the current moment.
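Generating a picture of the selected exterior model under the driver's view pose amounts to projecting model points onto an image plane. A minimal pinhole-projection sketch, assuming the view direction is the +z axis of view coordinates and a unit focal length (both assumptions, not parameters stated in the patent):

```python
def project_points(points, focal=1.0):
    """Project 3-D points (in view coordinates, +z ahead of the viewer)
    onto the image plane; points at or behind the viewer are dropped,
    as they fall outside the view field."""
    frame = []
    for x, y, z in points:
        if z > 0:
            frame.append((focal * x / z, focal * y / z))
    return frame
```

The display parameters mentioned in the text (view-field display parameters) would map onto the focal length and image-plane extents in this model.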
Preferably, S5: intelligently marking the virtual reality head display picture based on the personalized driving control data in the intelligent cockpit to obtain the complete virtual reality head display picture comprises the following steps:
extracting, from the personalized driving control data in the intelligent cockpit, the real-time driving control data of the data items that can be superimposed for display;
converting the display form of the real-time driving control data to obtain the display data of those superimposable data items;
and marking the display data in the virtual reality head display picture to obtain the complete virtual reality head display picture.
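The extract-convert-mark sequence can be sketched as below. The whitelist of superimposable items and the string display form are invented examples; the patent only gives navigation routes as an instance of personalized driving control data.

```python
def mark_head_display(frame_marks, driving_data,
                      overlayable=("navigation", "speed")):
    """Extract the driving-control items that may be superimposed, convert
    them to a display form (here: plain label strings), and append them to
    the picture's mark list. `overlayable` is an assumed whitelist."""
    marks = list(frame_marks)
    for key in overlayable:
        if key in driving_data:
            marks.append(f"{key}: {driving_data[key]}")
    return marks

marks = mark_head_display([], {"navigation": "turn left in 200 m",
                               "engine_temp": "90 C"})
```

Items not in the whitelist (here `engine_temp`) are left out of the head display picture, matching the idea that only overlay-capable data items are extracted.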
The invention provides an intelligent cabin immersive user experience system based on a virtual reality technology, which comprises:
the first pose determining module is used for determining the real-time visual field pose of the driver based on the real-time body pose of the driver in the intelligent cockpit;
the first picture generation module is used for generating an in-cabin head display basic picture at the current moment based on the in-cabin video of the intelligent cabin of the current vehicle, the real-time body pose and the real-time view field pose of the driver in the intelligent cabin;
the second picture generation module is used for carrying out visual angle conversion and selection on the real-time vehicle external environment video of the current vehicle based on the real-time visual field pose of the driver to generate an external head display basic picture at the current moment;
the head display picture superposition module is used for superposing the head display basic picture in the cabin and the head display basic picture outside the vehicle to obtain a virtual reality head display picture;
the data intelligent marking module is used for intelligently marking the virtual reality head display picture based on the personalized driving control data in the intelligent cockpit to obtain a complete picture of the virtual reality head display;
and the head display picture display module is used for transmitting the complete picture of the virtual reality head display to the virtual reality head display equipment for display.
The invention has the following beneficial effects compared with the prior art: the real-time view field pose of the driver is reasonably determined from the driver's real-time body pose; the in-vehicle and out-of-vehicle scene models are accurately selected, and pictures are generated, based on that real-time view field pose, so that the in-cabin head display basic picture and the vehicle exterior head display basic picture are accurately generated; the virtual reality head display picture is generated by superimposing these two basic pictures, so that the picture displayed in the head display device restores the real visual effect more faithfully; and the head display picture is intelligently marked and displayed based on the personalized driving control data, realizing the intelligent combination between the driving control data in the intelligent cabin and the head display picture displayed in the virtual reality device, thereby further improving the driving experience and visual experience.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a flowchart of an intelligent cabin immersive user experience method based on virtual reality technology in an embodiment of the invention;
FIG. 2 is a flowchart of another method for immersive user experience of an intelligent cockpit based on virtual reality technology in an embodiment of the present invention;
fig. 3 is a schematic diagram of an intelligent cabin immersive user experience system based on virtual reality technology in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example 1:
the invention provides an intelligent cabin immersive user experience method based on a virtual reality technology, which comprises the following steps of:
s1: based on the real-time body pose of the driver in the intelligent cabin (namely, pose data of the body of the driver acquired in real time), determining the real-time view field pose of the driver (namely, coordinate data of a space range corresponding to the real-time view field of the driver);
s2: generating the in-cabin head display basic picture at the current moment (namely, the picture, containing the local structure inside the intelligent cockpit and part of the driver's own body, that the driver could see under his real-time body pose and real-time view field pose without wearing the virtual reality head display device) based on the in-cabin video of the current vehicle's intelligent cockpit (namely, a video recording all visible structures inside the intelligent cockpit) and the real-time body pose and real-time view field pose of the driver in the intelligent cockpit, wherein the virtual reality head display device is a head-mounted display, namely a display that presents the virtual reality picture to the user;
s3: based on the real-time view field pose of the driver, performing view-angle conversion (namely, generating, from the real-time vehicle exterior environment video, the exterior picture that would be visible under the driver's real-time view field pose) and selection (namely, selecting from that video the part of the scene visible under the driver's real-time view field pose) on the real-time vehicle exterior environment video of the current vehicle, and generating the vehicle exterior head display basic picture at the current moment (namely, the picture, containing part of the scene outside the current vehicle, that the driver could see under the real-time view field pose without wearing the virtual reality head display device);
The real-time view field pose of the driver is thus reasonably determined from the driver's real-time body pose, and the in-cabin head display basic picture and the vehicle exterior head display basic picture are accurately generated through accurate selection and picture generation on the in-vehicle and out-of-vehicle scene models under that real-time view field pose;
s4: superimposing the in-cabin head display basic picture and the vehicle exterior head display basic picture to obtain the virtual reality head display picture (namely, the picture, containing the local structure inside the intelligent cabin, part of the driver's body and part of the scene outside the current vehicle, that the driver could see under the real-time view field pose without wearing the virtual reality head display device);
The virtual reality head display picture is generated by superimposing the in-cabin head display basic picture and the vehicle exterior head display basic picture, so that the picture displayed in the head display device restores the real visual effect more faithfully;
s5: based on the personalized driving control data in the intelligent cockpit (namely, the data related to vehicle driving input or output by intelligent control equipment in the intelligent cockpit, such as navigation route searched by a user in an intelligent cockpit navigator, and the like), the virtual reality head display picture is intelligently marked to obtain a complete virtual reality head display picture (namely, a new picture obtained after the personalized driving control data is marked on the virtual reality head display picture);
s6: and transmitting the complete picture of the virtual reality head display to the virtual reality head display equipment for display.
Intelligently marking and displaying the head display picture based on the personalized driving control data realizes the intelligent combination between the driving control data in the intelligent cabin and the head display picture displayed in the virtual reality device, further improving the driving experience and visual experience of the intelligent cabin.
Example 2:
based on example 1, S1: determining the real-time view field pose of the driver based on the real-time body pose of the driver in the intelligent cockpit, referring to fig. 2, comprises:
s101: extracting the real-time head pose (namely, pose data representing the current posture of the driver's head) and the real-time eye pose (namely, pose data representing the posture of the driver's eyeballs) of the driver from the real-time body pose of the driver in the intelligent cabin, wherein the real-time eye pose also contains pose data of the pupils, from which the degree of pupil dilation and the like can be determined;
s102: based on the real-time head pose and the real-time eye pose, a real-time field pose of the driver is determined.
The real-time visual field pose of the driver can be accurately determined through the real-time head pose and the real-time eye pose which are extracted from the real-time body pose of the driver.
Example 3:
based on example 2, S102: determining the real-time view field pose of the driver based on the real-time head pose and the real-time eye pose comprises:
based on the real-time head pose and a standard eye pose (namely, an eye pose whose corresponding view field pose is known in advance, for example the eye pose of a person staring at a point five meters straight ahead), generating the real-time standard view field pose for that head pose: taking as a constraint that the apex of the view angle in the standard view field pose corresponding to the standard eye pose coincides with the apex of the view angle determined from the real-time head pose, the standard view field pose corresponding to the standard eye pose is superimposed on the real-time head pose to obtain the real-time standard view field pose;
and carrying out azimuth correction on the real-time standard view field pose based on the real-time eye pose to obtain the driver's real-time view field pose (namely, correcting the real-time standard view field pose by the azimuth deviation of the real-time eye pose relative to the standard eye pose; for example, if the sight direction in the real-time eye pose deviates from that in the standard eye pose by an angle a, the sight direction of the real-time standard view field pose is rotated by the angle a to obtain a new sight direction, and the real-time standard view field pose is shifted accordingly to obtain the real-time view field pose).
The view field pose under the standard eye pose is first generated from the current real-time head pose; the resulting pose is then azimuth-corrected using the driver's real-time eye pose, so that the driver's real-time view field pose is determined accurately from the line of sight.
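The azimuth-correction example — rotating the standard sight direction by the eye-pose deviation angle a — can be written, restricted to the horizontal plane, as a plain 2-D rotation. Extending it to full 3-D poses would require a rotation matrix or quaternion; this sketch covers only the planar case described in the text.

```python
import math

def correct_sight_direction(standard_dir, deviation_deg):
    """Rotate the standard sight direction (x, z) in the horizontal plane by
    the eye-pose deviation angle a, as in the azimuth-correction example."""
    a = math.radians(deviation_deg)
    x, z = standard_dir
    return (x * math.cos(a) - z * math.sin(a),
            x * math.sin(a) + z * math.cos(a))
```

For instance, a sight direction straight ahead, (0, 1), rotated by a = 90 degrees, ends up pointing along (-1, 0).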
Example 4:
based on example 1, S2: generating the in-cabin head display basic picture at the current moment based on the in-cabin video of the intelligent cabin of the current vehicle and the real-time body pose and real-time view field pose of the driver comprises the following steps:
based on the video in the cabin of the intelligent cabin, building an in-cabin display model (namely, a model capable of restoring the three-dimensional size and display effect (such as color, texture of the surface of the structure and the like) of all structures in the intelligent cabin);
building a body display model of the driver based on the real-time body pose of the driver in the intelligent cabin (namely, a model capable of restoring the three-dimensional size and visual effect (such as skin color, structural surface texture and the like) of the driver's body structure in the intelligent cabin);
combining the cabin display model and the body display model to obtain a cabin panoramic display model (namely, a model capable of restoring the three-dimensional size and display effect of all structures in the intelligent cabin and the three-dimensional size and visual effect of the body structure of a driver in the intelligent cabin);
selecting a model of the panoramic display model in the cabin based on the real-time view pose of the driver to obtain a model in the view range (namely, a model corresponding to a part of a scene or a structure which can be seen by the driver in the panoramic display model in the cabin under the real-time view pose of the driver);
and generating and combining partition pictures based on the real-time visual field pose of the driver and the model within the visual field range to obtain the head display basic picture in the cabin.
Combining the in-cabin display model, built from the in-cabin video, with the body display model built from the real-time body pose realizes a model representation of the in-cabin structure of the intelligent cockpit together with the driver's real-time human-body form. Selecting from the built model based on the real-time view field pose yields the model of the in-cabin structure and body parts visible to the naked eye under the driver's current pose, and partition-picture generation and merging on that selected partial model ensure that the generated in-cabin head display basic picture highly restores the picture, containing the real-time in-cabin scene, that the driver would see under the current pose without wearing the virtual reality head display device; that is, a high restoration of the real-time in-cabin scene is realized.
Example 5:
on the basis of embodiment 1, generating and merging partition pictures based on the real-time view pose of the driver and the model within the view range to obtain an in-cabin head display basic picture, wherein the method comprises the following steps:
extracting the real-time pupil pose of the driver (namely, pose data representing the posture of the driver's pupils in the current state, from which the degree of pupil dilation and the like can be determined) from the real-time eye pose;
determining, based on the real-time pupil pose, the real-time transverse comfortable view angle range of the driver (namely, the driver's real-time comfortable view angle in the transverse direction, for example a 60-degree range centered on the sight direction corresponding to the real-time pupil pose), the real-time transverse edge view angle range (namely, the driver's real-time edge view angle in the transverse direction, for example the part of a 120-degree range centered on that sight direction that remains outside the real-time transverse comfortable view angle range), the real-time longitudinal comfortable view angle range (namely, the driver's real-time comfortable view angle in the longitudinal direction, for example a 55-degree range centered on that sight direction), the real-time longitudinal edge view angle range (namely, the driver's real-time edge view angle in the longitudinal direction, for example the part of a 135-degree range centered on that sight direction that remains outside the real-time longitudinal comfortable view angle range), the real-time comfortable depth of field (namely, the range of distances within which a scene images clearly in the driver's eyes) and the real-time edge depth of field (namely, the range of distances within which a scene still images in the driver's eyes but with reduced clarity);
based on a real-time transverse comfortable visual angle range, a real-time transverse edge visual angle range, a real-time longitudinal comfortable visual angle range, a real-time longitudinal edge visual angle range, a real-time comfortable depth of field and a real-time edge depth of field of a driver, carrying out detailed division marking on the real-time visual field pose, and obtaining the real-time comfortable visual field pose and the real-time edge visual field pose of the driver;
model partitioning is carried out on the model in the view field range based on the real-time comfortable view field pose and the real-time edge view field pose of the driver, and a model in the comfortable view field range and a model in the edge view field range are obtained;
and generating and combining partition pictures based on the model in the comfortable view field range and the model in the edge view field range to obtain the head display basic picture in the cabin.
In this embodiment, the distinction between the comfortable view angle or view field and the edge view angle or view field arises because scenes at different angular distances from the pupil position image differently within the eyeball, so scenes in different view field ranges have different imaging effects in the human eye. Scenes within the comfortable view field generally image more clearly, while the imaging effect of scenes in the edge view field is reduced by comparison.
Based on the real-time pupil pose, a correspondence table is looked up to determine the range parameters of the comfortable view field and of the edge view field. The comfortable view field pose and edge view field pose are determined from those range parameters, and model selection, picture generation and merging are then performed on the model within the view field range accordingly. By accurately determining the driver's comfortable and edge view fields, the generated in-cabin head display basic picture more faithfully restores the naked-eye visual effect.
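As a rough illustration, the division of the view field around the pupil gaze direction can be sketched in Python. The angle spans reuse the example values above (60°/120° transverse, 55°/135° longitudinal); the function and field names are assumptions for illustration, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class ViewPartition:
    comfort_h: tuple  # transverse comfortable range, degrees about gaze axis
    edge_h: tuple     # transverse outer limit (edge zone lies between the two)
    comfort_v: tuple  # longitudinal comfortable range
    edge_v: tuple     # longitudinal outer limit

def partition_view(gaze_yaw_deg: float, gaze_pitch_deg: float) -> ViewPartition:
    """Split the field of view about the pupil gaze direction into a
    comfortable zone and an edge zone, per the example spans in the text."""
    ch, lh = 60.0, 120.0   # transverse comfort span / full span (degrees)
    cv, lv = 55.0, 135.0   # longitudinal comfort span / full span
    return ViewPartition(
        comfort_h=(gaze_yaw_deg - ch / 2, gaze_yaw_deg + ch / 2),
        edge_h=(gaze_yaw_deg - lh / 2, gaze_yaw_deg + lh / 2),
        comfort_v=(gaze_pitch_deg - cv / 2, gaze_pitch_deg + cv / 2),
        edge_v=(gaze_pitch_deg - lv / 2, gaze_pitch_deg + lv / 2),
    )

def zone_of(yaw: float, pitch: float, p: ViewPartition) -> str:
    """Classify a viewing direction as 'comfort', 'edge', or 'outside'."""
    def within(x, rng):
        return rng[0] <= x <= rng[1]
    if within(yaw, p.comfort_h) and within(pitch, p.comfort_v):
        return "comfort"
    if within(yaw, p.edge_h) and within(pitch, p.edge_v):
        return "edge"
    return "outside"
```

Model vertices (or scene directions) classified as "comfort" versus "edge" would then feed the two rendering paths of embodiment 6.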
Example 6:
on the basis of embodiment 5, generating and merging partition pictures based on the model in the comfortable view field range and the model in the edge view field range to obtain the in-cabin head display basic picture, wherein the method comprises the following steps:
generating a non-perspective picture of the model in the comfortable view field range (namely, a picture of the model surface structure that the driver sees within the comfortable view field range at the real-time comfortable view field pose) based on the real-time comfortable view field pose and preset comfortable view field display parameters (namely, preset display parameters, such as contrast and chromaticity, for scenes within the comfortable view field in the in-cabin head display basic picture), and obtaining the picture in the comfortable view field (namely, a restoration of the scene picture within the comfortable view field as seen by the driver at the current real-time comfortable view field pose);
generating a non-perspective picture of the model in the edge view field range (namely, a picture of the model surface structure that the driver sees within the edge view field range at the real-time edge view field pose) based on the real-time edge view field pose and preset edge view field display parameters (namely, preset display parameters, such as contrast and chromaticity, for scenes within the edge view field in the in-cabin head display basic picture), and obtaining the picture in the edge view field (namely, a restoration of the scene picture within the edge view field as seen by the driver at the current real-time edge view field pose);
and smoothly splicing the images in the comfortable view field and the images in the edge view field to obtain the basic images of the head display in the cabin.
The real-time comfortable view field pose and real-time edge view field pose are determined and combined with the preset comfortable and edge view field display parameters to generate non-perspective pictures of the models in the comfortable and edge view field ranges respectively. This restores, region by region, the scene picture seen by the driver in the current view field; the restored pictures are then smoothly spliced, so that the resulting in-cabin head display basic picture is closer to the naked-eye visual effect.
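A minimal sketch of applying region-specific display parameters before merging: each region's pixels are scaled by its preset contrast and chroma. The parameter values and the (luma, chroma) pixel representation are assumptions for illustration:

```python
def apply_display_params(pixels, contrast: float, chroma: float):
    """Scale each (luma, chroma) pixel by the region's preset parameters;
    results are clamped to [0, 1]."""
    clamp = lambda v: max(0.0, min(1.0, v))
    return [(clamp(l * contrast), clamp(c * chroma)) for l, c in pixels]

# Assumed presets: the periphery is rendered with lower contrast and chroma,
# mimicking the reduced imaging quality of the edge view field.
COMFORT_PARAMS = {"contrast": 1.0, "chroma": 1.0}
EDGE_PARAMS = {"contrast": 0.8, "chroma": 0.7}

def compose_regions(comfort_pixels, edge_pixels):
    """Render both regions with their own display parameters; the caller
    would then splice the two results (see the smoothing step below)."""
    return (apply_display_params(comfort_pixels, **COMFORT_PARAMS),
            apply_display_params(edge_pixels, **EDGE_PARAMS))
```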
Example 7:
on the basis of example 6, smoothly splicing the picture in the comfortable view field and the picture in the edge view field to obtain the in-cabin head display basic picture, wherein the method comprises the following steps:
determining the three color attribute values of the outer edge pixel points (namely, the outermost ring of pixels of the picture in the comfortable view field) and of the secondary outer edge pixel points (namely, the ring of pixels adjacent to the outer edge pixel points in the picture in the comfortable view field), and the three color attribute values of the inner edge pixel points (namely, the innermost ring of pixels of the picture in the edge view field) and of the secondary inner edge pixel points (namely, the ring of pixels adjacent to the inner edge pixel points in the picture in the edge view field);
calculating a stitching adjacent gradient threshold for each color attribute (namely, the average of the comfortable-view-field picture's adjacent gradient threshold and the edge-view-field picture's adjacent gradient threshold for that attribute), where a picture's adjacent gradient threshold for a color attribute is the average difference in that attribute between every pixel in the picture and its neighboring pixel in a preset direction (for example, horizontally to the right);
and resetting, based on the stitching adjacent gradient threshold of each color attribute, the three color attribute values of the outer edge and secondary outer edge pixel points of the picture in the comfortable view field and of the inner edge and secondary inner edge pixel points of the picture in the edge view field (namely, so that after resetting, the difference in each color attribute between these pixel points and their neighboring pixel points in the preset direction does not exceed the stitching adjacent gradient threshold of the corresponding attribute), to obtain the in-cabin head display basic picture.
By resetting the three color attribute values of the outer edge, secondary outer edge, inner edge and secondary inner edge pixel points, the color attribute differences between adjacent pixels across the seam are kept within the stitching adjacent gradient threshold, completing the smooth splicing of the picture in the comfortable view field with the picture in the edge view field.
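The threshold computation and boundary reset can be illustrated on a single color attribute, treating the two pictures as 1-D rows joined at a seam. The clamping rule and the four-pixel reset window follow the description above; the concrete data representation is an assumption:

```python
def adjacent_gradient_threshold(row):
    """Average absolute difference of one color attribute between each pixel
    and its neighbour in a preset direction (here: the next pixel right)."""
    diffs = [abs(a - b) for a, b in zip(row, row[1:])]
    return sum(diffs) / len(diffs)

def smooth_stitch(comfort_row, edge_row):
    """Join two attribute rows so the step between the four seam pixels
    (secondary outer, outer, inner, secondary inner) and their neighbours
    never exceeds the stitching threshold (mean of the rows' thresholds)."""
    t = (adjacent_gradient_threshold(comfort_row) +
         adjacent_gradient_threshold(edge_row)) / 2
    joined = list(comfort_row) + list(edge_row)
    seam = len(comfort_row)  # index of the inner edge pixel
    for i in range(max(1, seam - 2), min(len(joined), seam + 2)):
        lo, hi = joined[i - 1] - t, joined[i - 1] + t
        joined[i] = min(max(joined[i], lo), hi)  # clamp the step to +-t
    return joined
```

On a row pair with a large brightness jump at the seam, the reset spreads the jump across the four boundary pixels instead of leaving a hard edge.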
Example 8:
based on example 1, S3: based on the real-time visual field pose of the driver, performing visual angle conversion and selection on the real-time vehicle exterior environment video of the current vehicle to generate an exterior head display basic picture at the current moment, wherein the method comprises the following steps:
model building is carried out based on a real-time vehicle external environment video of the current vehicle, and an external vehicle environment space model (namely, a model representing the three-dimensional size of a scene contained in the real-time vehicle external environment video) is obtained;
performing model selection on the external vehicle environment space model based on the real-time view field pose, and obtaining a model in the external view field (namely, a model representing the exterior environment scene that the driver can see at the current real-time view field pose);
and generating a non-perspective picture of the model in the external view field (namely, a picture of the model surface structure seen by the driver at the real-time view field pose) based on the real-time view field pose and preset view field display parameters (display parameters for the exterior scene display picture), and obtaining the exterior head display basic picture at the current moment.
Based on the real-time view field pose and the real-time exterior environment video of the current vehicle, a high-fidelity restoration of the exterior scene picture seen by the driver is achieved.
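The "model selection" on the exterior environment space model amounts to keeping only the geometry inside the driver's view frustum. A 2-D sketch (scene points and a horizontal view cone; the function and parameter names are illustrative assumptions):

```python
import math

def select_view_model(points, eye, yaw_deg, fov_deg):
    """Keep scene points that fall inside the horizontal view cone defined
    by the eye position, gaze yaw, and field-of-view angle; a stand-in for
    the model-selection step on the exterior environment space model."""
    half = math.radians(fov_deg / 2)
    yaw = math.radians(yaw_deg)
    kept = []
    for x, y in points:
        dx, dy = x - eye[0], y - eye[1]
        ang = math.atan2(dy, dx)
        # shortest signed angular distance to the gaze direction
        diff = (ang - yaw + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= half:
            kept.append((x, y))
    return kept
```

A full implementation would also cull by the comfortable/edge depth-of-field ranges and work on triangle meshes rather than points.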
Example 9:
based on example 1, S5: based on the individualized driving control data in the intelligent cockpit, the intelligent marking is carried out on the virtual reality head display picture, the complete picture of the virtual reality head display is obtained, and the intelligent marking method comprises the following steps:
extracting, from the personalized driving control data in the intelligent cockpit, the real-time driving control data of the superimposable display data items (namely, the personalized driving control data items that can be superimposed into the complete picture of the virtual reality head display, such as a navigation guide route or the distance to adjacent vehicles; the real-time driving control data are the specific data of these items, for example the guide data of the currently displayed navigation guide route);
converting the display form of the real-time driving control data (for example, converting the guide data of the navigation guide route into a guide-arrow picture component) to obtain the display data of the superimposable display data items (namely, the real-time driving control data converted into a data form that can be superimposed into the complete picture of the virtual reality head display);
and marking the display data in the virtual reality head display picture to obtain the complete picture of the virtual reality head display.
The head display picture is intelligently marked and displayed based on the personalized driving control data, realizing an intelligent combination of the driving control data in the intelligent cabin with the head display picture shown in the virtual reality device, and further improving both the driving experience and the visual experience of the head display picture.
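The conversion of superimposable data items into display components can be sketched as a small dispatch table. The item names (`nav_heading_deg`, `gap_to_lead_m`) and component fields are hypothetical, not taken from the patent:

```python
def to_display_component(item: str, value):
    """Convert one real-time driving-control datum into an overlay component
    (kind, payload, and a screen anchor); the converters are assumptions."""
    converters = {
        "nav_heading_deg": lambda v: {"kind": "guide_arrow", "angle": v,
                                      "anchor": "road_center"},
        "gap_to_lead_m": lambda v: {"kind": "distance_label",
                                    "text": f"{v:.0f} m", "anchor": "lead_car"},
    }
    return converters[item](value)

def mark_frame(frame: dict, driving_data: dict) -> dict:
    """Return the head-display frame with an 'overlays' layer holding the
    converted superimposable items."""
    overlays = [to_display_component(k, v) for k, v in driving_data.items()]
    return {**frame, "overlays": overlays}
```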
Example 10:
the invention provides an intelligent cabin immersive user experience system based on a virtual reality technology, and referring to fig. 3, the intelligent cabin immersive user experience system comprises:
the first pose determining module is used for determining the real-time visual field pose of the driver based on the real-time body pose of the driver in the intelligent cockpit;
the first picture generation module is used for generating an in-cabin head display basic picture at the current moment based on the in-cabin video of the intelligent cabin of the current vehicle, the real-time body pose and the real-time view field pose of the driver in the intelligent cabin;
the second picture generation module is used for carrying out visual angle conversion and selection on the real-time vehicle external environment video of the current vehicle based on the real-time visual field pose of the driver to generate an external head display basic picture at the current moment;
the head display picture superposition module is used for superposing the head display basic picture in the cabin and the head display basic picture outside the vehicle to obtain a virtual reality head display picture;
the data intelligent marking module is used for intelligently marking the virtual reality head display picture based on the personalized driving control data in the intelligent cockpit to obtain a complete picture of the virtual reality head display;
and the head display picture display module is used for transmitting the complete picture of the virtual reality head display to the virtual reality head display equipment for display.
The real-time view field pose of the driver is reasonably determined from the driver's real-time body pose. Based on the real-time view field pose, the in-cabin and exterior scene models are accurately selected and rendered, so that the in-cabin head display basic picture and the exterior head display basic picture are generated precisely; superimposing the two yields a virtual reality head display picture whose visual effect is restored to a higher degree in the head display device. The head display picture is then intelligently marked and displayed based on the personalized driving control data, realizing an intelligent combination of the driving control data in the intelligent cabin with the head display picture shown in the virtual reality device, and further improving both the driving experience and the visual experience.
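Under the assumption of placeholder helpers for each module, the end-to-end flow of the six modules (S1 to S6) can be sketched as:

```python
# Placeholder implementations so the flow is runnable end to end; each is a
# stand-in for the corresponding module described above, not the real logic.
def determine_view_pose(body_pose):                       # S1
    return {"gaze": body_pose.get("head", "forward")}

def render_cabin_frame(video, body_pose, view_pose):      # S2
    return {"layer": "cabin", "gaze": view_pose["gaze"]}

def render_exterior_frame(video, view_pose):              # S3
    return {"layer": "exterior", "gaze": view_pose["gaze"]}

def overlay(cabin, exterior):                             # S4
    # exterior scene behind, cabin interior in front
    return {"layers": [exterior["layer"], cabin["layer"]],
            "gaze": cabin["gaze"]}

def mark(frame, driving_data):                            # S5
    return {**frame, "marks": sorted(driving_data)}

class Headset:
    def display(self, frame):                             # S6
        self.last = frame

def experience_pipeline(body_pose, cabin_video, exterior_video,
                        driving_data, headset):
    """Chain the six modules into one frame-generation pass."""
    view_pose = determine_view_pose(body_pose)
    cabin_frame = render_cabin_frame(cabin_video, body_pose, view_pose)
    exterior_frame = render_exterior_frame(exterior_video, view_pose)
    frame = overlay(cabin_frame, exterior_frame)
    full = mark(frame, driving_data)
    headset.display(full)
    return full
```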
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. The intelligent cabin immersive user experience method based on the virtual reality technology is characterized by comprising the following steps of:
s1: determining the real-time visual field pose of the driver based on the real-time body pose of the driver in the intelligent cabin;
s2: generating an in-cabin head display basic picture at the current moment based on the in-cabin video of the intelligent cabin of the current vehicle, the real-time body pose and the real-time view field pose of a driver in the intelligent cabin;
s3: based on the real-time visual field pose of a driver, performing visual angle conversion and selection on a real-time vehicle exterior environment video of a current vehicle, and generating an exterior head display basic picture at the current moment;
s4: superposing the cabin head display basic picture and the vehicle exterior head display basic picture to obtain a virtual reality head display picture;
s5: based on personalized driving control data in the intelligent cockpit, intelligently marking the virtual reality head display picture to obtain a complete virtual reality head display picture;
s6: and transmitting the complete picture of the virtual reality head display to the virtual reality head display equipment for display.
2. The virtual reality technology-based intelligent cockpit immersive user experience method of claim 1, wherein S1: based on the real-time body pose of the driver in the intelligent cockpit, determining the real-time view field pose of the driver comprises:
s101: extracting the real-time head pose and the real-time eye pose of the driver from the real-time body pose of the driver in the intelligent cabin;
s102: based on the real-time head pose and the real-time eye pose, the real-time view field pose of the driver is determined.
3. The virtual reality technology-based intelligent cockpit immersive user experience method of claim 2, wherein S102: based on the real-time head pose and the real-time eye pose, determining the real-time view field pose of the driver includes:
generating a real-time standard view field pose of the head pose based on the real-time head pose and the standard eye pose;
and carrying out azimuth correction on the real-time standard visual field pose based on the real-time eye pose to obtain the real-time visual field pose of the driver.
4. The virtual reality technology-based intelligent cockpit immersive user experience method of claim 1, wherein S2: based on the real-time body pose and the real-time view field pose of the driver in the intelligent cabin of the current vehicle, generating the in-cabin head display basic picture at the current moment comprises the following steps:
building an in-cabin display model based on an in-cabin video of the intelligent cabin;
building a body display model of the driver based on the real-time body pose of the driver in the intelligent cabin;
combining the in-cabin display model and the body display model to obtain an in-cabin panoramic display model;
selecting a model of the panoramic display model in the cabin based on the real-time visual field pose of the driver to obtain a model in the visual field range;
and generating and combining partition pictures based on the real-time visual field pose of the driver and the model within the visual field range to obtain the head display basic picture in the cabin.
5. The virtual reality technology-based intelligent cabin immersive user experience method according to claim 1, wherein the generating and merging of the partition pictures based on the real-time view field pose of the driver and the model within the view field range to obtain the in-cabin head display basic picture comprises the following steps:
extracting the real-time pupil pose of the driver from the real-time eye pose;
based on the real-time pupil pose, determining a real-time transverse comfortable view angle range, a real-time transverse edge view angle range, a real-time longitudinal comfortable view angle range, a real-time longitudinal edge view angle range, a real-time comfortable depth of field and a real-time edge depth of field of a driver;
based on a real-time transverse comfortable visual angle range, a real-time transverse edge visual angle range, a real-time longitudinal comfortable visual angle range, a real-time longitudinal edge visual angle range, a real-time comfortable depth of field and a real-time edge depth of field of a driver, carrying out detailed division marking on the real-time visual field pose, and obtaining the real-time comfortable visual field pose and the real-time edge visual field pose of the driver;
model partitioning is carried out on the model in the view field range based on the real-time comfortable view field pose and the real-time edge view field pose of the driver, and a model in the comfortable view field range and a model in the edge view field range are obtained;
and generating and combining partition pictures based on the model in the comfortable view field range and the model in the edge view field range to obtain the head display basic picture in the cabin.
6. The virtual reality technology-based intelligent cabin immersive user experience method according to claim 5, wherein generating and merging partition pictures based on the model in the comfortable view field range and the model in the edge view field range to obtain the in-cabin head display basic picture comprises:
generating a non-perspective picture of the model in the comfortable view field range based on the real-time comfortable view field pose and the preset comfortable view field display parameters, and obtaining a picture in the comfortable view field;
generating a non-perspective picture of the internal model of the edge view field range based on the real-time edge view field pose and preset edge view field display parameters, and obtaining a picture in the edge view field;
and smoothly splicing the images in the comfortable view field and the images in the edge view field to obtain the basic images of the head display in the cabin.
7. The virtual reality technology-based intelligent cabin immersive user experience method of claim 6, wherein smoothly splicing the picture in the comfortable view field and the picture in the edge view field to obtain the in-cabin head display basic picture comprises:
determining the three color attribute values of the outer edge pixel points and of the secondary outer edge pixel points of the picture in the comfortable view field, and the three color attribute values of the inner edge pixel points and of the secondary inner edge pixel points of the picture in the edge view field;
calculating a stitching adjacent gradient threshold for each color attribute based on the adjacent gradient threshold of that color attribute for the picture in the comfortable view field and for the picture in the edge view field;
and resetting, based on the stitching adjacent gradient threshold of each color attribute, the three color attribute values of the outer edge pixel points and the secondary outer edge pixel points of the picture in the comfortable view field, and the three color attribute values of the inner edge pixel points and the secondary inner edge pixel points of the picture in the edge view field, to obtain the in-cabin head display basic picture.
8. The virtual reality technology-based intelligent cockpit immersive user experience method of claim 1, wherein S3: based on the real-time visual field pose of the driver, performing visual angle conversion and selection on the real-time vehicle exterior environment video of the current vehicle to generate an exterior head display basic picture at the current moment, wherein the method comprises the following steps:
building a model based on a real-time vehicle external environment video of the current vehicle to obtain an external vehicle environment space model;
performing model selection on the external vehicle environment space model based on the real-time view field pose to obtain a model in the external view field;
and generating a non-perspective picture of the model in the external view field based on the real-time view field pose and preset view field display parameters, and obtaining the exterior head display basic picture at the current moment.
9. The virtual reality technology-based intelligent cockpit immersive user experience method of claim 1, wherein S5: based on the individualized driving control data in the intelligent cockpit, the intelligent marking is carried out on the virtual reality head display picture, the complete picture of the virtual reality head display is obtained, and the intelligent marking method comprises the following steps:
extracting real-time driving control data of superimposable display data items from the personalized driving control data in the intelligent cockpit;
converting the display form of the real-time driving control data to obtain display data of the superimposable display data items;
and marking the display data in the virtual reality head display picture to obtain the complete picture of the virtual reality head display.
10. Intelligent cabin immersive user experience system based on virtual reality technology, which is characterized by comprising:
the first pose determining module is used for determining the real-time visual field pose of the driver based on the real-time body pose of the driver in the intelligent cockpit;
the first picture generation module is used for generating an in-cabin head display basic picture at the current moment based on the in-cabin video of the intelligent cabin of the current vehicle, the real-time body pose and the real-time view field pose of the driver in the intelligent cabin;
the second picture generation module is used for carrying out visual angle conversion and selection on the real-time vehicle external environment video of the current vehicle based on the real-time visual field pose of the driver to generate an external head display basic picture at the current moment;
the head display picture superposition module is used for superposing the head display basic picture in the cabin and the head display basic picture outside the vehicle to obtain a virtual reality head display picture;
the data intelligent marking module is used for intelligently marking the virtual reality head display picture based on the personalized driving control data in the intelligent cockpit to obtain a complete picture of the virtual reality head display;
and the head display picture display module is used for transmitting the complete picture of the virtual reality head display to the virtual reality head display equipment for display.
CN202311131303.2A 2023-09-04 2023-09-04 Intelligent cabin immersive user experience method and system based on virtual reality technology Pending CN117193530A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311131303.2A CN117193530A (en) 2023-09-04 2023-09-04 Intelligent cabin immersive user experience method and system based on virtual reality technology

Publications (1)

Publication Number Publication Date
CN117193530A true CN117193530A (en) 2023-12-08

Family

ID=88995459

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311131303.2A Pending CN117193530A (en) 2023-09-04 2023-09-04 Intelligent cabin immersive user experience method and system based on virtual reality technology

Country Status (1)

Country Link
CN (1) CN117193530A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105320820A (en) * 2015-12-02 2016-02-10 上海航空电器有限公司 Rapid cockpit design system and method based on immersive virtual reality platform
CN113306491A (en) * 2021-06-17 2021-08-27 深圳普捷利科技有限公司 Intelligent cabin system based on real-time streaming media
US20220004254A1 (en) * 2020-07-01 2022-01-06 The Salty Quilted Gentlemen, LLC Methods and systems for providing an immersive virtual reality experience
CN114625247A (en) * 2022-02-15 2022-06-14 广州小鹏汽车科技有限公司 Scene display system and method based on virtual reality and vehicle
US11562550B1 (en) * 2021-10-06 2023-01-24 Qualcomm Incorporated Vehicle and mobile device interface for vehicle occupant assistance
CN116110270A (en) * 2023-02-10 2023-05-12 深圳市邦康工业机器人科技有限公司 Multi-degree-of-freedom driving simulator based on mixed reality

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
WANG Xiaowei et al., "A survey of UAV intelligent monitoring systems based on virtual reality", Aerodynamic Missile Journal (飞航导弹), 2 March 2020 (2020-03-02), page 26 *

Similar Documents

Publication Publication Date Title
US11914147B2 (en) Image generation apparatus and image generation method using frequency lower than display frame rate
Azuma A survey of augmented reality
US20160267720A1 (en) Pleasant and Realistic Virtual/Augmented/Mixed Reality Experience
US11854171B2 (en) Compensation for deformation in head mounted display systems
US20160307374A1 (en) Method and system for providing information associated with a view of a real environment superimposed with a virtual object
CN105432078B (en) Binocular gaze imaging method and equipment
CN105704479A (en) Interpupillary distance measuring method and system for 3D display system and display device
JPH1165431A (en) Device and system for car navigation with scenery label
CN110488979A (en) A kind of automobile showing system based on augmented reality
US20130265331A1 (en) Virtual Reality Telescopic Observation System of Intelligent Electronic Device and Method Thereof
CN114401414A (en) Immersive live broadcast information display method and system and information push method
CN109764888A (en) Display system and display methods
CN117916706A (en) Method for operating smart glasses in a motor vehicle during driving, correspondingly operable smart glasses and motor vehicle
CN105814604B (en) For providing location information or mobile message with the method and system of at least one function for controlling vehicle
JPH07200870A (en) Stereoscopic three-dimensional image generator
KR102490465B1 (en) Method and apparatus for rear view using augmented reality camera
CN108983963B (en) Vehicle virtual reality system model establishing method and system
US10567744B1 (en) Camera-based display method and system for simulators
KR101947372B1 (en) Method of providing position corrected images to a head mount display and method of displaying position corrected images to a head mount display, and a head mount display for displaying the position corrected images
CN117193530A (en) Intelligent cabin immersive user experience method and system based on virtual reality technology
US11521297B2 (en) Method and device for presenting AR information based on video communication technology
CN107784693B (en) Information processing method and device
CN115223231A (en) Sight direction detection method and device
CA3018454C (en) Camera-based display method and system for simulators
CN115457220B (en) Simulator multi-screen visual simulation method based on dynamic viewpoint

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination