CN113221381B - Design method of virtual reality multi-view fusion model - Google Patents

Design method of virtual reality multi-view fusion model

Info

Publication number
CN113221381B
CN113221381B (application CN202110609017.7A)
Authority
CN
China
Prior art keywords
view
auxiliary
fusion
visual angle
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110609017.7A
Other languages
Chinese (zh)
Other versions
CN113221381A (en)
Inventor
姚寿文
栗丽辉
王瑀
胡子然
兰泽令
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202110609017.7A priority Critical patent/CN113221381B/en
Publication of CN113221381A publication Critical patent/CN113221381A/en
Application granted granted Critical
Publication of CN113221381B publication Critical patent/CN113221381B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F30/00Computer-aided design [CAD]
    • G06F30/20Design optimisation, verification or simulation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/25Fusion techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The invention discloses a design method for a virtual reality multi-view fusion model, comprising the following steps: determining the viewing-angle types according to the interaction task to obtain a main viewing angle, an auxiliary viewing angle, the configuration mode between them, and the fusion method; and constructing a multi-view fusion model according to the configuration mode and the fusion method, the model being used to obtain a main-view image and an auxiliary-view image and to fuse the auxiliary-view image into the main-view image to produce a multi-view fused image. With this design method, the user can determine, step by step, the configuration mode of the main and auxiliary viewing angles and the fusion method of the auxiliary viewing angle, according to the different demands of the interaction task on the user's spatial perception and interaction precision and on the information richness, intuitiveness and degree of user intervention of the auxiliary view. A suitable multi-view fusion model is thereby selected that provides the user with spatial information about the surrounding environment while ensuring correct limb motion postures and natural interaction.

Description

Design method of virtual reality multi-view fusion model
Technical Field
The invention relates to the field of virtual reality multi-view fusion, in particular to a design method of a virtual reality multi-view fusion model.
Background
A high-power-density integrated transmission has a compact and complex structure. Good ergonomics is key to improving product assembly quality, but it is difficult to account for at the conceptual design stage. Virtual reality technology offers immersion and interactivity, enabling ergonomics evaluation with a stronger sense of presence and realism. In current virtual reality systems, however, the user cannot perceive spatial information outside the field of view, so the real-time correctness of assembly-posture simulation in virtual assembly is hard to guarantee, and only static evaluation of fixed actions can be carried out.
To ensure correct assembly-posture simulation in ergonomics evaluation, the virtual reality environment needs whole-body collision feedback. Current haptic feedback devices provide collision feedback by applying physical stimuli directly to the user's body to simulate touch, but they are expensive and have limitations. Wearable devices can restrict or interfere with the natural movement of a worker; robotic-arm-based haptic devices can apply force only at a single point and cannot provide whole-body feedback; and vibrotactile feedback suits can restrict user movement and reduce posture-simulation accuracy.
Current research shows that the first-person perspective (1PP) is suitable for fine hand-interaction tasks: it gives the user better perception of hand and arm positions and higher interaction accuracy and efficiency. Its drawback is a narrow field of view in which the whole-body motion posture cannot be seen, making it difficult to guarantee the correctness of the whole-body posture in assembly simulation. The third-person perspective (3PP) expands the user's range of spatial perception, allowing the user to perceive the spatial relationship between the body and the surrounding environment and to observe the virtual human's activity in the environment. Its drawbacks are that it does not match natural human interaction habits and that observation of the hand and arm assembly area is limited, reducing interaction accuracy and efficiency. Combining 1PP and 3PP helps improve the user's interaction performance in the virtual environment. However, the effect of fusing 1PP and 3PP on whole-body collision perception and virtual-human motion control in the virtual environment has not been verified.
Disclosure of Invention
The invention aims to provide a design method for a virtual reality multi-view fusion model that solves the above problems in the prior art, allowing the user to combine accurate, efficient interactive operation with perception of the spatial relationship between the body and the surrounding environment in a virtual environment, thereby improving the real-time performance and accuracy of assembly-posture simulation in virtual assembly.
In order to achieve the purpose, the invention provides the following scheme: the invention provides a design method of a virtual reality multi-view fusion model, which comprises the following steps:
acquiring a visual angle type according to an interaction task, and obtaining a main visual angle, an auxiliary visual angle, a configuration mode and a fusion method between the main visual angle and the auxiliary visual angle;
and constructing a multi-view fusion model according to the configuration mode and the fusion method, wherein the multi-view fusion model is used for obtaining a main view image and an auxiliary view image, and fusing the auxiliary view image into the main view image to obtain a multi-view fusion image.
Preferably, in the process of acquiring the view types, the view types at least include a main view type and an auxiliary view type, and the main view type and the auxiliary view type respectively include a first-person view or a third-person view.
Preferably, in the process of obtaining the configuration mode, the main view type is set as a first person view, and the auxiliary view type is set as a third person view.
Preferably, in the process of obtaining the configuration mode, the main view type is set as a third person view, and the auxiliary view type is set as a first person view.
Preferably, the fusion method is obtained according to the configuration mode and specifies that the third-person perspective is fused into the first-person perspective through different image fusion manners, which at least include a handheld picture-in-picture manner, a head-up-display picture-in-picture manner, and a handheld world-in-miniature manner.
Preferably, the fusion method is obtained according to the configuration mode and specifies that the first-person perspective is fused into the third-person perspective through different image fusion manners, which at least include a handheld picture-in-picture manner and a head-up-display picture-in-picture manner.
Preferably, in the process of obtaining the fusion method, the main viewing angle adopts a first virtual camera aligned with the head of the virtual person; the auxiliary visual angle camera is fixed behind the center of the virtual scene and faces the virtual human.
Preferably, in the process of obtaining the fusion method, the main view camera is fixed behind the center of the virtual scene, faces the virtual human, and rotates along with the rotation of the HMD; the auxiliary visual angle camera adopts a virtual camera fixed on the head of a virtual person.
Preferably, the first virtual camera has a field angle of 110 °, and the auxiliary viewing angle camera has a field angle of 60 °.
Preferably, the main view angle camera has a view angle of 110 °, and the auxiliary view angle camera has a view angle of 60 °.
The invention discloses the following technical effects:
according to the design method of the virtual reality multi-view fusion model, provided by the invention, multiple views are fused, so that a user can observe the fine interaction of hands through the first person view and can also perceive the spatial position relation between the body and the surrounding environment and the whole body movement state by means of the third person view, the accuracy and intuition of the assembly operation are ensured, the spatial information of the surrounding environment is provided for the user, and the correctness of the body movement posture and the naturalness of the interaction are ensured. The multi-view fusion model improves the whole-body collision perception, the virtual human motion control and the intuition of the user in the virtual environment, provides more space information and collision details of the region of interest outside the field of view for the user, enables the user to understand the spatial position relationship between the limbs and the barriers of the user more easily, and can observe hand interaction operation more intuitively and clearly. When the assembly task of a compact assembly space is carried out in a virtual reality environment, a user can visually observe a hand interaction area through the multi-view fusion model, the accuracy and the efficiency of the completion of the assembly task are guaranteed, the whole body movement posture is favorably observed, the spatial position relation between the body of the user and surrounding parts is sensed, the interpenetration of the body and the parts is reduced, and the simulation accuracy of the assembly posture is guaranteed.
Drawings
In order to illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings required for the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and other drawings can be derived from them by those skilled in the art without inventive effort.
FIG. 1 shows the main and auxiliary viewing-angle configuration modes applicable to different requirements of an interaction task on user spatial perception and interaction precision in an embodiment of the present invention;
FIG. 2 is a diagram illustrating the distribution of different auxiliary view fusion methods in three dimensions of information richness, intuition and user intervention level in an embodiment of the present invention;
fig. 3 is a schematic diagram of the 1PP camera setting in an embodiment of the present invention, where (a) is a 1PP modeling schematic diagram and (b) is the picture observed by the user under 1PP;
fig. 4 is a schematic diagram of the 3PP camera setting in an embodiment of the present invention, where (a) is a 3PP modeling schematic diagram and (b) is the picture observed by the user under 3PP;
fig. 5 is a schematic diagram of the main and auxiliary viewing-angle configuration modes in an embodiment of the present invention, where (a) is a schematic diagram of the 1PP and 3PP cameras and (b) is a schematic diagram of the main viewing angle and the auxiliary viewing angle;
FIG. 6 is a schematic view of the 1PP and 3PP cameras built in the virtual environment and the views observed by the two cameras according to an embodiment of the present invention;
FIG. 7 is a diagram of the HH PIP model in accordance with an embodiment of the present invention;
FIG. 8 is a diagram of a HUD PIP model according to an embodiment of the present invention;
FIG. 9 is a WIM model diagram in accordance with an embodiment of the present invention;
FIG. 10 is a schematic diagram of five multi-view fusion models according to an embodiment of the present invention;
FIG. 11 is a pictorial illustration of the gear assembly task for the front gearbox of an integrated transmission in accordance with an embodiment of the present invention, (a) a user assembling gears in the original system, and (b) a user assembling gears in the improved system;
FIG. 12 is a schematic diagram of an image of a user standing up to prepare to grab a shaft according to an embodiment of the present invention, (a) a schematic diagram of the user grabbing in the original system, and (b) a schematic diagram of the user grabbing in the improved system;
fig. 13 is a schematic flow chart of a design method of a virtual reality multi-view fusion model according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
The invention provides a design method for a virtual reality multi-view fusion model; referring to fig. 13, the method comprises the following steps:
acquiring the viewing-angle types according to the interaction task, and obtaining a main viewing angle, an auxiliary viewing angle, the configuration mode between the main and auxiliary viewing angles, and the fusion method.
As shown in FIG. 1, applicable main and auxiliary viewing-angle configuration modes are given for the different requirements of the interaction task on user spatial perception and interaction precision. The horizontal axis represents the operation-precision requirement of the interaction task: the left side indicates a low precision requirement and the right side a high one. The vertical axis represents the spatial-perception requirement: upward indicates that the task relies more on the user's perception of global spatial information, and downward indicates that it relies more on perception of information in nearby space.
Different auxiliary-view fusion methods differ in information richness, intuitiveness and degree of user intervention. FIG. 2 shows the distribution of the different auxiliary-view fusion methods along these three dimensions. Information richness refers to how complete the information provided to the user by the auxiliary view is, including spatial information, collision information, and state information of the virtual human.
In the present embodiment, the viewing-angle types include the first-person perspective (1PP) and the third-person perspective (3PP). 1PP gives the user better perception of near-field space and higher operation precision; 3PP gives the user better global spatial perception but cannot guarantee operation precision. Therefore, according to the different requirements of the interaction task on user spatial perception and interaction precision, the following main/auxiliary viewing-angle configuration modes exist (a code sketch of this selection follows the list):
(1) when the interaction task places high demands on near-field spatial perception and on operation precision, 1PP is adopted as the observation viewing angle;
(2) when the interaction task places some demand on global spatial perception and a high demand on operation precision, 1PP+3PP (1PP as the main viewing angle, 3PP as the auxiliary viewing angle) is adopted as the observation viewing angle;
(3) when the interaction task places high demands on global spatial perception and on operation precision, 1PP+3PP(WIM) (1PP as the main viewing angle with the 3PP auxiliary viewing angle fused by the three-dimensional WIM method) is adopted as the observation viewing angle;
(4) when the interaction task places a high demand on global spatial perception but a low demand on operation precision, 3PP is adopted as the observation viewing angle;
(5) when the interaction task places a high demand on global spatial perception and some demand on operation precision, 3PP+1PP (3PP as the main viewing angle, 1PP as the auxiliary viewing angle) is adopted as the observation viewing angle;
(6) when the interaction task places no high demand on either global spatial perception or operation precision, 1PP or 3PP is adopted as the observation viewing angle.
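As a reading aid only, the selection logic in items (1) to (6) can be written down as a small lookup. The patent prescribes no code; the TypeScript below is an illustrative sketch under that assumption, and every type and function name is invented for this example.

```typescript
// Illustrative encoding of the configuration-mode selection above.
// All names are hypothetical; the patent itself does not prescribe code.

type Demand = "low" | "some" | "high";

interface InteractionTask {
  globalSpatialPerception: Demand; // demand on global spatial awareness
  operationPrecision: Demand;      // demand on fine manipulation accuracy
}

type ViewingMode =
  | "1PP"          // first-person only
  | "3PP"          // third-person only
  | "1PP+3PP"      // 1PP main, 3PP auxiliary
  | "1PP+3PP(WIM)" // 1PP main, 3PP auxiliary fused as a 3D world-in-miniature
  | "3PP+1PP";     // 3PP main, 1PP auxiliary

function selectViewingMode(task: InteractionTask): ViewingMode {
  const { globalSpatialPerception: g, operationPrecision: p } = task;
  if (g === "low" && p === "high") return "1PP";            // case (1): near-field perception + high precision
  if (g === "some" && p === "high") return "1PP+3PP";       // case (2)
  if (g === "high" && p === "high") return "1PP+3PP(WIM)";  // case (3)
  if (g === "high" && p === "low") return "3PP";            // case (4)
  if (g === "high" && p === "some") return "3PP+1PP";       // case (5)
  return "1PP";                                             // case (6): low demands, 1PP or 3PP both acceptable
}
```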
The first-person perspective 1PP is the usual viewing perspective of current virtual reality devices and matches the user's natural viewing habits. Head-motion pose data of the wearer are obtained from the head-tracking sensors of a head-mounted display (HMD), and the pose of the viewing camera in the virtual environment is updated so that the camera's viewing direction coincides with the operator's head orientation. The binocular display of the HMD projects images of the virtual environment onto the user's retinas, producing a stereoscopic impression of the virtual environment based on binocular parallax. With this technique, the operator can observe the virtual environment from a 1PP that matches natural human observation habits.
This embodiment uses an HTC Vive HMD to capture the operator's head motion and present the virtual scene. The 32 infrared sensors on the Vive HMD are arranged in a special layout so that the HMD can receive positioning signals sent by the Vive base stations from various angles. The Vive base stations are two infrared emitters set up in the real environment; they sweep infrared signals across the user's activity space at a fixed frequency, and at any moment the reception times at the different infrared sensors differ slightly. From the signal-reception timing of the infrared sensors and their positions on the HMD, the displacement and rotation of the HMD are computed and used to update the pose of the main observation camera in the virtual environment. When the virtual environment is built, the main observation camera is made to coincide with the head node of the virtual human, so that the camera moves with the virtual human's head and renders the virtual environment from the virtual human's 1PP. Finally, the view rendered by the main camera is shown to the user on the HMD's binocular stereoscopic display, and the user observes the virtual environment from a 1PP with self-avatar perception.
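The head-tracking loop described above can be sketched as follows. The patent does not name a rendering engine; Three.js and a hypothetical getHmdPose() tracking hook (standing in for the SteamVR/OpenVR pose supplied by the Vive) are assumed here purely for illustration.

```typescript
import * as THREE from "three";

// Sketch: keep the 1PP main camera coincident with the avatar's head node,
// driven by the HMD pose. getHmdPose is a placeholder for whatever tracking
// API supplies the headset position and orientation each frame.

interface HmdPose {
  position: THREE.Vector3;
  orientation: THREE.Quaternion;
}

declare function getHmdPose(): HmdPose; // hypothetical tracking hook

// 110 degrees follows the field angle given in the patent; mapping it onto
// Three.js' vertical FOV parameter is an approximation for this sketch.
const mainCamera = new THREE.PerspectiveCamera(110, 16 / 9, 0.01, 100);
const avatarHead = new THREE.Object3D(); // head node of the self-avatar

function updateFirstPersonView(): void {
  const pose = getHmdPose();
  avatarHead.position.copy(pose.position);       // move the avatar head with the wearer
  avatarHead.quaternion.copy(pose.orientation);
  mainCamera.position.copy(avatarHead.position); // main camera stays coincident with the head node
  mainCamera.quaternion.copy(avatarHead.quaternion);
}
```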
In this embodiment, the assembly task of the high-power-density integrated transmission places high demands on both operation precision and global spatial perception, so the first-person perspective (1PP) and the third-person perspective (3PP) need to be configured together: either (1) 1PP is used as the main viewing angle and 3PP as the auxiliary viewing angle (abbreviated 1PP+3PP), or (2) 3PP is used as the main viewing angle and 1PP as the auxiliary viewing angle (abbreviated 3PP+1PP).
The main/auxiliary viewing-angle configuration mode determines how the 1PP and the 3PP are presented on the interactive interface. The multi-view fusion model provided in this embodiment lets the user observe the virtual environment through the 1PP and the 3PP at the same time, as shown in FIG. 5. In the interactive interface shown in FIG. 5(b), the main viewing angle occupies most of the user's field of view, and the auxiliary viewing angle is integrated into the display interface through a suitable fusion method. By permutation and combination, the scenes observed by the 1PP and 3PP virtual cameras shown in FIG. 5(a) are presented in the main and auxiliary viewing angles shown in FIG. 5(b), producing the two main/auxiliary configuration modes 1PP+3PP and 3PP+1PP. FIG. 6 shows the 1PP and 3PP cameras built in the virtual environment and the views observed by the two cameras.
In the 1PP+3PP configuration mode, the main viewing angle is realized by a virtual camera with a 110° field of view aligned with the head of the virtual human; its position and rotation follow the movement of the HMD. The auxiliary-view camera is fixed 4 m behind and 3 m above the scene centre, faces the virtual human, and has a 60° field of view.
In the 3PP+1PP configuration mode, the main-view camera is fixed at the same position as the auxiliary-view camera of the 1PP+3PP mode and rotates with the rotation of the HMD; the auxiliary-view camera is a virtual camera fixed to the head of the virtual human. The fields of view of the main and auxiliary cameras are the same as in the 1PP+3PP configuration mode.
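A minimal sketch of the two cameras just described, again with Three.js assumed as a stand-in for the unnamed engine. The 110° and 60° fields of view and the 4 m / 3 m offsets are taken from the text; axis directions and clipping planes are assumptions of this sketch.

```typescript
import * as THREE from "three";

const sceneCenter = new THREE.Vector3(0, 0, 0);
const avatarHead = new THREE.Object3D(); // head node driven by the HMD, as in the earlier sketch

// Head-aligned camera (1PP): main view in 1PP+3PP, auxiliary view in 3PP+1PP.
const headCamera = new THREE.PerspectiveCamera(110, 16 / 9, 0.01, 100);
avatarHead.add(headCamera); // follows the HMD through the avatar head node

// Fixed camera (3PP): auxiliary view in 1PP+3PP, main view in 3PP+1PP
// (where it additionally rotates with the HMD).
const fixedCamera = new THREE.PerspectiveCamera(60, 16 / 9, 0.1, 200);
// 3 m above and 4 m behind the scene centre; the sign of the "behind" axis is an assumption.
fixedCamera.position.set(sceneCenter.x, sceneCenter.y + 3, sceneCenter.z - 4);
fixedCamera.lookAt(avatarHead.getWorldPosition(new THREE.Vector3())); // face the virtual human
```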
In order to better combine the advantages of the main and auxiliary viewing angles, prevent the auxiliary viewing angle from interfering with normal interaction in the main viewing angle, and still provide sufficient information, a method for fusing the auxiliary-view image interface into the main-view image interface needs to be determined according to the chosen configuration mode.
Auxiliary-picture presentation modes include picture-in-picture (PIP) and world-in-miniature (WIM). PIP is a two-dimensional presentation mode that renders the scene observed by a virtual camera as a picture-in-picture in the user's field of view through the HMD. WIM is a three-dimensional presentation mode, equivalent to a miniature copy of the virtual reality environment, whose observation angle can be freely adjusted by rotating the WIM model with the hand.
In the handheld (HH) manipulation mode, the presentation is anchored to the hand: the user can adjust its position by hand, for example its distance to the eyes, and can choose whether to bring it into the field of view as needed. In the head-up display (HUD) manipulation mode, the presentation is fixed at a position in the field of view; the HUD is stationary relative to the user's head and is always visible in the user's field of view.
In this embodiment, based on the presentation modes and manipulation modes, three auxiliary-view fusion methods are established: handheld picture-in-picture (HH PIP), head-up-display picture-in-picture (HUD PIP), and handheld world-in-miniature (WIM).
Of the three auxiliary-view fusion methods, the HUD PIP displays the least information: because it is fixed at a position in the user's field of view and stationary relative to the head, only its display position in the left or right part of the field of view can be switched, and its distance to the eyes cannot be adjusted as needed. The HH PIP displays a moderate amount of information: because it is anchored to the hand, the user can switch it between the left and right hands, adjust its position by hand (for example, its distance to the eyes), and choose whether to bring it into the field of view. The WIM can display the most information: since it is a three-dimensional presentation mode, the user can adjust the observation angle by rotating the wrist and thus acquire environmental information in three-dimensional space.
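Both PIP fusion methods rely on the same rendering mechanism: the auxiliary camera renders into an off-screen texture each frame, and that texture is shown on a small quad inside the main view. The sketch below illustrates this idea with Three.js (assumed, not specified by the patent); the 200 mm × 150 mm quad size follows the figures given below for the HH PIP and HUD PIP models.

```typescript
import * as THREE from "three";

const renderer = new THREE.WebGLRenderer();
const scene = new THREE.Scene();
const mainCamera = new THREE.PerspectiveCamera(110, 16 / 9, 0.01, 100);
const auxCamera = new THREE.PerspectiveCamera(60, 4 / 3, 0.1, 200);

// Off-screen target that receives the auxiliary view each frame.
const auxTarget = new THREE.WebGLRenderTarget(800, 600);

// 200 mm x 150 mm quad that displays the auxiliary view inside the main view.
const pipQuad = new THREE.Mesh(
  new THREE.PlaneGeometry(0.2, 0.15),
  new THREE.MeshBasicMaterial({ map: auxTarget.texture })
);
scene.add(pipQuad);

function renderFrame(): void {
  pipQuad.visible = false;            // keep the PIP from appearing in its own image
  renderer.setRenderTarget(auxTarget);
  renderer.render(scene, auxCamera);  // auxiliary view -> texture
  renderer.setRenderTarget(null);
  pipQuad.visible = true;
  renderer.render(scene, mainCamera); // main view, with the PIP quad visible
}
```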
Referring to fig. 7, the HH PIP model built in the virtual environment presents its image interface. In the 1PP+3PP and 3PP+1PP observation modes, the HH PIP is established on a hand joint of the avatar and on the handheld controller, respectively. The HH PIP always faces the user; its initial size is set to 200 mm × 150 mm, and its initial position is 200 mm above the user's hand.
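A sketch of the HH PIP anchoring just described, with Three.js assumed; pipQuad is the textured plane from the PIP sketch above, and handNode stands for the avatar hand joint or the handheld controller.

```typescript
import * as THREE from "three";

// Anchor the PIP quad to the hand, start it 200 mm above the hand,
// and keep it facing the user each frame.
function attachHandheldPip(
  pipQuad: THREE.Mesh,
  handNode: THREE.Object3D,
  mainCamera: THREE.Camera
): () => void {
  handNode.add(pipQuad);
  pipQuad.position.set(0, 0.2, 0); // initial position: 200 mm above the hand

  // Returned per-frame update: turn the PIP toward the user (the main camera).
  return () => {
    const camWorld = mainCamera.getWorldPosition(new THREE.Vector3());
    pipQuad.lookAt(camWorld);
  };
}
```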
Referring to fig. 8, the HUD PIP model built in the virtual reality environment presents its image interface. Unlike the HH PIP, the HUD PIP is always fixed in place; its initial size is set to 200 mm × 150 mm, and its initial position is offset 150 mm to the left of and 500 mm in front of the main-view camera.
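The HUD PIP placement can be sketched in the same assumed setting by parenting the quad to the main camera, so that it stays fixed in the field of view; the 150 mm / 500 mm offsets follow the text, and the Three.js convention that the camera looks down its local -Z axis is assumed.

```typescript
import * as THREE from "three";

function attachHudPip(pipQuad: THREE.Mesh, mainCamera: THREE.Camera): void {
  mainCamera.add(pipQuad);              // rigidly follows the head
  pipQuad.position.set(-0.15, 0, -0.5); // 150 mm to the left, 500 mm in front
  // Note: the camera itself must be added to the scene for its children to be rendered.
}
```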
Referring to FIG. 9, the WIM model built in the virtual reality environment presents its image interface. The biggest difference from the PIP fusion methods is that PIP is a two-dimensional presentation while WIM is three-dimensional. The WIM is established on the user's hand, and the observation angle can be freely adjusted by rotating the WIM model with the hand.
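A sketch of the WIM construction under the same assumptions: a scaled-down copy of the environment is attached to the hand node so that rotating the wrist rotates the miniature. A static clone is shown; keeping the miniature synchronized with a dynamic scene (moving parts, the avatar) would require copying object poses each frame, which is omitted here. The 1:50 scale is illustrative.

```typescript
import * as THREE from "three";

function attachWorldInMiniature(
  environmentRoot: THREE.Object3D,
  handNode: THREE.Object3D,
  scale = 0.02 // illustrative 1:50 miniature
): THREE.Object3D {
  const wim = environmentRoot.clone(true); // recursive copy of the environment
  wim.scale.setScalar(scale);
  wim.position.set(0, 0.15, 0);            // float slightly above the hand
  handNode.add(wim);                       // wrist rotation now rotates the miniature
  return wim;
}
```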
Two main/auxiliary viewing-angle configuration modes have thus been constructed, 1PP+3PP and 3PP+1PP, together with three auxiliary-view fusion methods: HH PIP, HUD PIP and WIM. The WIM method can fuse only the 3PP; a 1PP view cannot be expressed in a WIM model. Therefore, the following five multi-view fusion models are established in the virtual reality environment:
(1) 1PP as the main viewing angle with the 3PP auxiliary viewing angle fused by the HH PIP method, abbreviated 1PP+3PP(HH PIP);
(2) 1PP as the main viewing angle with the 3PP auxiliary viewing angle fused by the HUD PIP method, abbreviated 1PP+3PP(HUD PIP);
(3) 1PP as the main viewing angle with the 3PP auxiliary viewing angle fused by the WIM method, abbreviated 1PP+3PP(WIM);
(4) 3PP as the main viewing angle with the 1PP auxiliary viewing angle fused by the HH PIP method, abbreviated 3PP+1PP(HH PIP);
(5) 3PP as the main viewing angle with the 1PP auxiliary viewing angle fused by the HUD PIP method, abbreviated 3PP+1PP(HUD PIP).
The modeling results of the five multi-view fusion models are shown in fig. 10.
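For reference, the five models can be enumerated as simple configuration records; the notation and names below are ad hoc for this sketch, not the patent's.

```typescript
type ViewKind = "1PP" | "3PP";
type AuxFusion = "HH_PIP" | "HUD_PIP" | "WIM";

interface FusionModel {
  main: ViewKind;
  aux: ViewKind;
  fusion: AuxFusion;
}

const FUSION_MODELS: FusionModel[] = [
  { main: "1PP", aux: "3PP", fusion: "HH_PIP" },  // 1PP+3PP(HH PIP)
  { main: "1PP", aux: "3PP", fusion: "HUD_PIP" }, // 1PP+3PP(HUD PIP)
  { main: "1PP", aux: "3PP", fusion: "WIM" },     // 1PP+3PP(WIM)
  { main: "3PP", aux: "1PP", fusion: "HH_PIP" },  // 3PP+1PP(HH PIP)
  { main: "3PP", aux: "1PP", fusion: "HUD_PIP" }, // 3PP+1PP(HUD PIP)
  // WIM is omitted for a 1PP auxiliary view: 1PP cannot be expressed as a WIM.
];
```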
The three auxiliary-view fusion methods HH PIP, HUD PIP and WIM each have their own advantages. Because the HH PIP is anchored to the hand, the user can adjust its position by hand, for example its distance to the eyes, and can choose whether to bring it into the field of view, giving it flexibility, convenience and high adjustability. Because the HUD PIP is fixed at a position in the user's field of view, always visible and stationary relative to the head, it consumes little of the user's attention, imposes a low cognitive load, and is easy to check at any time. Because the WIM is a three-dimensional presentation mode, the user can change the observation angle by rotating the wrist, so it offers strong intuitiveness, flexible manipulation and a freely adjustable observation angle.
The 3PP auxiliary viewing angle in 1PP+3PP and the 1PP auxiliary viewing angle in 3PP+1PP also have their own advantages. In 1PP+3PP, the 3PP auxiliary view effectively presents spatial information outside the main view, improves the user's perception of global spatial information, helps the user estimate distances between objects in the virtual reality environment, and helps the user locate both the target and the user's own position in the environment. In 3PP+1PP, the 1PP auxiliary view helps present spatial information outside the main view and improves the user's perception of information in nearby space.
Experiments with the five multi-view fusion models show that the auxiliary viewing angle improves the user's collision perception and intuitiveness and improves performance in obstacle avoidance and object grasping, and that users prefer observation modes with an auxiliary viewing angle for avoiding obstacles and grasping objects. The auxiliary view provides more spatial information and collision details for regions of interest outside the field of view and displays hand interaction more intuitively.
1PP+3PP(WIM) provides stronger intuitiveness and collision perception in obstacle avoidance and object grasping and has higher system usability. Because the WIM is a three-dimensional auxiliary-view fusion method, the user can freely change the observation angle in the natural manner of rotating the wrist; this provides more spatial detail, makes it easier to understand the spatial relationship between the user's limbs and obstacles, and allows the hand-interaction area to be observed more clearly.
The invention provides three auxiliary-view fusion methods: HH PIP, HUD PIP and WIM. The three methods represent three degrees of freedom of auxiliary-view control. The HUD PIP method requires the least user intervention but may continuously interfere with the main view. The WIM method provides the most environmental detail and the most flexible control strategy for the auxiliary view (its position and observation angle), but these advantages impose a higher cognitive load on the user. The controllability of the HH PIP method lies between that of the HUD PIP and the WIM.
WIM is the users' preferred auxiliary-view fusion method, offering higher intuitiveness and collision perception than HH PIP and HUD PIP. The user can freely adjust the WIM's observation angle by rotating the wrist, giving more flexible manipulation and therefore better control of limb movement and object grasping.
As for the main/auxiliary viewing-angle configuration modes, 1PP+3PP is more natural than 3PP+1PP. In 1PP+3PP, the user adapts to the observation interface more quickly and avoids obstacles and grasps objects more naturally. In 3PP+1PP, the user takes longer to adapt to the viewing interface.
Taking the gear-assembly task of the front gearbox of an integrated transmission as an example, gear assembly was carried out both in a virtual reality system with the multi-view fusion model established (hereinafter the "improved system") and in a virtual reality system without it (hereinafter the "original system").
Fig. 11 shows a user assembling gears in the original system and in the improved system, respectively; in each sub-figure the left image is the virtual reality scene observed from a global perspective and the right image is the scene observed by the user in the HMD. As shown in FIG. 11(a), in the original system the user observes only from the first-person perspective and cannot see whether the head, limbs and so on penetrate the front gearbox, nor whether the assembly posture is correct; the user therefore tends to assemble in a more comfortable but possibly incorrect posture. As a result, the trunk is not bent enough during the gear-assembly operation, the head penetrates the front gearbox, and the posture is clearly unreasonable. As shown in FIG. 11(b), in the improved system the user can observe the whole-body assembly posture through the WIM auxiliary view of the multi-view fusion model, perceive the relative position of the limbs and the front gearbox, and complete the task in a correct posture; the trunk is bent further than in FIG. 11(a), the head does not penetrate the case, and the assembly posture is reasonable.
Fig. 12 shows the user, after completing assembly of the idler gear, standing up and walking to the workbench to grasp the idler shaft; in each sub-figure the left image is the virtual reality scene observed from a global perspective and the right image is the scene observed by the user in the HMD. In the original system shown in FIG. 12(a), as the user stands up and turns around, the viewing direction faces the workbench and the front gearbox is outside the field of view; the user's legs clearly penetrate the front gearbox, and the posture is obviously unreasonable. In the improved system shown in FIG. 12(b), the user can observe the whole-body motion posture from the WIM auxiliary view of the multi-view fusion model, avoiding penetration of the limbs into the case even when the front gearbox is outside the field of view.
In the description of the present invention, it is to be understood that the terms "longitudinal", "lateral", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, are merely for convenience of description of the present invention, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention.
The above-described embodiments are merely illustrative of the preferred embodiments of the present invention, and do not limit the scope of the present invention, and various modifications and improvements of the technical solutions of the present invention can be made by those skilled in the art without departing from the spirit of the present invention, and the technical solutions of the present invention are within the scope of the present invention defined by the claims.

Claims (5)

1. A design method of a virtual reality multi-view fusion model is characterized by comprising the following steps: the method comprises the following steps:
acquiring visual angle types according to an interaction task, and obtaining a main visual angle, an auxiliary visual angle, a configuration mode between the main visual angle and the auxiliary visual angle and a fusion method, wherein the interaction task comprises operation precision requirements on a user and space perception requirements on the user, the fusion method is distributed on three dimensions of information richness, intuitiveness and user intervention degree under different auxiliary visual angles, and the information richness refers to the completeness of various types of information which can be provided for the user by the auxiliary visual angles and comprises space information, collision information and state information of a virtual person;
constructing a multi-view fusion model according to the configuration mode and the fusion method, wherein the multi-view fusion model is used for obtaining a main view image and an auxiliary view image, fusing the auxiliary view image into the main view image and obtaining a multi-view fusion image,
in the process of acquiring the view types, the view types at least include a main view type and an auxiliary view type, and the main view type and the auxiliary view type respectively include a first person view or a third person view,
in the process of obtaining the configuration mode, setting the main view type as a first person view, setting the auxiliary view type as a third person view, and according to the configuration mode, obtaining the fusion method for representing that the third person view is fused into the first person view in different image fusion modes, wherein the image fusion modes at least comprise a handheld picture-in-picture mode, a head-up display picture-in-picture mode and a handheld world-in-miniature mode;
in the process of obtaining the configuration mode, setting the main view type as a third person view, setting the auxiliary view type as a first person view, and according to the configuration mode, obtaining the fusion method, which is used for representing that the first person view is fused into the third person view through different image fusion modes, wherein the image fusion modes at least include a handheld picture-in-picture mode and a head-up display picture-in-picture mode;
according to different requirements of the interaction task on user spatial perception and operation precision, the configuration mode comprises the following:
when the interaction task has higher requirements on near-field spatial perception capability and higher requirements on operation precision, adopting the first-person perspective 1PP as the observation viewing angle;
when the interaction task has a certain requirement on global spatial perception capability and a higher requirement on operation precision, adopting 1PP+3PP as the observation viewing angle, wherein in 1PP+3PP, 1PP is used as the main viewing angle and 3PP is used as the auxiliary viewing angle;
when the interaction task has higher requirements on global spatial perception capability and higher requirements on operation precision, adopting 1PP+3PP(WIM) as the observation viewing angle, wherein in 1PP+3PP(WIM), 1PP is used as the main viewing angle and the 3PP auxiliary viewing angle is fused by the three-dimensional WIM method;
when the interaction task has a higher requirement on global spatial perception capability but a lower requirement on operation precision, adopting the third-person perspective 3PP as the observation viewing angle;
when the interaction task has a higher requirement on global spatial perception capability and a certain requirement on operation precision, adopting 3PP+1PP as the observation viewing angle, wherein in 3PP+1PP, 3PP is used as the main viewing angle and 1PP is used as the auxiliary viewing angle;
and when the interaction task has no higher requirements on global spatial perception capability and operation precision, adopting the first-person perspective 1PP or the third-person perspective 3PP as the observation viewing angle.
2. The design method of the virtual reality multi-view fusion model according to claim 1, characterized in that: in the process of obtaining the fusion method, the main visual angle adopts a first virtual camera aligned with the head of the virtual person; the auxiliary visual angle camera is fixed behind the center of the virtual scene and faces the virtual human.
3. The method for designing the virtual reality multi-view fusion model according to claim 2, wherein: in the process of obtaining the fusion method, the main visual angle camera is fixed behind the center of the virtual scene, faces the virtual human, and rotates along with the rotation of the HMD; the auxiliary visual angle camera adopts a virtual camera fixed on the head of a virtual person.
4. The method for designing the virtual reality multi-view fusion model according to claim 2, wherein: the field angle of the first virtual camera is 110 degrees, and the field angle of the auxiliary viewing angle camera is 60 degrees.
5. The design method of the virtual reality multi-view fusion model according to claim 3, characterized in that: the field angle of the main viewing angle camera is 110 degrees, and the field angle of the auxiliary viewing angle camera is 60 degrees.
CN202110609017.7A 2021-06-01 2021-06-01 Design method of virtual reality multi-view fusion model Active CN113221381B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110609017.7A CN113221381B (en) 2021-06-01 2021-06-01 Design method of virtual reality multi-view fusion model

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110609017.7A CN113221381B (en) 2021-06-01 2021-06-01 Design method of virtual reality multi-view fusion model

Publications (2)

Publication Number Publication Date
CN113221381A CN113221381A (en) 2021-08-06
CN113221381B true CN113221381B (en) 2022-03-08

Family

ID=77082140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110609017.7A Active CN113221381B (en) 2021-06-01 2021-06-01 Design method of virtual reality multi-view fusion model

Country Status (1)

Country Link
CN (1) CN113221381B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114518825A (en) * 2022-02-14 2022-05-20 广州塔普鱼网络科技有限公司 XR (X-ray diffraction) technology-based man-machine interaction method and system
CN115639976B (en) * 2022-10-28 2024-01-30 深圳市数聚能源科技有限公司 Multi-mode multi-angle synchronous display method and system for virtual reality content

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111420402A (en) * 2020-03-18 2020-07-17 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, terminal and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111773711A (en) * 2020-07-27 2020-10-16 网易(杭州)网络有限公司 Game visual angle control method and device, storage medium and electronic device

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111420402A (en) * 2020-03-18 2020-07-17 腾讯科技(深圳)有限公司 Virtual environment picture display method, device, terminal and storage medium

Also Published As

Publication number Publication date
CN113221381A (en) 2021-08-06

Similar Documents

Publication Publication Date Title
US20220080303A1 (en) Spatially-correlated human-machine interface
US11173392B2 (en) Spatially-correlated human-machine interface
EP3365724B1 (en) Selecting virtual objects in a three-dimensional space
Anthes et al. State of the art of virtual reality technology
CN105354820B (en) Adjust the method and device of virtual reality image
CN103793049B (en) virtual reality display system
CN103744518B (en) Stereo interaction method and display device thereof and system
JP6027747B2 (en) Multi-display human machine interface with spatial correlation
Chung et al. Exploring virtual worlds with head-mounted displays
CN113221381B (en) Design method of virtual reality multi-view fusion model
US20050264558A1 (en) Multi-plane horizontal perspective hands-on simulator
JP4413203B2 (en) Image presentation device
CN114641251A (en) Surgical virtual reality user interface
JPH0749744A (en) Head mounting type display input device
CN102665589A (en) Patient-side surgeon interface for a minimally invasive, teleoperated surgical instrument
TW201401224A (en) System and method for performing three-dimensional motion by two-dimensional character
KR20130097014A (en) Expanded 3d stereoscopic display system
CN105138130B (en) Strange land is the same as information interchange indicating means and system in scene
JP4397217B2 (en) Image generation system, image generation method, program, and information storage medium
US8307295B2 (en) Method for controlling a computer generated or physical character based on visual focus
JP2020031413A (en) Display device, mobile body, mobile body control system, manufacturing method for them, and image display method
CN219302988U (en) Augmented reality device
Kelley Exploring Virtual Worlds with Head-Mounted Displays
Joonatan Evaluation of Intuitive VR-based HRI for Simulated Industrial Robots

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant