CN116610213A - Interactive display method and device in virtual reality, electronic equipment and storage medium - Google Patents

Interactive display method and device in virtual reality, electronic equipment and storage medium

Info

Publication number
CN116610213A
CN116610213A (application CN202310525051.5A)
Authority
CN
China
Prior art keywords
image
eye
control
scene
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310525051.5A
Other languages
Chinese (zh)
Inventor
李沛伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
You Can See Beijing Technology Co ltd AS
Original Assignee
You Can See Beijing Technology Co ltd AS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by You Can See Beijing Technology Co ltd AS filed Critical You Can See Beijing Technology Co ltd AS
Priority to CN202310525051.5A priority Critical patent/CN116610213A/en
Publication of CN116610213A publication Critical patent/CN116610213A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/013 — Eye tracking input arrangements
    • A — HUMAN NECESSITIES
    • A63 — SPORTS; GAMES; AMUSEMENTS
    • A63F — CARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F 13/00 — Video games, i.e. games using an electronically generated display having two or more dimensions
    • A63F 13/50 — Controlling the output signals based on the game progress
    • A63F 13/52 — Controlling the output signals based on the game progress involving aspects of the displayed game scene
    • A63F 13/525 — Changing parameters of virtual cameras
    • A63F 13/5252 — Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis
    • G06T 7/50 — Depth or shape recovery
    • G06T 7/55 — Depth or shape recovery from multiple images
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 — Special algorithmic details
    • G06T 2207/20228 — Disparity calculation for image-based rendering

Abstract

The embodiment of the disclosure discloses an interactive display method and device in virtual reality, an electronic device, and a storage medium, wherein the method includes: acquiring images of a preset scene by two virtual cameras at a first preset position to obtain a scene sky box, wherein the two virtual cameras have the same camera parameters; acquiring images of a control icon through the two virtual cameras at a second preset position to obtain a left-eye control image and a right-eye control image; superimposing the left-eye control image and the right-eye control image onto the scene sky box respectively to obtain a left-eye image and a right-eye image; and rendering the left-eye image to a left-view screen of a head-mounted display device and the right-eye image to a right-view screen of the head-mounted display device, thereby displaying the preset scene including the control icon.

Description

Interactive display method and device in virtual reality, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of virtual reality, and in particular to an interactive display method and device in virtual reality, an electronic device, and a storage medium.
Background
At present, VR technology includes digital VR, implemented through digital modeling, and real-scene VR, which uses real-scene photography as source material. Digital VR places two horizontally separated virtual cameras into a built digital scene, the two cameras providing the viewing angles of a person's left and right eyes. Virtual space tours in VR devices currently rely mainly on sky-box rendering: the two eyepieces of the VR device render the same scene, so the images seen by the left and right eyes are identical; thanks to the monocular depth estimation capability of the human brain, a scene with scale, though slightly distorted, can still be perceived.
Disclosure of Invention
The present disclosure has been made in order to solve the above technical problems. The embodiment of the disclosure provides an interactive display method, an interactive display device, electronic equipment and a storage medium in virtual reality.
According to an aspect of the embodiments of the present disclosure, there is provided an interactive display method in virtual reality, including:
acquiring images of a preset scene by two virtual cameras at a first preset position to obtain a scene sky box; wherein the two virtual cameras have the same camera parameters;
Acquiring images of the control icons through the two virtual cameras at the second preset position to obtain a left eye control image and a right eye control image;
respectively overlapping the left eye control image and the right eye control image into the scene sky box to obtain a left eye image and a right eye image;
and rendering the left-eye image to a left-view screen of the head-mounted display device, and rendering the right-eye image to a right-view screen of the head-mounted display device, so as to realize the display of the preset scene comprising the control icon.
Optionally, the two virtual cameras include a left eye virtual camera and a right eye virtual camera;
the image acquisition is carried out on a preset scene through two virtual cameras at a first preset position to obtain a scene sky box, and the method comprises the following steps:
and performing image acquisition rendering on a preset scene at the same position through the left-eye virtual camera and the right-eye virtual camera to obtain a rhodiola empty box.
Optionally, before the image acquisition is performed on the control icon by the two virtual cameras at the second preset position to obtain the left eye control image and the right eye control image, the method further includes:
adjusting the translation distance between the left-eye virtual camera and the right-eye virtual camera to be a preset distance;
The image acquisition is performed on the control icon by the two virtual cameras at the second preset position to obtain a left eye control image and a right eye control image, which comprises the following steps:
and respectively carrying out image acquisition on the control icons based on the left-eye virtual camera and the right-eye virtual camera after the translation distance is adjusted, so as to obtain the left-eye control image and the right-eye control image.
Optionally, before the image acquisition is performed on the control icon by the two virtual cameras at the second preset position to obtain the left eye control image and the right eye control image, the method further includes:
acquiring real position information of a controller in a world coordinate system; the control icon is an icon which corresponds to the controller and has a preset shape;
adding the control icon corresponding to the controller in a blank scene based on the real position information; wherein the blank scene corresponds to the preset scene, but the scene sky box is not rendered;
the image acquisition is performed on the control icon by the two virtual cameras at the second preset position to obtain a left eye control image and a right eye control image, which comprises the following steps:
And acquiring images of the blank scene added with the control icon through the two virtual cameras at the second preset position, so as to obtain a left eye control image and a right eye control image.
Optionally, the adding the control icon corresponding to the controller in the blank scene based on the real position information includes:
determining an icon position of the control icon in the blank scene based on the real position information;
and adding the control icon into the blank scene according to the icon position.
Optionally, after the image acquisition is performed on the preset scene by the two virtual cameras at the first preset position to obtain the scene sky box, the method further includes:
storing the scene sky box into a first buffer zone of a memory; wherein, the information cached in the first buffer area is not displayed;
after the step of respectively superimposing the left eye control image and the right eye control image onto the scene sky box to obtain a left eye image and a right eye image, the method further includes:
storing the left eye image and the right eye image in the first buffer; wherein the left eye image and the right eye image correspond to a left eye region and a right eye region in the first buffer, respectively.
Optionally, the memory further comprises a second buffer;
the rendering of the left eye image to a left-view screen of the head-mounted display device and of the right eye image to a right-view screen of the head-mounted display device, to realize the display of a preset scene including control icons, includes:
swapping the left eye image stored in the left eye region in the first buffer to a left eye region in the second buffer;
swapping the right eye image stored in the right eye region in the first buffer to a right eye region in the second buffer;
and rendering the left-eye image in the left-eye area of the second buffer area to a left-view screen of the head-mounted display device, and rendering the right-eye image in the right-eye area of the second buffer area to a right-view screen of the head-mounted display device, so as to realize the display of a preset scene comprising control icons.
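The two-buffer scheme above can be sketched as follows: a first (back) buffer holds the composed left-eye and right-eye images in separate regions, and those regions are swapped into a second (display) buffer before being shown. This is a minimal illustration only; the left-half/right-half region layout and the buffer sizes are assumptions, not specified by the disclosure.

```python
import numpy as np

# First buffer: holds the composed frame (not displayed);
# its left/right halves stand in for the left-eye and right-eye regions.
H, W = 4, 8
first_buffer = np.arange(H * W, dtype=float).reshape(H, W)
second_buffer = np.zeros((H, W))  # second buffer: what the screens show

half = W // 2
second_buffer[:, :half] = first_buffer[:, :half]   # swap left-eye region
second_buffer[:, half:] = first_buffer[:, half:]   # swap right-eye region

# After the swap, the display buffer carries the freshly composed frame.
assert np.array_equal(second_buffer, first_buffer)
```

In a real renderer this corresponds to the usual back-buffer/front-buffer swap, performed per eye region rather than for the whole frame.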
According to another aspect of the embodiments of the present disclosure, there is provided an interactive display device in virtual reality, including:
the first image acquisition module is used for acquiring images of a preset scene through two virtual cameras at a first preset position to obtain a scene sky box; wherein the two virtual cameras have the same camera parameters;
The second image acquisition module is used for acquiring images of the control icons through the two virtual cameras at a second preset position to obtain a left eye control image and a right eye control image;
the image superposition module is used for superposing the left eye control image and the right eye control image into the scene sky box respectively to obtain a left eye image and a right eye image;
and the image rendering module is used for rendering the left-eye image to a left-view screen of the head-mounted display device and rendering the right-eye image to a right-view screen of the head-mounted display device, so that display of the preset scene comprising the control icon is realized.
Optionally, the two virtual cameras include a left eye virtual camera and a right eye virtual camera;
the first image acquisition module is specifically configured to perform image acquisition and rendering on a preset scene through the left-eye virtual camera and the right-eye virtual camera at the same position to obtain a scene sky box.
Optionally, the apparatus further comprises:
the distance adjusting module is used for adjusting the translation distance between the left-eye virtual camera and the right-eye virtual camera to be a preset distance;
the second image acquisition module is specifically configured to acquire images of the control icons based on the left-eye virtual camera and the right-eye virtual camera after the translation distance is adjusted, so as to obtain the left-eye control image and the right-eye control image.
Optionally, the apparatus further comprises:
the position acquisition module is used for acquiring the real position information of the controller in a world coordinate system; the control icon is an icon which corresponds to the controller and has a preset shape;
the icon adding module is used for adding the control icon corresponding to the controller into a blank scene based on the real position information; wherein the blank scene corresponds to the preset scene, but the scene sky box is not rendered;
the second image acquisition module is specifically configured to acquire an image of a blank scene added with the control icon through the two virtual cameras at a second preset position, so as to obtain a left eye control image and a right eye control image.
Optionally, the icon adding module is specifically configured to determine an icon position of the control icon in the blank scene based on the real position information; and adding the control icon into the blank scene according to the icon position.
Optionally, the apparatus further comprises:
the buffer module is used for storing the scene sky box into a first buffer zone of a memory; wherein, the information cached in the first buffer area is not displayed; storing the left eye image and the right eye image in the first buffer; wherein the left eye image and the right eye image correspond to a left eye region and a right eye region in the first buffer, respectively.
Optionally, the memory further comprises a second buffer;
the image rendering module is specifically configured to swap the left-eye image stored in the left-eye region in the first buffer to the left-eye region in the second buffer; swapping the right eye image stored in the right eye region in the first buffer to a right eye region in the second buffer; and rendering the left-eye image in the left-eye area of the second buffer area to a left-view screen of the head-mounted display device, and rendering the right-eye image in the right-eye area of the second buffer area to a right-view screen of the head-mounted display device, so as to realize the display of a preset scene comprising control icons.
According to still another aspect of the embodiments of the present disclosure, there is provided an electronic device including:
a memory for storing a computer program product;
and a processor, configured to execute the computer program product stored in the memory, where the computer program product is executed to implement the interactive display method in virtual reality according to any one of the foregoing embodiments.
According to yet another aspect of the embodiments of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the interactive display method in virtual reality according to any of the embodiments described above.
According to yet another aspect of the disclosed embodiments, there is provided a computer program product comprising computer program instructions which, when executed by a processor, implement the interactive display method in virtual reality according to any of the embodiments described above.
Based on the interactive display method and device in virtual reality, the electronic device, and the storage medium provided by the embodiments of the disclosure, images of a preset scene are acquired by two virtual cameras at a first preset position to obtain a scene sky box, wherein the two virtual cameras have the same camera parameters; images of the control icon are acquired through the two virtual cameras at a second preset position to obtain a left-eye control image and a right-eye control image; the left-eye control image and the right-eye control image are respectively superimposed onto the scene sky box to obtain a left-eye image and a right-eye image; and the left-eye image is rendered to a left-view screen of the head-mounted display device while the right-eye image is rendered to a right-view screen, so as to display a preset scene including the control icon. Because this embodiment uses virtual cameras at different positions to capture the preset scene and the interactive control icon respectively, the user perceives the control icon as having a real physical spatial scale while viewing the parallax-free sky-box virtual scene through the head-mounted display device, achieving interaction between the user and the virtual scene and improving the user's interactive experience.
The technical scheme of the present disclosure is described in further detail below through the accompanying drawings and examples.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description, serve to explain the principles of the disclosure.
The disclosure may be more clearly understood from the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a flow chart of an interactive display method in virtual reality provided by an exemplary embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of an interactive display device in virtual reality according to an exemplary embodiment of the present disclosure;
fig. 3 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It should be apparent that the described embodiments are only some of the embodiments of the present disclosure and not all of the embodiments of the present disclosure, and that the present disclosure is not limited by the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless it is specifically stated otherwise.
It will be appreciated by those of skill in the art that the terms "first," "second," etc. in embodiments of the present disclosure are used merely to distinguish between different steps, devices or modules, etc., and do not represent any particular technical meaning nor necessarily logical order between them.
It should also be understood that in embodiments of the present disclosure, "plurality" may refer to two or more, and "at least one" may refer to one, two or more.
It should also be appreciated that any component, data, or structure referred to in the presently disclosed embodiments may be generally understood as one or more without explicit limitation or the contrary in the context.
In addition, the term "and/or" in this disclosure is merely an association relationship describing an association object, and indicates that three relationships may exist, for example, a and/or B may indicate: a exists alone, A and B exist together, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the front and rear association objects are an or relationship. The data referred to in this disclosure may include unstructured data, such as text, images, video, and the like, as well as structured data.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and that the same or similar features may be referred to each other, and for brevity, will not be described in detail.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
Embodiments of the present disclosure may be applicable to electronic devices such as terminal devices, computer systems, servers, etc., which may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with the terminal device, computer system, server, or other electronic device include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, network personal computers, minicomputer systems, mainframe computer systems, and distributed cloud computing technology environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc., that perform particular tasks or implement particular abstract data types. The computer system/server may be implemented in a distributed cloud computing environment in which tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computing system storage media including memory storage devices.
Exemplary method
Fig. 1 is a flowchart of an interactive display method in virtual reality according to an exemplary embodiment of the present disclosure. The embodiment can be applied to an electronic device, as shown in fig. 1, and includes the following steps:
and 102, acquiring images of a preset scene by two virtual cameras at a first preset position to obtain the sedum aizoon empty box.
Wherein the two virtual cameras have the same camera parameters. Camera parameters may include, but are not limited to, the Field of View (FOV), i.e., the angle between the lines connecting the center of the human pupil to the edges of the image presented by the display device, which includes a horizontal viewing angle, a vertical viewing angle, and a diagonal viewing angle.
In this embodiment, the preset scene may be rendered using the common sky-box rendering method, yielding a sky box with no parallax between the user's two eyes; specifically, the two virtual cameras may be placed at the same position, that is, the first preset position is a single shared position, so that the images collected by the two virtual cameras have no parallax.
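The co-located capture in step 102 can be illustrated with a small sketch: when both virtual cameras share the same position, orientation, and parameters, their view matrices coincide, so the two rendered images are pixel-identical and carry zero disparity. The camera pose values below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def look_at_view_matrix(position, forward, up):
    """Build a simple right-handed view matrix for a virtual camera."""
    f = forward / np.linalg.norm(forward)
    r = np.cross(f, up)
    r /= np.linalg.norm(r)
    u = np.cross(r, f)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ position
    return view

# Step 102: both virtual cameras sit at the same first preset position
# with the same parameters, so their view matrices are identical and the
# two captured images of the preset scene coincide (parallax-free sky box).
pos = np.array([0.0, 1.6, 0.0])   # hypothetical eye height in metres
fwd = np.array([0.0, 0.0, -1.0])
up = np.array([0.0, 1.0, 0.0])
left_view = look_at_view_matrix(pos, fwd, up)
right_view = look_at_view_matrix(pos, fwd, up)
assert np.allclose(left_view, right_view)  # zero disparity between eyes
```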
A stereoscopic picture of a Virtual Reality (VR) scene consists of two images: a left-eye picture image and a right-eye picture image. The left-eye picture image is displayed in the left-eye display screen region of the head-mounted virtual reality display device, and the right-eye picture image is displayed in the right-eye display screen region. Three-dimensional stereoscopic rendering techniques are described in many documents; for example, the paper "Design and Implementation of a Parallax Visualization Adjustment Method for Three-Dimensional Stereoscopic Production", published in the Journal of Computer-Aided Design and Computer Graphics, Vol. 29, No. 7, 2017, introduces various virtual stereo camera models for stereoscopic rendering. A virtual stereo camera includes a left-eye virtual camera and a right-eye virtual camera (the two virtual cameras in this embodiment).
Step 104, acquiring images of the control icon through the two virtual cameras at a second preset position to obtain a left-eye control image and a right-eye control image.
In an embodiment, the second preset position is different from the first preset position. Optionally, the two virtual cameras at the second preset position are separated by a certain distance (for example, the pupil distance of human eyes). Because the two virtual cameras are a certain distance apart, the resulting left-eye control image and right-eye control image are not identical but carry a certain parallax, and a control icon with a real spatial scale can be obtained from them.
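The separated placement in step 104 amounts to offsetting the two cameras along the camera's right axis by half the inter-camera distance each. A minimal sketch, with an assumed 63 mm separation (the patent only says "a certain distance", e.g. the human pupil distance):

```python
import numpy as np

def eye_positions(center, right_axis, ipd):
    """Place the left/right virtual cameras a pupil distance apart.

    `center` is the second preset position, `right_axis` the camera's unit
    right vector, `ipd` the preset inter-camera distance in metres. All
    concrete values here are illustrative assumptions.
    """
    offset = right_axis * (ipd / 2.0)
    return center - offset, center + offset

center = np.array([0.0, 1.6, 0.0])
right = np.array([1.0, 0.0, 0.0])
left_cam, right_cam = eye_positions(center, right, ipd=0.063)

# The cameras now differ along the right axis, so the captured left/right
# control images are no longer identical: they carry parallax.
assert np.isclose(np.linalg.norm(right_cam - left_cam), 0.063)
```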
Step 106, superimposing the left-eye control image and the right-eye control image onto the scene sky box respectively to obtain a left-eye image and a right-eye image.
Optionally, since the purpose of the controller is to interact with the preset scene, after the left-eye control image and the right-eye control image are obtained, they are respectively superimposed on the scene sky box, so that the user can interact within the preset scene through the controller.
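The superimposition in step 106 can be sketched as a per-pixel composite of the control image over the sky-box image. The patent does not specify the compositing operator, so straight alpha blending ("over") is assumed here, with toy pixel values for illustration.

```python
import numpy as np

def overlay(scene_rgb, control_rgba):
    """Alpha-composite a control-icon image over a sky-box image.

    A minimal per-pixel 'over' blend: where the icon is opaque it replaces
    the scene, where it is transparent the scene shows through.
    """
    alpha = control_rgba[..., 3:4]
    return control_rgba[..., :3] * alpha + scene_rgb * (1.0 - alpha)

scene = np.full((2, 2, 3), 0.5)                # toy sky-box pixels (grey)
icon = np.zeros((2, 2, 4))
icon[0, 0] = [1.0, 0.0, 0.0, 1.0]              # one opaque red icon pixel

eye_image = overlay(scene, icon)
assert np.allclose(eye_image[0, 0], [1.0, 0.0, 0.0])       # icon wins
assert np.allclose(eye_image[1, 1], [0.5, 0.5, 0.5])       # scene shows
```

The same blend is applied twice, once with the left-eye control image and once with the right-eye control image, producing the left-eye and right-eye images.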
Step 108, rendering the left-eye image to a left-view screen of the head-mounted display device, and rendering the right-eye image to a right-view screen of the head-mounted display device, so as to realize the display of the preset scene including the control icon.
In this embodiment, VR display is implemented by a head-mounted display device, which may include, but is not limited to, glasses or a helmet with two viewing screens. By rendering the left-eye image on the left-view screen and the right-eye image on the right-view screen, the preset scene in the displayed image has no parallax while the control icon does; the position of the controller is therefore not distorted (it has a real scale), and the user can perceive the spatial position of the controller while interacting in the preset scene. This solves the problem that, when the control icon is obtained by the same rendering method as the scene, the user's hand (controller) appears farther away than the user's body in the virtual scene, as if the user's arm were stretched, causing a poor user experience.
According to the interactive display method in virtual reality provided by the embodiment of the disclosure, images of a preset scene are acquired through the two virtual cameras at the first preset position to obtain the scene sky box, wherein the two virtual cameras have the same camera parameters; images of the control icon are acquired through the two virtual cameras at the second preset position to obtain a left-eye control image and a right-eye control image; the left-eye control image and the right-eye control image are respectively superimposed onto the scene sky box to obtain a left-eye image and a right-eye image; and the left-eye image is rendered to a left-view screen of the head-mounted display device while the right-eye image is rendered to a right-view screen, so as to display a preset scene including the control icon. Because this embodiment uses virtual cameras at different positions to capture the preset scene and the interactive control icon respectively, the user perceives the control icon as having a real physical spatial scale while viewing the parallax-free sky-box virtual scene through the head-mounted display device, achieving interaction between the user and the virtual scene and improving the user's interactive experience.
In some alternative embodiments, the two virtual cameras include a left-eye virtual camera and a right-eye virtual camera;
step 102 may include:
and performing image acquisition rendering on a preset scene at the same position through the left-eye virtual camera and the right-eye virtual camera to obtain the sedum aizoon empty box.
In this embodiment, the sky-box rendering scheme is adopted for the preset scene; the rendered preset scene is parallax-free, and a preset scene with scale, though slightly distorted, can be restored based on the monocular depth estimation capability of the human brain.
In some alternative embodiments, before performing step 104, it may further include:
and adjusting the translation distance between the left-eye virtual camera and the right-eye virtual camera to be a preset distance.
Optionally, the preset distance may be the pupil distance between the left and right human eyes and may be set according to the actual situation; for example, it may be determined by measuring the pupil distance of the current user, set to a commonly used pupil distance determined by big-data statistics (for example, about 60 mm), or set directly by the user.
In this embodiment, step 104 may include:
and respectively carrying out image acquisition on the control icons based on the left-eye virtual camera and the right-eye virtual camera after the translation distance is adjusted, so as to obtain a left-eye control image and a right-eye control image.
In this embodiment, when the control images are collected, the preset distance between the left-eye virtual camera and the right-eye virtual camera gives the resulting left-eye and right-eye control images a parallax. Because the preset distance is related to the human pupil distance, this parallax matches the way human eyes naturally view objects; when the control images are rendered on the left and right screens corresponding to the two eyes, they can be viewed directly as with the naked eye, reproducing the feeling of actually observing the controller.
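Why the preset distance produces a natural-looking depth cue can be seen from the standard stereo relation: the horizontal disparity on screen of a point at depth z is d = f · b / z, where f is the focal length in pixels and b the camera baseline (here, the preset inter-camera distance). The concrete numbers below are illustrative assumptions, not from the patent.

```python
def screen_disparity(focal_px, baseline_m, depth_m):
    """Horizontal disparity in pixels of a point rendered by two cameras
    separated by `baseline_m` (the preset distance): d = f * b / z."""
    return focal_px * baseline_m / depth_m

# A control icon 0.5 m away, seen with an assumed 63 mm baseline and an
# assumed 800 px focal length, versus the same icon 5 m away:
near = screen_disparity(800, 0.063, 0.5)
far = screen_disparity(800, 0.063, 5.0)

# Nearer objects show larger disparity, which is the cue the brain uses
# to recover the controller's real spatial scale.
assert near > far
assert abs(near - 100.8) < 1e-9
```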
In some alternative embodiments, before performing step 104, it may further include:
the real position information of the controller in the world coordinate system is acquired.
The control icon is an icon which corresponds to the controller and has a preset shape; alternatively, the control image is an avatar projected into the preset space by the controller, and the control icon may be any shape, for example, an arrow shape, a palm shape, or the like.
Adding a control icon corresponding to the controller into a blank scene based on the real position information; the blank scene corresponds to the preset scene, but the scene sky box is not rendered in it.
In this embodiment, step 104 may include:
and acquiring images of the blank scene added with the control icon through two virtual cameras at a second preset position to obtain a left eye control image and a right eye control image.
In this embodiment, a user may interact with the virtual reality through the controller. The controller may be the user's hand or a handheld device with a control function; this embodiment does not limit the form of the controller, which only needs to be movable and controllable within the user's visual range to enable interaction with the virtual reality. Optionally, the real position information of the controller in the world coordinate system can be obtained through the WebXR Device API (a web API for virtual and augmented reality), from which the position of the control icon in the blank scene can be determined.
Optionally, adding a control icon corresponding to the controller in the blank scene based on the real position information includes:
determining an icon position of the control icon in the blank scene based on the real position information;
and adding the control icon into the blank scene according to the icon position.
In this embodiment, the real position information corresponds to the world coordinate system, while the blank scene (which shares a coordinate system with the preset scene) corresponds to the scene coordinate system (which may be the camera coordinate system of the camera in the head-mounted display device). Determining the icon position is therefore a coordinate-system conversion: once the scene coordinate system corresponding to the blank scene is determined, the icon position can be computed by applying a rotation matrix and a translation matrix. After the icon position is determined, a control icon with the preset shape is added into the blank scene, so that the user perceives the control icon as consistent with the controller being operated, which improves the user experience.
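The coordinate conversion described above can be sketched as a rigid transform. The function name and the row-major matrix layout below are assumptions for illustration, not the disclosure's implementation:

```typescript
// Hypothetical sketch: convert the controller's world-coordinate position
// into the blank scene's coordinate system using a rotation matrix R and a
// translation vector t, i.e. p_scene = R * p_world + t.
type Mat3 = [number[], number[], number[]]; // row-major 3x3 rotation

function worldToScene(pWorld: number[], R: Mat3, t: number[]): number[] {
  return [0, 1, 2].map(
    (i) => R[i][0] * pWorld[0] + R[i][1] * pWorld[1] + R[i][2] * pWorld[2] + t[i]
  );
}
```

With the identity rotation the conversion reduces to a pure translation; a 90° rotation about the vertical axis maps the world x-axis onto the scene y-axis before translating.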
In some alternative embodiments, after step 102, it may further include:
storing the scene sky box in a first buffer area of a memory; wherein the information cached in the first buffer area is not displayed;
in this embodiment, after step 106, it may further include:
storing the left eye image and the right eye image in a first buffer; wherein the left eye image and the right eye image correspond to the left eye region and the right eye region in the first buffer, respectively.
In this embodiment, the first buffer may be a back buffer, which may be specified in advance, for example as set by default by WebGL (Web Graphics Library, a 3D drawing protocol). Information stored in the back buffer is not displayed (it is hidden), so in this embodiment the scene sky box can be generated in advance and stored there. When image superposition is to be performed, the scene sky box is retrieved for superposition, and the superposed left-eye image and right-eye image are stored in the left-eye region and right-eye region of the back buffer, respectively; alternatively, the left-eye control image and right-eye control image are first stored in the back buffer, the superposition is completed there, and the superposed left-eye and right-eye images are then stored in the left-eye and right-eye regions. Storing the left-eye and right-eye images in separate regions allows them to be rendered directly: the left view screen of the head-mounted display device (for example, the left lens of VR glasses) is rendered from the image in the left-eye region, and the right view screen (for example, the right lens) is rendered from the image in the right-eye region.
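The buffering scheme above can be illustrated with a small sketch. The class and method names are assumptions, and images are stood in for by strings rather than real WebGL textures:

```typescript
// Illustrative model of the hidden back buffer: the pre-generated scene
// sky box is cached once, and each eye's control image is superimposed
// onto it, with the results held in separate left-eye/right-eye regions.
type Eye = "left" | "right";

class HiddenBackBuffer {
  private regions = new Map<Eye, string>();
  constructor(private skyBox: string) {} // cached, never displayed directly

  // Superimpose one eye's control image onto the cached sky box.
  composite(eye: Eye, controlImage: string): void {
    this.regions.set(eye, `${this.skyBox}+${controlImage}`);
  }

  region(eye: Eye): string | undefined {
    return this.regions.get(eye);
  }
}
```

Because the sky box is generated once and cached, only the control images need to be re-composited as the controller moves.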
Optionally, the memory further comprises a second buffer;
step 108 may include:
exchanging the left eye image stored in the left eye region in the first buffer to the left eye region in the second buffer;
exchanging the right-eye image stored in the right-eye region in the first buffer to the right-eye region in the second buffer;
rendering the left eye image in the left eye region of the second buffer area to a left view screen of the head-mounted display device, and rendering the right eye image in the right eye region of the second buffer area to a right view screen of the head-mounted display device, so as to realize the display of the preset scene comprising the control icon.
In this embodiment, when the first buffer (the back buffer) is set up, a corresponding second buffer (the front buffer) is also set up; images in the front buffer can be rendered directly onto the view screens of the head-mounted display device. By combining the images displayed on the left and right view screens, the user perceives the scene sky box without parallax and the control icon with parallax, so that the control icon follows the user's viewpoint and the controller operated by the user is displayed without distortion, achieving better VR interaction.
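A minimal double-buffering sketch of the swap-then-render flow described above (the names are assumptions, and the per-region exchange is simplified here to a whole-buffer swap):

```typescript
// Sketch of the front/back buffer exchange: finished left/right eye images
// are composed in the hidden back buffer, swapped into the front buffer,
// and the head-mounted display's screens read only from the front buffer.
interface EyeRegions { left?: string; right?: string; }

class DoubleBuffer {
  back: EyeRegions = {};   // hidden: composition happens here
  front: EyeRegions = {};  // visible: screens render from here

  swap(): void {
    [this.back, this.front] = [this.front, this.back];
  }

  // What the left and right view screens would display this frame.
  display(): { leftScreen?: string; rightScreen?: string } {
    return { leftScreen: this.front.left, rightScreen: this.front.right };
  }
}
```

Composing in the hidden buffer and swapping only completed frames avoids the user ever seeing a half-superposed image.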
Any interactive display method in virtual reality provided by the embodiments of the present disclosure may be performed by any suitable device having data processing capabilities, including but not limited to terminal devices, servers, and the like. Alternatively, any interactive display method in virtual reality provided by the embodiments of the present disclosure may be executed by a processor, for example by the processor invoking corresponding instructions stored in a memory. Details are not repeated below.
Exemplary apparatus
Fig. 2 is a schematic structural diagram of an interactive display device in virtual reality according to an exemplary embodiment of the present disclosure. As shown in fig. 2, the apparatus provided in this embodiment includes:
the first image acquisition module 21 is configured to acquire images of a preset scene by using two virtual cameras at a first preset position, so as to obtain a sedum aizoon empty box.
Wherein the two virtual cameras have the same camera parameters.
The second image acquisition module 22 is configured to acquire a left eye control image and a right eye control image by performing image acquisition on the control icon by using two virtual cameras at a second preset position.
The image superposition module 23 is configured to superimpose the left-eye control image and the right-eye control image into the scene sky box, respectively, so as to obtain a left-eye image and a right-eye image.
The image rendering module 24 is configured to render a left-eye image to a left-view screen of the head-mounted display device, and render a right-eye image to a right-view screen of the head-mounted display device, so as to implement presentation of a preset scene including a control icon.
The interactive display device in virtual reality provided by this embodiment of the disclosure acquires images of a preset scene with two virtual cameras at a first preset position to obtain a scene sky box, the two virtual cameras having the same camera parameters; acquires images of the control icon with the two virtual cameras at a second preset position to obtain a left-eye control image and a right-eye control image; superimposes the left-eye control image and the right-eye control image into the scene sky box, respectively, to obtain a left-eye image and a right-eye image; and renders the left-eye image to the left view screen and the right-eye image to the right view screen of the head-mounted display device, thereby presenting the preset scene including the control icon. By acquiring images of the preset scene and of the interactive control icon with virtual cameras at different positions, this embodiment lets the user perceive the control icon at a real physical scale while viewing the parallax-free sky-box virtual scene through the head-mounted display device, achieving interaction between the user and the virtual scene and improving the user's interactive experience.
Optionally, the two virtual cameras include a left eye virtual camera and a right eye virtual camera;
the first image acquisition module 21 is specifically configured to perform image acquisition and rendering on a preset scene through the left-eye virtual camera and the right-eye virtual camera at the same position, so as to obtain a scene sky box.
In some optional embodiments, the apparatus provided in this embodiment further includes:
the distance adjusting module is used for adjusting the translation distance between the left-eye virtual camera and the right-eye virtual camera to be a preset distance;
the second image acquisition module 22 is specifically configured to acquire images of the control icons based on the left-eye virtual camera and the right-eye virtual camera after the translation distance is adjusted, so as to obtain a left-eye control image and a right-eye control image.
In some optional embodiments, the apparatus provided in this embodiment further includes:
the position acquisition module is used for acquiring the real position information of the controller in a world coordinate system; the control icon is an icon which corresponds to the controller and has a preset shape;
the icon adding module is used for adding the control icon corresponding to the controller into the blank scene based on the real position information; wherein the blank scene corresponds to the preset scene, but the scene sky box is not rendered;
The second image acquisition module 22 is specifically configured to acquire, through two virtual cameras at a second preset position, an image of a blank scene added with a control icon, so as to obtain a left eye control image and a right eye control image.
Optionally, the icon adding module is specifically configured to determine an icon position of the control icon in the blank scene based on the real position information; and adding the control icon into the blank scene according to the icon position.
In some optional embodiments, the apparatus provided in this embodiment further includes:
the buffer module is used for storing the scene sky box into a first buffer zone of the memory; wherein the information cached in the first buffer area is not displayed; storing the left eye image and the right eye image in a first buffer; wherein the left eye image and the right eye image correspond to the left eye region and the right eye region in the first buffer, respectively.
Optionally, the memory further comprises a second buffer;
an image rendering module 24, specifically configured to swap the left eye image stored in the left eye region in the first buffer to the left eye region in the second buffer; exchanging the right-eye image stored in the right-eye region in the first buffer to the right-eye region in the second buffer; rendering the left eye image in the left eye region of the second buffer area to a left view screen of the head-mounted display device, and rendering the right eye image in the right eye region of the second buffer area to a right view screen of the head-mounted display device, so as to realize the display of the preset scene comprising the control icon.
Exemplary electronic device
Next, an electronic device according to an embodiment of the present disclosure is described with reference to fig. 3. The electronic device may be either or both of the first device and the second device, or a stand-alone device independent thereof, which may communicate with the first device and the second device to receive the acquired input signals therefrom.
Fig. 3 illustrates a block diagram of an electronic device according to an embodiment of the present disclosure.
As shown in fig. 3, the electronic device includes one or more processors and memory.
The processor may be a Central Processing Unit (CPU) or other form of processing unit having data processing and/or instruction execution capabilities, and may control other components in the electronic device to perform the desired functions.
The memory may store one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or nonvolatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The nonvolatile memory may include, for example, read-only memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and run by the processor to implement the interactive display method in virtual reality of the various embodiments of the present disclosure described above and/or other desired functions.
In one example, the electronic device may further include: input devices and output devices, which are interconnected by a bus system and/or other forms of connection mechanisms (not shown).
In addition, the input device may include, for example, a keyboard, a mouse, and the like.
The output device may output various information including the determined distance information, direction information, etc., to the outside. The output device may include, for example, a display, speakers, a printer, and a communication network and remote output devices connected thereto, etc.
Of course, only some of the components of the electronic device relevant to the present disclosure are shown in fig. 3 for simplicity, components such as buses, input/output interfaces, etc. being omitted. In addition, the electronic device may include any other suitable components depending on the particular application.
In addition to the methods and apparatus described above, embodiments of the present disclosure may also be a computer program product comprising computer program instructions which, when executed by a processor, cause the processor to perform the steps in an interactive display method in virtual reality according to various embodiments of the present disclosure described in the foregoing sections of this specification.
The computer program product may write program code for performing the operations of embodiments of the present disclosure in any combination of one or more programming languages, including an object oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device, partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium, having stored thereon computer program instructions, which when executed by a processor, cause the processor to perform steps in an interactive display method in virtual reality according to various embodiments of the present disclosure described in the above section of the specification.
The computer readable storage medium may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. The readable storage medium may include, for example, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium would include the following: an electrical connection having one or more wires, a portable disk, a hard disk, random Access Memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The basic principles of the present disclosure have been described above in connection with specific embodiments, however, it should be noted that the advantages, benefits, effects, etc. mentioned in the present disclosure are merely examples and not limiting, and these advantages, benefits, effects, etc. are not to be considered as necessarily possessed by the various embodiments of the present disclosure. Furthermore, the specific details disclosed herein are for purposes of illustration and understanding only, and are not intended to be limiting, since the disclosure is not necessarily limited to practice with the specific details described.
In this specification, the embodiments are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to each other. The system embodiments, which essentially correspond to the method embodiments, are described relatively simply; for relevant details, refer to the description of the method embodiments.
The block diagrams of the devices, apparatuses, and systems referred to in this disclosure are merely illustrative examples and are not intended to require or imply that connections, arrangements, or configurations must be made in the manner shown in the block diagrams. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended and mean "including but not limited to," and are used interchangeably therewith. The term "or" as used herein refers to, and is used interchangeably with, the term "and/or" unless the context clearly indicates otherwise. The term "such as" as used herein refers to, and is used interchangeably with, the phrase "such as but not limited to."
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, firmware. The above-described sequence of steps for the method is for illustration only, and the steps of the method of the present disclosure are not limited to the sequence specifically described above unless specifically stated otherwise. Furthermore, in some embodiments, the present disclosure may also be implemented as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the apparatus, devices and methods of the present disclosure, components or steps may be disassembled and/or assembled. Such decomposition and/or recombination should be considered equivalent to the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit the embodiments of the disclosure to the form disclosed herein. Although a number of example aspects and embodiments have been discussed above, a person of ordinary skill in the art will recognize certain variations, modifications, alterations, additions, and subcombinations thereof.

Claims (10)

1. An interactive display method in virtual reality is characterized by comprising the following steps:
acquiring images of a preset scene by two virtual cameras at a first preset position to obtain a scene sky box; wherein the two virtual cameras have the same camera parameters;
acquiring images of the control icons through the two virtual cameras at the second preset position to obtain a left eye control image and a right eye control image;
respectively overlapping the left eye control image and the right eye control image into the scene sky box to obtain a left eye image and a right eye image;
and rendering the left-eye image to a left-view screen of the head-mounted display device, and rendering the right-eye image to a right-view screen of the head-mounted display device, so as to realize the display of the preset scene comprising the control icon.
2. The method of claim 1, wherein the two virtual cameras comprise a left-eye virtual camera and a right-eye virtual camera;
The image acquisition is carried out on a preset scene through two virtual cameras at a first preset position to obtain a scene sky box, and the method comprises the following steps:
and performing image acquisition and rendering on a preset scene at the same position through the left-eye virtual camera and the right-eye virtual camera to obtain a scene sky box.
3. The method according to claim 2, wherein before the image capturing of the control icon by the two virtual cameras at the second preset position, obtaining the left eye control image and the right eye control image, further comprises:
adjusting the translation distance between the left-eye virtual camera and the right-eye virtual camera to be a preset distance;
the image acquisition is performed on the control icon by the two virtual cameras at the second preset position to obtain a left eye control image and a right eye control image, which comprises the following steps:
and respectively carrying out image acquisition on the control icons based on the left-eye virtual camera and the right-eye virtual camera after the translation distance is adjusted, so as to obtain the left-eye control image and the right-eye control image.
4. A method according to any one of claims 1 to 3, wherein before the image capturing of the control icon by the two virtual cameras at the second preset position to obtain the left eye control image and the right eye control image, the method further comprises:
Acquiring real position information of a controller in a world coordinate system; the control icon is an icon which corresponds to the controller and has a preset shape;
adding the control icon corresponding to the controller in a blank scene based on the real position information; wherein the blank scene corresponds to the preset scene, but the scene sky box is not rendered;
the image acquisition is performed on the control icon by the two virtual cameras at the second preset position to obtain a left eye control image and a right eye control image, which comprises the following steps:
and acquiring images of the blank scene added with the control icon through the two virtual cameras at the second preset position, so as to obtain a left eye control image and a right eye control image.
5. The method of claim 4, wherein the adding the control icon corresponding to the controller in a blank scene based on the real position information comprises:
determining an icon position of the control icon in the blank scene based on the real position information;
and adding the control icon into the blank scene according to the icon position.
6. The method according to any one of claims 1-5, wherein after the image capturing of the preset scene by the two virtual cameras at the first preset position to obtain the scene sky box, further comprises:
storing the scene sky box into a first buffer zone of a memory; wherein, the information cached in the first buffer area is not displayed;
the step of respectively overlapping the left eye control image and the right eye control image into the scene sky box to obtain a left eye image and a right eye image, and then further comprises the steps of:
storing the left eye image and the right eye image in the first buffer; wherein the left eye image and the right eye image correspond to a left eye region and a right eye region in the first buffer, respectively.
7. The method of claim 6, wherein the memory further comprises a second buffer;
the rendering the left eye image to a left view screen of the head-mounted display device, and the rendering the right eye image to a right view screen of the head-mounted display device, to realize the exhibition of a preset scene including control icons, includes:
swapping the left eye image stored in the left eye region in the first buffer to a left eye region in the second buffer;
Swapping the right eye image stored in the right eye region in the first buffer to a right eye region in the second buffer;
and rendering the left-eye image in the left-eye area of the second buffer area to a left-view screen of the head-mounted display device, and rendering the right-eye image in the right-eye area of the second buffer area to a right-view screen of the head-mounted display device, so as to realize the display of a preset scene comprising control icons.
8. An interactive display device in virtual reality, comprising:
the first image acquisition module is used for acquiring images of a preset scene through two virtual cameras at a first preset position to obtain a scene sky box; wherein the two virtual cameras have the same camera parameters;
the second image acquisition module is used for acquiring images of the control icons through the two virtual cameras at a second preset position to obtain a left eye control image and a right eye control image;
the image superposition module is used for superposing the left eye control image and the right eye control image into the scene sky box respectively to obtain a left eye image and a right eye image;
and the image rendering module is used for rendering the left-eye image to a left-view screen of the head-mounted display device and rendering the right-eye image to a right-view screen of the head-mounted display device, so that display of the preset scene comprising the control icon is realized.
9. An electronic device, comprising:
a memory for storing a computer program product;
a processor for executing the computer program product stored in the memory, and when executed, implementing the interactive display method in virtual reality as claimed in any one of the preceding claims 1-7.
10. A computer readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the interactive display method in virtual reality of any of the preceding claims 1-7.
CN202310525051.5A 2023-05-10 2023-05-10 Interactive display method and device in virtual reality, electronic equipment and storage medium Pending CN116610213A (en)

Publications (1)

Publication Number Publication Date
CN116610213A true CN116610213A (en) 2023-08-18



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination