CN117289454A - Display method and device of virtual reality equipment, electronic equipment and storage medium


Info

Publication number: CN117289454A
Application number: CN202210687337.9A
Authority: CN (China)
Prior art keywords: image, depth information, three-dimensional environment image
Other languages: Chinese (zh)
Inventor: 李晨
Current Assignee: Beijing Zitiao Network Technology Co Ltd
Original Assignee: Beijing Zitiao Network Technology Co Ltd
Application filed by Beijing Zitiao Network Technology Co Ltd
Priority to CN202210687337.9A
Legal status: Pending

Classifications

    • G02B27/0172: Head-up displays, head mounted, characterised by optical features (GPHYSICS > G02 OPTICS > G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS > G02B27/00 > G02B27/01 Head-up displays > G02B27/017 Head mounted)
    • G02B2027/0134: Head-up displays characterised by optical features, comprising binocular systems of stereoscopic type
    • G02B2027/0138: Head-up displays characterised by optical features, comprising image capture systems, e.g. camera
    • G02B2027/014: Head-up displays characterised by optical features, comprising information/image processing systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The disclosure relates to a display method and apparatus of a virtual reality device, an electronic device, and a storage medium. The method includes: acquiring a two-dimensional environment image of the real environment where a user is located; converting the two-dimensional environment image into a three-dimensional environment image; determining a gaze point position of the user in the three-dimensional environment image; taking the gaze point position as the focus, capturing the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image; and displaying the left eye image and the right eye image so that light rays of the left eye image enter the user's left eye and light rays of the right eye image enter the user's right eye. A binocular stereoscopic depth-of-field image can thus be displayed to the user while the environment image is collected with only a single camera. The method is computationally simple and easy to implement.

Description

Display method and device of virtual reality equipment, electronic equipment and storage medium
Technical Field
The disclosure relates to the technical field of virtual reality, and in particular relates to a display method and device of virtual reality equipment, electronic equipment and a storage medium.
Background
See-Through refers to the function whereby a user wearing a virtual reality device can view the real-time conditions of the environment outside the device through its front camera; it is commonly called the "perspective function". See-Through not only lets the user, without removing the device, know their position relative to the boundary set by the virtual reality device, but also makes it easy to return to the center origin and to perceive the external environment (for example, to find a mobile phone or sign for a delivery). This improves the continuity of the virtual reality experience.
At present, there are two common ways to implement See-Through: one uses a single camera, the other uses dual cameras. The dual-camera implementation can simulate the human eyes well and achieve a binocular stereoscopic depth-of-field effect. However, dual-camera installation is subject to many constraints, such as the rotatable angle of the cameras and the distance between the two cameras. Because of these constraints, the See-Through function implemented with dual cameras can exhibit a depth-of-field effect only over a small distance. With the single-camera implementation, the virtual reality device displays a flat picture with no stereoscopic effect.
Disclosure of Invention
In order to solve, or at least partially solve, the technical problems described above, the present disclosure provides a display method and apparatus of a virtual reality device, an electronic device, and a storage medium.
In a first aspect, the present disclosure provides a display method of a virtual reality device, including:
acquiring a two-dimensional environment image of an environment where a user is located in reality;
converting the two-dimensional environment image into a three-dimensional environment image;
determining a gaze point location of the user in the three-dimensional environmental image;
taking the gaze point position as the focus, and capturing the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image;
and displaying the left eye image and the right eye image so that light rays of the left eye image enter the left eye of the user and light rays of the right eye image enter the right eye of the user.
In a second aspect, the present disclosure further provides a display apparatus of a virtual reality device, including:
the acquisition module is used for acquiring a two-dimensional environment image of the environment where the user is located in reality;
the conversion module is used for converting the two-dimensional environment image into a three-dimensional environment image;
a determining module, configured to determine a gaze point position of a user in the three-dimensional environment image;
the shooting module is used for taking the gaze point position as the focus, and capturing the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image;
the display module is used for displaying the left eye image and the right eye image so that light rays of the left eye image enter the left eye of a user, and light rays of the right eye image enter the right eye of the user.
In a third aspect, the present disclosure also provides an electronic device, including:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of displaying a virtual reality device as described above.
In a fourth aspect, the present disclosure also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a display method of a virtual reality device as described above.
Compared with the prior art, the technical scheme provided by the embodiment of the disclosure has the following advantages:
According to the technical solution provided by the embodiments of the present disclosure: a two-dimensional environment image of the real environment where the user is located is acquired; the two-dimensional environment image is converted into a three-dimensional environment image; the user's gaze point position in the three-dimensional environment image is determined; taking the gaze point position as the focus, the three-dimensional environment image is captured by two virtual cameras arranged at different positions to obtain a left eye image and a right eye image; and the left eye image and the right eye image are displayed so that light rays of the left eye image enter the user's left eye and light rays of the right eye image enter the user's right eye. Since only a two-dimensional (planar) image of the user's real environment needs to be acquired, a single camera suffices and dual cameras are unnecessary, so the method is not subject to the constraints of dual-camera installation. In addition, because the left eye image and the right eye image are captured by virtual cameras arranged at different positions, there is a displacement difference (parallax) between them, so the user sees a stereoscopic image close to the actual appearance of the object, achieving binocular stereoscopic depth of field.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure.
In order to more clearly illustrate the embodiments of the present disclosure or the solutions in the prior art, the drawings required for describing the embodiments or the prior art are briefly introduced below; obviously, those skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a display method of a virtual reality device according to an embodiment of the disclosure;
fig. 2 is a flowchart of another method for displaying a virtual reality device according to an embodiment of the disclosure;
FIG. 3 is a schematic diagram of a two-dimensional environmental image provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of the two-dimensional environment image of FIG. 3 after segmentation;
fig. 5 is a schematic diagram for implementing S250 according to an embodiment of the disclosure;
fig. 6 is a schematic structural diagram of a display device of a virtual reality apparatus in an embodiment of the disclosure;
fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
In order that the above objects, features and advantages of the present disclosure may be more clearly understood, a further description of aspects of the present disclosure will be provided below. It should be noted that, without conflict, the embodiments of the present disclosure and features in the embodiments may be combined with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure, but the present disclosure may be practiced otherwise than as described herein; it will be apparent that the embodiments in the specification are only some, but not all, embodiments of the disclosure.
The virtual reality device is a terminal for realizing a virtual reality effect, and may be provided in the form of glasses, a head mounted display (Head Mount Display, HMD), or a contact lens for realizing visual perception and other forms of perception, but the form of the virtual reality device is not limited thereto, and may be further miniaturized or enlarged as needed.
In this disclosure, a user refers to a wearer of a virtual reality device.
Fig. 1 is a flowchart of a display method of a virtual reality device according to an embodiment of the disclosure, where the method may be performed by a display apparatus of the virtual reality device, the apparatus may be implemented in software and/or hardware, and the apparatus may be configured in the virtual reality device.
As shown in fig. 1, the method specifically may include:
s110, acquiring a two-dimensional environment image of the environment where the user is located in reality.
This step can be implemented in various ways, which the present disclosure does not limit. Illustratively, a camera is installed in the virtual reality device; this camera is a real camera and may be an RGB camera or a gray-scale camera. When this step is executed, the camera installed in the virtual reality device is used to acquire a two-dimensional image of the real environment where the user is located.
Because this step only needs to acquire a two-dimensional image, a single camera suffices and dual cameras are not required.
S120, converting the two-dimensional environment image into a three-dimensional environment image.
This step can be implemented in various ways. Illustratively: acquire depth information for each position in the two-dimensional environment image, then convert the two-dimensional environment image into a three-dimensional environment image based on that depth information. The depth information of a position in the two-dimensional environment image refers to the distance from that position to the user.
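As a rough illustration of this idea (a sketch only, not part of the patent text: the pinhole-camera intrinsics fx, fy, cx, cy, the function name, and the dense per-pixel depth map are all assumptions), the two-dimensional image can be back-projected into 3D points as follows:

```python
import numpy as np

def image_to_point_cloud(rgb, depth, fx, fy, cx, cy):
    """Back-project each pixel of a 2D image into 3D camera space,
    using the per-pixel depth (distance from the user)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel grid
    x = (u - cx) * depth / fx    # horizontal position at that depth
    y = (v - cy) * depth / fy    # vertical position at that depth
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)  # carry the pixel colors along
    return points, colors
```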
S130, determining the position of the gaze point of the user in the three-dimensional environment image.
There are various ways to implement this step, and this disclosure is not limited thereto. For example, eye tracking techniques may be utilized to determine the gaze point location of a user in a three-dimensional environmental image.
S140, taking the gaze point position as the focus, and capturing the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image.
A virtual camera is a software camera that simulates a real camera; its internal deflection angle can be adjusted so that it focuses on and captures a specific position in a virtual scene (such as the three-dimensional environment image in the present disclosure).
In some embodiments, the distance between the two virtual cameras is set equal to the interpupillary distance of the user's eyes, which ensures that the displacement difference between the left eye image and the right eye image obtained through the two virtual cameras is the same as that obtained by direct observation with human eyes.
The virtual camera may be a virtual RGB camera or a virtual gray-scale camera.
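For illustration only, the sketch below shows one way such a setup might be realized: two virtual camera positions one interpupillary distance apart, each aimed at the gaze point. The 0.063 m default IPD, the fixed world-up vector, and every name here are assumptions, not values from the patent.

```python
import numpy as np

def place_virtual_cameras(head_pos, gaze_point, ipd=0.063):
    """Return (position, look direction) for the left and right virtual
    cameras, both focused on the gaze point (toe-in configuration)."""
    forward = gaze_point - head_pos
    forward = forward / np.linalg.norm(forward)
    up = np.array([0.0, 1.0, 0.0])            # assumed world-up vector
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    left_cam = head_pos - right * ipd / 2     # one camera per eye,
    right_cam = head_pos + right * ipd / 2    # ipd apart in total
    # Each camera looks from its own position toward the shared focus, so
    # the two rendered images differ by a displacement (parallax).
    return (left_cam, gaze_point - left_cam), (right_cam, gaze_point - right_cam)
```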
S150, displaying the left eye image and the right eye image so that light rays of the left eye image enter the left eye of the user and light rays of the right eye image enter the right eye of the user.
According to the above technical solution, only a two-dimensional (planar) image of the user's real environment needs to be acquired, so a single camera suffices and dual cameras are unnecessary; the method is therefore not limited by the installation constraints of dual cameras. In addition, the technical solution converts the two-dimensional environment image into a three-dimensional environment image; determines the user's gaze point position in the three-dimensional environment image; takes the gaze point position as the focus and captures the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image; and displays the left eye image and the right eye image so that light rays of the left eye image enter the user's left eye and light rays of the right eye image enter the user's right eye. Because the left eye image and the right eye image are captured by virtual cameras arranged at different positions, there is a displacement difference between them, and the user sees a stereoscopic image close to the actual appearance of the object, achieving binocular stereoscopic depth of field.
Fig. 2 is a flowchart of another method for displaying a virtual reality device according to an embodiment of the disclosure, and fig. 2 is a specific example of fig. 1. Referring to fig. 2, the method includes:
s210, acquiring a two-dimensional environment image of the environment where the user is located in reality.
S220, acquiring a first depth information set corresponding to the two-dimensional environment image; the first depth information set includes first depth information of different positions in the two-dimensional environment image.
If the two-dimensional environment image is regarded as a set of points, the first depth information comprises the distances from those points to the user. Aggregating the first depth information of all points in the two-dimensional environment image yields the first depth information set.
In practice, the first depth information may be a result directly acquired by a device with a depth-information acquisition function, or a processing result obtained by further processing such directly acquired data. The depth information of each position in the user's environment may be collected using one or more of ToF (Time of Flight) ranging, millimeter-wave ranging, acoustic ranging, and binocular-camera ranging.
For example, if the first depth information is directly acquired by a device with a depth-information acquisition function, in one embodiment this step is implemented as follows: collect the first depth information of each position in the environment where the user is located, and aggregate the first depth information of all positions in the two-dimensional environment image to obtain the first depth information set. The depth information corresponding to each position is regarded as one piece of depth information. In this case, the number of pieces of first depth information in the first depth information set is determined by the resolution achievable by the depth-acquisition device.
If the first depth information is a processing result obtained from the directly acquired data, in another embodiment this step is implemented as follows: acquire second depth information at a plurality of positions in the two-dimensional environment image; segment the two-dimensional environment image into a plurality of segmented images; for each segmented image, derive its first depth information from all the second depth information corresponding to it, so that the number of pieces of first depth information corresponding to the same segmented image is smaller than the number of pieces of second depth information; and aggregate the first depth information of all segmented images to obtain the first depth information set of the two-dimensional environment image. Here, the second depth information is acquired directly by the device with the depth-information acquisition function. Optionally, each segmented image corresponds to one piece of first depth information.
The number of pieces of second depth information acquired is determined by the resolution achievable by the depth-acquisition device. The number of pieces of first depth information in the first depth information set is determined by the number of segmented images and by the processing method chosen in the step "derive the first depth information of a segmented image from all the second depth information corresponding to it".
For example, suppose the depth-acquisition device samples 10000 positions of a two-dimensional environment image, so there are 10000 pieces of second depth information; the image is segmented into 100 segmented images, each corresponding to the same number of samples, i.e., 100 pieces of second depth information per segmented image. If one piece of first depth information is derived from all the second depth information of each segmented image, then each segmented image corresponds to 1 piece of first depth information, and the first depth information set of the two-dimensional environment image contains 100 pieces of first depth information.
There are various ways to derive the first depth information of a segmented image from its second depth information, and the present disclosure does not limit them. Illustratively, a selection rule is preset, and based on it a few (for example, one) of the second depth values of a segmented image are selected as its first depth information. Alternatively, the average of all the second depth values of a segmented image is computed and used as its first depth information.
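As a hedged illustration of the averaging variant just described (the patent equally allows selecting one of the second depth values instead; the function name, block size, and rectangular segmentation are assumptions), a dense depth map can be collapsed to one value per segment like this:

```python
import numpy as np

def reduce_depth_by_blocks(second_depth, block=10):
    """Average dense second depth information into one piece of first
    depth information per rectangular segmented image."""
    h, w = second_depth.shape
    grid = second_depth[:h - h % block, :w - w % block]  # trim remainder
    grid = grid.reshape(h // block, block, w // block, block)
    return grid.mean(axis=(1, 3))  # one averaged value per segment

# e.g., a 100 x 100 map (10000 pieces of second depth information) with
# block=10 yields a 10 x 10 grid: 100 pieces of first depth information,
# matching the numerical example above.
```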
The purpose of segmenting the two-dimensional environment image is to group the second depth information: all second depth information corresponding to the same segmented image is treated as one group, and each group is processed to reduce the amount of depth information, thereby reducing the computational burden of the subsequent algorithm that converts the two-dimensional environment image into a three-dimensional environment image.
Illustratively, fig. 3 is a schematic diagram of a two-dimensional environment image provided by an embodiment of the present disclosure, and fig. 4 is a schematic diagram of the two-dimensional environment image of fig. 3 after segmentation. In fig. 4, the broken lines indicate the dividing lines used to segment the two-dimensional environment image into a plurality of segmented images. The depth information corresponding to each position is regarded as one piece of depth information. Suppose that before processing each segmented image corresponds to 300 pieces of second depth information, i.e., each segmented image is regarded as consisting of 300 points. After processing, each segmented image is regarded as a single point and corresponds to one piece of first depth information.
In fig. 4, the dividing lines are straight, and the resulting segments all have the same size and the same rectangular shape. This is merely one specific example and does not limit the present disclosure. In practice, the dividing lines may be straight or curved; the segments may have the same or different shapes; a segment may be a regular shape such as a square, circle, or triangle, or an irregular shape; and the segments may be of the same or different sizes.
In practice, there are also various ways to segment the two-dimensional environment image, which the present disclosure does not limit. Illustratively, the two-dimensional environment image is segmented based on all the second depth information so that, within each segmented image, the difference between the maximum and minimum second depth values is smaller than a set threshold. This makes the subsequently determined target three-dimensional model more accurate and improves how faithfully the three-dimensional environment image reproduces the real environment.
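One conceivable realization of this depth-based segmentation (a sketch under assumptions: the quadtree-style splitting strategy, the minimum segment size, and all names are not from the patent) keeps subdividing a region until the spread of its second depth information falls below the set threshold:

```python
import numpy as np

def split_by_depth_range(depth, threshold, min_size=8):
    """Return (row, col, height, width) rectangles such that, within each,
    max - min of the second depth information is below the threshold."""
    segments = []

    def recurse(r, c, h, w):
        block = depth[r:r + h, c:c + w]
        if block.max() - block.min() < threshold or min(h, w) <= min_size:
            segments.append((r, c, h, w))
            return
        h2, w2 = h // 2, w // 2         # quadtree-style split (assumed)
        recurse(r, c, h2, w2)
        recurse(r, c + w2, h2, w - w2)
        recurse(r + h2, c, h - h2, w2)
        recurse(r + h2, c + w2, h - h2, w - w2)

    recurse(0, 0, *depth.shape)
    return segments
```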
S230, converting the two-dimensional environment image into a three-dimensional environment image based on the first depth information set.
There are various ways to implement this step, and this disclosure is not limited thereto. Optionally, the implementation method of the step includes: determining a target three-dimensional model based on the first depth information set; and converting the two-dimensional environment image into a three-dimensional environment image based on the target three-dimensional model.
There are various methods of determining the target three-dimensional model based on the first depth information set, which the present disclosure does not limit. Illustratively, "determining a target three-dimensional model based on the first depth information set" includes: determining depth change feature information of the two-dimensional environment image based on the first depth information set; and determining the target three-dimensional model based on the depth change characteristic information of the two-dimensional environment image. The depth change characteristic information of the two-dimensional environment image includes, but is not limited to, a trend of change of the first depth information of the two-dimensional environment image.
Optionally, if the depth-change feature information of the two-dimensional environment image includes the change trend of its first depth information, "determining the depth-change feature information of the two-dimensional environment image based on the first depth information set" includes: smoothing the first depth information in the first depth information set, and determining the depth-change feature information of the two-dimensional environment image based on the smoothed first depth information set. This improves the accuracy of the subsequently determined target three-dimensional model.
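A minimal sketch of such smoothing and trend extraction (assumptions throughout: the 3x3 mean filter and finite differences merely stand in for the smoothing method and change-trend definition, which the patent does not specify):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def depth_change_features(first_depth_grid):
    """Smooth a grid of first depth information (one value per segmented
    image) and take finite differences as a simple change trend."""
    smoothed = uniform_filter(first_depth_grid, size=3)   # 3x3 mean filter
    trend_down = np.diff(smoothed, axis=0)    # change from top to bottom
    trend_across = np.diff(smoothed, axis=1)  # change from left to right
    return smoothed, trend_down, trend_across
```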
Alternatively, a plurality of three-dimensional models may be preset, including but not limited to curved-surface, arc-surface, and spherical models; one of these preset models is determined as the target three-dimensional model, such that the similarity between the depth-change feature information of the target three-dimensional model and that of the two-dimensional environment image is greater than a set threshold. Essentially, this method treats the two-dimensional environment image as a whole and determines a target three-dimensional model that matches the whole image. Its advantage is that both determining the target model and subsequently converting the two-dimensional environment image into a three-dimensional environment image are computationally simple and easy to implement.
Optionally, "determining the target three-dimensional model based on the first depth information set" may further include: identifying the two-dimensional environment image to obtain an identification object; determining a third depth information set corresponding to the recognition object based on the first depth information set; wherein the third set of depth information comprises first depth information identifying different locations in the object; determining depth change characteristic information of the identification object based on a third depth information set corresponding to the identification object; and determining the target three-dimensional model based on the depth change characteristic information of the identification object. Wherein the object is a thing in the picture. Illustratively, in fig. 3, the items in the picture include two tables, a floor, and a wall. The determination of the third depth information set corresponding to the recognition object based on the first depth information set refers to the process of removing the first depth information irrelevant to the recognition object from the first depth information to obtain the first depth information associated with the recognition object and capable of representing different positions in the recognition object, and the set is composed of the first depth information. Illustratively, in fig. 3, the set of first depth information corresponding to the divided image occupied by the table a is a third depth information set corresponding to the table a.
The depth change characteristic information of the recognition object includes, but is not limited to, a change trend of the first depth information of the recognition object.
Optionally, if the depth change feature information of the identified object includes a change trend of the first depth information of the identified object, "determining the depth change feature information of the identified object based on the third depth information set corresponding to the identified object" includes: smoothing the first depth information in the third depth information set; depth change feature information of the recognition object is determined based on the smoothed third depth information set. This arrangement can improve the accuracy of the subsequently determined three-dimensional model of the object.
Alternatively, in one embodiment, three-dimensional models may be set for different objects in advance, the same object corresponding to a plurality of three-dimensional models. For example, if the object is a table, the three-dimensional model corresponding to the table includes three-dimensional models of tables of different types and different angles. And determining one of the set three-dimensional models corresponding to the target object as a target three-dimensional model, wherein the similarity between the depth change characteristic information of the target three-dimensional model and the depth change characteristic information of the target object is larger than a set threshold value. The method is essentially to decompose a two-dimensional environment image into a plurality of recognition objects and respectively determine a target three-dimensional model matched with each recognition object.
After determining the target three-dimensional model matched with each recognition object, converting the two-dimensional image of each recognition object into a three-dimensional image based on each target three-dimensional model, and splicing the three-dimensional images of each recognition object to obtain a three-dimensional environment image. The arrangement can improve the reduction degree of the identification object, and further provide the reduction degree of the three-dimensional environment image to the real environment.
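The per-object pipeline can be arranged as below. This is a structural sketch only: recognize, match_model, convert, and stitch are hypothetical placeholders for steps the patent describes only functionally, and obj.region is an assumed representation of an object's positions.

```python
def build_three_d_environment(image, first_depth_set,
                              recognize, match_model, convert, stitch):
    """Recognize objects, match a target 3D model to each via its depth
    change features, convert each object's 2D region to 3D, then stitch."""
    pieces = []
    for obj in recognize(image):                  # recognized objects
        # Third depth information set: the first depth information
        # restricted to positions inside this object's region.
        third_set = {pos: d for pos, d in first_depth_set.items()
                     if pos in obj.region}
        model = match_model(third_set)            # target 3D model
        pieces.append(convert(obj, model))        # 2D object -> 3D image
    return stitch(pieces)                         # assemble the scene
```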
S240, determining the position of the gaze point of the user in the three-dimensional environment image.
There are various ways to implement this step, and this disclosure is not limited thereto. Illustratively, the method for implementing the step includes: determining coordinate values of a user's gaze point in a three-dimensional space; and determining the position of the user's gaze point in the three-dimensional environment image based on the coordinate values of the user's gaze point in the three-dimensional space.
The coordinate values of the user's gaze point in three-dimensional space reflect the direction of the user's gaze target relative to the user and its distance from the user. The three-dimensional environment image is obtained by converting the two-dimensional environment image in combination with the first depth information set, so it carries the first depth information set and the coordinate values of its different positions. The user's gaze point position in the three-dimensional environment image can therefore be determined from the coordinate values of the gaze point in three-dimensional space.
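One plausible reading of this mapping (a sketch; the patent only requires using the 3D coordinate values, so the nearest-point-to-ray rule and all names here are assumptions) is to pick the environment point closest to the gaze ray reported by eye tracking:

```python
import numpy as np

def gaze_point_in_environment(eye_pos, gaze_dir, env_points):
    """Return the point of the 3D environment image nearest to the gaze
    ray, i.e., the gaze point position in the three-dimensional image."""
    d = gaze_dir / np.linalg.norm(gaze_dir)
    rel = env_points - eye_pos           # vectors from the eyes to points
    along = rel @ d                      # signed distance along the ray
    perp = rel - np.outer(along, d)      # component perpendicular to ray
    dist = np.linalg.norm(perp, axis=1)
    dist[along < 0] = np.inf             # ignore points behind the user
    return env_points[np.argmin(dist)]
```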
S250, taking the gaze point position as the focus, and capturing the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image.
Fig. 5 is a schematic diagram for implementing S250 according to an embodiment of the disclosure. Referring to fig. 5, optionally, when this step is performed, the internal offset angles of the two virtual cameras arranged at different positions can be adjusted so that both virtual cameras take the gaze point position as their focus.
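For the symmetric case of a gaze point straight ahead, the required internal offset (toe-in) angle follows from simple geometry; the sketch below is an illustration under that assumption, not the patent's formula.

```python
import math

def toe_in_angle(ipd, focus_distance):
    """Inward rotation of each virtual camera so that both optical axes
    converge on a gaze point straight ahead at the given distance."""
    # Each camera sits ipd / 2 off the central axis.
    return math.atan2(ipd / 2, focus_distance)

# Cameras 63 mm apart focusing on a point 2 m away each rotate inward by
# about 0.9 degrees.
angle_deg = math.degrees(toe_in_angle(0.063, 2.0))
```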
S260, displaying the left eye image and the right eye image so that light rays of the left eye image enter the left eye of the user and light rays of the right eye image enter the right eye of the user.
The above technical solution describes in detail a method of converting a two-dimensional environment image into a three-dimensional environment image by combining depth information, and a method of determining the user's gaze point position in the three-dimensional environment image. The overall method can display a binocular stereoscopic depth-of-field image to the user while collecting the environment image with only a single camera, and it is computationally simple and easy to implement.
On the basis of the above technical solutions, optionally, the left eye image and the right eye image are updated if the depth information of at least one position in the two-dimensional environment image changes and/or the coordinate values of the user's gaze point in three-dimensional space change. A change in the depth information of at least one position in the two-dimensional environment image means that the user has moved; a change in the coordinate values of the gaze point in three-dimensional space means that the eyes or head have rotated. In either case, updating the left eye image and the right eye image makes the environment image the user perceives through the eyes more realistic.
Further, updating the left eye image and the right eye image may include, but is not limited to, re-executing all of the steps of the display method of the virtual reality device described above, or re-executing only some of them.
If only some of the steps are re-executed, for example, S130 to S150 or S240 to S260 of the above steps are re-executed.
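A compact sketch of this decision (eps, the comparison rule, and the returned labels are assumptions for illustration):

```python
import numpy as np

def update_decision(prev_depth, depth, prev_gaze, gaze, eps=1e-3):
    """Depth change -> the user moved, so re-run the whole pipeline;
    gaze change alone -> eyes or head rotated, so re-running only the
    later steps (e.g., S240 to S260) can suffice."""
    if not np.allclose(prev_depth, depth, atol=eps):
        return "rerun_all_steps"        # e.g., S210 through S260
    if not np.allclose(prev_gaze, gaze, atol=eps):
        return "rerun_S240_to_S260"     # refocus and re-render only
    return "keep_current_images"
```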
It should be noted that, for simplicity of description, the foregoing method embodiments are all described as a series of actions, but those skilled in the art should understand that the present disclosure is not limited by the order of the actions described, as some steps may be performed in other orders or concurrently. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily required by the present disclosure.
Fig. 6 is a schematic structural diagram of a display device of a virtual reality apparatus in an embodiment of the disclosure. The display device of the virtual reality device provided by the embodiment of the disclosure may be configured in the virtual reality device. Referring to fig. 6, the display device of the virtual reality apparatus specifically includes:
An acquisition module 310, configured to acquire a two-dimensional environment image of an environment where a user is located in reality;
a conversion module 320, configured to convert the two-dimensional environment image into a three-dimensional environment image;
a determining module 330, configured to determine a gaze point location of a user in the three-dimensional environment image;
the shooting module 340 is configured to take the gaze point position as a focus, and shoot the three-dimensional environment image by using two virtual cameras installed at different positions to obtain a left eye image and a right eye image;
and a display module 350, configured to display the left eye image and the right eye image, so that light of the left eye image enters the left eye of the user, and light of the right eye image enters the right eye of the user.
Further, the apparatus further comprises:
the depth information acquisition module is used for acquiring a first depth information set corresponding to the two-dimensional environment image; the first depth information set comprises first depth information of different positions in the two-dimensional environment image;
a conversion module 320, configured to convert the two-dimensional environment image into a three-dimensional environment image based on the first depth information set.
Further, the depth information acquisition module is configured to:
acquiring second depth information at a plurality of positions in the two-dimensional environment image;
dividing the two-dimensional environment image to form a plurality of divided images;
for any one of the divided images, obtaining first depth information corresponding to the divided image based on all second depth information corresponding to the divided image, so that the number of pieces of the first depth information corresponding to the same divided image is smaller than that of the second depth information;
and summarizing the first depth information corresponding to each segmented image to obtain a first depth information set corresponding to the two-dimensional environment image.
Further, the depth information acquisition module is configured to:
and dividing the two-dimensional environment image based on all the second depth information to form a plurality of divided images, wherein the difference between the maximum value and the minimum value of the second depth information in all the depth information corresponding to the same divided image is smaller than a set threshold value.
Further, the conversion module 320 is configured to:
determining a target three-dimensional model based on the first depth information set;
and converting the two-dimensional environment image into a three-dimensional environment image based on the target three-dimensional model.
Further, the conversion module 320 is configured to:
determining depth variation characteristic information of the two-dimensional environment image based on the first depth information set;
and determining the target three-dimensional model based on the depth change characteristic information of the two-dimensional environment image.
Further, the conversion module 320 is configured to:
performing recognition on the two-dimensional environment image to obtain a recognized object;
determining a third depth information set corresponding to the recognized object based on the first depth information set; wherein the third depth information set includes first depth information of different positions in the recognized object;
determining depth change characteristic information of the recognized object based on the third depth information set corresponding to the recognized object;
and determining the target three-dimensional model based on the depth change characteristic information of the recognized object.
Further, the determining module is used for:
determining coordinate values of a user's gaze point in a three-dimensional space;
and determining the position of the user's gaze point in the three-dimensional environment image based on the coordinate values of the user's gaze point in the three-dimensional space.
Further, the device also comprises an updating module for:
and if the depth information of at least one position in the two-dimensional environment image is changed, and/or the coordinate value of the user's gaze point in the three-dimensional space is changed, updating the left eye image and the right eye image.
The display apparatus of the virtual reality device provided by the embodiments of the present disclosure can execute the steps of the display method of the virtual reality device provided by the embodiments of the present disclosure, with the same execution steps and beneficial effects, which are not repeated here.
Fig. 7 is a schematic structural diagram of an electronic device in an embodiment of the disclosure. Referring now in particular to fig. 7, a schematic diagram of an electronic device 1000 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device 1000 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), in-vehicle terminals (e.g., in-vehicle navigation terminals), wearable electronic devices, and the like, and fixed terminals such as digital TVs, desktop computers, smart home devices, and the like. The electronic device shown in fig. 7 is merely an example and should not be construed to limit the functionality and scope of use of the disclosed embodiments.
As shown in fig. 7, the electronic device 1000 may include a processing means (e.g., a central processing unit, a graphic processor, etc.) 1001 that may perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 1002 or a program loaded from a storage means 1008 into a Random Access Memory (RAM) 1003 to implement a display method of a virtual reality device according to an embodiment of the disclosure. In the RAM 1003, various programs and information necessary for the operation of the electronic apparatus 1000 are also stored. The processing device 1001, the ROM 1002, and the RAM 1003 are connected to each other by a bus 1004. An input/output (I/O) interface 1005 is also connected to bus 1004.
In general, the following devices may be connected to the I/O interface 1005: input devices 1006 including, for example, a touch screen, touchpad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, and the like; an output device 1007 including, for example, a Liquid Crystal Display (LCD), speaker, vibrator, etc.; storage 1008 including, for example, magnetic tape, hard disk, etc.; and communication means 1009. The communication means 1009 may allow the electronic device 1000 to communicate wirelessly or by wire with other devices to exchange information. While fig. 7 shows an electronic device 1000 having various means, it is to be understood that not all of the illustrated means are required to be implemented or provided. More or fewer devices may be implemented or provided instead.
In particular, according to embodiments of the present disclosure, the processes described above with reference to flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a non-transitory computer readable medium, the computer program containing program code for performing the method shown in the flowchart, thereby implementing the method of displaying a virtual reality device as described above. In such an embodiment, the computer program may be downloaded and installed from a network via the communication device 1009, or installed from the storage device 1008, or installed from the ROM 1002. The above-described functions defined in the method of the embodiment of the present disclosure are performed when the computer program is executed by the processing device 1001.
It should be noted that the computer readable medium described in the present disclosure may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. The computer readable storage medium can be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this disclosure, a computer-readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, however, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, with computer-readable program code embodied therein. Such a propagated signal may take many forms, including, but not limited to, an electromagnetic signal, an optical signal, or any suitable combination of the foregoing. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, fiber optic cables, RF (radio frequency), and the like, or any suitable combination of the foregoing.
In some implementations, the clients, servers may communicate using any known or future developed network protocol, such as HTTP (HyperText Transfer Protocol ), and may be interconnected with digital information communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), the internet (e.g., the internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any known or future developed networks.
The computer readable medium may be contained in the electronic device; or may exist alone without being incorporated into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
acquiring a two-dimensional environment image of an environment where a user is located in reality;
converting the two-dimensional environment image into a three-dimensional environment image;
determining a gaze point location of the user in the three-dimensional environmental image;
taking the gaze point position as the focus, and capturing the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image;
and displaying the left eye image and the right eye image so that light rays of the left eye image enter the left eye of the user and light rays of the right eye image enter the right eye of the user.
Alternatively, the electronic device may perform other steps described in the above embodiments when the above one or more programs are executed by the electronic device.
Computer program code for carrying out operations of the present disclosure may be written in one or more programming languages, including, but not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units involved in the embodiments of the present disclosure may be implemented by means of software, or may be implemented by means of hardware. Wherein the names of the units do not constitute a limitation of the units themselves in some cases.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a Complex Programmable Logic Device (CPLD), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, the present disclosure provides an electronic device comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of displaying a virtual reality device as any one of the present disclosure provides.
According to one or more embodiments of the present disclosure, the present disclosure provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a display method of any of the virtual reality devices as provided by the present disclosure.
The disclosed embodiments also provide a computer program product comprising a computer program or instructions which, when executed by a processor, implement a method of displaying a virtual reality device as described above.
It should be noted that in this document, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
The foregoing is merely a specific embodiment of the disclosure to enable one skilled in the art to understand or practice the disclosure. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the disclosure. Thus, the present disclosure is not intended to be limited to the embodiments shown and described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (12)

1. A method for displaying a virtual reality device, comprising:
acquiring a two-dimensional environment image of an environment where a user is located in reality;
converting the two-dimensional environment image into a three-dimensional environment image;
determining a gaze point location of the user in the three-dimensional environmental image;
taking the gaze point position as the focus, and capturing the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image;
and displaying the left eye image and the right eye image so that light rays of the left eye image enter the left eye of the user and light rays of the right eye image enter the right eye of the user.
2. The display method according to claim 1, characterized in that the method further comprises:
acquiring a first depth information set corresponding to the two-dimensional environment image; the first depth information set comprises first depth information of different positions in the two-dimensional environment image;
the converting the two-dimensional environment image into a three-dimensional environment image includes:
the two-dimensional environmental image is converted into a three-dimensional environmental image based on the first depth information set.
3. The display method according to claim 2, wherein the acquiring the first depth information set corresponding to the two-dimensional environment image includes:
acquiring second depth information at a plurality of positions in the two-dimensional environment image;
segmenting the two-dimensional environment image to form a plurality of segmented images;
for any one of the segmented images, obtaining first depth information corresponding to that segmented image based on all second depth information corresponding to it, such that the number of pieces of first depth information corresponding to the same segmented image is smaller than the number of pieces of second depth information;
and summarizing the first depth information corresponding to each segmented image to obtain the first depth information set corresponding to the two-dimensional environment image.
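One plausible reading of claim 3, sketched below: average the denser second depth information within each segmented region to obtain a single piece of first depth information per region. The mean is an assumption; the claim only requires fewer pieces of first depth information than of second.

    import numpy as np

    def aggregate_depth(second_depth, labels):
        # second_depth: (H, W) depth samples; labels: (H, W) segment ids.
        first_depth = {}
        for seg_id in np.unique(labels):
            # One representative depth per segmented image (here: the mean).
            first_depth[seg_id] = float(second_depth[labels == seg_id].mean())
        return first_depth  # summarized first depth information set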
4. The display method according to claim 3, wherein the segmenting the two-dimensional environment image to form a plurality of segmented images comprises:
segmenting the two-dimensional environment image based on all the second depth information to form a plurality of segmented images, wherein, for each segmented image, the difference between the maximum value and the minimum value of all second depth information corresponding to that segmented image is smaller than a set threshold value.
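A simple way to satisfy this constraint is to quantize depth into bins of width equal to the threshold, which keeps the max-min difference within any segment below it; the binning strategy is an illustrative assumption, since the claim fixes the constraint but not the segmentation algorithm.

    import numpy as np

    def segment_by_depth(second_depth, threshold):
        # Flooring depth into bins of width `threshold` ensures that, within
        # each segment, max(depth) - min(depth) < threshold.
        return (second_depth // threshold).astype(np.int32)  # (H, W) segment ids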
5. The display method according to claim 2, wherein the converting the two-dimensional environment image into a three-dimensional environment image based on the first depth information set includes:
determining a target three-dimensional model based on the first depth information set;
and converting the two-dimensional environment image into a three-dimensional environment image based on the target three-dimensional model.
6. The display method of claim 5, wherein the determining a target three-dimensional model based on the first depth information set comprises:
determining depth variation characteristic information of the two-dimensional environment image based on the first depth information set;
and determining the target three-dimensional model based on the depth change characteristic information of the two-dimensional environment image.
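A hedged sketch of claim 6: treat the mean depth-gradient magnitude as the depth change characteristic information and select a model template from it. The gradient measure, the thresholds, and the plane/curved-surface/mesh templates are all hypothetical.

    import numpy as np

    def choose_model(first_depth_map):
        # Depth change characteristic: mean magnitude of the depth gradient.
        gy, gx = np.gradient(first_depth_map.astype(np.float64))
        variation = float(np.hypot(gx, gy).mean())
        # Map the characteristic onto a target three-dimensional model.
        if variation < 0.01:
            return "plane"            # nearly constant depth: flat billboard
        if variation < 0.1:
            return "curved_surface"
        return "dense_mesh"           # strongly varying depth: full mesh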
7. The display method of claim 5, wherein the determining a target three-dimensional model based on the first depth information set comprises:
performing object recognition on the two-dimensional environment image to obtain a recognition object;
determining a third depth information set corresponding to the recognition object based on the first depth information set, wherein the third depth information set includes first depth information at different positions in the recognition object;
determining depth change characteristic information of the recognition object based on the third depth information set corresponding to the recognition object;
and determining the target three-dimensional model based on the depth change characteristic information of the recognition object.
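Claim 7 applies the same idea per recognized object. In the sketch below, the detections input (pairs of label and boolean mask) stands in for any object recognizer, and the depth range serves as a crude per-object depth change characteristic; both choices are assumptions.

    def choose_models_per_object(first_depth_map, detections):
        # detections: iterable of (label, boolean_mask) pairs, one per object.
        models = {}
        for label, mask in detections:
            # Third depth information set: first depth information at the
            # positions belonging to this recognition object.
            object_depth = first_depth_map[mask]
            # Crude depth change characteristic: the object's depth range.
            variation = float(object_depth.max() - object_depth.min())
            models[label] = "plane" if variation < 0.05 else "dense_mesh"
        return models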
8. The display method according to claim 1, wherein the determining the gaze point position of the user in the three-dimensional environment image comprises:
determining coordinate values of a user's gaze point in a three-dimensional space;
and determining the position of the user's gaze point in the three-dimensional environment image based on the coordinate values of the user's gaze point in the three-dimensional space.
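A sketch of claim 8, assuming the tracked gaze coordinates are mapped onto the scene by a nearest-point lookup into the point cloud; the nearest-point rule is an assumption, not something the claim prescribes.

    import numpy as np

    def gaze_point_in_scene(gaze_xyz, points):
        # points: (N, 3) 3D environment points; gaze_xyz: tracked (x, y, z).
        distances = np.linalg.norm(points - np.asarray(gaze_xyz), axis=1)
        return points[np.argmin(distances)]  # gaze point in the 3D image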
9. The display method according to claim 1, characterized by further comprising:
and updating the left eye image and the right eye image if the depth information of at least one position in the two-dimensional environment image changes and/or the coordinate values of the user's gaze point in the three-dimensional space change.
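Claim 9's condition can be implemented as a cheap per-frame dirty check, as in the sketch below; the tolerance values are illustrative.

    import numpy as np

    def needs_update(prev_depth, depth, prev_gaze, gaze,
                     depth_tol=1e-3, gaze_tol=1e-3):
        # Re-render the eye images if any depth value or the gaze point moved.
        depth_changed = np.any(np.abs(depth - prev_depth) > depth_tol)
        gaze_moved = np.linalg.norm(
            np.asarray(gaze) - np.asarray(prev_gaze)) > gaze_tol
        return bool(depth_changed or gaze_moved)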
10. A display device of a virtual reality apparatus, comprising:
the acquisition module is used for acquiring a two-dimensional environment image of the real environment where the user is located;
the conversion module is used for converting the two-dimensional environment image into a three-dimensional environment image;
a determining module, configured to determine a gaze point position of a user in the three-dimensional environment image;
the shooting module is used for taking the gaze point position as a focus, and shooting the three-dimensional environment image with two virtual cameras arranged at different positions to obtain a left eye image and a right eye image;
the display module is used for displaying the left eye image and the right eye image so that light rays of the left eye image enter the left eye of a user, and light rays of the right eye image enter the right eye of the user.
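The apparatus of claim 10 maps naturally onto a small class whose callables mirror the five modules; the class and attribute names below are illustrative only.

    class VRDisplayDevice:
        def __init__(self, acquire, convert, determine, shoot, display):
            self.acquire = acquire      # acquisition module: 2D environment image
            self.convert = convert      # conversion module: 2D -> 3D image
            self.determine = determine  # determining module: user's gaze point
            self.shoot = shoot          # shooting module: left/right eye images
            self.display = display      # display module: one image per eye

        def run_once(self):
            image_2d = self.acquire()
            scene_3d = self.convert(image_2d)
            gaze = self.determine(scene_3d)
            left, right = self.shoot(scene_3d, gaze)
            self.display(left, right)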
11. An electronic device, the electronic device comprising:
one or more processors;
a storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method of any of claims 1-9.
12. A computer readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the method according to any one of claims 1-9.

Priority Applications (1)

Application Number: CN202210687337.9A
Priority Date: 2022-06-16
Filing Date: 2022-06-16
Title: Display method and device of virtual reality equipment, electronic equipment and storage medium


Publications (1)

Publication Number: CN117289454A
Publication Date: 2023-12-26

Family

Family ID: 89252274


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination