CN107833265B - Image switching display method and virtual reality equipment - Google Patents

Image switching display method and virtual reality equipment

Info

Publication number
CN107833265B
Authority
CN
China
Prior art keywords
image
grids
rendered
meshes
switching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201711208855.3A
Other languages
Chinese (zh)
Other versions
CN107833265A (en
Inventor
秦文东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Goertek Techology Co Ltd
Original Assignee
Goertek Optical Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Goertek Optical Technology Co Ltd filed Critical Goertek Optical Technology Co Ltd
Priority to CN201711208855.3A priority Critical patent/CN107833265B/en
Publication of CN107833265A publication Critical patent/CN107833265A/en
Application granted granted Critical
Publication of CN107833265B publication Critical patent/CN107833265B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/00 3D [Three Dimensional] image rendering

Abstract

The embodiment of the invention provides an image switching display method and virtual reality equipment, wherein the method comprises the following steps: in response to a first switching instruction indicating switching from a plane display mode to a panorama display mode, drawing a virtual sphere model with the position of the user as the sphere center, the inner surface of the virtual sphere model comprising N meshes, wherein N is an integer and N > 1; and gradually increasing the meshes occupied by the image to be rendered, rendering the image in the meshes determined at each pass, until the image occupies all N meshes. By implementing the embodiment of the invention, the switch between the plane display and the panorama display is smoothly connected, thereby improving the user experience.

Description

Image switching display method and virtual reality equipment
Technical Field
The invention relates to the technical field of virtual reality, in particular to an image switching display method and virtual reality equipment.
Background
With the development of computer technology, virtual reality devices have gradually become popular. When a virtual reality device plays multimedia, the user enjoys a comfortable large-screen feel together with 360-degree immersion. The device can play multimedia in two modes, plane display and panoramic display, and for multimedia that supports both plane display and 360-degree panoramic display, switching between the two modes may be involved.
In the prior art, switching between the two modes requires reloading the scene and reinitializing the video, during which the user sees a black screen, i.e., a stutter. The display content therefore does not transition smoothly during the switch, and the user experience is poor.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image switching display method and a virtual reality device, so as to smoothly connect the switch between the plane display and the panorama display, thereby improving user experience.
In a first aspect, an embodiment of the present invention provides an image switching display method, including:
in response to a first switching instruction indicating switching from a plane display mode to a panorama display mode, drawing a virtual sphere model with the position of the user as the sphere center, an inner surface of the virtual sphere model comprising N meshes, wherein N is an integer and N > 1;
and gradually increasing the meshes occupied by the image to be rendered, rendering the image in the meshes determined each time, until the image to be rendered occupies the N meshes.
In a second aspect, an embodiment of the present invention provides a storage medium, where the storage medium is configured to store executable instructions, where the executable instructions are configured to be executed to implement the image switching display method provided in the first aspect of the embodiment of the present invention.
In a third aspect, an embodiment of the present invention provides a virtual reality device, where the virtual reality device includes:
a processor, a memory;
the memory is used for storing executable instructions;
the processor implements the image switching display method provided by the first aspect of the embodiment of the present invention by executing the executable instructions stored in the memory.
In response to the first switching instruction, during the switch from the plane display mode to the panorama display mode, the meshes on the inner surface of the virtual sphere model occupied by the image to be rendered are gradually increased, and the image is rendered in the meshes determined at each pass until it occupies all the meshes of the virtual sphere model, completing the switch from the plane display mode to the panorama display mode. Because the user is at the center of the virtual sphere model, from the user's perspective the image gradually enlarges during the switch until it fills the entire inner surface of the sphere, forming the panoramic effect. The plane display mode and the panorama display mode are thus smoothly connected, and the user experience is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an image switching display method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of an addition process for successively adding meshes occupied by an image to be rendered;
FIG. 3 is an alternative of the preset orientation;
FIG. 4 is an alternative diagram of an area where a grid corresponding to a predetermined direction is located;
FIG. 5 is an alternative implementation of the determination of the grid in step S101 of the embodiment shown in FIG. 1;
FIG. 6 is a schematic diagram of an alternative determination result for determining N edges with a reference edge and a preset angle;
fig. 7 is a schematic structural diagram of an image switching display device according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a virtual reality device according to an embodiment of the present invention;
fig. 9 is another schematic structural diagram of a virtual reality device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly dictates otherwise; "multiple" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein is merely one type of association that describes an associated object, meaning that three relationships may exist, e.g., a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, third, etc. may be used to describe XXX in embodiments of the present invention, these XXX should not be limited to these terms. These terms are used only to distinguish XXX. For example, a first XXX may also be referred to as a second XXX, and similarly, a second XXX may also be referred to as a first XXX, without departing from the scope of embodiments of the present invention.
The word "if", as used herein, may be interpreted as "when" or "upon" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined" or "in response to a determination" or "when (a stated condition or event) is detected" or "in response to detecting (a stated condition or event)", depending on the context.
It is also noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such product or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other like elements in the product or system that includes the element.
It is further worth noting that the order between the steps in the embodiments of the present invention may be adjusted, and is not necessarily performed in the order illustrated below.
The embodiment of the invention provides an image switching display method which can be applied to virtual reality equipment, such as single/binocular virtual reality glasses, single/double-display screen virtual reality glasses and the like. The method provided by the embodiment of the invention can also be applied to an image switching display device running on the virtual reality equipment, and the device can be application software for image switching, functional plug-ins of related software and the like.
It should be noted that, in the present embodiment, the image switching refers to switching between a plane exhibition mode and a 360-degree panorama exhibition mode of an image.
As shown in fig. 1, the method provided by the embodiment of the present invention includes the following steps:
s101: in response to a first switching instruction indicating switching from the flat view mode to the panorama view mode, drawing a virtual sphere model with a position where a user is located as a sphere center, an inner surface of the virtual sphere model including N meshes.
When the virtual reality equipment plays multimedia images, they can be displayed in two modes, plane display and panoramic display, which suit different requirements. In the plane display mode, multimedia is shown with a flat effect, the same experience as a 2D display device; in the panorama display mode, the image surrounds the user in a space, giving a better immersive experience. In practical applications, it may be necessary to switch between the plane display mode and the panoramic display mode.
If the user wishes to get a better immersive experience, a first switching instruction may be issued to switch from the plane display mode to the panorama display mode. Optionally, the user may issue the first switching instruction by voice, key press, body movement, or similar operations. Of course, the first switching instruction is not limited to being issued by the user; it may also be issued by the device itself, for example on a schedule.
When the first switching instruction is received, the virtual sphere model is established by taking the position of the user as the sphere center, and it can be understood that the inner surface of the virtual sphere model is used for displaying the image to be rendered. Specifically, the inner surface of the virtual sphere model includes N meshes, and the image to be rendered may be rendered on the meshes. Wherein N is an integer, and N > 1. Alternatively, the mesh may be a triangular mesh, a rhombic mesh, a rectangular mesh composed of two triangular meshes, or the like.
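By way of illustration only, the tessellation of the inner surface of the virtual sphere model into N meshes can be sketched with a latitude/longitude subdivision. The function name, the radius, and the subdivision counts below are assumptions for the sketch, not details from the patent:

```python
import math

def build_sphere_meshes(radius=10.0, lat_steps=16, lon_steps=32):
    """Illustrative sketch: tessellate the inner surface of a virtual sphere
    centered on the user (the origin) into N = lat_steps * lon_steps
    rectangular meshes, each stored as its four corner vertices."""
    def vertex(lat, lon):
        # Spherical -> Cartesian; the user sits at (0, 0, 0), the sphere center.
        theta = math.pi * lat / lat_steps        # polar angle, 0..pi
        phi = 2.0 * math.pi * lon / lon_steps    # azimuth, 0..2*pi
        return (radius * math.sin(theta) * math.cos(phi),
                radius * math.cos(theta),
                radius * math.sin(theta) * math.sin(phi))

    meshes = []
    for i in range(lat_steps):
        for j in range(lon_steps):
            meshes.append((vertex(i, j), vertex(i, j + 1),
                           vertex(i + 1, j + 1), vertex(i + 1, j)))
    return meshes

meshes = build_sphere_meshes()
print(len(meshes))  # 512 meshes (N = lat_steps * lon_steps)
```

With lat_steps = 16 and lon_steps = 32, N = 512 rectangular meshes cover the inner surface; each rectangular mesh could equally be split into two triangular meshes, as the description allows.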
S102: and gradually increasing the grids occupied by the image to be rendered until the image to be rendered occupies N grids, so as to render the image to be rendered in the grids determined each time.
Optionally, the mesh occupied by the image to be rendered is increased, which is actually a process of enlarging the image to be rendered, and the method of enlarging the image to be rendered may adopt an interpolation method.
Optionally, the process of successively increasing the meshes occupied by the image to be rendered may be: at a first time, rendering the image to be rendered in M1 of the N meshes; at a second time, rendering it in M2 of the N meshes, where M1 and M2 are integers and N ≥ M2 > M1 ≥ 1. The first time may be the first rendering pass after the first switching instruction is received, and the second time a moment after it. Of course, the first time may also be any later rendering pass (the second, third, fourth, ...), with the second time being a rendering pass after the first time.
When the first time is the time of the first rendering, the value of the number M1 of meshes occupied by the image to be rendered may be 1, and in this case, M2 may be 4, that is, M2 is 4 × M1. Of course, the numerical values of M1 and M2 may also have other relationships, and this embodiment is not listed here.
The successive rendering process is described below with a specific example. As shown in fig. 2, assuming the mesh is a rectangular mesh, the number of meshes occupied by the image to be rendered is 1 at the first rendering pass, 4 at the second, 16 at the third, and so on, until the number of occupied meshes equals N, that is, until the image to be rendered fills the entire virtual sphere model.
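The quadrupling schedule in the example above (1, 4, 16, ... meshes per pass, M2 = 4 × M1) can be sketched as follows; the growth factor and the final cap at N are illustrative assumptions, since the description notes that M1 and M2 may have other relationships:

```python
def mesh_count_schedule(n_total, growth=4, start=1):
    """Illustrative sketch: number of meshes occupied by the image at each
    rendering pass, multiplying by `growth` until all n_total meshes are
    covered; the cap at n_total ends the switch."""
    counts, m = [], start
    while m < n_total:
        counts.append(m)
        m *= growth
    counts.append(n_total)
    return counts

print(mesh_count_schedule(512))  # [1, 4, 16, 64, 256, 512]
```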
After the number M1 is determined, the positions of the M1 meshes need to be determined. Optionally, this may be done as follows: determine the K meshes corresponding to a preset direction among the N meshes, where K is an integer and M1 < K < N; then take the M1 most central meshes from the K meshes. Specifically, the direction of a certain point on the virtual sphere model relative to the sphere center may be predetermined as the preset direction; optionally, the preset direction may be as shown in fig. 3. The meshes within a set angle θ around the preset direction are then the K meshes corresponding to the preset direction. For example, as shown in fig. 4, the meshes in the shaded area of fig. 4 correspond to a preset direction.
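A minimal sketch of this selection, assuming each mesh is represented by its center point; the function name and parameters (`grids_near_direction`, `theta_deg`) are illustrative and not from the patent:

```python
import math

def grids_near_direction(mesh_centers, preset_dir, theta_deg, m1):
    """Illustrative sketch: keep the K mesh centers whose direction from the
    sphere center lies within theta_deg of the preset direction, then take
    the m1 most central ones (smallest angular distance)."""
    def angle_to(c):
        dot = sum(a * b for a, b in zip(c, preset_dir))
        na = math.sqrt(sum(a * a for a in c))
        nb = math.sqrt(sum(b * b for b in preset_dir))
        # Clamp to guard against floating-point drift outside [-1, 1].
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

    k_grids = [c for c in mesh_centers if angle_to(c) <= theta_deg]
    k_grids.sort(key=angle_to)   # most central first
    return k_grids[:m1]
```

For example, with the preset direction (0, 0, 1) and θ = 45 degrees, only mesh centers within 45 degrees of that direction form the K meshes, and the M1 closest to the direction are returned.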
Alternatively, the M1 meshes may also be determined among the meshes corresponding to the user's orientation, so as to render the image to be rendered in those M1 meshes.
For the positions of the M2 meshes, optionally, they include the positions of the M1 meshes; that is, after the M1 mesh positions are determined, M2 - M1 meshes around them are added, and together these form the M2 mesh positions. Specifically, the M2 meshes may be determined within the region centered on the M1 meshes.
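Assuming the meshes are indexed as (row, column) cells of a latitude/longitude lattice, growing the M1 region into an M2 region that contains it might look like the following sketch; the one-ring growth rule and all names are illustrative assumptions:

```python
def expand_region(m1_indices, lat_steps, lon_steps):
    """Illustrative sketch: grow the occupied region by one ring of
    neighbouring meshes around the current M1 meshes, so that the M2
    meshes contain the M1 meshes. Indices are (row, col) cells on the
    lat/lon mesh lattice."""
    expanded = set(m1_indices)
    for (r, c) in m1_indices:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, (c + dc) % lon_steps  # wrap around azimuth
                if 0 <= nr < lat_steps:
                    expanded.add((nr, nc))
    return expanded

m2 = expand_region({(8, 16)}, 16, 32)
print(len(m2))  # 9: the original mesh plus its 8 neighbours
```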
It can be seen that, in this embodiment, in response to the first switching instruction, during the switch from the plane display mode to the panorama display mode, the meshes on the inner surface of the virtual sphere model occupied by the image to be rendered are gradually increased, and the image is rendered in the determined meshes each time until it occupies all the meshes of the virtual sphere model, completing the switch. In the actual display process, the user is located at the center of the virtual sphere model, so from the user's perspective the image gradually enlarges during the switch until it fills the entire inner surface of the sphere, forming a panoramic effect; the plane display mode and the panorama display mode are thus smoothly connected, and the user experience is improved.
An alternative embodiment is described below, which details the process of determining the N meshes of the virtual sphere model in step S101. As shown in fig. 5, this embodiment may include the following steps:
s201: and determining N edges according to a reference edge in the virtual sphere model and a preset angle variation, wherein the reference edge is a radius of the virtual sphere model.
As shown in fig. 6, assuming that a radius in the horizontal direction of the virtual sphere model is used as a reference edge, the reference edge is rotated in each direction around the center of the sphere, and when the virtual sphere model is rotated in a certain fixed direction, one edge is determined every time the rotation angle increases by a preset angle until N edges are determined. For example, as shown in fig. 6, if the preset angle is 90 degrees and the reference edge is rotated in both vertical and horizontal directions, the N edges may be N1-N6 as shown by the dotted lines in fig. 6.
S202: and determining N rectangular grids by taking the intersection points of the N edges and the spherical surface of the virtual sphere model as vertexes and combining preset edge lengths.
In this embodiment, the grid may be a rectangular grid, but the shape of the grid is not limited to the rectangular grid listed in this embodiment.
Alternatively, the vertex may correspond to a point at the top left corner, a point at the top right corner, a point at the bottom left corner, and a point at the bottom right corner of the rectangular mesh. It should be noted that, if the predetermined vertex corresponds to a point at the lower left corner of the rectangular mesh, the rectangular mesh is determined by taking each vertex as the point at the lower left corner in the determination process.
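Steps S201-S202 above can be sketched by expressing the rotations of the reference edge in spherical coordinates; the 90-degree step angle matches the example of fig. 6, while the radius and function name are illustrative assumptions:

```python
import math

def edge_endpoints(radius=10.0, step_deg=90):
    """Illustrative sketch of S201-S202: rotate a horizontal reference radius
    about the sphere center in the vertical and horizontal directions,
    emitting one edge per step_deg increment. Each edge meets the sphere at
    one point; those intersection points become the mesh vertices."""
    points = []
    for v in range(0, 360, step_deg):        # vertical rotation
        for h in range(0, 360, step_deg):    # horizontal rotation
            theta, phi = math.radians(v), math.radians(h)
            points.append((radius * math.cos(theta) * math.cos(phi),
                           radius * math.sin(theta),
                           radius * math.cos(theta) * math.sin(phi)))
    return points

print(len(edge_endpoints()))  # 16 endpoints at 90-degree steps (pole points repeat)
```

Each intersection point then serves as a fixed corner (e.g., the lower-left corner) of a rectangular mesh, with the remaining corners placed using the preset edge lengths.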
In practical application, there may also be a need to switch from the panoramic display mode to the plane display mode. To do so, the user may send a second switching instruction; in response to it, the meshes occupied by the image to be rendered are gradually reduced, rendering the image in the meshes determined at each pass, until the image occupies one mesh. It can be understood that successively reducing the occupied meshes is essentially a reduction of the image to be rendered, and the image may optionally be reduced by a down-sampling method. The switching process in this embodiment is essentially the reverse of step S102 in the embodiment shown in fig. 1 and is not described again here.
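A sketch of the reverse schedule follows; the shrink factor is an assumption for illustration, since the description only specifies that the occupied meshes decrease until a single mesh remains:

```python
def shrink_schedule(n_total, factor=4):
    """Illustrative sketch of the reverse switch (panorama -> plane): the
    occupied mesh count falls from all n_total meshes back to a single
    mesh, dividing by `factor` at each rendering pass."""
    counts, m = [n_total], n_total
    while m > 1:
        m = max(1, m // factor)
        counts.append(m)
    return counts

print(shrink_schedule(512))  # [512, 128, 32, 8, 2, 1]
```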
Corresponding to the above method embodiment, as shown in fig. 7, an embodiment of the present invention further provides an image switching display device, including: a drawing module 710 and a rendering module 720.
A drawing module 710, configured to, in response to a first switch instruction indicating to switch from the plane display mode to the panorama display mode, draw a virtual sphere model with a location of a user as a sphere center, where an inner surface of the virtual sphere model includes N meshes, where N is an integer and N > 1.
And a rendering module 720, configured to gradually increase the meshes occupied by the image to be rendered, rendering the image in the meshes determined each time, until the image to be rendered occupies the N meshes.
Therefore, during the switch the user sees the image to be rendered gradually enlarge until it fills the inner surface of the entire virtual sphere model; this embodiment thus smoothly connects the plane display mode and the panorama display mode, improving the user experience.
Optionally, the rendering module 720 includes: a first rendering submodule 721, a second rendering submodule 722.
A first rendering submodule 721 configured to render the image to be rendered in M1 meshes of the N meshes at a first time.
A second rendering submodule 722, configured to render the image to be rendered in M2 grids of the N grids at a second time, where M1 and M2 are integers, and N ≥ M2 > M1 ≥ 1.
Optionally, the apparatus further comprises: a first determining submodule 723 and an obtaining submodule 724.
The first determining submodule 723 is configured to determine K grids corresponding to a preset direction in the N grids before rendering an image to be rendered in M1 grids of the N grids, and trigger the first rendering submodule, where K is an integer and M1< K < N.
An obtaining submodule 724, configured to obtain the M1 grids located at the center from the K grids.
Optionally, the apparatus further comprises:
a second determining submodule 725 for determining the M1 meshes in the meshes corresponding to the user's orientation before rendering the image to be rendered in M1 meshes of the N meshes, and triggering the first rendering submodule.
Optionally, the apparatus further comprises:
a third determining submodule 726, configured to determine, before rendering the image to be rendered in M2 grids of the N grids, the M2 grids in the grid centered on the M1 grids, and trigger the second rendering submodule.
Optionally, the drawing module 710 includes: a fourth determination submodule 711 and a fifth determination submodule 712.
The fourth determining submodule 711 is configured to determine N edges according to a reference edge in the virtual sphere model and a preset angle variation, where the reference edge is a radius of the virtual sphere model.
And a fifth determining submodule 712, configured to determine N rectangular meshes by using intersection points of the N edges and the virtual sphere model spherical surface as vertices and combining preset edge lengths.
Optionally, the apparatus further comprises:
the processing module 730, configured to, in response to a second switching instruction indicating to switch from the panoramic display mode to the planar display mode, gradually reduce the meshes occupied by the image to be rendered until the image to be rendered occupies one mesh, so as to render the image to be rendered in the determined mesh each time.
It should be noted that, for the device embodiment, since it is basically similar to the method embodiment, the description is simpler, and the relevant points can be referred to only the partial description of the method embodiment.
The embodiment of the invention also provides a storage medium, wherein the storage medium is used for storing executable instructions, and the executable instructions are used for being executed to realize the image switching display method provided by the embodiment of the invention.
As shown in fig. 8, an embodiment of the present invention further provides a virtual reality device, where the virtual reality device includes:
a processor 810, a memory 820;
the memory 820 is used for storing executable instructions;
the processor 810 implements the image switching display method provided by the embodiment of the present invention by executing the executable instructions stored in the memory 820.
Because the virtual reality device executes the image switching display method provided by the embodiment of the invention, during the switch the user sees the image to be rendered gradually enlarge until it fills the inner surface of the entire virtual sphere model, so the plane display mode and the panoramic display mode are smoothly connected and the user experience is improved.
Optionally, the virtual reality device provided in the embodiment of the present invention may further include: a communication interface 830, the communication interface 830 is used for realizing communication between the virtual reality device and the outside.
The virtual reality device provided by some embodiments of the present invention may be a head-mounted display device, specifically an external head-mounted display device or an integrated head-mounted display device, where the external head-mounted display device needs to be used with an external processing system (e.g., a computer processing system).
Fig. 9 is a schematic diagram showing an internal configuration of the head-mounted display apparatus 900 in some embodiments.
The display unit 901 may include a display panel disposed on the side of the head-mounted display device 900 facing the user's face; it may be a single integral panel, or a left panel and a right panel corresponding to the user's left and right eyes, respectively. The display panel may be an electroluminescence (EL) element, a liquid crystal display, a micro display with a similar structure, or a laser scanning type display that projects directly onto the retina.
The virtual image optical unit 902 magnifies the image displayed by the display unit 901 and allows the user to observe it as an enlarged virtual image. The display image output to the display unit 901 may be an image of a virtual scene provided by a content reproduction apparatus (a Blu-ray disc or DVD player) or a streaming server, or an image of a real scene photographed with the external camera 910. In some embodiments, the virtual image optical unit 902 may include a lens element, such as a spherical lens, an aspherical lens, or a Fresnel lens.
The input operation unit 903 includes at least one operation member such as a key, a button, a switch, or other members having similar functions for performing an input operation, receives a user instruction through the operation member, and outputs the instruction to the control unit 907.
The state information acquisition unit 904 is used to acquire state information of a user wearing the head-mounted display device 900. It may include various sensors for detecting state information itself, and may also acquire state information from an external device (e.g., a smartphone, wristwatch, or other multi-function terminal worn by the user) through the communication unit 905. The unit may acquire position information and/or posture information of the user's head, and may include one or more of a gyro sensor, an acceleration sensor, a Global Positioning System (GPS) sensor, a geomagnetic sensor, a Doppler effect sensor, an infrared sensor, and a radio frequency field intensity sensor. Further, the state information acquisition unit 904 acquires the state of the user wearing the device, for example the operation state (whether the user is wearing the head-mounted display device 900), the action state (still, walking, running, or another moving state; the posture of a hand or fingertip; the open or closed state of the eyes; gaze direction; pupil size), the mental state (whether the user is immersed in viewing the displayed image), and even the physiological state.
The communication unit 905 performs communication processing with external devices, modulation and demodulation processing, and encoding and decoding of communication signals. In addition, the control unit 907 may transmit data to an external device through the communication unit 905. The communication may be wired or wireless, such as Mobile High-Definition Link (MHL), Universal Serial Bus (USB), High-Definition Multimedia Interface (HDMI), wireless fidelity (Wi-Fi), Bluetooth or Bluetooth Low Energy communication, or a mesh network of the IEEE 802.11s standard. Additionally, the communication unit 905 may be a cellular radio transceiver operating in accordance with Wideband Code Division Multiple Access (W-CDMA), Long Term Evolution (LTE), and similar standards.
In some embodiments, the head mounted display device 900 may further include a storage unit, and the storage unit 906 is a mass storage device configured with a Solid State Drive (SSD) or the like. In some embodiments, the storage unit 906 may store application programs or various types of data. For example, content viewed by a user using the head mounted display device 900 may be stored in the storage unit 906.
In some embodiments, the head mounted display device 900 may also include a control unit, and the control unit 907 may include a Central Processing Unit (CPU) or other device with similar functionality. In some embodiments, the control unit 907 may be used to execute applications stored in the storage unit 906, or to perform the methods, functions, and operations disclosed in some embodiments of the present application.
The image processing unit 908 performs signal processing, such as image quality correction, on the image signal output from the control unit 907, and converts its resolution to match the screen of the display unit 901. The display driving unit 909 then selects each row of pixels of the display unit 901 in turn and scans them row by row, providing pixel signals based on the processed image signal.
In some embodiments, head mounted display device 900 may also include an external camera. The external camera 910 may be disposed on a front surface of the body of the head mounted display device 900, and the external camera 910 may be one or more. The external camera 910 may acquire three-dimensional information and may also function as a distance sensor. Additionally, a Position Sensitive Detector (PSD) or other type of distance sensor that detects reflected signals from objects may be used with the external camera 910. The external camera 910 and distance sensors may be used to detect the body position, pose, and shape of a user wearing the head mounted display device 900. In addition, the user may directly view or preview the real scene through the external camera 910 under certain conditions.
In some embodiments, the head mounted display device 900 may further include a sound processing unit 911, which may perform sound quality correction or amplification of the sound signal output from the control unit 907, signal processing of an input sound signal, and the like. After sound processing, the sound input/output unit 912 outputs sound to the outside and receives input sound from the microphone.
It is noted that the structures or components shown in the dashed-line box in fig. 9 may be independent of the head mounted display device 900 and may be disposed in an external processing system (e.g., a computer system) used with the head mounted display device 900; alternatively, they may be disposed inside or on the surface of the head mounted display device 900.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (8)

1. An image switching display method is characterized by comprising the following steps:
in response to a first switching instruction indicating switching from a plane display mode to a panorama display mode, drawing a virtual sphere model with the position of the user as the sphere center, an inner surface of the virtual sphere model comprising N grids, wherein N is an integer and N > 1;
rendering an image to be rendered in M1 grids of the N grids at a first time;
at a second time, rendering the image to be rendered in M2 grids of the N grids, wherein M1 and M2 are integers and N ≥ M2 > M1 ≥ 1.
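The two-step expansion recited in claim 1 generalizes to a frame-by-frame schedule in which the number of occupied grids grows monotonically from M1 toward N. A minimal sketch, assuming a linear schedule; the function name and the interpolation scheme are illustrative assumptions, not part of the claimed method:

```python
def expansion_schedule(n, m1, frames):
    """Number of grids occupied by the image at each frame, growing from m1 to n.

    Successive entries play the roles of M1, M2, ... in claim 1:
    each count is at least the previous one, and the last equals n.
    """
    if not (1 <= m1 <= n) or frames < 1:
        raise ValueError("need 1 <= m1 <= n and frames >= 1")
    if frames == 1:
        return [n]
    # Linearly interpolate the occupied-grid count across the animation frames.
    return [m1 + round((n - m1) * t / (frames - 1)) for t in range(frames)]
```

Rendering the image into the scheduled number of grids at each frame makes the picture appear to spread gradually over the inner surface of the sphere.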
2. The method of claim 1, wherein, prior to rendering the image to be rendered in the M1 grids of the N grids, the method further comprises:
determining K grids corresponding to a preset direction among the N grids, wherein K is an integer and M1 < K < N;
and obtaining the M1 grids located at the center of the K grids.
3. The method of claim 1, wherein, prior to rendering the image to be rendered in the M1 grids of the N grids, the method further comprises:
determining the M1 grids among the grids corresponding to the user's orientation.
4. The method of any one of claims 1 to 3, wherein, prior to rendering the image to be rendered in the M2 grids of the N grids, the method further comprises:
determining the M2 grids among grids centered on the M1 grids.
5. The method according to any one of claims 1 to 3, wherein drawing the virtual sphere model with the position of the user as the sphere center, the inner surface of the virtual sphere model comprising N grids, comprises:
determining N edges according to a reference edge of the virtual sphere model and a preset angle variation, wherein the reference edge is a radius of the virtual sphere model;
and determining N rectangular grids by taking the intersection points of the N edges with the spherical surface of the virtual sphere model as vertices, in combination with preset edge lengths.
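One way to realize the construction in claim 5 is to rotate the reference radius in fixed angular steps and take each rotated edge's intersection with the sphere surface as a grid vertex. A sketch under the assumption that the preset angle variation is applied as spherical-coordinate increments; the function name and parameterization are illustrative:

```python
import math

def sphere_grid_vertices(radius, d_az_deg, d_pol_deg):
    """Intersection points of the rotated radial edges with the sphere surface."""
    vertices = []
    for i in range(round(360 / d_az_deg)):            # azimuth steps around the sphere
        az = math.radians(i * d_az_deg)
        for j in range(round(180 / d_pol_deg) + 1):   # polar steps from pole to pole
            pol = math.radians(j * d_pol_deg)
            # Spherical-to-Cartesian conversion: every vertex lies on the sphere.
            vertices.append((
                radius * math.sin(pol) * math.cos(az),
                radius * math.sin(pol) * math.sin(az),
                radius * math.cos(pol),
            ))
    return vertices
```

Neighbouring vertices are then joined, in combination with the preset edge lengths, into the N rectangular grids covering the inner surface.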
6. The method according to any one of claims 1 to 3, further comprising: in response to a second switching instruction indicating switching from the panorama display mode to the plane display mode, gradually reducing the grids occupied by the image to be rendered until the image to be rendered occupies a single grid, so as to render the image to be rendered in the grids determined each time.
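The reverse switch in claim 6 mirrors the expansion: the occupied-grid count shrinks frame by frame until a single grid remains. Again a sketch assuming an illustrative linear schedule:

```python
def contraction_schedule(n, frames):
    """Number of grids occupied at each frame, shrinking from n down to one grid."""
    if n < 1 or frames < 1:
        raise ValueError("need n >= 1 and frames >= 1")
    if frames == 1:
        return [1]
    # Linearly reduce the occupied-grid count over the animation frames.
    return [n - round((n - 1) * t / (frames - 1)) for t in range(frames)]
```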
7. A storage medium storing executable instructions which, when executed, implement the image switching display method according to any one of claims 1 to 6.
8. A virtual reality device, characterized in that the virtual reality device comprises:
a processor, a memory;
the memory is used for storing executable instructions;
the processor implements the image switch presentation method of any one of claims 1 to 6 by executing executable instructions stored in the memory.
CN201711208855.3A 2017-11-27 2017-11-27 Image switching display method and virtual reality equipment Active CN107833265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711208855.3A CN107833265B (en) 2017-11-27 2017-11-27 Image switching display method and virtual reality equipment


Publications (2)

Publication Number Publication Date
CN107833265A CN107833265A (en) 2018-03-23
CN107833265B true CN107833265B (en) 2021-07-27

Family

ID=61645935

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711208855.3A Active CN107833265B (en) 2017-11-27 2017-11-27 Image switching display method and virtual reality equipment

Country Status (1)

Country Link
CN (1) CN107833265B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109348246B * 2018-11-21 2021-11-30 Beijing Future Media Technology Co., Ltd. 4K panoramic super-fusion video live broadcast method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101208723A * 2005-02-23 2008-06-25 Craig Summers Automatic scene modeling for the 3D camera and 3D video
CN101635138A * 2009-08-27 2010-01-27 Beijing Crystal Digital Technology Co., Ltd. Method for scene visualization
CN103020962A * 2012-11-27 2013-04-03 Wuhan Haida Shuyun Technology Co., Ltd. Rapid level detection method of mouse applied to three-dimensional panoramic picture
CN103413353A * 2013-07-31 2013-11-27 Tianmai Juyuan (Beijing) Media Technology Co., Ltd. Resource showing method, device and terminal
CN104867175A * 2015-06-02 2015-08-26 Meng Junle Real-scene displaying device for virtual effect picture and implementing method therefor
CN105869201A * 2016-03-25 2016-08-17 Beijing Quanjing Siwei Technology Co., Ltd. Method and device for achieving smooth switching of panoramic views in panoramic roaming
CN106127681A * 2016-07-19 2016-11-16 Liu Muye Image acquisition method, virtual reality image transmission method, and display method
CN106445437A * 2016-09-08 2017-02-22 Shenzhen Gionee Communication Equipment Co., Ltd. Terminal and view angle switching method thereof
CN107169924A * 2017-06-14 2017-09-15 Goertek Technology Co., Ltd. Method and system for building three-dimensional panoramic image
CN107248193A * 2017-05-22 2017-10-13 Beijing Hongma Media Culture Development Co., Ltd. Method, system and device for switching between two-dimensional plane and virtual reality scene

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102750734B * 2011-08-26 2017-09-19 Xin'aote (Beijing) Video Technology Co., Ltd. Method and system for displaying a virtual three-dimensional earth system
CN103021013A * 2012-11-28 2013-04-03 Wuxi Yifei Technology Co., Ltd. High-efficiency processing method for spherical display and rotary output image of projector
US20160344999A1 (en) * 2013-12-13 2016-11-24 8702209 Canada Inc. SYSTEMS AND METHODs FOR PRODUCING PANORAMIC AND STEREOSCOPIC VIDEOS


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Automatic Panoramic Image Stitching using Invariant Features; Matthew Brown et al.; International Journal of Computer Vision; Dec. 2006; Vol. 74, No. 1; pp. 59-73 *
A Simple Local Magnification Technique for Mesh Graphics; Chen Shangqiao et al.; Journal of Geological Hazards and Environment Preservation; Dec. 1995; Vol. 6, No. 4; pp. 59-63 *
Research on Image Magnification Techniques in Three-Dimensional Display; Xie Zheng; China Excellent Doctoral and Master's Dissertations Full-text Database (Master), Information Science and Technology; Oct. 15, 2006; No. 10; I138-509 *
Polynomial Interpolation Image Magnification Algorithm Based on Triangular Meshes; Xu Lan; China Master's Theses Full-text Database, Information Science and Technology; Jan. 15, 2017; No. 1; I138-718 *
Image Data Generation Method for Flat-Screen True Three-Dimensional Display; Duan Xianyin et al.; Journal of Shandong University (Engineering Science); Apr. 2011; Vol. 41, No. 2; pp. 66-70 *

Also Published As

Publication number Publication date
CN107833265A (en) 2018-03-23

Similar Documents

Publication Publication Date Title
KR102506480B1 (en) Image processing apparatus and method for image processing thereof
US10534428B2 (en) Image processing device and image processing method, display device and display method, and image display system
US9491357B2 (en) Image-processing system and image-processing method in which a size of a viewing angle and a position of a viewing point are changed for zooming
US20180192022A1 (en) Method and System for Real-time Rendering Displaying Virtual Reality (VR) On Mobile Using Head-Up Display Devices
EP3671408B1 (en) Virtual reality device and content adjusting method therefor
JP2019047167A (en) Imaging apparatus, image display system, operation method, program
US11294535B2 (en) Virtual reality VR interface generation method and apparatus
WO2018223663A1 (en) Vr image processing method, device, and apparatus
JP6126821B2 (en) Image generation method, image display method, image generation program, image generation system, and image display apparatus
US11573629B2 (en) Image processing and display method, augmented reality device, image processing device, and display system
US20210382313A1 (en) Image generation appratus, head-mounted display, content processing system, and image display method
CN107833265B (en) Image switching display method and virtual reality equipment
CN108021346A (en) VR helmets show method, VR helmets and the system of image
CN109696959B (en) Picture display method, equipment and storage medium
CN107545595B (en) VR scene processing method and VR equipment
CN107958478B (en) Rendering method of object in virtual reality scene and virtual reality head-mounted equipment
US20210397005A1 (en) Image processing apparatus, head-mounted display, and image displaying method
CN107844197A (en) Virtual reality scenario display methods and equipment
CN107426522B (en) Video method and system based on virtual reality equipment
JP6031016B2 (en) Video display device and video display program
US11240482B2 (en) Information processing device, information processing method, and computer program
CN108108019B (en) Virtual reality equipment and display method thereof
WO2019085109A1 (en) Image writing control method and apparatus, and electronic device
US10628113B2 (en) Information processing apparatus
JP2018074295A (en) Display device, information processing device, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20201028

Address after: 261061 north of Yuqing East Street, east of Dongming Road, Weifang High tech Zone, Weifang City, Shandong Province (Room 502, Geer electronic office building)

Applicant after: GoerTek Optical Technology Co.,Ltd.

Address before: 266104 Laoshan Qingdao District North House Street investment service center room, Room 308, Shandong

Applicant before: GOERTEK TECHNOLOGY Co.,Ltd.

TA01 Transfer of patent application right

Effective date of registration: 20201103

Address after: 261061 north of Yuqing East Street, east of Dongming Road, Weifang High tech Zone, Weifang City, Shandong Province (Room 502, Geer electronic office building)

Applicant after: GoerTek Optical Technology Co.,Ltd.

Address before: 266104 Laoshan Qingdao District North House Street investment service center room, Room 308, Shandong

Applicant before: GOERTEK TECHNOLOGY Co.,Ltd.

CB02 Change of applicant information

Address after: 261061 east of Dongming Road, north of Yuqing East Street, high tech Zone, Weifang City, Shandong Province (Room 502, Geer electronics office building)

Applicant after: GoerTek Optical Technology Co.,Ltd.

Address before: 261061 east of Dongming Road, north of Yuqing East Street, Weifang High tech Zone, Weifang City, Shandong Province (Room 502, Geer electronics office building)

Applicant before: GoerTek Optical Technology Co.,Ltd.

GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20221223

Address after: 266104 No. 500, Songling Road, Laoshan District, Qingdao, Shandong

Patentee after: GOERTEK TECHNOLOGY Co.,Ltd.

Address before: 261061 east of Dongming Road, north of Yuqing East Street, high tech Zone, Weifang City, Shandong Province (Room 502, Geer electronics office building)

Patentee before: GoerTek Optical Technology Co.,Ltd.