CN117676111A - Display method, device and equipment for augmented reality image - Google Patents

Display method, device and equipment for augmented reality image

Info

Publication number
CN117676111A
Authority
CN
China
Prior art keywords
image
target
virtual
sphere
panoramic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311550604.9A
Other languages
Chinese (zh)
Inventor
陈建 (Chen Jian)
方凯 (Fang Kai)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alipay Hangzhou Information Technology Co Ltd
Original Assignee
Alipay Hangzhou Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alipay Hangzhou Information Technology Co Ltd
Priority to CN202311550604.9A
Publication of CN117676111A
Legal status: Pending


Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiments of this specification disclose a display method, apparatus, and device for an augmented reality image. In the scheme, the virtual panoramic image comprises a first panoramic sub-image used to extract color values and a second panoramic sub-image used to extract transparency values, so that a target sphere model carrying a color value and a transparency value at each position can be rendered from the virtual panoramic image. A target spherical image with a perspective effect within the user's viewing angle range can then be obtained from the target sphere model and superimposed on a surrounding environment image captured by the terminal device for display, yielding an augmented reality image.

Description

Display method, device and equipment for augmented reality image
Technical Field
The present disclosure relates to the field of augmented reality technologies, and in particular, to a method, an apparatus, and a device for displaying an augmented reality image.
Background
Extended reality (Extended Reality, XR) may refer to a technology that combines multiple technical means with hardware devices to fuse virtual content with a real scene and allows the user to interact with these elements in real time, giving the experiencer an immersive sense of seamless transition between the virtual world and the real world. Extended reality is a broad term that may cover Augmented Reality (AR), Virtual Reality (VR), Mixed Reality (MR), and emerging related techniques. Because extended reality can deliver a more immersive experience, it is gradually being applied in industries such as entertainment, healthcare, tourism, and education. At present, when viewing the surrounding environment, users want both to obtain more information and to make the process of perceiving the environment more engaging.
Based on the above, how to use extended reality technology to increase the amount of information a user can obtain when viewing the surrounding environment, and to improve the user's experience of perceiving that environment, has become a technical problem in urgent need of a solution.
Disclosure of Invention
The display method, apparatus, and device for an augmented reality image provided by the embodiments of this specification can increase the amount of information a user obtains when viewing the surrounding environment and improve the user's perception experience of that environment.
In order to solve the above technical problems, the embodiments of the present specification are implemented as follows:
the display method of the augmented reality image, provided by the embodiment of the specification, is applied to a terminal device and comprises the following steps:
acquiring a virtual panoramic image and a surrounding environment image acquired by the terminal equipment;
drawing the spherical surface of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain a target sphere model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; a color value at a target position in a sphere of the target sphere model is determined from a pixel value at a position in the first panoramic sub-image corresponding to the target position, and a transparency value at the target position is determined from a pixel value at a position in the second panoramic sub-image corresponding to the target position;
acquiring a target spherical image in a user viewing angle range in the target sphere model;
and superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image.
The display device for an augmented reality image provided in an embodiment of the present disclosure is applied to a terminal device, and includes:
the first acquisition module is used for acquiring the virtual panoramic image and the surrounding environment image acquired by the terminal equipment;
the first drawing module is used for drawing the spherical surface of the virtual three-dimensional spherical model by utilizing the virtual panoramic image to obtain a target spherical model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; a color value at a target position in a sphere of the target sphere model is determined from a pixel value at a position in the first panoramic sub-image corresponding to the target position, and a transparency value at the target position is determined from a pixel value at a position in the second panoramic sub-image corresponding to the target position;
the second acquisition module is used for acquiring a target spherical image in the viewing angle range of the user in the target sphere model;
and the display module is used for superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image.
The display device for an augmented reality image provided in the embodiments of the present specification includes:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a virtual panoramic image and a surrounding environment image acquired by the terminal equipment;
drawing the spherical surface of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain a target sphere model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; a color value at a target position in a sphere of the target sphere model is determined from a pixel value at a position in the first panoramic sub-image corresponding to the target position, and a transparency value at the target position is determined from a pixel value at a position in the second panoramic sub-image corresponding to the target position;
acquiring a target spherical image in a user viewing angle range in the target sphere model;
and superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image.
At least one embodiment provided in this specification enables the following benefits:
The virtual panoramic image comprises a first panoramic sub-image used to extract color values and a second panoramic sub-image used to extract transparency values, so that a target sphere model carrying a color value and a transparency value at each position can be rendered from the virtual panoramic image. A target spherical image with a perspective effect within the user's viewing angle range can then be obtained from the target sphere model and superimposed on a surrounding environment image captured by the terminal device for display, yielding an augmented reality image. When the user perceives the surrounding environment by viewing the augmented reality image, the user can both learn about the real surroundings and view the virtual content, which increases the amount of information available when viewing the surroundings and improves the user's perception experience of the environment.
Drawings
In order to more clearly illustrate the embodiments of the present specification or the technical solutions in the prior art, the drawings required in the embodiments or the description of the prior art are briefly introduced below. It is apparent that the drawings in the following description are only some of the embodiments described in the present application, and other drawings may be obtained from them by a person of ordinary skill in the art without inventive effort.
Fig. 1 is an application scenario schematic diagram of a display method of an augmented reality image according to an embodiment of the present disclosure;
fig. 2 is a flow chart of a display method of an augmented reality image according to an embodiment of the present disclosure;
fig. 3 is a schematic view of a virtual panoramic image according to an embodiment of the present disclosure;
fig. 4 is a schematic view of another virtual panoramic image provided in an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a target sphere model according to an embodiment of the present disclosure;
FIG. 6 is a swim-lane flowchart corresponding to the display method of the augmented reality image in FIG. 2 according to an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of a display apparatus for an augmented reality image corresponding to fig. 2 according to an embodiment of the present disclosure;
fig. 8 is a schematic structural diagram of a display device for an augmented reality image corresponding to fig. 2 according to an embodiment of the present disclosure.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of one or more embodiments of the present specification more clear, the technical solutions of one or more embodiments of the present specification will be clearly and completely described below in connection with specific embodiments of the present specification and corresponding drawings. It will be apparent that the described embodiments are only some, but not all, of the embodiments of the present specification. All other embodiments, which can be made by one of ordinary skill in the art based on the embodiments herein without undue burden, are intended to be within the scope of one or more embodiments herein.
The following describes in detail the technical solutions provided by the embodiments of the present specification with reference to the accompanying drawings.
In the prior art, people can only learn about their real surroundings by observing them visually. In that approach the amount of information a user can obtain is limited, the process is tedious, and the user experience suffers. Therefore, in some scenes there is a need to fuse virtual content with the real environment for display. For example, when a user watches a sports competition in a stadium, there may be a need to fuse an animated video of an auspicious cartoon character with the view of the stadium interior shown on the user device, so as to improve the user experience.
In order to solve the drawbacks of the prior art, the present solution provides the following embodiments:
fig. 1 is an application scenario schematic diagram of a display method of an augmented reality image according to an embodiment of the present disclosure.
As shown in fig. 1, a user may acquire a virtual panoramic image and capture a surrounding environment image using a terminal device 101. The terminal device 101 may then draw the spherical surface of a virtual three-dimensional sphere model with the virtual panoramic image to obtain a target sphere model. The virtual panoramic image may include a first panoramic sub-image and a second panoramic sub-image, so that the color value and the transparency value at a target position on the spherical surface of the target sphere model can be determined from the pixel values at the corresponding positions in the first and second panoramic sub-images. A target spherical image 102 with a perspective effect within the user's viewing angle range can then be obtained from the target sphere model and superimposed on the surrounding environment image 103 for display, yielding an augmented reality image. By viewing the augmented reality image, the user obtains seamlessly combined virtual content and real environment information at the same time, which increases the amount of information available when viewing the surroundings and improves the user's perception experience of the environment.
Next, the display method for an augmented reality image provided by the embodiments of this specification is described in detail with reference to the accompanying drawings:
fig. 2 is a flowchart of a display method of an augmented reality image according to an embodiment of the present disclosure. From a program perspective, the execution subject of the flow may be the terminal device of the user or an application program running on the terminal device.
As shown in fig. 2, the process may include the steps of:
step 202: and acquiring a virtual panoramic image and a surrounding environment image acquired by the terminal equipment.
In the embodiments of the present disclosure, when a user holds the terminal device to view the surrounding environment, an image capturing device (for example, a camera) on the terminal device may be used to capture an image of the surrounding environment and display it on the screen of the terminal device. If the user also needs to see virtual content, the terminal device can be used to acquire a virtual panoramic image prepared in advance by the service provider, so that at least part of the virtual panoramic image can be fused with the captured surrounding environment image for display.
In practical applications, the types of the terminal devices may be various, for example, smart phones, smart watches, tablet computers, etc. The virtual panoramic image may be stored locally in the terminal device in advance, or may be downloaded from the server in real time by the terminal device, which is not limited in any way.
Step 204: drawing the spherical surface of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain a target sphere model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; the color value at a target position in the sphere of the target sphere model is determined from the pixel value at a position in the first panoramic sub-image corresponding to the target position, and the transparency value at the target position is determined from the pixel value at a position in the second panoramic sub-image corresponding to the target position.
In the embodiment of the present disclosure, in order to enhance the immersion of the picture, the virtual panoramic image may be laid on the spherical surface of the virtual three-dimensional spherical model, and it is assumed that the user is located inside the virtual three-dimensional spherical model, so that after determining the corresponding user viewing angle range, the spherical image in the user viewing angle range is used as the target spherical image to be displayed to the user.
In the embodiment of the present disclosure, in order to ensure the fusion effect of the target spherical image and the surrounding environment image, the target spherical image needs to have a perspective effect. Specifically, the virtual panoramic image that is pre-fabricated may include a first panoramic sub-image and a second panoramic sub-image, where the pixel value at the corresponding position in the first panoramic sub-image may be used to determine the color value at the target position in the sphere of the target sphere model, and the pixel value at the corresponding position in the second panoramic sub-image may be used to determine the transparency value at the target position in the sphere of the target sphere model, and then, after the sphere of the target sphere model is rendered by combining the color value and the transparency value, the target sphere image extracted from the target sphere model may have a perspective effect.
Wherein the color values may be used to reflect the colors that the target location has in different color modes. For example, in the RGB color mode, the color value corresponding to red may be (255, 0, 0), the color value corresponding to green may be (0, 255, 0), and the color value corresponding to blue may be (0, 0, 255). Of course, the color mode to which the color value belongs may also be the HSV (Hue, Saturation, Value) color model, a gray-scale color model, or another type of color model, which is not specifically limited.
The transparency value may be used to reflect the degree of transparency at the target location. In particular, the transparency value may also be referred to as Alpha value, and in general, the greater the value of the transparency value, the lower the transparency level. When the transparency value reaches a maximum value, a completely opaque effect may be exhibited, and when the transparency value reaches a minimum value, a completely transparent effect may be exhibited.
For ease of understanding, fig. 3 is a schematic diagram of a virtual panoramic image provided in an embodiment of the present disclosure. As shown in fig. 3, the virtual panoramic image may include a first panoramic sub-image 301 and a second panoramic sub-image 302.
When virtual content such as a virtual starry-sky environment and a virtual lantern object needs to be presented to the user through the virtual panoramic image, with the real surrounding environment visible below them, the first panoramic sub-image 301 can be made to include the drawn, colorful virtual starry-sky image and virtual lantern image, so as to improve the visual effect of the virtual content. The second panoramic sub-image may be a gray-scale image or a color image with a gradual color change, so as to control the transparency of each region of the virtual content to be displayed later. For example, since the pixel values in the upper region of the second panoramic sub-image 302 are greater than those in the lower region, the transparency of the upper region of the virtual content can be made low, so that the virtual starry-sky environment and the virtual lantern object are displayed clearly, while the transparency of the lower region of the virtual content is made high, so that the real surrounding environment shows through. This is convenient and efficient.
Step 206: and acquiring a target spherical image in the view angle range of the user in the target spherical model.
In the embodiment of the present disclosure, in order to make the visual effect of the augmented reality image more consistent with the actual viewing angle range of the user of the terminal device, the user viewing angle range that is relatively consistent with the actual viewing angle range of the user of the terminal device may be determined first, and the spherical image of the target sphere model in the user viewing angle range may be extracted as the target spherical image that is currently required to be displayed to the user.
It can be appreciated that, since the target sphere model is used to simulate the virtual environment where the user is located, it will be generally assumed that the user is currently located inside the target sphere model, so that the spherical image in the view angle range of the user is an image in a partial area inside the target sphere model, which will not be described in detail.
Step 208: and superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image.
In the embodiment of the present disclosure, since the target spherical image has a perspective effect, the target spherical image may be displayed superimposed on the surrounding environment image, and at this time, the fused target spherical image and at least a part of the surrounding environment image may be displayed in the screen of the terminal device at the same time, so as to realize a function of displaying an augmented reality image to a user.
In practical application, a plurality of layers may be set at the terminal device, and by displaying the target spherical image on an upper layer and displaying the surrounding environment image on a lower layer, the overlapping display of the target spherical image and the surrounding environment image is conveniently and reliably achieved, which is not limited in detail.
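As an illustration of this layering, the following is a minimal sketch assuming a web-based terminal device and a Three.js-style rendering stack (neither of which is prescribed by this specification): the surrounding environment image is shown by a camera video element on the lower layer, and a transparent WebGL canvas on the upper layer carries the target spherical image.

```typescript
import * as THREE from 'three';

// Sketch only: element styling and the choice of Three.js are illustrative assumptions.
async function setupLayers(): Promise<THREE.WebGLRenderer> {
  // Lower layer: live surrounding environment image from the device camera.
  const video = document.createElement('video');
  video.autoplay = true;
  video.playsInline = true;
  video.srcObject = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: 'environment' },
  });
  Object.assign(video.style, { position: 'absolute', inset: '0', zIndex: '0' });
  document.body.appendChild(video);

  // Upper layer: transparent WebGL canvas that will show the target spherical image.
  const renderer = new THREE.WebGLRenderer({ alpha: true }); // keep transparent fragments see-through
  renderer.setClearColor(0x000000, 0);                       // fully transparent clear color
  renderer.setSize(window.innerWidth, window.innerHeight);
  Object.assign(renderer.domElement.style, { position: 'absolute', inset: '0', zIndex: '1' });
  document.body.appendChild(renderer.domElement);
  return renderer;
}
```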
In the method in fig. 2, the virtual panoramic image includes a first panoramic sub-image for extracting color values and a second panoramic sub-image for extracting transparency values, so that a target sphere model carrying a color value and a transparency value at each position can be rendered from the virtual panoramic image. A target spherical image with a perspective effect within the user's viewing angle range can then be obtained from the target sphere model and superimposed on the surrounding environment image collected by the terminal device for display, yielding an augmented reality image. When the user perceives the surrounding environment by viewing the augmented reality image, the user can both learn about the real surroundings and view the virtual content, which increases the amount of information available when viewing the surroundings and improves the user's perception experience of the environment.
Based on the method in fig. 2, the examples of the present specification also provide some specific embodiments of the method, as described below.
For ease of understanding, a specific rendering process of the target sphere model is presented herein.
In the embodiment of the present disclosure, step 204: drawing the spherical surface of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain a target sphere model, which specifically comprises the following steps:
And creating the virtual three-dimensional sphere model in a virtual three-dimensional space according to preset sphere parameters.
And setting the virtual panoramic image as a material object of a sphere grid at the virtual three-dimensional sphere model.
And rendering the sphere grid at the virtual three-dimensional sphere model by using a vertex shader and a fragment shader based on the virtual panoramic image corresponding to the material object to obtain the target sphere model.
In the embodiment of the present disclosure, preset sphere parameters for reflecting information such as a size and a grid density of the virtual three-dimensional sphere model may be generally preconfigured, so that the virtual three-dimensional sphere model meeting the actual needs can be created in the virtual three-dimensional space using the preset sphere parameters.
In the present embodiment, a material (Material) may refer to information describing the surface appearance of an object. While the mesh describes the shape of the virtual three-dimensional sphere model, the material may describe its surface, for example the appearance, color, texture, smoothness, and transparency of the sphere of the virtual three-dimensional sphere model. Based on this, the virtual panoramic image may be set as the material object of the sphere grid at the virtual three-dimensional sphere model. In practical applications, a mesh renderer (MeshRenderer) may be used to set the material object of the sphere grid at the virtual three-dimensional sphere model.
In the present embodiment, a shader (Shader) is an editable program for implementing image rendering. The vertex shader (Vertex Shader) is mainly responsible for operations such as the geometric relationships of vertices, and the fragment shader (Fragment Shader) is mainly responsible for computations such as fragment colors. Because a material is actually an instance of a shader, the vertex shader and the fragment shader can be used to perform image rendering processing on the sphere grid at the virtual three-dimensional sphere model based on the virtual panoramic image corresponding to the material object, so as to obtain a target sphere model whose spherical surface carries a virtual panoramic image with a perspective effect.
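As a concrete illustration of these three steps, the following is a minimal sketch under the assumption of a Three.js-style pipeline; the sphere parameters, the uniform name, and the left-to-right stitching used in the fragment shader are illustrative choices rather than values fixed by this specification (the stitching variants are discussed further below).

```typescript
import * as THREE from 'three';

// Preset sphere parameters (illustrative): radius and grid density of the virtual three-dimensional sphere model.
const SPHERE_RADIUS = 50;
const WIDTH_SEGMENTS = 64;
const HEIGHT_SEGMENTS = 48;

// Vertex shader: outputs per-vertex texture coordinates, which are interpolated for each fragment.
const vertexShader = /* glsl */ `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

// Fragment shader: color from the first panoramic sub-image (left half of the stitched
// virtual panoramic image), transparency from the second panoramic sub-image (right half).
const fragmentShader = /* glsl */ `
  uniform sampler2D panorama;
  varying vec2 vUv;
  void main() {
    vec3 rgb    = texture2D(panorama, vec2(vUv.x * 0.5,       vUv.y)).rgb;
    float alpha = texture2D(panorama, vec2(vUv.x * 0.5 + 0.5, vUv.y)).r;
    gl_FragColor = vec4(rgb, alpha);
  }
`;

export function buildTargetSphereModel(panoramaTexture: THREE.Texture): THREE.Mesh {
  // Create the virtual three-dimensional sphere model according to the preset sphere parameters.
  const geometry = new THREE.SphereGeometry(SPHERE_RADIUS, WIDTH_SEGMENTS, HEIGHT_SEGMENTS);
  // Set the virtual panoramic image as the material object of the sphere grid.
  const material = new THREE.ShaderMaterial({
    uniforms: { panorama: { value: panoramaTexture } },
    vertexShader,
    fragmentShader,
    transparent: true,      // honor the per-fragment transparency value
    side: THREE.BackSide,   // the sphere is viewed from the inside
  });
  // Render the sphere grid with the vertex and fragment shaders to obtain the target sphere model.
  return new THREE.Mesh(geometry, material);
}
```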
For ease of understanding, a specific implementation of rendering a virtual panoramic image with perspective effect at a sphere is presented herein.
Specifically, the rendering the sphere grid at the virtual three-dimensional sphere model by using a vertex shader and a fragment shader based on the virtual panoramic image corresponding to the material object to obtain the target sphere model may include:
for any sphere grid, inputting vertex coordinate data of the sphere grid in the virtual three-dimensional space into the vertex shader to obtain vertex texture coordinate data output by the vertex shader.
And determining color values at each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the first panoramic sub-image by using the fragment shader and a preset strategy.
And determining transparency values at each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the second panoramic sub-image by using the fragment shader and the preset strategy.
And performing image rendering processing on the sphere grid according to the color value and the transparency value at each target position to obtain the target sphere model.
In the embodiment of the present disclosure, the sphere grid at the virtual three-dimensional sphere model is usually in the coordinate system where the virtual three-dimensional space is located, and the virtual panoramic image is usually in the texture coordinate system, so when the virtual panoramic image needs to be rendered in the sphere grid in the virtual three-dimensional space, the correspondence between the coordinate system where the virtual three-dimensional space is located and the texture coordinate system where the virtual panoramic image is located needs to be determined, so that when any position in the sphere grid is rendered, the pixel value at the corresponding position in the virtual panoramic image that needs to be used can be determined.
In practical applications, usually only the vertex coordinate data of the sphere grid in the virtual three-dimensional space needs to be input into the vertex shader, and the vertex texture coordinate data in the texture coordinate system corresponding to each vertex, output by the vertex shader, can be obtained. After the vertex texture coordinate data is input into the fragment shader, the fragment shader can determine, through interpolation, the texture coordinates in the texture coordinate system corresponding to each position in the sphere grid.
However, since when any position in the sphere grid is rendered, a corresponding position needs to be found from the first panoramic sub-image within the virtual panoramic image, to extract the pixel value at the corresponding position as the color value at this position in the sphere grid; and a corresponding position needs to be found out from the second panoramic sub-image in the virtual panoramic image, so that a pixel value at the corresponding position is extracted as a transparency value at the position in the sphere grid, and therefore, two positions in the virtual panoramic image actually exist and have a corresponding relation with the same position in the sphere grid.
Based on this, a preset policy may be set to reflect the principle of calculating texture coordinates of two positions (respectively located in the first panoramic sub-image and the second panoramic sub-image) in the virtual panoramic image, corresponding to any position in the sphere grid, according to texture coordinates in the texture coordinate system corresponding to any position in the sphere grid determined by the fragment shader.
Subsequently, for any position in the sphere grid, after determining the texture coordinates in the texture coordinate system corresponding to the position in the sphere grid, the fragment shader can extract the color value and the transparency value from the corresponding positions in the first panoramic sub-image and the second panoramic sub-image in the virtual panoramic image according to the preset strategy, and render the position in the sphere grid according to the extracted color value and transparency value.
For ease of understanding, specific implementations of the preset strategy are presented herein.
In the embodiment of the present disclosure, in order to simplify the calculation process, the first panoramic sub-image and the second panoramic sub-image may be made to have the same size, and a virtual panoramic image located in the UV coordinate system may be obtained by stitching. At this time, the preset policy may be determined according to a stitching manner of the first panoramic sub-image and the second panoramic sub-image with the same size. Wherein the UV coordinate system typically uses values between 0 and 1 to represent pixel locations on a texture, and typically with the lower left corner as the origin of coordinates (0, 0) and the upper right corner as the point (1, 1).
Specifically, if the stitching manner is to stitch the first panoramic sub-image and the second panoramic sub-image transversely from left to right to obtain the virtual panoramic image, the preset policy may be used to indicate: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining the pixel value at the position in the virtual panoramic image whose abscissa is 0.5 times the target texture abscissa and whose ordinate is the target texture ordinate as the color value at the target position, and determining the pixel value at the position whose abscissa is the sum of 0.5 times the target texture abscissa and 0.5 and whose ordinate is the target texture ordinate as the transparency value at the target position.
If the stitching manner is to stitch the first panoramic sub-image and the second panoramic sub-image longitudinally from top to bottom to obtain the virtual panoramic image, the preset policy may be used to indicate: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining the pixel value at the position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is the sum of 0.5 times the target texture ordinate and 0.5 as the color value at the target position, and determining the pixel value at the position whose abscissa is the target texture abscissa and whose ordinate is 0.5 times the target texture ordinate as the transparency value at the target position.
This is illustrated for ease of understanding. Fig. 4 is a schematic view of another virtual panoramic image provided in an embodiment of the present disclosure. As shown in fig. 4, the first panoramic sub-image 401 and the second panoramic sub-image 402 with the same size are transversely spliced from left to right to obtain a virtual panoramic image. Assuming that the range of the texture abscissa of the virtual panoramic image is [0,1], and the range of the texture ordinate is also [0,1], when the target texture abscissa and the target texture ordinate of the target position in the spherical mesh are 0.4 and 0.8, respectively, the pixel value at the position of the virtual panoramic image where the abscissa is 0.2 and the ordinate is 0.8 can be taken as the color value at the target position, and the pixel value at the position of the virtual panoramic image where the abscissa is 0.7 and the ordinate is 0.8 can be taken as the transparency value at the target position.
As shown in fig. 3, when the first panoramic sub-image 301 and the second panoramic sub-image 302 with the same size are vertically spliced from top to bottom to obtain a virtual panoramic image, it is also assumed that the range of the texture abscissa of the virtual panoramic image is [0,1], the range of the texture ordinate is also [0,1], and the target texture abscissa and the target texture ordinate of the target position in the spherical grid are respectively 0.4 and 0.8, in this case, the pixel value at the position where the abscissa is 0.4 and the ordinate is 0.9 in the virtual panoramic image may be used as the color value at the target position, and the pixel value at the position where the abscissa is 0.4 and the ordinate is 0.4 in the virtual panoramic image may be used as the transparency value at the target position.
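Written out as plain coordinate arithmetic (independent of any particular engine), the two preset strategies above can be sketched as follows; the function and type names are illustrative, and the printed results reproduce the two worked examples.

```typescript
// Texture coordinates in the UV coordinate system: values in [0, 1], origin at the lower left corner.
type UV = { u: number; v: number };

// Left-to-right horizontal stitching: first sub-image on the left half, second on the right half.
function presetStrategyLeftRight(target: UV): { color: UV; alpha: UV } {
  return {
    color: { u: 0.5 * target.u,       v: target.v },  // sampled from the first panoramic sub-image
    alpha: { u: 0.5 * target.u + 0.5, v: target.v },  // sampled from the second panoramic sub-image
  };
}

// Top-to-bottom vertical stitching: first sub-image on the upper half, second on the lower half.
function presetStrategyTopBottom(target: UV): { color: UV; alpha: UV } {
  return {
    color: { u: target.u, v: 0.5 * target.v + 0.5 },
    alpha: { u: target.u, v: 0.5 * target.v },
  };
}

// Worked examples from the text, for target texture coordinates (0.4, 0.8):
console.log(presetStrategyLeftRight({ u: 0.4, v: 0.8 })); // color (0.2, 0.8), alpha (0.7, 0.8)
console.log(presetStrategyTopBottom({ u: 0.4, v: 0.8 })); // color (0.4, 0.9), alpha (0.4, 0.4)
```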
It may be appreciated that the first panoramic sub-image and the second panoramic sub-image with the same size may also be transversely stitched from right to left to obtain the virtual panoramic image. In this case the preset policy may be used to indicate: determining the pixel value at the position in the virtual panoramic image whose abscissa is 0.5 times the target texture abscissa and whose ordinate is the target texture ordinate as the transparency value at the target position, and determining the pixel value at the position whose abscissa is the sum of 0.5 times the target texture abscissa and 0.5 and whose ordinate is the target texture ordinate as the color value at the target position.
Alternatively, the first panoramic sub-image and the second panoramic sub-image with the same size may be longitudinally stitched from bottom to top to obtain the virtual panoramic image, and the preset policy may be used to indicate: determining the pixel value at the position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is the sum of 0.5 times the target texture ordinate and 0.5 as the transparency value at the target position, and determining the pixel value at the position whose abscissa is the target texture abscissa and whose ordinate is 0.5 times the target texture ordinate as the color value at the target position.
In practical applications, the sizes of the first panoramic sub-image and the second panoramic sub-image may also differ. For example, assume that the ratio of the lengths of the first panoramic sub-image and the second panoramic sub-image is N:M and their widths are the same. When the first panoramic sub-image and the second panoramic sub-image are transversely stitched from left to right to obtain the virtual panoramic image, the preset policy may be used to indicate: determining the pixel value at the position in the virtual panoramic image whose abscissa is N/(N+M) times the target texture abscissa and whose ordinate is the target texture ordinate as the color value at the target position, and determining the pixel value at the position whose abscissa is the sum of M/(N+M) times the target texture abscissa and N/(N+M) and whose ordinate is the target texture ordinate as the transparency value at the target position.
Alternatively, assume that the ratio of the widths of the first panoramic sub-image and the second panoramic sub-image is N:M and their lengths are the same. When the first panoramic sub-image and the second panoramic sub-image are longitudinally stitched from top to bottom to obtain the virtual panoramic image, the preset policy may be used to indicate: determining the pixel value at the position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is the sum of N/(N+M) times the target texture ordinate and M/(N+M) as the color value at the target position, and determining the pixel value at the position whose abscissa is the target texture abscissa and whose ordinate is M/(N+M) times the target texture ordinate as the transparency value at the target position, which will not be described in detail.
In general, the manner of stitching the first panoramic sub-image and the second panoramic sub-image needs to make the shape of the stitched virtual panoramic image approximate to a square, which is beneficial to improving the calculation efficiency.
For ease of understanding, the determination process and principles of the user's range of viewing angles are presented herein.
Specifically, step 206: the obtaining a target spherical image in a user view angle range in the target sphere model may include:
And determining the view angle range of the user according to camera parameters of a virtual camera positioned in the target sphere model, the sphere center position information of the target sphere model and the equipment posture data of the terminal equipment.
And acquiring a spherical image in the target sphere model in the user visual angle range to obtain the target spherical image.
In the embodiment of the present specification, to display a three-dimensional object on a two-dimensional screen, it is necessary to perform a conversion process from three-dimensional to two-dimensional, which is a projection process. Mathematically speaking, projection is a geometric transformation that converts points in a three-dimensional coordinate system into a two-dimensional coordinate system. Based on the method, a virtual camera can be arranged in the target sphere model, and the image acquisition range of the virtual camera is used as a user visual angle range, so that a target spherical image required to be displayed to a user is obtained by performing projection transformation on a spherical surface in the image acquisition range of the virtual camera.
In practical applications, it is generally necessary to determine information such as the placement position, shooting direction, and field of view of the virtual camera according to the camera parameters of the virtual camera. Then, combining the sphere center position information of the target sphere model, the spherical area within the initial image acquisition range of the virtual camera is determined as the initial user viewing angle range.
In practical applications, the user usually adjusts the pitch angle, the roll angle, and so on of the terminal device to adjust its image acquisition area. Based on this, the spherical area currently contained in the image acquisition range of the virtual camera can be adjusted according to the device posture data of the terminal device and used as the current user viewing angle range. By acquiring the spherical image within the current user viewing angle range as the target spherical image, the user can change the specific content of the target spherical image by adjusting the posture of the terminal device, which helps improve the user experience.
Specifically, the determining the user view angle range according to the camera parameters of the virtual camera located in the target sphere model, the spherical center position information of the target sphere model, and the device posture data of the terminal device may include:
and determining camera position information, camera orientation information and camera visual field range information of the virtual camera according to the camera parameters of the virtual camera.
And determining a first rotation direction and a first rotation angle for the target sphere model according to pitch angle data, the camera position information and the sphere center position information in the equipment posture data of the terminal equipment.
And determining the view angle range of the user according to the camera position information, the camera orientation information, the camera view field range information, the sphere center position information, the first rotation direction and the first rotation angle.
In the embodiments of the present disclosure, the camera parameters of the virtual camera may include a position parameter to reflect the position of the virtual camera. In practical applications, the position parameter may be the coordinate data of a certain position in the virtual three-dimensional space; for ease of understanding and calculation, the position parameter may be set to the origin (0, 0, 0) of the virtual three-dimensional space, which is not specifically limited.
The camera parameters of the virtual camera may include a target parameter to reflect the orientation of the virtual camera. Specifically, the target parameter may be a target point position toward which the virtual camera is oriented, and a vector direction from the position point to the target point may be taken as an orientation of the camera.
The camera parameters of the virtual camera may also include a fovy parameter to determine the size of the virtual camera's field of view (acting like the focal length of a camera). In practical applications, the larger the value of the fovy parameter, the wider the field of view captured by the virtual camera (i.e., a large fovy value corresponds to shooting with a wide-angle lens), while the smaller the value of the fovy parameter, the narrower the field of view captured by the virtual camera (i.e., a small fovy value corresponds to shooting with a telephoto lens).
In practical applications, when the terminal device is held at an elevation angle the user expects to capture images of the space above, and when it is held at a depression angle the user expects to capture images of the space below. Therefore, after the relative positional relationship between the virtual camera and the sphere center is determined according to the camera position information of the virtual camera and the sphere center position information of the target sphere model, the first rotation direction and the first rotation angle for the target sphere model are determined according to the pitch angle data in the device posture data of the terminal device. After the target sphere model is rotated according to the first rotation direction and the first rotation angle, the spherical area within the current user viewing angle range is determined according to the camera position information, the camera orientation information, and the camera field-of-view information of the virtual camera, and the image of that spherical area is extracted as the target spherical image.
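Continuing the same illustrative Three.js-style sketch, the virtual camera can be set up from its camera parameters as follows; the fovy value, the camera position at the origin, and the sphere center on the Z axis are assumed example values.

```typescript
import * as THREE from 'three';

// Illustrative camera parameters of the virtual camera.
const FOVY_DEGREES = 60;                              // fovy parameter: field-of-view size
const CAMERA_POSITION = new THREE.Vector3(0, 0, 0);   // position parameter: placed at the origin
const SPHERE_CENTER = new THREE.Vector3(0, 0, -30);   // sphere center of the target sphere model on the Z axis

export function buildVirtualCamera(aspect: number): THREE.PerspectiveCamera {
  const camera = new THREE.PerspectiveCamera(FOVY_DEGREES, aspect, 0.1, 1000);
  camera.position.copy(CAMERA_POSITION);  // camera position information
  camera.lookAt(SPHERE_CENTER);           // camera orientation: from the camera position toward the target point
  return camera;
}
```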
For ease of understanding, implementations and principles are provided herein for determining a first direction of rotation and a first angle of rotation for a target sphere model based on device pose data of a terminal device.
In this embodiment of the present disclosure, the virtual camera and the sphere center of the target sphere model may both be located on the Z axis of a right-handed coordinate system.
Correspondingly, the determining the first rotation direction and the first rotation angle for the target sphere model according to the pitch angle data, the camera position information and the sphere center position information in the equipment posture data of the terminal equipment may include:
determining a first rotation direction of the target sphere model, which is required to rotate around the X-axis direction, according to a first angle type corresponding to the pitch angle data; wherein the first angle type includes: depression or elevation; the first rotational direction includes: clockwise or counter-clockwise.
And determining a first rotation angle required by the target sphere model to rotate around the X-axis direction according to the angle value corresponding to the pitch angle data.
In the embodiments of the present specification, a right-handed coordinate system (right-hand system) is one way of defining a rectangular coordinate system in space. The positive directions of the x-axis, y-axis, and z-axis in this coordinate system are defined as follows: place the right hand at the origin so that the thumb, index finger, and middle finger are mutually perpendicular; when the thumb points in the positive direction of the x-axis and the index finger points in the positive direction of the y-axis, the direction pointed to by the middle finger is the positive direction of the z-axis.
Because application programs for constructing virtual three-dimensional models currently usually adopt a right-handed coordinate system, the virtual camera and the sphere center of the target sphere model can be located on the Z axis of the right-handed coordinate system. In practical applications, the position of the virtual camera can be set as the origin (0, 0, 0) of the right-handed coordinate system, and then the relative position between the virtual camera and the target sphere model can be determined according to the preset distance between the virtual camera and the sphere center of the target sphere model and the sphere radius. For ease of understanding, fig. 5 is a schematic diagram of a target sphere model provided in an embodiment of the present disclosure. As shown in fig. 5, the virtual camera 501 may be located at the origin (0, 0, 0) of the right-handed coordinate system, the sphere center 502 of the target sphere model may be located on the Z axis of the right-handed coordinate system, and the orientation of the virtual camera may be the direction from the origin to the sphere center.
In practical applications, when a user holds the terminal device to capture an image of a front area, if the orientation of the image capturing device at the terminal device is parallel to the ground, the pitch angle of the terminal device may be considered to be 0 degrees. When the user tilts the terminal device to shoot the image of the area above the front side, the terminal device can be considered to present an elevation angle, and at this time, the target sphere model needs to be rotated clockwise around the X-axis direction according to the elevation angle of the terminal device, so that the area, which is closer to the top of the sphere, of the target sphere model falls within the current view angle range of the user of the virtual camera, and therefore the spherical image of the upper area of the target sphere model is displayed to the user. Similarly, when the user tilts the terminal device to capture an image of the area below the front side, the terminal device may be considered to be at a depression angle, and at this time, the target sphere model needs to be rotated counterclockwise around the X-axis direction according to the depression angle of the terminal device, so as to make the area of the target sphere model closer to the bottom of the sphere fall within the current user viewing angle range of the virtual camera, thereby displaying the spherical image of the area below the target sphere model to the user.
It can be understood that the sphere center or any other position of the target sphere model can be set at the origin of the right-hand coordinate system, which is not particularly limited, and only the direction from the virtual camera to the sphere center needs to be used as the direction of the virtual camera, so that more sphere areas can be included in the view angle range of the user, and the user experience can be improved.
In addition, the target sphere model may be located in other types of coordinate systems, such as a left-hand coordinate system, etc.; alternatively, the centers of the virtual camera and the target sphere model may be located at other positions instead of the Z axis; at this time, the rotation direction and rotation angle of the target sphere model need to be determined adaptively according to the actual situation based on the pitch angle data of the terminal device, and the calculation process is only complex, which is not limited in any way.
In practical application, the user can adjust the image acquisition area of the terminal equipment in the left-right direction by adjusting the roll angle of the terminal equipment in addition to adjusting the pitch angle of the terminal equipment in the up-down direction. Based on the method, the target sphere model can be rotated according to the rolling angle of the terminal equipment, so that spherical images of more areas on the left side and the right side of the target sphere model can be conveniently displayed to a user, and the user experience can be improved.
Based on this, the determining the user view angle range according to the camera parameters of the virtual camera located in the target sphere model, the center of sphere position information of the target sphere model, and the device posture data of the terminal device may further include:
determining a second rotation direction of the target sphere model, which is required to rotate around the Y-axis direction, according to a second angle type corresponding to the rolling angle data in the equipment posture data; wherein the second angle type includes: an angle reflecting a left scroll or an angle reflecting a right scroll; the second rotational direction includes: clockwise or counter-clockwise.
And determining a second rotation angle required by the target sphere model to rotate around the Y-axis direction according to the angle value corresponding to the rolling angle data.
Correspondingly, the determining the user viewing angle range according to the camera position information, the camera orientation information, the camera view field range information, the center of sphere position information, the first rotation direction and the first rotation angle may specifically include:
and determining the view angle range of the user according to the camera position information, the camera orientation information, the camera view field range information, the sphere center position information, the first rotation direction, the first rotation angle, the second rotation direction and the second rotation angle.
For ease of understanding, the example in fig. 5 is used again. In practical applications, the roll angle of the terminal device at the moment the user triggers display of the augmented reality image can be taken as 0 degrees. When the user then turns the terminal device horizontally to capture an image of the area to the left, the roll angle of the terminal device can be considered an angle reflecting a left roll, and the target sphere model needs to be rotated clockwise around the Y-axis direction according to that roll angle. Similarly, when the user turns the terminal device horizontally to capture an image of the area to the right, the roll angle can be considered an angle reflecting a right roll, and the target sphere model needs to be rotated counterclockwise around the Y-axis direction according to that roll angle. In this way, different areas on the left and right sides of the target sphere model can fall within the current user viewing angle range of the virtual camera, so that spherical images of more areas of the target sphere model are displayed to the user, which will not be described in detail.
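In the same illustrative sketch, the pitch- and roll-driven rotations of the target sphere model can be expressed as follows; the rotation signs (clockwise versus counterclockwise) depend on the coordinate-system conventions of the concrete engine and on how pitch and roll are reported, so the signs below are assumptions that would need to be calibrated against the actual setup.

```typescript
import * as THREE from 'three';

// Device posture data of the terminal device, in degrees.
// Assumption: pitch > 0 means an elevation angle, pitch < 0 a depression angle;
// roll > 0 means a right roll, roll < 0 a left roll.
interface DevicePose {
  pitchDeg: number;
  rollDeg: number;
}

// Rotate the target sphere model so that the spherical area matching the device pose
// falls within the virtual camera's current user viewing angle range.
export function applyDevicePose(targetSphere: THREE.Mesh, pose: DevicePose): void {
  // First rotation: around the X-axis direction, driven by the pitch angle data.
  targetSphere.rotation.x = THREE.MathUtils.degToRad(pose.pitchDeg);
  // Second rotation: around the Y-axis direction, driven by the roll angle data.
  targetSphere.rotation.y = -THREE.MathUtils.degToRad(pose.rollDeg);
}
```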
In practical applications, the user may also rotate the terminal device from portrait to landscape orientation. In this case the pitch angle and roll angle of the terminal device generally do not change, so the target sphere model is not rotated. However, in order to adaptively adjust the image displayed to the user in this scene, the up parameter reflecting the upward direction of the virtual camera may be adjusted according to the rotation angle of the terminal device, and the adjusted up parameter is used when determining the spherical image within the user viewing angle range, so that the spherical image within the current user viewing angle range is extracted as the target spherical image, which will not be described in detail.
In the embodiments of the present disclosure, in order to improve the richness and sense of layering of the virtual content displayed to the user, a new layer may be added to additionally display an image prepared in advance for a virtual object, on top of the fused display of the spherical virtual panoramic image and the surrounding environment image acquired in real time.
Based on this, the method in fig. 2 may further include:
a virtual object image is acquired.
Rendering processing is carried out in a preset plane area of a virtual three-dimensional space by utilizing the virtual object image, so as to obtain a target object image; the virtual object image comprises a first object sub-image and a second object sub-image; the color value at a specified position in the target object image is determined from the pixel value at a position in the first object sub-image corresponding to the specified position, and the transparency value at the specified position is determined from the pixel value at a position in the second object sub-image corresponding to the specified position.
Correspondingly, step 208: the target spherical image is overlapped on the surrounding environment image for display, and the augmented reality image is obtained, which specifically comprises:
And superposing and displaying the target object image, the target spherical image and the surrounding environment image in sequence from top to bottom to obtain the augmented reality image.
In the embodiments of the present disclosure, the virtual object image is consistent with the virtual panoramic image in structure and usage principle. The virtual object image may also be obtained by stitching a first object sub-image and a second object sub-image, and the color value and the transparency value at the same position in the mesh to be rendered may then be determined from the pixel values at the corresponding positions in the first object sub-image and the second object sub-image, which will not be described in detail.
For ease of understanding, the virtual object image is illustrated with reference to fig. 4. If fig. 4 is taken as a virtual object image, the first object sub-image and the second object sub-image in the virtual object image may be the image 401 and the image 402, respectively, and the virtual object reflected in the virtual object image may be a lantern.
In practical applications, in order to simplify the processing operation, the virtual three-dimensional space in which the preset plane area for displaying the virtual object image is located and the virtual three-dimensional space in which the target sphere model is set may be the same, and the preset plane area may be located within the initial user viewing angle range of the virtual camera. And the distance value between the region center of the preset plane region and the virtual camera can be a preset value calibrated in advance, so that a target object image with a proper size can be extracted from the preset plane region.
In addition, since it is generally necessary to present the visual effect that the virtual object is located inside the virtual environment, the preset planar region may be located inside the target sphere model. In order to simplify the calculation operation, the virtual camera, the area center of the preset plane area and the sphere center of the target sphere model may be all located on the Z axis. Of course, the area center of the preset plane area may be located at other positions, which is not limited in particular.
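For ease of understanding only, a non-limiting sketch of such a scene layout is given below (TypeScript with the Three.js library, which this specification does not prescribe); every numeric value is illustrative rather than a calibrated parameter.

    import * as THREE from 'three';

    // Virtual camera, preset plane area and sphere center all lie on the Z axis.
    const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 1000);
    camera.position.set(0, 0, 0);

    const SPHERE_RADIUS = 100;  // radius of the target sphere model (illustrative)
    const PLANE_DISTANCE = 30;  // pre-calibrated distance from the plane center to the camera (illustrative)

    // Preset plane area for the virtual object, facing the camera and inside the sphere.
    const planeMesh = new THREE.Mesh(
      new THREE.PlaneGeometry(10, 10),
      new THREE.MeshBasicMaterial({ transparent: true }) // placeholder; a split-texture shader is sketched later
    );
    planeMesh.position.set(0, 0, -PLANE_DISTANCE); // |z| = 30 < SPHERE_RADIUS, i.e. inside the target sphere model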
In the embodiment of the present disclosure, since the target object image generated by using the virtual object image needs to be highlighted, based on this, the layer where the target object image is located may be set above the layer where the target spherical image is located, and the layer where the target spherical image is located may be set above the layer where the surrounding environment image is located, so as to ensure the visual effect of the augmented reality image.
For ease of understanding, an implementation of image rendering of a preset planar region with a virtual object image is presented herein.
Specifically, the performing rendering processing on the virtual object image in a preset plane area of the virtual three-dimensional space to obtain a target object image may include:
And determining vertex coordinate data of the grid at the preset plane area in the virtual three-dimensional space according to the preset plane area parameters.
And setting the virtual object image as a material object of the grid at the preset plane area.
And performing image rendering processing on the grid by using a vertex shader and a fragment shader based on the virtual object image corresponding to the material object and vertex coordinate data of the grid to obtain a virtual object plane area.
The target object image is acquired from the virtual object plane area.
In the embodiment of the present disclosure, the principle of setting a material object for a grid at a preset plane area may be consistent with the principle of setting a material object for a virtual three-dimensional sphere model; the principle of performing image rendering processing on the mesh at the preset plane area by using the vertex shader and the fragment shader is also consistent with the principle of performing image rendering processing on the sphere mesh at the virtual three-dimensional sphere model by using the vertex shader and the fragment shader, and will not be described in detail.
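For ease of understanding only, a non-limiting sketch of such a material object for the grid at the preset plane area is given below (TypeScript with the Three.js library, which this specification does not prescribe). It assumes that the first object sub-image and the second object sub-image are stitched transversely from left to right; the asset name and uniform name are hypothetical.

    import * as THREE from 'three';

    // Stitched virtual object image: [ first object sub-image | second object sub-image ].
    const objectTexture = new THREE.TextureLoader().load('virtual_object_rgb_alpha.png'); // hypothetical asset

    const vertexShader = `
      varying vec2 vUv;
      void main() {
        vUv = uv;  // forward the grid's texture coordinates to the fragment shader
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }
    `;

    const fragmentShader = `
      varying vec2 vUv;
      uniform sampler2D uMap;
      void main() {
        // Color from the first (left) half, transparency from the second (right) half.
        vec3 rgb = texture2D(uMap, vec2(vUv.x * 0.5, vUv.y)).rgb;
        float a  = texture2D(uMap, vec2(vUv.x * 0.5 + 0.5, vUv.y)).r;
        gl_FragColor = vec4(rgb, a);
      }
    `;

    // Material object of the grid at the preset plane area, e.g. planeMesh.material = objectMaterial;
    const objectMaterial = new THREE.ShaderMaterial({
      uniforms: { uMap: { value: objectTexture } },
      vertexShader,
      fragmentShader,
      transparent: true,       // so the transparency values blend with the layers underneath
      side: THREE.DoubleSide,
    });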
In practical application, in order to ensure the visual effect of the displayed target object image under the condition that the terminal device generates the pitch angle, the acquiring the target object image from the virtual object plane area may specifically include:
And determining a third rotation direction and a third rotation angle aiming at the virtual object plane area according to pitch angle data in the equipment posture data of the terminal equipment.
And acquiring the target object image from the virtual object plane area subjected to rotation processing according to the third rotation direction and the third rotation angle.
In the embodiment of the present disclosure, the principle of determining the third rotation direction and the third rotation angle for the virtual object plane area according to the pitch angle data of the terminal device may be identical to the principle of determining the first rotation direction and the first rotation angle for the target sphere model according to the pitch angle data of the terminal device, which is not described herein.
After the virtual object plane area is subjected to rotation processing according to the third rotation direction and the third rotation angle, image acquisition is performed from the virtual object plane area to obtain a target object image to be displayed to a user, the visual effect of the target object image can be ensured to accord with the actual situation, and the user experience is promoted.
In the embodiment of the present disclosure, when dynamic virtual content needs to be displayed, multiple frames of virtual object images and virtual panoramic images need to be produced in advance, so that when the frames of virtual object images and virtual panoramic images are displayed one by one at a preset frequency, an animation effect can be presented, which is beneficial to improving the user experience.
Based on this, the acquiring the virtual object image may specifically include:
obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual object, to obtain the virtual object image; and/or,
step 202: the obtaining the virtual panoramic image may specifically include:
and obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual environment to obtain the virtual panoramic image.
Wherein the virtual environment may include: a virtual starry sky environment, and the virtual objects may include: cartoon objects; the surrounding environment image may be an image collected by the terminal device aiming at the internal environment of a target venue. Therefore, when a user views the internal environment of the venue by using the terminal device in the target venue, the user can view, through the augmented reality image displayed at the terminal device, a static or dynamic virtual starry sky environment above the venue and static or dynamic cartoon objects, while still partially seeing the real internal environment of the venue, so that the amount of information acquired when the user views the surrounding environment can be increased, and the user's perception experience of the surrounding environment can be improved.
In practical applications, the virtual environment and the cartoon objects may be set according to actual requirements. For example, the virtual environment may further include a forest environment, a grassland environment, a sunny environment, etc., and the cartoon objects may include a mascot of a sporting event, a lantern bearing the mascot, a Chinese knot, a postcard, etc., which is not limited in particular.
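For ease of understanding only, a non-limiting sketch of selecting the animation frame to be displayed at a preset frequency is given below (TypeScript; the frame rate and names are hypothetical).

    // Hypothetical frame selection: pick the frame to display from a pre-produced
    // animation frame sequence according to a preset display frequency.
    const FRAME_RATE = 12; // preset frequency in frames per second (illustrative)

    function pickFrame<T>(frames: T[], elapsedSeconds: number): T {
      const index = Math.floor(elapsedSeconds * FRAME_RATE) % frames.length;
      return frames[index];
    }

    // Usage sketch: every render tick, swap the textures of the sphere and plane materials.
    // const panoramaFrame = pickFrame(panoramaFrames, clock.getElapsedTime());
    // const objectFrame   = pickFrame(objectFrames, clock.getElapsedTime());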
Fig. 6 is a schematic swimlane flow chart corresponding to the display method of the augmented reality image in fig. 2 according to an embodiment of the present disclosure. As shown in fig. 6, the display flow of the augmented reality image may involve execution subjects such as the user and the terminal device.
In the user operation stage, the user can perform a display triggering operation for the augmented reality image at the terminal device.
In the stage of displaying the augmented reality image, the terminal device responds to the display triggering operation and may acquire the virtual panoramic image, the virtual object image and the surrounding environment image collected by the terminal device. After the virtual panoramic image is set as the material object of the sphere grid at the virtual three-dimensional sphere model, and the virtual object image is set as the material object of the grid at the preset plane area, for any one sphere grid, the vertex coordinate data of the sphere grid in the virtual three-dimensional space may be input to a vertex shader to obtain the vertex texture coordinate data output by the vertex shader; a fragment shader is then used to determine, according to a preset strategy, the color values and transparency values at the target positions in the sphere grid from the vertex texture coordinate data and the texture coordinate data of the first panoramic sub-image and the second panoramic sub-image in the virtual panoramic image, so as to obtain the target sphere model. Similarly, the vertex coordinate data of the grid at the preset plane area in the virtual three-dimensional space may be input to a vertex shader to obtain the vertex texture coordinate data output by the vertex shader, and a fragment shader is used to determine, according to a preset strategy, the color values and transparency values at the specified positions in the grid at the preset plane area from the vertex texture coordinate data and the texture coordinate data of the first object sub-image and the second object sub-image in the virtual object image, so as to obtain the virtual object plane area.
The terminal equipment can also acquire a spherical image in the target sphere model in the view angle range of the user according to the camera parameters of the virtual camera, the sphere center position information of the target sphere model and the equipment posture data of the terminal equipment, so as to acquire the target spherical image. And acquiring an image at the plane area of the virtual object according to the camera parameters of the virtual camera, the area center position information of the preset plane area and the equipment posture data of the terminal equipment, so as to obtain a target object image. Therefore, the target object image, the target spherical image and the surrounding environment image can be displayed in a superimposed manner according to the sequence from top to bottom, and the augmented reality image is obtained.
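For ease of understanding only, a non-limiting sketch of superimposing the three layers from top to bottom is given below (TypeScript with the Three.js library, which this specification does not prescribe). Here the surrounding environment image is assumed to be a live camera feed shown in an HTML video element placed behind a transparent WebGL canvas.

    import * as THREE from 'three';

    const renderer = new THREE.WebGLRenderer({ alpha: true }); // transparent canvas over the video feed
    renderer.setSize(window.innerWidth, window.innerHeight);
    renderer.autoClear = false;                                 // the two scenes are composited manually

    const sphereScene = new THREE.Scene(); // contains the target sphere model
    const objectScene = new THREE.Scene(); // contains the preset plane area with the virtual object

    function renderFrame(camera: THREE.PerspectiveCamera): void {
      renderer.clear();
      renderer.render(sphereScene, camera); // target spherical image (middle layer)
      renderer.clearDepth();                // ensure the object layer is drawn on top
      renderer.render(objectScene, camera); // target object image (top layer)
    }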
Based on the same thought, the embodiment of the specification also provides a device corresponding to the method. Fig. 7 is a schematic structural diagram of a display device corresponding to the augmented reality image of fig. 2 according to an embodiment of the present disclosure. As shown in fig. 7, the apparatus may be applied to a terminal device, and may include:
the first obtaining module 702 is configured to obtain the virtual panoramic image and the surrounding image collected by the terminal device.
A first drawing module 704, configured to draw a spherical surface of a virtual three-dimensional spherical model by using the virtual panoramic image, so as to obtain a target spherical model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; the color value at a target position in the sphere of the target sphere model is determined from the pixel value at a position in the first panoramic sub-image corresponding to the target position, and the transparency value at the target position is determined from the pixel value at a position in the second panoramic sub-image corresponding to the target position.
A second obtaining module 706, configured to obtain a target spherical image in a user perspective range within the target spherical model.
And the display module 708 is configured to superimpose the target spherical image on the surrounding environment image for display, so as to obtain an augmented reality image.
The present description example also provides some specific embodiments of the device based on the device of fig. 7, which is described below.
Optionally, the first drawing module 704 may specifically include:
and the model creation unit is used for creating the virtual three-dimensional sphere model in a virtual three-dimensional space according to preset sphere parameters.
And the first setting unit is used for setting the virtual panoramic image as a material object of the sphere grid at the virtual three-dimensional sphere model.
The first rendering unit is used for rendering the sphere grid at the virtual three-dimensional sphere model by using a vertex shader and a fragment shader based on the virtual panoramic image corresponding to the material object to obtain the target sphere model;
optionally, the first rendering unit may specifically include:
and the input subunit is used for inputting vertex coordinate data of the sphere grid in the virtual three-dimensional space to the vertex shader aiming at any sphere grid to obtain vertex texture coordinate data output by the vertex shader.
And the first determining subunit is used for determining the color value of each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the first panoramic sub-image by using the fragment shader and according to a preset strategy.
And the second determining subunit is used for determining transparency values at each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the second panoramic sub-image by using the fragment shader and according to the preset strategy.
And the rendering subunit is used for performing image rendering processing on the sphere grid according to the color value and the transparency value at each target position to obtain the target sphere model.
The preset strategy is determined according to a splicing mode of the first panoramic sub-image and the second panoramic sub-image with the same size.
If the stitching manner is that the first panoramic sub-image and the second panoramic sub-image are stitched transversely from left to right to obtain the virtual panoramic image, the preset strategy is used for indicating: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining a pixel value at a position in the virtual panoramic image whose abscissa is 0.5 times the target texture abscissa and whose ordinate is the target texture ordinate as a color value at the target position, and determining a pixel value at a position in the virtual panoramic image whose abscissa is the sum of 0.5 times the target texture abscissa and 0.5 and whose ordinate is the target texture ordinate as a transparency value at the target position.

If the stitching manner is that the first panoramic sub-image and the second panoramic sub-image are stitched longitudinally from top to bottom to obtain the virtual panoramic image, the preset strategy is used for indicating: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining a pixel value at a position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is the sum of 0.5 times the target texture ordinate and 0.5 as a color value at the target position, and determining a pixel value at a position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is 0.5 times the target texture ordinate as a transparency value at the target position.
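For ease of understanding only, non-limiting fragment-shader sampling lines corresponding to the two preset strategies are given below (GLSL kept in TypeScript template strings, to be placed inside the fragment shader's main function). The bottom-left texture-coordinate origin of WebGL is an assumption; with a different convention the two halves would swap.

    // Transverse (left-to-right) stitching: color from the left half, transparency from the right half.
    const horizontalSplit = `
      vec3 rgb = texture2D(uMap, vec2(vUv.x * 0.5,       vUv.y)).rgb;
      float a  = texture2D(uMap, vec2(vUv.x * 0.5 + 0.5, vUv.y)).r;
      gl_FragColor = vec4(rgb, a);
    `;

    // Longitudinal (top-to-bottom) stitching: color from the top half, transparency from the bottom half.
    const verticalSplit = `
      vec3 rgb = texture2D(uMap, vec2(vUv.x, vUv.y * 0.5 + 0.5)).rgb;
      float a  = texture2D(uMap, vec2(vUv.x, vUv.y * 0.5)).r;
      gl_FragColor = vec4(rgb, a);
    `;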
Optionally, the second obtaining module 706 may specifically include:
and the first determining unit is used for determining the view angle range of the user according to camera parameters of the virtual camera positioned in the target sphere model, the sphere center position information of the target sphere model and the equipment posture data of the terminal equipment.
The first acquisition unit is used for acquiring a spherical image in the target sphere model in the user visual angle range to obtain the target spherical image.
Optionally, the first determining unit may specifically include:
and the third determination subunit is used for determining camera position information, camera orientation information and camera visual field range information of the virtual camera according to the camera parameters of the virtual camera.
And the fourth determination subunit is used for determining a first rotation direction and a first rotation angle aiming at the target sphere model according to pitch angle data, the camera position information and the sphere center position information in the equipment posture data of the terminal equipment.
And a fifth determining subunit, configured to determine the user viewing angle range according to the camera position information, the camera orientation information, the camera field of view range information, the center of sphere position information, the first rotation direction, and the first rotation angle.
Optionally, the virtual camera and the sphere center of the target sphere model may both be located on the Z axis of a right-hand coordinate system.
The fourth determining subunit may specifically be configured to:
determining a first rotation direction of the target sphere model, which is required to rotate around the X-axis direction, according to a first angle type corresponding to the pitch angle data; wherein the first angle type includes: depression or elevation; the first rotational direction includes: clockwise or counter-clockwise.
And determining a first rotation angle required by the target sphere model to rotate around the X-axis direction according to the angle value corresponding to the pitch angle data.
Optionally, the first determining unit may further include:
a sixth determining subunit, configured to determine a second rotation direction in which the target sphere model needs to rotate around the Y-axis direction according to a second angle type corresponding to the roll angle data in the equipment posture data; wherein the second angle type includes: an angle reflecting a left roll or an angle reflecting a right roll; the second rotational direction includes: clockwise or counter-clockwise.
And a seventh determining subunit, configured to determine a second rotation angle required by the target sphere model to rotate around the Y-axis direction according to the angle value corresponding to the roll angle data.
The fifth determining subunit may specifically be configured to: and determining the view angle range of the user according to the camera position information, the camera orientation information, the camera view field range information, the sphere center position information, the first rotation direction, the first rotation angle, the second rotation direction and the second rotation angle.
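For ease of understanding only, a non-limiting sketch of applying the first and second rotations to the target sphere model is given below (TypeScript with the Three.js library, which this specification does not prescribe). The sign conventions mapping pitch and roll directions to clockwise or counter-clockwise rotation are assumptions that would have to match the device's pose reporting.

    import * as THREE from 'three';

    // Hypothetical mapping from device pose data to sphere-model rotation:
    // first rotation about the X axis from the pitch angle,
    // second rotation about the Y axis from the roll angle.
    function rotateSphereForPose(sphere: THREE.Mesh, pitchDeg: number, rollDeg: number): void {
      sphere.rotation.x = THREE.MathUtils.degToRad(pitchDeg);
      sphere.rotation.y = THREE.MathUtils.degToRad(rollDeg);
    }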
Optionally, the apparatus shown in fig. 7 may further include:
And the third acquisition module is used for acquiring the virtual object image.
The second drawing module is used for performing rendering processing in a preset plane area of the virtual three-dimensional space by utilizing the virtual object image to obtain a target object image; the virtual object image comprises a first object sub-image and a second object sub-image; the color value at a specified position in the target object image is determined from the pixel value at a position in the first object sub-image corresponding to the specified position, and the transparency value at the specified position is determined from the pixel value at a position in the second object sub-image corresponding to the specified position.
The display module 708 may specifically be configured to: and superposing and displaying the target object image, the target spherical image and the surrounding environment image in sequence from top to bottom to obtain the augmented reality image.
Optionally, the second drawing module may specifically include:
and the second determining unit is used for determining vertex coordinate data of the grid at the preset plane area in the virtual three-dimensional space according to the preset plane area parameters.
And the second setting unit is used for setting the virtual object image as a material object of the grid at the preset plane area.
And the second rendering unit is used for performing image rendering processing on the grid by using a vertex shader and a fragment shader based on the virtual object image corresponding to the material object and vertex coordinate data of the grid to obtain a virtual object plane area.
And a second acquisition unit configured to acquire the target object image from the virtual object plane area.
Optionally, the second obtaining unit may specifically be configured to:
and determining a third rotation direction and a third rotation angle aiming at the virtual object plane area according to pitch angle data in the equipment posture data of the terminal equipment.
And acquiring the target object image from the virtual object plane area subjected to rotation processing according to the third rotation direction and the third rotation angle.
Optionally, the third obtaining module may specifically be configured to:
obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual object, to obtain the virtual object image; and/or,
the first obtaining module 702 may specifically be configured to: and obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual environment to obtain the virtual panoramic image.
Wherein the virtual environment comprises: a virtual starry sky environment; the virtual object comprises: a cartoon object; and the surrounding environment image is an image collected by the terminal equipment aiming at the internal environment of the target venue.
Based on the same thought, the embodiment of the specification also provides equipment corresponding to the method.
Fig. 8 is a schematic structural diagram of a display device corresponding to one of the augmented reality images of fig. 2 according to an embodiment of the present disclosure. As shown in fig. 8, the device 800 may include:
at least one processor 810; the method comprises the steps of,
a memory 830 communicatively coupled to the at least one processor; wherein,
the memory 830 stores instructions 820 executable by the at least one processor 810 to enable the at least one processor 810 to:
and acquiring a virtual panoramic image and a surrounding environment image acquired by the terminal equipment.
Drawing the spherical surface of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain a target sphere model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; the color value at a target position in the sphere of the target sphere model is determined from the pixel value at a position in the first panoramic sub-image corresponding to the target position, and the transparency value at the target position is determined from the pixel value at a position in the second panoramic sub-image corresponding to the target position.
And acquiring a target spherical image in the view angle range of the user in the target spherical model.
And superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image.
In this specification, each embodiment is described in a progressive manner; for identical and similar parts between the embodiments, reference may be made to one another, and each embodiment mainly describes its differences from the other embodiments. In particular, the description of the device shown in fig. 8 is relatively simple, as it is substantially similar to the method embodiment; for relevant parts, reference may be made to the partial description of the method embodiment.
In the 1990s, an improvement to a technology could clearly be distinguished as an improvement in hardware (e.g., an improvement to a circuit structure such as a diode, a transistor or a switch) or an improvement in software (an improvement to a method flow). However, with the development of technology, many of today's improvements to method flows can be regarded as direct improvements to hardware circuit structures. Designers almost always obtain a corresponding hardware circuit structure by programming an improved method flow into a hardware circuit. Therefore, it cannot be said that an improvement of a method flow cannot be realized by a hardware entity module. For example, a programmable logic device (Programmable Logic Device, PLD) (e.g., a field programmable gate array (Field Programmable Gate Array, FPGA)) is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a PLD by programming, without requiring the chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, nowadays, instead of manually manufacturing integrated circuit chips, such programming is mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development; the original code to be compiled also has to be written in a specific programming language, called a hardware description language (Hardware Description Language, HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM and RHDL (Ruby Hardware Description Language), among which VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are currently the most commonly used. It will also be apparent to those skilled in the art that a hardware circuit implementing a logic method flow can readily be obtained by merely programming the method flow slightly into an integrated circuit using one of the above hardware description languages.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor and a computer readable medium storing computer readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a programmable logic controller, or an embedded microcontroller; examples of the controller include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20 and Silicon Labs C8051F320. The memory controller may also be implemented as part of the control logic of the memory. Those skilled in the art will also appreciate that, in addition to implementing the controller in purely computer readable program code, it is entirely possible to implement the same functionality by logically programming the method steps such that the controller takes the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers, etc. Such a controller may therefore be regarded as a hardware component, and the means included therein for performing various functions may also be regarded as structures within the hardware component. Or even, the means for performing various functions may be regarded both as software modules implementing the method and as structures within the hardware component.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present application.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In one typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, random Access Memory (RAM) and/or nonvolatile memory, such as Read Only Memory (ROM) or flash memory (flash RAM). Memory is an example of computer-readable media.
Computer readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer readable instructions, data structures, modules of a program, or other data. Examples of storage media for a computer include, but are not limited to, phase change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), flash memory or other memory technology, compact disc read only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium, which can be used to store information that can be accessed by a computing device. As defined herein, computer readable media do not include transitory computer readable media (transitory media), such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article or apparatus that comprises the element.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The foregoing is merely exemplary of the present application and is not intended to limit the present application. Various modifications and changes may be made to the present application by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. which are within the spirit and principles of the present application are intended to be included within the scope of the claims of the present application.

Claims (25)

1. A display method of an augmented reality image is applied to a terminal device and comprises the following steps:
acquiring a virtual panoramic image and a surrounding environment image acquired by the terminal equipment;
drawing the spherical surface of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain a target sphere model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; a color value at a target position in a sphere of the target sphere model is determined from a pixel value at a position in the first panoramic sub-image corresponding to the target position, and a transparency value at the target position is determined from a pixel value at a position in the second panoramic sub-image corresponding to the target position;
acquiring a target spherical image in a user visual angle range in the target spherical model;
And superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image.
2. The method of claim 1, wherein the drawing the sphere of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain the target sphere model specifically comprises:
creating the virtual three-dimensional sphere model in a virtual three-dimensional space according to preset sphere parameters;
setting the virtual panoramic image as a material object of a sphere grid at the virtual three-dimensional sphere model;
and rendering the sphere grid at the virtual three-dimensional sphere model by using a vertex shader and a fragment shader based on the virtual panoramic image corresponding to the material object to obtain the target sphere model.
3. The method of claim 2, wherein the rendering the sphere mesh at the virtual three-dimensional sphere model by using a vertex shader and a fragment shader based on the virtual panoramic image corresponding to the material object to obtain the target sphere model specifically comprises:
inputting vertex coordinate data of the sphere grid in the virtual three-dimensional space to the vertex shader aiming at any one sphere grid to obtain vertex texture coordinate data output by the vertex shader;
Determining color values at each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the first panoramic sub-image by using the fragment shader and a preset strategy;
determining transparency values at each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the second panoramic sub-image by using the fragment shader and the preset strategy;
and performing image rendering processing on the sphere grid according to the color value and the transparency value at each target position to obtain the target sphere model.
4. The method of claim 3, wherein the preset strategy is determined according to a stitching mode of the first panoramic sub-image and the second panoramic sub-image with the same size;
if the stitching manner is that the first panoramic sub-image and the second panoramic sub-image are stitched transversely from left to right to obtain the virtual panoramic image, the preset strategy is used for indicating: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining a pixel value at a position in the virtual panoramic image whose abscissa is 0.5 times the target texture abscissa and whose ordinate is the target texture ordinate as a color value at the target position, and determining a pixel value at a position in the virtual panoramic image whose abscissa is the sum of 0.5 times the target texture abscissa and 0.5 and whose ordinate is the target texture ordinate as a transparency value at the target position;

if the stitching manner is that the first panoramic sub-image and the second panoramic sub-image are stitched longitudinally from top to bottom to obtain the virtual panoramic image, the preset strategy is used for indicating: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining a pixel value at a position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is the sum of 0.5 times the target texture ordinate and 0.5 as a color value at the target position, and determining a pixel value at a position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is 0.5 times the target texture ordinate as a transparency value at the target position.
5. The method according to claim 1, wherein the acquiring the target spherical image in the user view angle range in the target sphere model specifically comprises:
determining the view angle range of the user according to camera parameters of a virtual camera positioned in the target sphere model, the sphere center position information of the target sphere model and the equipment posture data of the terminal equipment;
And acquiring a spherical image in the target sphere model in the user visual angle range to obtain the target spherical image.
6. The method according to claim 5, wherein the determining the user view angle range according to the camera parameters of the virtual camera located in the target sphere model, the sphere center position information of the target sphere model and the device posture data of the terminal device specifically comprises:
determining camera position information, camera orientation information and camera visual field range information of the virtual camera according to camera parameters of the virtual camera;
determining a first rotation direction and a first rotation angle for the target sphere model according to pitch angle data, camera position information and sphere center position information in equipment posture data of the terminal equipment;
and determining the view angle range of the user according to the camera position information, the camera orientation information, the camera view field range information, the sphere center position information, the first rotation direction and the first rotation angle.
7. The method of claim 6, wherein the virtual camera and the center of sphere of the target sphere model are both located on the Z-axis of a right-hand coordinate system;
The determining a first rotation direction and a first rotation angle for the target sphere model according to pitch angle data, the camera position information and the sphere center position information in the equipment posture data of the terminal equipment specifically includes:
determining a first rotation direction of the target sphere model, which is required to rotate around the X-axis direction, according to a first angle type corresponding to the pitch angle data; wherein the first angle type includes: depression or elevation; the first rotational direction includes: clockwise or counterclockwise;
and determining a first rotation angle required by the target sphere model to rotate around the X-axis direction according to the angle value corresponding to the pitch angle data.
8. The method of claim 7, the determining the user perspective range from camera parameters of a virtual camera located within the target sphere model, center of sphere position information of the target sphere model, and device pose data of the terminal device, further comprising:
determining a second rotation direction of the target sphere model, which is required to rotate around the Y-axis direction, according to a second angle type corresponding to the rolling angle data in the equipment posture data; wherein the second angle type includes: an angle reflecting a left roll or an angle reflecting a right roll; the second rotational direction includes: clockwise or counterclockwise;
Determining a second rotation angle required by the target sphere model to rotate around the Y-axis direction according to the angle value corresponding to the rolling angle data;
the determining the user viewing angle range according to the camera position information, the camera orientation information, the camera view field range information, the center of sphere position information, the first rotation direction and the first rotation angle specifically includes:
and determining the view angle range of the user according to the camera position information, the camera orientation information, the camera view field range information, the sphere center position information, the first rotation direction, the first rotation angle, the second rotation direction and the second rotation angle.
9. The method of claim 1, further comprising:
obtaining a virtual object image;
rendering processing is carried out in a preset plane area of a virtual three-dimensional space by utilizing the virtual object image, so as to obtain a target object image; the virtual object image comprises a first object sub-image and a second object sub-image; the color value at a specified position in the target object image is determined according to the pixel value at a position corresponding to the specified position in the first object sub-image, and the transparency value at the specified position is determined according to the pixel value at a position corresponding to the specified position in the second object sub-image;
The step of superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image specifically comprises the following steps:
and superposing and displaying the target object image, the target spherical image and the surrounding environment image in sequence from top to bottom to obtain the augmented reality image.
10. The method of claim 9, wherein the rendering process is performed in a preset plane area of the virtual three-dimensional space by using the virtual object image to obtain a target object image, and the method specifically comprises:
according to the preset plane area parameters, vertex coordinate data of the grid at the preset plane area in the virtual three-dimensional space are determined;
setting the virtual object image as a material object of a grid at the preset plane area;
based on the virtual object image corresponding to the material object and vertex coordinate data of the grid, performing image rendering processing on the grid by using a vertex shader and a fragment shader to obtain a virtual object plane area;
the target object image is acquired from the virtual object plane area.
11. The method according to claim 9, wherein the acquiring the target object image from the virtual object plane area specifically includes:
Determining a third rotation direction and a third rotation angle for the virtual object plane area according to pitch angle data in equipment posture data of the terminal equipment;
and acquiring the target object image from the virtual object plane area subjected to rotation processing according to the third rotation direction and the third rotation angle.
12. The method according to any one of claims 9-11, wherein the acquiring the virtual object image specifically comprises:
obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual object, to obtain the virtual object image; and/or,
the obtaining the virtual panoramic image specifically includes:
and obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual environment to obtain the virtual panoramic image.
13. The method of claim 12, wherein the virtual environment comprises: a virtual starry sky environment; the virtual object comprises: a cartoon object; and the surrounding environment image is an image collected by the terminal equipment aiming at the internal environment of the target venue.
14. A display device of an augmented reality image, applied to a terminal device, comprising:
the first acquisition module is used for acquiring the virtual panoramic image and the surrounding environment image acquired by the terminal equipment;
The first drawing module is used for drawing the spherical surface of the virtual three-dimensional spherical model by utilizing the virtual panoramic image to obtain a target spherical model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; a color value at a target position in a sphere of the target sphere model is determined from a pixel value at a position in the first panoramic sub-image corresponding to the target position, and a transparency value at the target position is determined from a pixel value at a position in the second panoramic sub-image corresponding to the target position;
the second acquisition module is used for acquiring a target spherical image in the view angle range of the user in the target spherical model;
and the display module is used for superposing the target spherical image on the surrounding environment image for displaying to obtain an augmented reality image.
15. The apparatus of claim 14, the first drawing module specifically comprises:
the model creation unit is used for creating the virtual three-dimensional sphere model in a virtual three-dimensional space according to preset sphere parameters;
a first setting unit, configured to set the virtual panoramic image as a material object of a sphere grid at the virtual three-dimensional sphere model;
And the first rendering unit is used for rendering the sphere grid at the virtual three-dimensional sphere model by using a vertex shader and a fragment shader based on the virtual panoramic image corresponding to the material object to obtain the target sphere model.
16. The apparatus of claim 15, the first rendering unit, in particular comprising:
an input subunit, configured to input vertex coordinate data of the sphere grid in the virtual three-dimensional space to the vertex shader for any one of the sphere grids, to obtain vertex texture coordinate data output by the vertex shader;
a first determining subunit, configured to determine, by using the fragment shader, a color value at each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the first panoramic sub-image according to a preset policy;
the second determining subunit is configured to determine, by using the fragment shader, a transparency value at each target position in the sphere grid according to the vertex texture coordinate data and the texture coordinate data of the second panoramic sub-image according to the preset policy;
The rendering subunit is used for performing image rendering processing on the sphere grid according to the color value and the transparency value at each target position to obtain the target sphere model;
the preset strategy is determined according to a splicing mode of the first panoramic sub-image and the second panoramic sub-image which are consistent in size;
if the stitching manner is that the first panoramic sub-image and the second panoramic sub-image are stitched transversely from left to right to obtain the virtual panoramic image, the preset strategy is used for indicating: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining a pixel value at a position in the virtual panoramic image whose abscissa is 0.5 times the target texture abscissa and whose ordinate is the target texture ordinate as a color value at the target position, and determining a pixel value at a position in the virtual panoramic image whose abscissa is the sum of 0.5 times the target texture abscissa and 0.5 and whose ordinate is the target texture ordinate as a transparency value at the target position;

if the stitching manner is that the first panoramic sub-image and the second panoramic sub-image are stitched longitudinally from top to bottom to obtain the virtual panoramic image, the preset strategy is used for indicating: after determining a target texture abscissa and a target texture ordinate of the target position according to the vertex texture coordinate data, determining a pixel value at a position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is the sum of 0.5 times the target texture ordinate and 0.5 as a color value at the target position, and determining a pixel value at a position in the virtual panoramic image whose abscissa is the target texture abscissa and whose ordinate is 0.5 times the target texture ordinate as a transparency value at the target position.
17. The apparatus of claim 14, the second acquisition module specifically comprising:
a first determining unit, configured to determine the user view angle range according to camera parameters of a virtual camera located in the target sphere model, spherical center position information of the target sphere model, and device posture data of the terminal device;
the first acquisition unit is used for acquiring a spherical image in the target sphere model in the user visual angle range to obtain the target spherical image.
18. The apparatus of claim 17, the first determining unit specifically comprises:
a third determining subunit, configured to determine camera position information, camera orientation information, and camera field of view information of the virtual camera according to camera parameters of the virtual camera;
a fourth determining subunit, configured to determine a first rotation direction and a first rotation angle for the target sphere model according to pitch angle data, the camera position information, and the center position information in the equipment posture data of the terminal equipment;
and a fifth determining subunit, configured to determine the user viewing angle range according to the camera position information, the camera orientation information, the camera field of view range information, the center of sphere position information, the first rotation direction, and the first rotation angle.
19. The apparatus of claim 18, wherein the virtual camera and the center of sphere of the target sphere model are both located on the Z-axis of a right-hand coordinate system;
the fourth determining subunit is specifically configured to:
determining a first rotation direction of the target sphere model, which is required to rotate around the X-axis direction, according to a first angle type corresponding to the pitch angle data; wherein the first angle type includes: depression or elevation; the first rotational direction includes: clockwise or counterclockwise;
And determining a first rotation angle required by the target sphere model to rotate around the X-axis direction according to the angle value corresponding to the pitch angle data.
20. The apparatus of claim 19, the first determining unit further comprising:
a sixth determining subunit, configured to determine a second rotation direction in which the target sphere model needs to rotate around the Y-axis direction according to a second angle type corresponding to the roll angle data in the equipment posture data; wherein the second angle type includes: an angle reflecting a left roll or an angle reflecting a right roll; the second rotational direction includes: clockwise or counterclockwise;
a seventh determining subunit, configured to determine a second rotation angle required by the target sphere model to rotate around the Y-axis direction according to the angle value corresponding to the roll angle data;
the fifth determining subunit is specifically configured to:
and determining the view angle range of the user according to the camera position information, the camera orientation information, the camera view field range information, the sphere center position information, the first rotation direction, the first rotation angle, the second rotation direction and the second rotation angle.
21. The apparatus of claim 14, further comprising:
the third acquisition module is used for acquiring the virtual object image;
the second drawing module is used for performing rendering processing in a preset plane area of the virtual three-dimensional space by utilizing the virtual object image to obtain a target object image; the virtual object image comprises a first object sub-image and a second object sub-image; the color value at a specified position in the target object image is determined according to the pixel value at a position corresponding to the specified position in the first object sub-image, and the transparency value at the specified position is determined according to the pixel value at a position corresponding to the specified position in the second object sub-image;
the display module is specifically configured to:
and superposing and displaying the target object image, the target spherical image and the surrounding environment image in sequence from top to bottom to obtain the augmented reality image.
22. The apparatus of claim 21, the second drawing module specifically includes:
a second determining unit, configured to determine vertex coordinate data of a grid at a preset plane area in the virtual three-dimensional space according to a preset plane area parameter;
A second setting unit, configured to set the virtual object image as a material object of a grid at the preset plane area;
the second rendering unit is used for performing image rendering processing on the grid by using a vertex shader and a fragment shader based on the virtual object image corresponding to the material object and vertex coordinate data of the grid to obtain a virtual object plane area;
and a second acquisition unit configured to acquire the target object image from the virtual object plane area.
23. The apparatus of claim 21, the second acquisition unit being specifically configured to:
determining a third rotation direction and a third rotation angle for the virtual object plane area according to pitch angle data in equipment posture data of the terminal equipment;
and acquiring the target object image from the virtual object plane area subjected to rotation processing according to the third rotation direction and the third rotation angle.
24. The apparatus of any one of claims 21-23, wherein the third obtaining module is specifically configured to:
obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual object, to obtain the virtual object image; and/or,
The first obtaining module is specifically configured to:
obtaining an animation frame to be displayed from an animation frame sequence aiming at a virtual environment to obtain the virtual panoramic image;
wherein the virtual environment comprises: a virtual starry sky environment; the virtual object comprises: a cartoon object; and the surrounding environment image is an image collected by the terminal equipment aiming at the internal environment of the target venue.
25. A display device of an augmented reality image, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to:
acquiring a virtual panoramic image and a surrounding environment image acquired by a terminal device;
drawing the spherical surface of the virtual three-dimensional sphere model by using the virtual panoramic image to obtain a target sphere model; the virtual panoramic image comprises a first panoramic sub-image and a second panoramic sub-image; a color value at a target position in a sphere of the target sphere model is determined from a pixel value at a position in the first panoramic sub-image corresponding to the target position, and a transparency value at the target position is determined from a pixel value at a position in the second panoramic sub-image corresponding to the target position;
acquiring a target spherical image within a user visual angle range in the target sphere model;
and superposing the target spherical image on the surrounding environment image for display to obtain an augmented reality image.
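The core pipeline of this device claim can be illustrated end to end. The sketch below assumes Three.js, assumes the first and second panoramic sub-images arrive as two separate textures, and uses placeholder file names; it is one plausible rendering of the idea, not the patented implementation itself.

```typescript
// Sketch: target sphere model whose colour comes from the first panoramic sub-image
// and whose transparency comes from the second, rendered over the camera feed.
import * as THREE from 'three';

const vertexShader = /* glsl */ `
  varying vec2 vUv;
  void main() {
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
  }
`;

const fragmentShader = /* glsl */ `
  uniform sampler2D colorMap;  // first panoramic sub-image
  uniform sampler2D alphaMap;  // second panoramic sub-image
  varying vec2 vUv;
  void main() {
    vec3 rgb = texture2D(colorMap, vUv).rgb;   // colour value at the target position
    float a  = texture2D(alphaMap, vUv).r;     // transparency value at the same position
    gl_FragColor = vec4(rgb, a);
  }
`;

const loader = new THREE.TextureLoader();
const sphereMaterial = new THREE.ShaderMaterial({
  uniforms: {
    colorMap: { value: loader.load('first-panoramic-sub-image.png') },  // placeholder names
    alphaMap: { value: loader.load('second-panoramic-sub-image.png') },
  },
  vertexShader,
  fragmentShader,
  transparent: true,
  side: THREE.BackSide, // the virtual camera sits inside the target sphere model
});

const scene = new THREE.Scene();
scene.add(new THREE.Mesh(new THREE.SphereGeometry(50, 64, 64), sphereMaterial));

// Transparent canvas stacked above a <video> element that shows the surrounding environment
// image, so the perspective view of the sphere is superimposed on it.
const camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
const renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setClearColor(0x000000, 0);
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
renderer.render(scene, camera);
```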
CN202311550604.9A 2023-11-20 2023-11-20 Display method, device and equipment for augmented reality image Pending CN117676111A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311550604.9A CN117676111A (en) 2023-11-20 2023-11-20 Display method, device and equipment for augmented reality image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311550604.9A CN117676111A (en) 2023-11-20 2023-11-20 Display method, device and equipment for augmented reality image

Publications (1)

Publication Number Publication Date
CN117676111A 2024-03-08

Family

ID=90078115

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311550604.9A Pending CN117676111A (en) 2023-11-20 2023-11-20 Display method, device and equipment for augmented reality image

Country Status (1)

Country Link
CN (1) CN117676111A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination