CN111563966B - Virtual content display method, device, terminal equipment and storage medium - Google Patents

Virtual content display method, device, terminal equipment and storage medium

Info

Publication number
CN111563966B
CN111563966B (application CN201910082681.3A)
Authority
CN
China
Prior art keywords
content
virtual
plane
virtual content
displayed
Prior art date
Legal status
Active
Application number
CN201910082681.3A
Other languages
Chinese (zh)
Other versions
CN111563966A (en)
Inventor
乔亚楠
林彬烯
卢智雄
戴景文
贺杰
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910082681.3A priority Critical patent/CN111563966B/en
Priority to PCT/CN2019/129222 priority patent/WO2020135719A1/en
Publication of CN111563966A publication Critical patent/CN111563966A/en
Application granted granted Critical
Publication of CN111563966B publication Critical patent/CN111563966B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 15/00 3D [Three Dimensional] image rendering
    • G06T 15/005 General purpose rendering architectures
    • G06T 15/02 Non-photorealistic rendering
    • G06T 19/00 Manipulating 3D models or images for computer graphics
    • G06T 19/006 Mixed reality
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 2203/01 Indexing scheme relating to G06F 3/01
    • G06F 2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The embodiment of the application discloses a virtual content display method, apparatus, terminal device, and storage medium, relating to the field of display technology. The virtual content display method is applied to a terminal device and includes the following steps: identifying a target marker and acquiring the position and posture information of the target marker relative to the terminal device; acquiring, based on the virtual content to be displayed, the reflection content of the virtual content relative to a designated plane, where the designated plane is the horizontal plane on which the bottom of the virtual content sits in the virtual space; obtaining the rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information; rendering the virtual content and the reflection content at the rendering positions; and displaying the virtual content and the reflection content. The method enables the virtual content and its corresponding reflection content to be displayed simultaneously.

Description

Virtual content display method, device, terminal equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a virtual content display method, apparatus, terminal device, and storage medium.
Background
With the development of technology, machine intelligence and information intelligence are becoming widespread, and technologies that identify user images through image acquisition devices, such as machine vision, to realize human-computer interaction are becoming increasingly important. Augmented reality (AR) constructs virtual content that does not exist in the real environment by means of computer graphics and visualization technology, accurately fuses the virtual content into the real environment through image recognition and positioning technology, and presents the combination of virtual content and real environment to the user through a display device for a realistic sensory experience. The first technical problem that augmented reality must solve is how to fuse the virtual content into the real world accurately, that is, to make the virtual content appear at the correct position in the real scene with the correct angular pose, thereby producing a strong visual sense of realism. Accordingly, how to improve the display effect of virtual content is an important research direction in augmented reality and mixed reality.
Disclosure of Invention
In view of the above problems, embodiments of the present application provide a virtual content display method, apparatus, terminal device, and storage medium, which can improve the display effect of virtual content and thereby enhance its sense of realism.
In a first aspect, an embodiment of the present application provides a virtual content display method applied to a terminal device. The method includes: identifying a target marker and acquiring the position and posture information of the target marker relative to the terminal device; acquiring, based on the virtual content to be displayed, the reflection content of the virtual content relative to a designated plane, where the designated plane is the horizontal plane on which the bottom of the virtual content sits in the virtual space; obtaining the rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information; rendering the virtual content and the reflection content at the rendering positions; and displaying the virtual content and the reflection content.
In a second aspect, an embodiment of the present application provides a virtual content display apparatus applied to a terminal device. The apparatus includes an image recognition module, a content acquisition module, a position acquisition module, a rendering module, and a display module. The image recognition module is used to recognize a target marker and acquire the position and posture information of the target marker relative to the terminal device; the content acquisition module is used to acquire, based on the virtual content to be displayed, the reflection content of the virtual content relative to a designated plane, where the designated plane is the horizontal plane on which the bottom of the virtual content sits in the virtual space; the position acquisition module is used to obtain the rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information; the rendering module is used to render the virtual content and the reflection content at the rendering positions; and the display module is used to display the virtual content and the reflection content.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; and one or more application programs, where the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs being configured to perform the virtual content display method provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium having program code stored therein, the program code being callable by a processor to perform the virtual content display method provided in the first aspect.
The scheme provided by the embodiment of the application is applied to a terminal device. The position and posture information of a target marker relative to the terminal device is obtained by identifying the target marker, and the reflection content of the virtual content relative to a designated plane is obtained based on the virtual content to be displayed, where the designated plane is the horizontal plane on which the bottom of the virtual content sits in the virtual space. The rendering positions of the virtual content and the reflection content in the virtual space are then obtained according to the position and posture information, and finally the virtual content and the reflection content are rendered at those positions and displayed. The virtual content and its reflection are thus displayed according to the relative position and posture between the physical marker and the terminal, which improves the display effect of the virtual content and enhances its sense of realism in augmented reality.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 illustrates a schematic diagram of an application environment suitable for use with embodiments of the present application.
Fig. 2 shows a flow chart of a virtual content display method according to an embodiment of the present application.
Fig. 3 shows a schematic diagram of model data provided according to an embodiment of the present application.
Fig. 4 shows a schematic diagram of a display effect according to an embodiment of the present application.
Fig. 5 shows another display effect schematic diagram according to an embodiment of the present application.
Fig. 6 shows a flow chart of a virtual content display method according to another embodiment of the present application.
Fig. 7 shows a schematic diagram of model data provided according to an embodiment of the present application.
Fig. 8 shows a flowchart of step S230 in a virtual content display method according to an embodiment of the present application.
Figs. 9A-9B show schematic views of a display effect according to an embodiment of the present application.
Fig. 10 shows another display effect diagram according to an embodiment of the present application.
Fig. 11 shows a flowchart of step S240 in a virtual content display method according to an embodiment of the present application.
Fig. 12 shows still another display effect diagram according to an embodiment of the present application.
Fig. 13 shows still another display effect diagram according to an embodiment of the present application.
Fig. 14 shows still another display effect diagram according to an embodiment of the present application.
Fig. 15 shows still another display effect diagram according to an embodiment of the present application.
Fig. 16 shows still another display effect diagram according to an embodiment of the present application.
Fig. 17 shows still another display effect diagram according to an embodiment of the present application.
Fig. 18 shows a block diagram of a virtual content display apparatus according to an embodiment of the present application.
Fig. 19 is a block diagram of a terminal device for performing a virtual content display method according to an embodiment of the present application.
Fig. 20 is a storage unit for storing or carrying program code for implementing a virtual content display method according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the following description will make clear and complete descriptions of the technical solutions according to the embodiments of the present application with reference to the accompanying drawings.
In recent years, with the advancement of technology, augmented reality (AR), a technology that enhances a user's perception of the real world with information provided by a computer system, has become a research hotspot at home and abroad. AR superimposes computer-generated content objects, such as virtual objects, scenes, or system prompt information, onto the real scene to enhance or modify the perception of the real-world environment or of data representing it. In conventional augmented reality display technology, when virtual content (such as virtual characters or virtual animals) is displayed by a device, usually only the virtual object itself is displayed, so the sense of realism may be weak.
To solve the above problems, the inventors have studied and proposed the virtual content display method, apparatus, terminal device, and storage medium of the embodiments of the present application, which perform augmented reality display of virtual content together with the reflection content corresponding to that virtual content, so as to improve the display effect of the virtual content.
The application scenario of the virtual content display method provided by the embodiment of the application is described below.
Referring to fig. 1, a schematic diagram of an application scenario of a virtual content display method according to an embodiment of the present application is shown, where the application scenario includes a display system 10. The display system 10 includes the terminal device 100 and the marker 200.
In the embodiment of the present application, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated (standalone) head-mounted display device. Alternatively, the terminal device 100 may be an intelligent terminal such as a mobile phone connected to an external head-mounted display device; that is, the terminal device 100 serves as the processing and storage unit of the head-mounted display device and is inserted into or connected to the external head-mounted display device, so that virtual content is displayed in the head-mounted display device.
In the embodiment of the present application, an image of the marker 200 is stored in the terminal device 100. The marker 200 may include at least one sub-marker having one or more feature points. When the marker 200 is within the field of view of the terminal device 100, the terminal device 100 may take it as a target marker and collect an image containing the target marker. The collected image can then be identified to obtain spatial position information, such as the position and posture of the target marker relative to the terminal device 100, as well as identification results such as the identity information of the target marker, so that the target marker is positioned and tracked. The terminal device 100 may display the corresponding virtual content based on the position, posture, and other information of the target marker relative to the terminal device 100. It should be understood that the embodiment of the present application does not limit the specific marker 200, as long as it can be identified and tracked by the terminal device.
For example, referring to fig. 1 again, the terminal device 100 is a head-mounted display device. A user can scan the marker 200 in real time through the worn head-mounted display device and see the virtual character 401 and the virtual animal 402 superimposed on real space, which embodies the augmented reality display of the virtual content and improves its display effect.
Based on the display system, the embodiment of the application provides a virtual content display method which is applied to terminal equipment of the display system. A specific virtual content display method is described below.
Referring to fig. 2, an embodiment of the present application provides a virtual content display method, which may be applied to a terminal device, and the virtual content display method may include:
Step S110: identify the target marker and acquire the position and posture information of the target marker relative to the terminal device.
In the conventional augmented reality display technology, usually only the virtual content itself is displayed, which weakens the sense of realism. By additionally performing augmented reality display of the reflection content of the virtual content, the realism of the virtual content can be enhanced and its display effect improved. A reflection in the real world is the virtual image formed by an object on an imaging medium (such as water or a mirror surface). Displaying the reflection corresponding to the virtual content makes the virtual content blend more naturally into the real world and improves the augmented reality effect.
In the embodiment of the application, when the virtual content and its reflection content are to be displayed, the terminal device can identify the target marker to obtain an identification result that includes at least the position and posture information of the target marker relative to the terminal device. The posture information includes the relative orientation, rotation angle, and the like of the target marker relative to the terminal device.
In some embodiments, the target marker may include at least one sub-marker, which may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points, the shape of which is not limited and may be a dot, a ring, a triangle, or another shape. In addition, the distribution rules of the sub-markers differ between target markers, so each target marker can carry different identity information. The terminal device may acquire the identity information corresponding to the target marker by identifying the sub-markers it contains; the identity information may be, but is not limited to, a code that uniquely identifies the target marker.
As an embodiment, the outline of the target marker may be rectangular, although other shapes are possible and are not limited here; the rectangular area and the plurality of sub-markers within it form one target marker. The target marker may also be a self-luminous object formed by light spots, which may emit light of different wavelength bands or colors; the terminal device then obtains the identity information corresponding to the target marker by identifying the wavelength band or color of the emitted light. The specific shape, style, size, color, number of feature points, and distribution of the target marker are not limited in this embodiment; the marker only needs to be identifiable and trackable by the terminal device.
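As a concrete illustration of how the distribution of sub-markers can encode identity, the following sketch interprets a binary occupancy grid of feature points as an integer code. The grid layout, function name, and encoding are illustrative assumptions; the embodiment does not prescribe a specific encoding scheme.

```python
# Hypothetical sketch: decoding a marker's identity from the spatial
# distribution of its sub-markers. The 3x3 binary grid and the row-major
# bit encoding are illustrative assumptions, not the patent's actual scheme.

def decode_marker_id(feature_grid):
    """Interpret a binary occupancy grid of sub-marker feature points
    as an integer identity code (row-major, most significant bit first)."""
    marker_id = 0
    for row in feature_grid:
        for cell in row:
            marker_id = (marker_id << 1) | (1 if cell else 0)
    return marker_id

# Two markers with different sub-marker distributions yield different codes.
id_a = decode_marker_id([[1, 0, 1], [0, 1, 0], [1, 0, 0]])
id_b = decode_marker_id([[1, 1, 0], [0, 1, 0], [1, 0, 0]])
```

Because distinct sub-marker distributions map to distinct codes, each physical marker can be uniquely identified once its feature-point pattern has been extracted from the collected image.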
In the embodiment of the application, the target marker can be placed at any position in the real world, as long as it stays within the field of view of the terminal device so that the terminal device can recognize it and obtain the relative spatial position information. For example, the target marker may be placed on a marker plate, on the ground, on a table top, and so on.
As an embodiment, the terminal device may collect an image containing the target marker through an image acquisition device and then identify the target marker. To collect that image, the spatial position of the terminal device or of the target marker can be adjusted so that the target marker lies within the field of view of the image acquisition device. The field of view of the image acquisition device is determined by the size of its field angle.
As a further embodiment, the terminal device can also recognize the target marker by means of other sensor devices capable of identifying a marker, such as an image sensor or a photosensor. Of course, these sensor devices are merely exemplary and are not meant to limit the sensor devices in the embodiments of the present application. Similarly, the spatial position of the terminal device or of the target marker can be adjusted so that the target marker lies within the sensing range of the sensor device, allowing the terminal device to perform image recognition on it. The sensing range of the sensor device is determined by its sensitivity.
Step S120: based on the virtual content to be displayed, acquire the reflection content of the virtual content relative to a designated plane, where the designated plane is the horizontal plane on which the bottom of the virtual content sits in the virtual space.
In the embodiment of the present application, the virtual content to be displayed is a 3D object that can cast a reflection, such as a 3D virtual character, a 3D virtual animal, a 3D art exhibit, a 3D doll, a 3D piece of furniture, a 3D book, or a 3D mechanical model.
It can be appreciated that before rendering the virtual content and its reflection content in the virtual space, the terminal device needs to acquire both. After obtaining the virtual content to be displayed, the terminal device may obtain, based on it, the reflection content of the virtual content relative to a designated plane, which may be the horizontal plane on which the bottom of the virtual content sits in the virtual space.
Specifically, the terminal device may first obtain the model data of the virtual content to be displayed. The model data may include the colors, model vertex coordinates, model contour data, and the like used to construct the model corresponding to the virtual content, and may be stored in the terminal device or in other electronic devices. Then, according to the model data, the horizontal plane through the bottommost vertex of the model is taken as the designated plane, and the mirror image of the virtual content relative to the designated plane is obtained using the principle of specular reflection; this mirror image is the reflection content of the virtual content. That is, the terminal device can derive the model data corresponding to the reflection content from the model data of the virtual content and the data of the designated plane, where the reflection model data corresponds one-to-one to the model data of the virtual content and likewise includes the colors, model vertex coordinates, model contour data, and so on used to construct the reflection model. For example, referring to fig. 3, the designated plane is the horizontal plane 302 through the bottommost vertex of the model 301 corresponding to the virtual animal, and the model 303 corresponding to the reflection content can be obtained in the above manner.
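The specular-reflection construction described above can be sketched in a few lines: every vertex of the virtual content's model is mirrored across the horizontal plane through the model's bottommost vertex, yielding the reflection model's vertices one-to-one. This is a minimal illustration assuming a y-up coordinate system and a plain vertex list; the names are not taken from the embodiment.

```python
# Minimal sketch of obtaining reflection model data by specular reflection
# about the designated plane: the horizontal plane through the model's
# bottommost vertex. Assumes y is the up axis; the vertex format is an
# illustrative assumption.

def reflect_about_bottom_plane(vertices):
    """Mirror each (x, y, z) model vertex across the horizontal plane
    y = y_min, producing the reflection model's vertices one-to-one."""
    y_min = min(v[1] for v in vertices)          # height of the designated plane
    return [(x, 2.0 * y_min - y, z) for (x, y, z) in vertices]

# A model standing on y = 0: its reflection hangs below the plane.
model = [(0.0, 0.0, 0.0), (0.0, 2.0, 0.0), (1.0, 1.0, 0.0)]
reflection = reflect_about_bottom_plane(model)
```

Each reflected vertex corresponds to exactly one vertex of the original model, matching the one-to-one correspondence between the reflection model data and the virtual content's model data described above.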
It will be appreciated that the designated plane can be regarded as an auxiliary construct for obtaining the reflection content.
In some embodiments, the terminal device may compute the reflection content locally: according to the data of the virtual content to be displayed and the data of the designated plane, it calculates the data of the reflection content using the principle of specular reflection, thereby obtaining the reflection content of the virtual content relative to the designated plane. Alternatively, the terminal device may download the reflection content from a server. For example, the terminal device may send the virtual content to be displayed and the data of the designated plane to the server; the server calculates the data of the reflection content using the principle of specular reflection and returns the result to the terminal device, so that the terminal device obtains the reflection content of the virtual content relative to the designated plane. Similarly, the terminal device may acquire the reflection content of the virtual content from another terminal.
Step S130: obtain the rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information.
The terminal device may obtain the rendering positions of the virtual content and the reflection content in the virtual space. In some embodiments, it does so according to the position and posture information obtained above.
In some embodiments, since the terminal device has obtained the position, posture, and other information of the target marker relative to itself, it can obtain the spatial position coordinates of the target marker in real space and convert them into spatial coordinates in the virtual space. The virtual space contains a virtual camera that simulates the user's eyes, and the position of the virtual camera in the virtual space can be regarded as the position of the terminal device in the virtual space. From the positional relationship between the virtual content to be displayed and the target marker in the virtual space, and the positional relationship between the virtual content and the reflection content, the spatial positions of the virtual content and the reflection content relative to the virtual camera can be obtained, yielding their rendering coordinates in the virtual space, that is, the rendering positions. The rendering position of the virtual content may be called the first rendering position, and that of the reflection content the second rendering position; the first rendering position serves as the rendering coordinates at which the virtual content is rendered, and the second rendering position as the rendering coordinates at which the reflection content is rendered. The rendering coordinates refer to the three-dimensional space coordinates of the virtual content or the reflection content in the virtual space, with the head-mounted display device (equivalently, the human eye) as the origin.
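The coordinate conversion in step S130 can be illustrated with a rigid transform: an offset defined in the target marker's frame is mapped into the virtual camera's frame using the marker's position and posture relative to the terminal device. The rotation-matrix representation and all names below are assumptions for illustration only, not the embodiment's actual data structures.

```python
# Illustrative sketch of step S130: converting a content offset defined
# relative to the target marker into rendering coordinates in the virtual
# camera's frame, using the marker's position and posture (rotation)
# relative to the terminal device.

def to_render_coords(marker_pos, marker_rot, offset):
    """Transform an offset in the marker's frame into virtual-space
    (camera-frame) rendering coordinates: p = R * offset + t."""
    x = sum(marker_rot[0][i] * offset[i] for i in range(3)) + marker_pos[0]
    y = sum(marker_rot[1][i] * offset[i] for i in range(3)) + marker_pos[1]
    z = sum(marker_rot[2][i] * offset[i] for i in range(3)) + marker_pos[2]
    return (x, y, z)

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# Marker 2 m in front of the camera; content placed 0.5 m above the marker.
first_render_pos = to_render_coords((0.0, 0.0, 2.0), IDENTITY, (0.0, 0.5, 0.0))
```

The second rendering position (for the reflection content) would be obtained the same way, using the reflection content's offset relative to the marker instead of the virtual content's.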
Step S140: render the virtual content and the reflection content at the rendering positions.
In the embodiment of the application, after the terminal device obtains the rendering positions, the virtual content and the reflection content can be rendered at them.
It can be understood that after obtaining the rendering coordinates of the virtual content and the reflection content in the virtual space, the terminal device may acquire the data of the virtual content to be displayed and the data of the reflection content, construct the virtual content from the former and the reflection content from the latter, and render both at the rendering coordinates, obtaining the RGB value and the coordinates of each pixel of the virtual content and the reflection content. The data corresponding to the virtual content and the reflection content may include their model data, that is, the data used to render them, such as the color data, vertex coordinate data, and contour data used to create the virtual content and the corresponding reflection content.
Step S150: display the virtual content and the reflection content.
The terminal device may display the rendered virtual content and reflection content, whose display positions correspond to their rendering positions. The display position can be understood as the position in the real world at which the user, through the head-mounted display device, sees the virtual content and the reflection content displayed. With the virtual content and the reflection content displayed in the virtual space, the user sees them superimposed on the real world.
For example, referring to fig. 4, a user can scan the marker 200 in real time through the wearable head-mounted display device, and can see the virtual character (304), the virtual animal (306), the reflection (305) of the virtual character, and the superposition display of the reflection (307) of the virtual character and the real space, so that the display effect of the augmented reality of the virtual content is reflected, and the display effect of the virtual content is improved.
In some embodiments, the display position of the reflection content may be on any plane in the real world, i.e. the display position of the reflection content may overlap a certain plane in the real world. For example, referring to fig. 5, a user can scan the marker 200 on the ground in real time through the worn head-mounted display device and see the virtual character (304) and the virtual animal displayed on a desktop in the real space, with the reflection (305) of the virtual character and the reflection of the virtual animal also displayed on that desktop, creating for the user the illusion that the desktop reflects the virtual character (304) and the virtual animal, thereby improving the sense of realism of the virtual content and its display effect in the augmented reality scene. Of course, the display position of the reflection content may also be in the plane corresponding to the marker.
According to the virtual content display method provided by the embodiment of the application, the position and posture information of the target marker relative to the terminal device is obtained by identifying the target marker; the reflection content of the virtual content relative to a designated plane is obtained based on the virtual content to be displayed, the designated plane being the horizontal plane where the bottom of the virtual content is located in the virtual space; the rendering positions of the virtual content and the reflection content in the virtual space are then obtained according to the position and posture information; and finally the virtual content and the reflection content are rendered according to the rendering positions and displayed. In this way, the virtual content and the reflection content are displayed in the virtual space, the user can observe the effect of the virtual content and the reflection content superimposed on the real scene, and the display effect of the virtual content is improved.
Referring to fig. 6, another embodiment of the present application provides a virtual content display method, which may be applied to a terminal device, and the virtual content display method may include:
Step S210: identifying the target marker and acquiring the position and posture information of the target marker relative to the terminal device.
In some embodiments, the terminal device may further obtain the identity information of the target marker after identifying it; that is, after identifying the target marker or an image containing the target marker, the terminal device may obtain both the position and posture information of the target marker relative to the terminal device and the identity information of the target marker.
Step S220: acquiring, based on the virtual content to be displayed, the reflection content of the virtual content relative to a designated plane, the designated plane being the horizontal plane where the bottom of the virtual content is located in the virtual space.
Further, at least one item of virtual content corresponding to the identity information may be obtained. It can be understood that different target markers may correspond to different virtual content; that is, the identity information of the target marker has a correspondence with the virtual content, so the terminal device can obtain the virtual content corresponding to the identity information of the target marker according to that identity information and the correspondence, and take it as the virtual content to be displayed. The reflection content can then be obtained from the virtual content, so that both can be displayed. In some embodiments, the correspondence may be stored in the terminal device, in a server, or at other terminals. For example, the virtual content corresponding to a first marker whose identity information is "number 1" may be a three-dimensional virtual car, the virtual content corresponding to a second marker whose identity information is "number 2" may be a three-dimensional virtual building, and so on. In an embodiment, the virtual content to be displayed may also be preset, with no direct association with the identity information of the target marker; that is, after the terminal device collects the image of the target marker, the preset virtual content may be displayed according to the position and posture information of the target marker.
In some embodiments, after obtaining the virtual content to be displayed, the terminal device may obtain the reflection content of the virtual content relative to the designated plane based on the virtual content to be displayed. Obtaining the reflection content of the virtual content relative to the designated plane may include:
taking the designated plane as a specular reflection surface, and acquiring the reflection content of the virtual content relative to the designated plane using a specular reflection matrix.
A specular reflection surface is smooth; when parallel incident rays strike it, they are reflected in a single direction. Each specular reflection surface has a specular reflection matrix, and from the coordinates of any point in space the symmetrical point of that point relative to the specular reflection surface can be obtained according to the matrix. Therefore, by multiplying each vertex coordinate of the model corresponding to the virtual content by the specular reflection matrix, the symmetrical point of each vertex relative to the specular reflection surface is obtained, and these symmetrical points can serve as the vertex coordinates of the model corresponding to the reflection content, thereby obtaining the reflection content of the virtual content relative to the specular reflection surface.
In some embodiments, for a specular reflection surface passing through the origin with unit normal vector $n = (n_x, n_y, n_z)$, the specular reflection matrix may be the following matrix:

$$R = I - 2nn^{T} = \begin{bmatrix} 1-2n_x^2 & -2n_x n_y & -2n_x n_z \\ -2n_x n_y & 1-2n_y^2 & -2n_y n_z \\ -2n_x n_z & -2n_y n_z & 1-2n_z^2 \end{bmatrix}$$

where $n_x$, $n_y$, $n_z$ in the matrix are the component values of the unit normal vector $(n_x, n_y, n_z)$ of the specular reflection surface.
Therefore, in some embodiments, the above designated plane may be taken as the specular reflection surface, that is, the horizontal plane where the bottom of the virtual content is located is taken as the specular reflection surface, and the reflection content of the virtual content relative to the designated plane can be obtained according to the specular reflection matrix of the designated plane. As one way, the unit normal vector of the designated plane may be obtained to determine its specular reflection matrix, and the coordinates of each vertex of the model corresponding to the virtual content may be obtained, so that the coordinates of each vertex of the model corresponding to the reflection content are obtained according to the specular reflection matrix of the designated plane, thereby obtaining the reflection content of the virtual content relative to the designated plane.
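As a minimal sketch of this vertex-mirroring step (not part of the embodiment itself), the specular reflection matrix can be written in homogeneous 4x4 form with an offset term d, so that planes not passing through the origin, such as the horizontal plane at the bottom of the virtual content, are handled as well; the function names are illustrative:

```python
import numpy as np

def specular_reflection_matrix(normal, d):
    # 4x4 homogeneous reflection matrix for the plane n.p + d = 0,
    # where `normal` is the plane's unit normal vector (nx, ny, nz).
    nx, ny, nz = normal
    return np.array([
        [1 - 2*nx*nx, -2*nx*ny,    -2*nx*nz,    -2*nx*d],
        [-2*nx*ny,    1 - 2*ny*ny, -2*ny*nz,    -2*ny*d],
        [-2*nx*nz,    -2*ny*nz,    1 - 2*nz*nz, -2*nz*d],
        [0.0,         0.0,         0.0,          1.0],
    ])

def reflect_vertices(vertices, normal, d):
    # Mirror each model vertex across the plane to obtain the vertex
    # coordinates of the model corresponding to the reflection content.
    m = specular_reflection_matrix(normal, d)
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])  # homogeneous coords
    return (homo @ m.T)[:, :3]

# A horizontal plane at y = 5 has normal (0, 1, 0) and offset d = -5;
# a vertex at (2, 8, 3) is mirrored to (2, 2, 3).
reflected = reflect_vertices(np.array([[2.0, 8.0, 3.0]]), (0.0, 1.0, 0.0), -5.0)
```

Only the Y coordinate changes for a horizontal plane, which is what motivates the cheaper Y-flip shortcut described below.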
In some embodiments, the terminal device may establish a world coordinate system in the virtual space, and flip the coordinates of each vertex of the virtual content in the vertical direction (i.e. the direction perpendicular to the horizontal plane) according to the world coordinate system, so as to obtain the reflection content corresponding to the virtual content. The world coordinate system of the virtual space may coincide with the ground in the real world or be parallel to it. The terminal device may establish the world coordinate system in the virtual space according to the ground position of the real world, with the X0Z plane of the world coordinate system coinciding with the real-world ground and the Y axis oriented vertically upwards. In one embodiment, the designated plane is parallel to the X0Z plane of the world coordinate system, and each vertex of the virtual content may be flipped on the Y axis relative to the designated plane to obtain the reflection content of the virtual content relative to the designated plane. Flipping the coordinate on the Y axis means that the difference between the Y coordinate of a virtual content vertex and the Y coordinate of the designated plane equals the difference between the corresponding flipped Y coordinate and the Y coordinate of the designated plane; that is, the distance between a vertex of the virtual content and the designated plane is the same as the distance between the corresponding flipped vertex and the designated plane.
As shown in fig. 7, among the vertex coordinates of the model corresponding to the obtained reflection content 320, compared with the vertex coordinates of the model corresponding to the virtual content 310 (such as points A2 and A1 in the figure), only the Y coordinate changes while the X and Z coordinates remain unchanged; the Y coordinate of each vertex of the reflection content and the Y coordinate of the corresponding vertex of the virtual content are symmetrical with respect to the designated plane. Therefore, when the designated plane is parallel to the X0Z plane of the world coordinate system, only the Y coordinate of each vertex of the virtual content needs to be calculated to obtain its symmetrical Y coordinate with respect to the designated plane, thereby obtaining the vertex coordinates of the model corresponding to the reflection content, and further the reflection content of the virtual content relative to the designated plane.
As an embodiment, the difference between the Y coordinate of each vertex of the virtual content and the Y coordinate of the designated plane may be obtained, and the Y coordinate of each vertex of the reflection content is obtained by subtracting twice that difference from the Y coordinate of the corresponding vertex of the virtual content, so as to obtain the vertex coordinates of the model corresponding to the reflection content, and further the reflection content of the virtual content relative to the designated plane. For example, if the Y coordinate of vertex A of the virtual content in the world coordinate system is 8 and the Y coordinate of the designated plane is 5, the difference between the two is 3, and the Y coordinate of the corresponding vertex of the reflection content is 8 − 3 × 2 = 2.
It can be understood that the above manner of obtaining the vertex coordinates of the reflection content from the Y coordinate alone is suitable for straight up-and-down specular reflection, that is, the reflected content is identical to the virtual content and is simply its vertical flip, for example in application scenes where the reflection content needs to be displayed on a horizontally placed plane such as the ground, a desktop or a marker plate. No matrix multiplication is needed for the vertex coordinates of the virtual content, so the amount of computation is reduced, the efficiency of rendering the reflection content of the virtual content in real time is ensured, and the display effect of the virtual content is improved.
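The Y-flip shortcut above can be sketched in a few lines; this is an illustrative sketch, with hypothetical function names, of the formula y' = 2·y_plane − y:

```python
def flip_reflection_y(vertices, plane_y):
    # Cheap reflection for a horizontal designated plane: only the Y
    # coordinate changes, y' = y - 2*(y - plane_y) = 2*plane_y - y,
    # while X and Z are left untouched -- no matrix multiplication needed.
    return [(x, 2 * plane_y - y, z) for (x, y, z) in vertices]

# A vertex with Y = 8 above a designated plane at Y = 5 maps to Y = 2,
# matching the worked example given above.
flipped = flip_reflection_y([(1.0, 8.0, 4.0)], 5.0)
```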
Step S230: obtaining the rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information.
In some embodiments, the terminal device may determine the display direction of the reflection content according to the light source direction of the ambient light source. Specifically, referring to fig. 8, obtaining the rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information may include:
step S231: and determining the display direction of the reflection content according to the light path direction of the light source of the environment where the target marker is located.
In some embodiments, the terminal device may determine the display direction of the reflection content in the virtual space according to the light path direction of the light source of the environment in which the target marker is located, that is, the light path direction of the light source in the real space. The display direction of the back image content is understood to be the display direction of the back image content relative to the virtual content, such as the front, rear, side direction of the virtual content.
As an embodiment, since the terminal device has obtained information such as the position and the posture of the target marker relative to the terminal device, the terminal device may obtain the spatial position coordinates of the target marker in the real space, then determine the spatial position of the light source relative to the target marker in the real space according to the light path direction of the light source in the real space, and then determine the positional relationship between the virtual content and the light source according to the positional relationship between the virtual content and the target marker in the virtual space, so as to determine the display direction of the reflection content according to the positional relationship between the virtual content and the light source. For example, referring to fig. 9A, the virtual content 310 is a virtual character, and the reflection content 320 is displayed in front of the virtual character when the light source is in front of the virtual character, and for example, referring to fig. 9B, the reflection content 320 is displayed in rear of the virtual character when the light source is in rear of the virtual character.
In other embodiments, the terminal device may also determine the display direction of the reflection content according to the light path direction of the light source in the virtual space. For example, the terminal device may acquire virtual content and scene content of the virtual content, wherein the scene content includes a virtual light source, and the terminal device may determine a display direction of the reflection content according to a positional relationship between the virtual content and the virtual light source.
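One way to express the direction determination above, assuming (as in figs. 9A and 9B) that the reflection is displayed on the same side of the virtual content as the light source, is a horizontal unit vector from the content toward the light; names and the coordinate convention are illustrative:

```python
import math

def reflection_direction(content_pos, light_pos):
    # Unit horizontal vector from the virtual content toward the light
    # source; the reflection content is offset along this direction,
    # matching the behaviour described for figs. 9A/9B above.
    dx = light_pos[0] - content_pos[0]
    dz = light_pos[2] - content_pos[2]
    length = math.hypot(dx, dz)
    if length == 0.0:
        return (0.0, 0.0, 0.0)  # light directly overhead: no lateral offset
    return (dx / length, 0.0, dz / length)

# Light source in front of the character (+Z): reflection offset toward +Z.
direction = reflection_direction((0.0, 0.0, 0.0), (0.0, 3.0, 4.0))
```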
Step S232: determining a first rendering position of the virtual content according to the position and posture information, and determining a second rendering position of the reflection content according to the position and posture information and the display direction.
Because the terminal device has obtained information such as the position and posture of the target marker relative to the terminal device, it can obtain the spatial position coordinates of the target marker in the real space and convert them into spatial coordinates in the virtual space. The virtual space may contain a virtual camera that simulates the eyes of the user, and the position of the virtual camera in the virtual space can be regarded as the position of the terminal device in the virtual space. According to the positional relationship between the virtual content to be displayed in the virtual space and the target marker, and taking the virtual camera as reference, the first spatial position of the virtual content relative to the virtual camera can be obtained, yielding the first rendering coordinates of the virtual content in the virtual space and thus its first rendering position. Similarly, according to the positional relationship between the virtual content and the target marker, the positional relationship between the virtual content and the reflection content, and the display direction of the reflection content relative to the virtual content, and again taking the virtual camera as reference, the second spatial position of the reflection content relative to the virtual camera can be obtained, yielding the second rendering coordinates of the reflection content in the virtual space and thus its second rendering position. The rendering coordinates refer to the three-dimensional spatial coordinates of the virtual content or the reflection content in the virtual space, with the head-mounted display device as the origin (i.e., taking the human eye as the origin).
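The coordinate chain above (marker pose relative to the camera, content position relative to the marker) can be sketched as one homogeneous transform; the pose matrix layout and names are assumptions for illustration, not the embodiment's actual data structures:

```python
import numpy as np

def rendering_position(marker_pose_cam, offset_in_marker):
    # First rendering coordinate of content in virtual-camera space:
    # `marker_pose_cam` is the 4x4 pose of the target marker relative to
    # the virtual camera (from marker recognition); `offset_in_marker`
    # is where the content should sit relative to the marker.
    p = np.append(np.asarray(offset_in_marker, dtype=float), 1.0)
    return (marker_pose_cam @ p)[:3]

# Marker 2 m in front of the camera (+Z), content 0.1 m above the marker.
pose = np.eye(4)
pose[2, 3] = 2.0
first_pos = rendering_position(pose, (0.0, 0.1, 0.0))
```

The second rendering position of the reflection content would be obtained the same way, with an offset that also accounts for the display direction.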
In addition, in some embodiments, the terminal device may further set the display brightness of the reflection content according to the brightness of the ambient light source. When the ambient light source is a light source in the real space, the terminal device can collect the brightness of the light source in the environment through a light sensor or the like, or can capture images of the surrounding environment and perform recognition processing on them to obtain the brightness of the ambient light source, so that the display brightness of the reflection content can be changed in real time after the brightness of the light source is obtained. As another embodiment, when the ambient light source is a light source in the virtual space, the terminal device may obtain the luminance value of the virtual light source from the acquired construction data of the scene content of the virtual content, and set the display brightness of the reflection content according to that luminance value. For example, when the light source is bright, the displayed reflection content is bright and conspicuous; when the light source is dark, the displayed reflection content is dark and inconspicuous.
For example, referring to fig. 9A, the head-mounted display device collects an image of the marker 200 on the ground through its image collecting device, so that the position and posture information of the marker 200 can be obtained and the virtual content 310 and the reflection content 320 displayed. When the light in the real space is brighter, the user sees the virtual content 310 superimposed on the desktop in the real space and the reflection content 320 superimposed on that desktop, with the reflection content 320 more conspicuous; referring to fig. 9B, when the light in the real space is darker, the reflection content 320 is darker.
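The brightness adjustment described above can be sketched as a simple scaling of the reflection's display brightness by the sensed ambient light level; the 500-lux ceiling and the function name are illustrative assumptions, not values from the embodiment:

```python
def reflection_brightness(base_brightness, ambient_lux, max_lux=500.0):
    # Scale the display brightness of the reflection content with the
    # ambient light level reported by a light sensor, clamped to [0, 1].
    factor = min(max(ambient_lux / max_lux, 0.0), 1.0)
    return base_brightness * factor

bright = reflection_brightness(1.0, 800.0)  # bright room: clamped to 1.0
dim = reflection_brightness(1.0, 100.0)     # dim room: 0.2
```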
Step S240: rendering the virtual content and the reflection content according to the rendering position. In some embodiments, after obtaining the rendering position, the terminal device may render the virtual content and the reflection content on a certain virtual plane in the virtual space according to the rendering position; specifically, the virtual content is rendered above the virtual plane and the reflection content is rendered in the virtual plane. When the superposition of the virtual content and the reflection content with the real space is to be realized, the virtual plane can be superimposed and displayed on a plane in the real space. Therefore, rendering the virtual content and the reflection content according to the rendering position may include:
rendering the virtual content and the reflection content according to the rendering position, wherein the reflection content is rendered in a plane to be displayed, the plane to be displayed being the reflection surface corresponding to the virtual content in the virtual space.
In some implementations, the rendering position of the virtual content may be above the plane to be displayed in the virtual space, and the rendering position of the reflection content may be within the plane to be displayed. Specifically, the terminal device may render the virtual content and the reflection content according to their respective data, rendering the virtual content above the plane to be displayed and the reflection content in the plane to be displayed according to their rendering positions, where the plane to be displayed is the reflection surface corresponding to the virtual content in the virtual space, that is, the plane used to reflect the virtual content; the plane to be displayed may be understood as a virtual plane for displaying the reflection content.
In some embodiments, the bottom of the virtual content may not be fully attached to the plane to be displayed, that is, the virtual content may be at a distance from the plane to be displayed. It is understood that whether or not the bottom of the virtual content is completely attached to the plane to be displayed, the reflection content is rendered in the plane to be displayed. For example, referring to fig. 10, the virtual content 310 is located at a distance from the plane to be displayed 330, and the reflection content 313 is still displayed in the plane to be displayed 330.
Further, in some embodiments, to prevent the rendering area of the reflection content from exceeding the planar area of the plane to be displayed, the terminal device may limit the rendering area of the reflection content. Specifically, referring to fig. 11, rendering the virtual content and the reflection content according to the rendering position may include:
Step S241: determining, according to the rendering position, the partial reflection content lying within the plane to be displayed.
In some embodiments, when the terminal device needs to limit the rendering area of the reflection content, the planar area of the plane to be displayed may be used as the rendering area of the reflection content; that is, the terminal device does not render reflection content beyond the planar area of the plane to be displayed. Therefore, the terminal device needs to acquire the reflection content within the plane to be displayed. Specifically, the terminal device may determine, according to the rendering position of the reflection content, the portion of the reflection content lying within the plane to be displayed, and thereby obtain it.
As an implementation manner, the terminal device may clip the reflection content along the area boundary line of the plane to be displayed, the clipped reflection content being the portion of the reflection content lying within the plane to be displayed.
In other embodiments, the terminal device may limit the rendering area of the reflection content according to the size of the planar area of a physical plane in the real space. For example, the terminal device may collect an image of the physical plane in real time through the image collecting device and identify the image to obtain the planar area size of the physical plane, so that it can set the planar area size of the plane to be displayed accordingly and render only the partial reflection content within the plane to be displayed. In this way, by matching the rendering area of the reflection content with the physical plane in the real space, the effect the user sees through the head-mounted display device is that the reflection content is superimposed and displayed only on the physical plane.
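The boundary test behind this clipping step can be sketched as follows; a real renderer would clip triangles (e.g. with a stencil mask) rather than filter points, so this per-point version is only an illustration, with hypothetical names:

```python
def clip_to_plane(points, x_min, x_max, z_min, z_max):
    # Keep only the reflection points lying inside the rectangular area
    # boundary of the plane to be displayed; points outside the boundary
    # are not rendered.
    return [(x, y, z) for (x, y, z) in points
            if x_min <= x <= x_max and z_min <= z <= z_max]

# The second point falls outside the 1 m x 1 m plane and is discarded.
points = [(0.5, 0.0, 0.5), (2.0, 0.0, 0.5)]
inside = clip_to_plane(points, 0.0, 1.0, 0.0, 1.0)
```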
Step S242: rendering the virtual content and the partial reflection content according to the rendering position.
After the terminal device obtains the partial reflection content within the plane to be displayed, the virtual content can be rendered above the plane to be displayed and the partial reflection content rendered in the plane to be displayed according to their rendering positions. For example, referring to fig. 12, the virtual content 310 is rendered above the plane to be displayed and the partial reflection content 340 is rendered in the plane to be displayed.
Step S250: displaying the virtual content and the reflection content.
It can be understood that after rendering the virtual content and the reflection content, the terminal device may obtain their display data, which may include the RGB value of each pixel point in the display image and the corresponding pixel coordinates. The terminal device may generate the display image from the display data and project it onto the display lens, so as to display the virtual content and the reflection content. Through the display lens of the head-mounted display device, the user can see the virtual content and the reflection content displayed superimposed on the real world, achieving the effect of augmented reality.
In some embodiments, when the terminal device renders the virtual content and the reflection content according to the above steps S241 and S242, it may display the virtual content and the partial reflection content according to their display data. The display data of the partial reflection content may include the RGB value of each pixel point of the partial reflection content in the display image, the corresponding pixel coordinates, and the like. The terminal device can generate the display image from the display data of the partial reflection content and project it onto the display lens, thereby displaying the virtual content and the partial reflection content.
For example, referring to fig. 15, the user can see the virtual content 310 and the partial reflection content 340 displayed superimposed on the real space through the worn head-mounted display device, which embodies the augmented-reality display of the virtual content and improves its display effect.
Further, in some embodiments, the plane to be displayed may be superimposed on a physical plane of the environment where the target marker is located, that is, the plane to be displayed may overlap with a physical plane in the real space. When the terminal device renders the virtual content above the plane to be displayed and the reflection content within it, then, because the plane to be displayed coincides with the physical plane in the real space, the user wearing the head-mounted display device sees the virtual content superimposed and displayed above the physical plane and the reflection content superimposed and displayed within the physical plane. This achieves the augmented reality effect, creates for the user the illusion that the physical plane produces a mirrored virtual image of the virtual content, and improves the sense of realism.
As an embodiment, when the target marker is provided on a marker plate, the plane to be displayed may be superimposed and displayed on the marker plate so that the two overlap. In one embodiment, the marker plate 201 may include an optical filter disposed over the target marker, and the head-mounted display device may capture an image of the marker plate via an infrared camera. The optical filter of the marker plate can reflect real objects, and when the virtual content is superimposed and displayed on the marker plate, the reflection content corresponding to the virtual content is superimposed and displayed within the marker plate 201, creating for the user the illusion that the marker plate produces a mirrored virtual image of the virtual content and improving the sense of realism. For example, referring to fig. 13, a user scanning the marker 200 on the marker plate 201 in real time through the worn head-mounted display device can see the virtual content 310 displayed above the marker plate 201 in the real space while the reflection content 320 is displayed on the marker plate 201; at this time, the plane to be displayed 330 coincides with the plane of the marker plate 201 in the real space.
As another embodiment, the plane to be displayed may not overlap with the plane where the target marker is located, i.e. the plane to be displayed may be superimposed and displayed on any physical plane in the real space (e.g. a desktop, the ground, etc.). For example, referring to fig. 14, when a user scans the marker 200 on the ground in real time through the worn head-mounted display device, the user can see the virtual content 310 displayed on the desktop in the real space while the reflection content 320 is displayed on the desktop; at this time, the plane to be displayed 330 coincides with the desktop in the real space. The display position of the plane to be displayed in the real space may be preset or obtained by real-time scanning; for example, after the terminal device obtains the position and posture information of the marker, it may scan the object planes near the marker and select one of them for displaying the plane to be displayed, obtain the rendering position of the plane to be displayed in the virtual space according to the position and posture information of the selected object plane relative to the marker, and render and display the plane to be displayed accordingly, so that it is superimposed and displayed on the selected object plane.
It will be appreciated that when the plane to be displayed is superimposed on a physical plane of the environment where the target marker is located, the plane to be displayed itself may be hidden, to enhance the sense of realism of the virtual content and the reflection content superimposed on the plane displayed in the real space.
Further, in some embodiments, the terminal device may display different reflection content according to the different materials of the plane. Thus, in some embodiments, after acquiring the reflection content of the virtual content relative to the designated plane, the virtual content display method may further include:
performing corresponding image processing on the reflection content according to material reflection parameters, where the material reflection parameters include at least one of the material reflection parameters of the plane to be displayed and the material reflection parameters of a physical plane of the environment where the target marker is located.
In some embodiments, the terminal device may perform corresponding image processing on the reflection content according to the material reflection parameters of the plane to be displayed, so that the displayed reflection content conforms to the reflection characteristics of the plane to be displayed.
The material reflection parameters include a reflectivity and a material texture map, where the reflectivity is the ratio of the rendering brightness of the reflection content to the rendering brightness of the virtual content, and the material texture map is a planar texture pattern. The reflectivity ranges from 0 to 1 and may be set reasonably according to the texture map of the plane to be displayed. For example, when the plane to be displayed is a mirror material, the reflectivity may be set to 1; when the plane to be displayed is a water-surface material, the reflectivity may be set to 0.85. It will be appreciated that the greater the reflectivity, the clearer the reflection content; the smaller the reflectivity, the more blurred the reflection content.
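The reflectivity parameter defined above, the ratio of the reflection's rendering brightness to the virtual content's brightness, can be sketched as a simple clamped scaling. The mirror and water values are taken from the description; the wood value and all names are purely illustrative assumptions:

```python
def reflection_brightness(content_brightness, reflectivity):
    """Reflectivity is the ratio of the reflection's rendering brightness to
    the virtual content's rendering brightness, clamped to [0, 1]."""
    r = min(max(reflectivity, 0.0), 1.0)
    return content_brightness * r

# Illustrative material presets: mirror and water per the description,
# wood is a hypothetical value for a dull surface.
MATERIAL_REFLECTIVITY = {"mirror": 1.0, "water": 0.85, "wood": 0.3}
```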
In some embodiments, the image processing may be adjusting the display transparency of the reflection content to a specified transparency to enhance its realism. The specified transparency takes a value between 0 and 1 and may be set reasonably according to the material reflection parameters of the plane to be displayed. For example, when the plane to be displayed is a wood material, the specified transparency may be set to 0.5, i.e., 50% transparent, so that the displayed reflection content appears blurred; when the plane to be displayed is a mirror material, the specified transparency may be set to 1, i.e., completely opaque, so that the displayed reflection content is very clear.
In other embodiments, the image processing may be adjusting the color of the reflection content to a specified color, so as to reduce the clash between the color of the reflection content and the color of the plane to be displayed and improve the display effect of the reflection content. The specified color may be set reasonably according to the material reflection parameters of the plane to be displayed. For example, when the plane to be displayed is a wood material, the specified color may be similar to the color of wood, such as light brown; when the plane to be displayed is a metal material, the specified color may be similar to the color of metal, such as silvery white, thereby improving the realism of the reflection content displayed on the plane to be displayed.
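The color adjustment step can be sketched as blending each reflection pixel toward the specified color. The blend-strength parameter and the function name are assumptions added for illustration; they are not part of the description:

```python
def tint_reflection(pixel_rgb, target_rgb, strength):
    """Blend one reflection pixel toward the specified color so that the
    reflection's hue approaches that of the display plane's material.
    strength = 0 keeps the original pixel, strength = 1 replaces it."""
    s = min(max(strength, 0.0), 1.0)
    return tuple(round((1 - s) * c + s * t)
                 for c, t in zip(pixel_rgb, target_rgb))
```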
For example, referring to fig. 15, the plane to be displayed 330 is a wood material, so the reflection content 320 appears blurred and its color is similar to that of the plane to be displayed 330.
In other embodiments, the terminal device may perform corresponding image processing on the reflection content according to the material reflection parameters of a physical plane of the environment where the target marker is located, i.e., according to the material reflection parameters of a physical plane in real space, so that the reflection content superimposed on the physical plane satisfies the reflection characteristics of that plane, enhancing the realism of the reflection content. It can be understood that the specific steps of performing this image processing according to the material reflection parameters of the physical plane in real space may refer to the corresponding steps in the above embodiments, and are not described here again.
In some embodiments, the terminal device may capture, in real time via the image capture device, an image of a physical plane in real space and recognize the image to obtain the material reflection parameters of the physical plane, so that the terminal device can perform corresponding image processing on the reflection content according to those parameters.
In addition, since the plane to be displayed can coincide with a physical plane, in some embodiments the terminal device may further set the material reflection parameters of the plane to be displayed according to the plane material of the physical plane, so that the plane to be displayed has the same material and the same reflection characteristics as the physical plane. In this way, the reflection content can be rendered onto a virtual plane identical to the physical plane, so that when the reflection content is superimposed and displayed on the physical plane in real space, it produces the visual effect of the physical plane itself reflecting the virtual content, improving the realism of the reflection content.
It can be understood that the terminal device can render the reflection content according to the texture map of the plane to be displayed, or according to the texture map of the physical plane in real space, so that the reflection content carries the corresponding plane texture, improving the realism of the reflection display.
Further, in some embodiments, the terminal device may perform gradual attenuation processing on the reflection content: by setting the height at which the attenuation of the reflection content begins and the height at which it ends (where the reflection completely disappears), the reflectivity of the reflection content gradually decreases until it is attenuated to zero. As an embodiment, the attenuation start height may be set to zero, i.e., the reflection content begins to attenuate at its base, and the attenuation end height may be set to 150 pixels, i.e., at that height the reflectivity has been attenuated to zero and the reflection disappears completely. In this way, by setting the start height and end height of the attenuation, a gradual fade effect of the reflection content can be realized, improving the display effect of the reflection content.
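The gradual attenuation just described, with a start height of 0 and an end height of 150 pixels, can be sketched as a per-height linear decay factor applied to the reflectivity. A linear ramp is an assumption; the description only specifies the two heights, not the decay curve:

```python
def attenuation_factor(height_px, start_px=0, end_px=150):
    """Linearly decay the reflection's reflectivity from full strength at
    start_px down to zero at end_px, where the reflection disappears."""
    if height_px <= start_px:
        return 1.0
    if height_px >= end_px:
        return 0.0
    return 1.0 - (height_px - start_px) / float(end_px - start_px)
```

Multiplying each scanline of the reflection by this factor yields the fade-out toward the reflection's far end.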
Further, in some embodiments, in order to prevent the edge of the displayed reflection content from fitting imperfectly against the plane to be displayed or against the physical plane, the edge of the reflection content may be blurred. Accordingly, after the virtual content and the reflection content are rendered according to the rendering position, the virtual content display method may further include:
and adjusting the color of the contour edge area of the plane to be displayed to a preset color, wherein the brightness value of each color component of the preset color is lower than a first threshold value.
The first threshold is the maximum brightness value of each color component at which the user cannot observe the virtual content through the head-mounted display device. That is, the terminal device adjusts the color of the contour edge area of the plane to be displayed to the preset color, so that the user cannot observe the reflection content in the contour edge area through the head-mounted display device, achieving an edge-blurring effect for the reflection content.
Owing to the optical characteristics of the head-mounted display device, black virtual content is not reflected by the lens and is therefore not presented to the user; accordingly, the first threshold may be set to a brightness of 13, i.e., 95% black, or to a brightness of 0, i.e., pure black.
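The edge treatment above can be sketched as clamping each color component in the contour edge region below the first threshold (13, per the description), so that an optical see-through display no longer presents those pixels. The function name is an assumption:

```python
def edge_fade_color(rgb, first_threshold=13):
    """Clamp each color component of a contour-edge pixel below the first
    threshold; on an optical see-through HMD, near-black pixels are not
    projected, so the reflection's edge effectively disappears."""
    return tuple(min(c, first_threshold) for c in rgb)
```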
Further, in other embodiments, after the displaying the virtual content and the inverted image content, the virtual content displaying method may further include:
when the change of the virtual content relative to the display position and the posture information of the plane to be displayed is detected, updating the displayed inverted image content according to the changed display position and posture information.
In some embodiments, the user may change the display position of the virtual content through a controller connected to the terminal device. Since a reflection changes as the object moves relative to the mirror surface, the terminal device may acquire the display position and posture information between the virtual content and the plane to be displayed in real time, so as to update the displayed reflection content according to the changed display position and posture information when a change is detected.
As an implementation, when the terminal device detects that the relative height between the virtual content and the plane to be displayed becomes larger, the rendering position of the reflection content on the plane to be displayed may be adjusted toward the edge area of the plane; when it detects that the relative height becomes smaller, the rendering position may be adjusted toward the center area of the plane. For example, referring to figs. 13 and 16, as the relative height between the virtual content and the plane to be displayed becomes smaller, the reflection content moves toward the center area of the plane to be displayed.
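The position adjustment above can be sketched as an anchor offset that grows with the relative height: higher content pushes the reflection toward the plane's edge, lower content pulls it back toward the center. The gain factor and all names are hypothetical tuning choices, not taken from the description:

```python
def reflection_anchor(footprint_xy, edge_dir_xy, relative_height, gain=0.5):
    """Offset the reflection's rendering position on the display plane
    along the direction toward the plane's edge, in proportion to the
    virtual content's height above the plane. `gain` is a hypothetical
    tuning factor."""
    fx, fy = footprint_xy
    dx, dy = edge_dir_xy
    return (fx + gain * relative_height * dx,
            fy + gain * relative_height * dy)
```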
Further, in some embodiments, the terminal device may display different reflection content according to the user's viewing direction. Referring again to fig. 6, after displaying the virtual content and the reflection content, the virtual content display method may further include:
step S260: when the position and posture information of the target marker relative to the terminal device are detected to change, the displayed virtual content and the inverted image content are updated according to the changed position and posture information.
It can be understood that after the virtual content and the inverted image content are displayed according to the position and the posture information of the target marker relative to the terminal device, the relative position and the rotation angle between the terminal device and the target marker can be detected in real time, so that the displayed virtual content and inverted image content are updated when the position and the posture information of the target marker relative to the terminal device are changed.
In some embodiments, the position of the target marker may be fixed while the position of the terminal device changes; for example, the user wearing the head-mounted display device moves forward, so that the terminal device detects a change in the position and posture of the target marker relative to the terminal device. Alternatively, the position of the terminal device may be fixed while the target marker moves; for example, the user moves the target marker to the left. The positions of both the target marker and the terminal device may also change; for example, the user wearing the head-mounted display device approaches the target marker while moving the marker in front of himself. In each case, the terminal device detects a change in the position and posture of the target marker relative to the terminal device.
In some embodiments, the terminal device may change display states of the virtual content, such as the display angle, display size, and display position, according to the changed position and posture of the target marker relative to the terminal device, so as to update the displayed virtual content. It can be understood that when the position and posture of the target marker relative to the terminal device change, the relative position and rotation angle between the camera view of the terminal device and the target marker also change. The terminal device can therefore redetermine the display angle, display size, display position, and other display states of the virtual content according to this relative position and rotation angle, and then re-render the virtual content according to the redetermined display state. Thus, when the user wears the head-mounted display device and scans the target marker from different viewing angles, different display effects of the virtual content can be seen. For example, when the terminal device is above the virtual content, the top of the virtual content is displayed; when the terminal device is beside the virtual content, the side of the virtual content is displayed.
Similarly, the terminal device can change display states of the reflection content, such as the display angle, display size, and display position, according to the changed position and posture of the target marker relative to the terminal device, so as to update the displayed reflection content. It can be understood that the terminal device can obtain these display states according to the relative position and rotation angle between the camera view of the terminal device and the target marker, and then display the reflection content according to the redetermined display state. Thus, when the user wears the head-mounted display device and scans the target marker from different viewing angles, the displayed reflection content presents different display effects. For example, referring to fig. 17, when the camera view of the terminal device is directly above the virtual content 310, i.e., the user looks down from above the virtual content, the displayed reflection content 320 is smaller; referring to fig. 4, when the camera view of the terminal device is at an oblique angle to the virtual content (virtual character 304, virtual animal 306), i.e., the user looks from the side of the virtual content, the displayed reflection content (virtual character reflection 305 and virtual animal reflection 307) is elongated.
As an implementation, the terminal device can convert the position and posture information of the target marker relative to the terminal device in real space into the positional relationship between a virtual camera A and the target marker in the virtual space, where a virtual camera is a camera simulating the human-eye viewpoint in a 3D software system. Display states of the virtual content, such as the display angle, display size, and display position, can then be determined according to the positional relationship between virtual camera A and the target marker, and the display state of the reflection content can be determined in turn from the display state of the virtual content.
As an implementation, by placing a virtual camera B at the bottom of the virtual content, the terminal device may determine the position and view direction of virtual camera B according to the positional relationship between virtual camera A and the target marker, and then determine display states of the reflection content, such as the display angle, display size, and display position, according to the content captured by virtual camera B. The terminal device can thus render and display the virtual content and the reflection content in the virtual space according to their respective display states.
It can be understood that when the position and posture information of the target marker relative to the terminal device changes in real space, the viewing angle, size, and position of the content captured by virtual camera A and virtual camera B also change, so the rendered virtual content and reflection content change accordingly. The terminal device can re-render and display the virtual content and the reflection content according to the changed content captured by virtual camera A and virtual camera B, thereby updating them. Therefore, when a change in the position and posture of the target marker relative to the terminal device is detected, the terminal device can redetermine the display states of the virtual content and the reflection content according to the changed position and posture information in the above manner, so as to update the displayed virtual content and reflection content.
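The two-camera arrangement described above resembles the standard planar-reflection technique of mirroring the main camera across the reflection plane: what camera A would see as a reflection equals what the mirrored camera B sees directly. A minimal sketch for a horizontal reflection plane y = plane_y; names are illustrative:

```python
def mirror_camera(position, forward, plane_y=0.0):
    """Derive virtual camera B by reflecting virtual camera A's position
    and view direction across the horizontal plane y = plane_y."""
    px, py, pz = position
    fx, fy, fz = forward
    b_position = (px, 2.0 * plane_y - py, pz)  # mirror height about plane
    b_forward = (fx, -fy, fz)                  # flip vertical component
    return b_position, b_forward
```

Rendering the scene from camera B and compositing the result onto the display plane yields the reflection content as seen from camera A's viewpoint.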
According to the virtual content display method provided by the embodiment of the application, after the position and posture information of the target mark relative to the terminal equipment is obtained, the virtual content and the inverted image content are displayed in the virtual space according to the ray path direction of the ambient light source and the plane material of the virtual content display plane, so that a user can observe the display effect that the virtual content and the inverted image content are overlapped on a real scene.
Referring to fig. 18, a block diagram of a virtual content display apparatus 500 according to an embodiment of the present application is shown, and the apparatus may include: an image recognition module 510, a content acquisition module 520, a location acquisition module 530, a rendering module 540, and a display module 550. The image recognition module 510 is configured to recognize a target marker, and obtain position and posture information of the target marker relative to the terminal device; the content obtaining module 520 is configured to obtain, based on the virtual content to be displayed, a reflection content of the virtual content with respect to a designated plane, where the designated plane is a horizontal plane where a bottom of the virtual content in the virtual space is located; the position obtaining module 530 is configured to obtain a rendering position of the virtual content and the inverted image content in the virtual space according to the position and the gesture information; the rendering module 540 is used for rendering virtual content and back image content according to the rendering position; the display module 550 is used for displaying virtual content and inverted image content.
In some embodiments, the content acquisition module 520 may be specifically configured to: and taking the appointed plane as a specular reflection surface, and acquiring the reflection content of the virtual content relative to the appointed plane by utilizing a specular reflection matrix.
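The specular reflection matrix used by the content acquisition module is a standard construction: a Householder reflection about the specified plane, expressed in homogeneous coordinates. A sketch under the assumption that the plane is given by a unit normal and a point on it (the numpy dependency and names are illustrative):

```python
import numpy as np

def specular_reflection_matrix(normal, point_on_plane):
    """Build the 4x4 homogeneous reflection matrix about the plane through
    `point_on_plane` with normal `normal`; applying it to the virtual
    content's vertices yields the reflection content."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, point_on_plane)            # plane equation: n.x + d = 0
    m = np.eye(4)
    m[:3, :3] = np.eye(3) - 2.0 * np.outer(n, n)  # Householder part
    m[:3, 3] = -2.0 * d * n                       # offset for planes off the origin
    return m
```

For the specified plane at the bottom of the virtual content (a horizontal plane), `normal` would be the up axis and `point_on_plane` any point at the content's base.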
In some embodiments, the location acquisition module 530 may further include: a direction acquisition unit and a position determination unit. The direction acquisition unit is used for determining the display direction of the reflection content according to the light path direction of the light source of the environment where the target marker is located; the position determining unit is used for determining a first rendering position of the virtual content according to the position and the gesture information and determining a second rendering position of the inverted image content according to the position, the gesture information and the display direction.
In some embodiments, rendering module 540 may further comprise: and rendering the display unit. The rendering display unit is used for rendering virtual content and back image content according to the rendering position, wherein the back image content is rendered in a plane to be displayed, and the plane to be displayed is a back image surface corresponding to the virtual content in the virtual space.
In some implementations, the rendering display unit may be specifically configured to: determining partial back image content in a plane to be displayed in the back image content according to the rendering position; according to the rendering position, rendering virtual content and partial back image content; the display module 550 may be specifically configured to: and displaying the virtual content and part of the inverted image content.
In some embodiments, the virtual content display apparatus 500 may further include: and an image processing module. The image processing module is used for carrying out corresponding image processing on the reflection content according to material reflection parameters, wherein the material reflection parameters comprise at least one of material reflection parameters of a plane to be displayed and material reflection parameters of a physical plane of an environment where the target marker is located.
In some embodiments, the virtual content display apparatus 500 may further include: and a contour processing module. The contour processing module is used for adjusting the colors of the contour edge area of the plane to be displayed to preset colors, and the brightness value of each color component of the preset colors is lower than a first threshold value.
In some embodiments, the virtual content display apparatus 500 may further include: and a reflection updating module. The reflection updating module is used for updating the displayed reflection content according to the changed display position and posture information when the change of the virtual content relative to the display position and posture information of the plane to be displayed is detected.
In some embodiments, the plane to be displayed may be superimposed on a physical plane of the environment in which the target marker is located.
In some embodiments, the virtual content display apparatus 500 may further include: and a content updating module. The content updating module is used for updating the displayed virtual content and the inverted image content according to the changed position and posture information when the position and posture information of the target marker relative to the terminal equipment is detected to be changed.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein.
In the several embodiments provided by the present application, the illustrated or discussed coupling or direct coupling or communication connection of the modules to each other may be through some interfaces, indirect coupling or communication connection of devices or modules, electrical, mechanical, or other forms.
In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
In summary, the method and the device for displaying virtual content provided in the embodiments of the present application are applied to a terminal device, by identifying a target marker, obtaining position and posture information of the target marker relative to the terminal device, and obtaining inverted image content of the virtual content relative to a designated plane based on virtual content to be displayed, where the designated plane is a horizontal plane where the bottom of the virtual content in the virtual space is located, then obtaining rendering positions of the virtual content and the inverted image content in the virtual space according to the position and posture information, and finally rendering the virtual content and the inverted image content according to the rendering positions, and displaying the virtual content and the inverted image content, so that a user can observe a display effect that the virtual content and the inverted image content are superimposed on a real scene, and a display effect of the virtual content is improved.
Referring to fig. 19, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a smart phone, a tablet computer, a head mounted display device, or the like capable of running an application program. The terminal device 100 in the present application may include one or more of the following components: processor 110, memory 120, image capture device 130, and one or more application programs, wherein the one or more application programs may be stored in memory 120 and configured to be executed by the one or more processors 110, the one or more program(s) configured to perform the methods as described in the foregoing method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall terminal device 100 using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Alternatively, the processor 110 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU), a modem, etc. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; the modem handles wireless communication. It will be appreciated that the modem may not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The Memory 120 may include a random access Memory (Random Access Memory, RAM) or a Read-Only Memory (Read-Only Memory). Memory 120 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 120 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described below, etc. The storage data area may also store data created by the terminal device 100 in use, etc.
In an embodiment of the present application, the image capturing device 130 is configured to capture an image of a physical object and capture a scene image of a target scene. The image capturing device 130 may be an infrared camera or a color camera, and the specific camera type is not limited in the embodiment of the present application.
Referring to fig. 20, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 800 comprises a non-volatile computer readable medium (non-transitory computer-readable storage medium). The computer readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. Program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present application, not for limiting them; although the application has been described in detail with reference to the foregoing embodiments, it will be appreciated by those of ordinary skill in the art that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (11)

1. A virtual content display method, characterized by being applied to a terminal device, the method comprising:
identifying a target marker and acquiring position and posture information of the target marker relative to the terminal equipment;
obtaining virtual content to be displayed, and obtaining inverted image content of the virtual content relative to a designated plane, wherein the designated plane is a horizontal plane in which the bottom of the virtual content in a virtual space is positioned;
determining the display direction of the reflection content according to the light path direction of a light source of the environment where the target marker is located;
determining a first rendering position of the virtual content according to the position and the gesture information, and determining a second rendering position of the inverted image content according to the position, the gesture information and the display direction;
according to the rendering position, rendering the virtual content and the back-image content, wherein the virtual content is rendered above a plane to be displayed, the back-image content is rendered in the plane to be displayed, and the plane to be displayed is a back-image surface corresponding to the virtual content in a virtual space and is overlapped and displayed on a plane in a real space;
and displaying the virtual content and the inverted image content.
2. The method of claim 1, wherein the obtaining the inverted image content of the virtual content relative to a specified plane comprises:
and taking the appointed plane as a specular reflection surface, and acquiring the reflection content of the virtual content relative to the appointed plane by utilizing a specular reflection matrix.
3. The method of claim 1, wherein the rendering the virtual content and the reflection content according to the rendering location comprises:
determining partial back image content in the plane to be displayed in the back image content according to the rendering position;
rendering the virtual content and the partial reflection content according to the rendering position;
the displaying the virtual content and the inverted image content includes:
and displaying the virtual content and the partial reflection content.
4. The method of claim 1, wherein after the capturing of the inverted content of the virtual content relative to the specified plane, the method further comprises:
and carrying out corresponding image processing on the inverted image content according to material reflection parameters, wherein the material reflection parameters comprise at least one of material reflection parameters of the plane to be displayed and material reflection parameters of a physical plane of the environment where the target marker is located.
5. The method of claim 1, wherein after the rendering of the virtual content and the inverted image content according to the rendering location, the method further comprises:
and adjusting the color of the contour edge area of the plane to be displayed to a preset color, wherein the brightness value of each color component of the preset color is lower than a first threshold value.
6. The method of claim 1, wherein after the displaying the virtual content and the reflection content, the method further comprises:
when a change in the display position and posture information of the virtual content relative to the plane to be displayed is detected, updating the displayed reflection content according to the changed display position and posture information.
7. The method according to claim 1, wherein the method further comprises:
displaying the plane to be displayed superimposed on a physical plane of the environment where the target marker is located.
8. The method of any of claims 1-7, wherein after the displaying the virtual content and the reflection content, the method further comprises:
when a change in the position and posture information of the target marker relative to the terminal device is detected, updating the displayed virtual content and reflection content according to the changed position and posture information.
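The update step of claim 8 amounts to recomposing the content's world transform from the freshly tracked marker pose each time tracking reports a change. A minimal sketch with 4x4 row-major matrices as nested lists; the names `marker_pose` and `content_offset` are illustrative assumptions, not terms from the patent:

```python
def mat_mul(a, b):
    """Multiply two 4x4 row-major matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def update_render_pose(marker_pose, content_offset):
    """Recompute the content's world transform after the marker pose changes.

    marker_pose: 4x4 pose of the target marker relative to the terminal device.
    content_offset: fixed 4x4 transform of the virtual content relative to
    the marker (e.g. placing it above the plane to be displayed).
    """
    return mat_mul(marker_pose, content_offset)
```

The same composition, with the reflection matrix applied to the content geometry, repositions the reflection content in the same update.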
9. A virtual content display apparatus, applied to a terminal device, comprising:
an image recognition module, for recognizing a target marker and acquiring position and posture information of the target marker relative to the terminal device;
a content acquisition module, for acquiring the reflection content of the virtual content relative to a specified plane based on the virtual content to be displayed, wherein the specified plane is the horizontal plane where the bottom of the virtual content in the virtual space is located;
a position acquisition module, for determining the display direction of the reflection content according to the light path direction of the light source of the environment where the target marker is located, determining a first rendering position of the virtual content according to the position and posture information, and determining a second rendering position of the reflection content according to the position and posture information and the display direction;
a rendering module, for rendering the virtual content and the reflection content according to the rendering position, wherein the virtual content is rendered above a plane to be displayed, the reflection content is rendered within the plane to be displayed, and the plane to be displayed is a reflection surface corresponding to the virtual content in a virtual space that is displayed superimposed on a plane in a real space; and
a display module, for displaying the virtual content and the reflection content.
10. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-8.
11. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a program code, which is callable by a processor for executing the method according to any one of claims 1-8.
CN201910082681.3A 2018-12-29 2019-01-28 Virtual content display method, device, terminal equipment and storage medium Active CN111563966B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910082681.3A CN111563966B (en) 2019-01-28 2019-01-28 Virtual content display method, device, terminal equipment and storage medium
PCT/CN2019/129222 WO2020135719A1 (en) 2018-12-29 2019-12-27 Virtual content interaction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910082681.3A CN111563966B (en) 2019-01-28 2019-01-28 Virtual content display method, device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111563966A CN111563966A (en) 2020-08-21
CN111563966B true CN111563966B (en) 2023-08-29

Family

ID=72074047

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910082681.3A Active CN111563966B (en) 2018-12-29 2019-01-28 Virtual content display method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111563966B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113674435A (en) * 2021-07-27 2021-11-19 阿里巴巴新加坡控股有限公司 Image processing method, electronic map display method and device and electronic equipment
CN114245015A (en) * 2021-12-21 2022-03-25 维沃移动通信有限公司 Shooting prompting method and device, electronic equipment and medium
CN116778114A (en) * 2022-03-07 2023-09-19 北京百度网讯科技有限公司 Method for operating component, electronic device, storage medium and program product

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009020818A (en) * 2007-07-13 2009-01-29 Konami Digital Entertainment:Kk Image generation device, image generation method and program
WO2013040983A1 (en) * 2011-09-20 2013-03-28 深圳Tcl新技术有限公司 Opengl-based inverted image display processing device and method
TWI572846B (en) * 2015-09-18 2017-03-01 國立交通大學 3d depth estimation system and 3d depth estimation method with omni-directional images
CN106652007A (en) * 2016-12-23 2017-05-10 网易(杭州)网络有限公司 Virtual sea surface rendering method and system
CN107330966A (en) * 2017-06-21 2017-11-07 杭州群核信息技术有限公司 A kind of rendering intent and device


Also Published As

Publication number Publication date
CN111563966A (en) 2020-08-21

Similar Documents

Publication Publication Date Title
CN109118569B (en) Rendering method and device based on three-dimensional model
US11694392B2 (en) Environment synthesis for lighting an object
US10223834B2 (en) System and method for immersive and interactive multimedia generation
CN111563966B (en) Virtual content display method, device, terminal equipment and storage medium
CN108780578A (en) Direct light compensation technique for augmented reality system
JP2009020614A (en) Marker unit to be used for augmented reality system, augmented reality system, marker unit creation support system, and marker unit creation support program
US11087545B2 (en) Augmented reality method for displaying virtual object and terminal device therefor
US11967094B2 (en) Detecting device, information processing device, detecting method, and information processing program
US11699259B2 (en) Stylized image painting
CN111813214B (en) Virtual content processing method and device, terminal equipment and storage medium
US20190206109A1 (en) Method, apparatus and device for generating live wallpaper and medium
US20220277512A1 (en) Generation apparatus, generation method, system, and storage medium
US11589024B2 (en) Multi-dimensional rendering
KR102107706B1 (en) Method and apparatus for processing image
CN111651031B (en) Virtual content display method and device, terminal equipment and storage medium
CN111399630B (en) Virtual content interaction method and device, terminal equipment and storage medium
CN111913564B (en) Virtual content control method, device, system, terminal equipment and storage medium
CN111462294B (en) Image processing method, electronic equipment and computer readable storage medium
US10902669B2 (en) Method for estimating light for augmented reality and electronic device thereof
US8994742B2 (en) Systems and methods for seam resolution
CN108921097A (en) Human eye visual angle detection method, device and computer readable storage medium
CN111399631B (en) Virtual content display method and device, terminal equipment and storage medium
US11138807B1 (en) Detection of test object for virtual superimposition
RU2778288C1 (en) Method and apparatus for determining the illumination of an image of the face, apparatus, and data storage medium
JP2013152683A (en) Image processing apparatus, image processing method and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant