CN111399630B - Virtual content interaction method and device, terminal equipment and storage medium - Google Patents

Virtual content interaction method and device, terminal equipment and storage medium

Info

Publication number
CN111399630B
CN111399630B (application CN201910005562.8A)
Authority
CN
China
Prior art keywords
display
virtual content
terminal device
content
head
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910005562.8A
Other languages
Chinese (zh)
Other versions
CN111399630A (en)
Inventor
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority: CN201910005562.8A; PCT/CN2019/130646 (published as WO2020140905A1)
Publication of CN111399630A
Application granted
Publication of CN111399630B
Legal status: Active

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/006Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

An embodiment of the present application discloses a virtual content interaction method and apparatus, a terminal device, and a storage medium. The virtual content interaction method includes: acquiring first relative spatial position information between a head-mounted display device and the terminal device; determining a display position of virtual content according to the first relative spatial position information; rendering the virtual content at the display position and obtaining display data of the virtual content; and transmitting the display data to the head-mounted display device, where the display data instructs the head-mounted display device to display the virtual content. The method realizes interaction between the terminal device and the head-mounted display device and displays virtual content according to the spatial position of the terminal device.

Description

Virtual content interaction method and device, terminal equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a virtual content interaction method, apparatus, terminal device, and storage medium.
Background
With the development of science and technology, machine and information intelligence are increasingly widespread, and technologies that identify user images through image acquisition devices, such as machine vision, to realize human-computer interaction are becoming ever more important. Augmented Reality (AR) uses computer graphics and visualization technology to construct virtual content that does not exist in the real environment, accurately fuses that virtual content into the real environment by means of image recognition and positioning technology, merges the virtual content and the real environment into a whole through a display device, and presents the result to the user for a realistic sensory experience. The first technical problem augmented reality must solve is how to fuse virtual content into the real world accurately, that is, how to make virtual content appear at the correct position in the real scene with the correct angular pose, so as to produce a strong sense of visual realism. Interaction with virtual content is therefore an important research direction for augmented reality and mixed reality.
Disclosure of Invention
Embodiments of the present application provide a virtual content interaction method and apparatus, a terminal device, and a storage medium, which can display virtual content according to the spatial position of the terminal device and realize interaction between the terminal device and a head-mounted display device.
In a first aspect, an embodiment of the present application provides a virtual content interaction method applied to a terminal device, where the terminal device is communicatively connected to an external head-mounted display device. The method includes: acquiring first relative spatial position information between the head-mounted display device and the terminal device; determining a display position of virtual content according to the first relative spatial position information; rendering the virtual content at the display position and obtaining display data of the virtual content; and transmitting the display data to the head-mounted display device, where the display data instructs the head-mounted display device to display the virtual content.
In a second aspect, an embodiment of the present application provides a virtual content interaction apparatus applied to a terminal device, where the terminal device is communicatively connected to an external head-mounted display device. The apparatus includes a position acquisition module, a position determination module, a content rendering module, and a data transmission module. The position acquisition module acquires first relative spatial position information between the head-mounted display device and the terminal device; the position determination module determines a display position of virtual content according to the first relative spatial position information; the content rendering module renders the virtual content at the display position and obtains display data of the virtual content; and the data transmission module transmits the display data to the head-mounted display device, where the display data instructs the head-mounted display device to display the virtual content.
In a third aspect, an embodiment of the present application provides a virtual content interaction method applied to a head-mounted display device, where the head-mounted display device is communicatively connected to a terminal device. The method includes: collecting a marker image containing a marker, where the marker is disposed on the terminal device; transmitting the marker image to the terminal device, where the marker image instructs the terminal device to identify the marker in the image and acquire first relative spatial position information between the head-mounted display device and the terminal device; and receiving display data sent by the terminal device and displaying virtual content according to the display data, where the display data is rendered by the terminal device according to the first relative spatial position information.
In a fourth aspect, an embodiment of the present application provides a display system including a terminal device and an external head-mounted display device that are communicatively connected. The terminal device acquires first relative spatial position information between the head-mounted display device and the terminal device, determines a display position of virtual content according to the first relative spatial position information, renders the virtual content at the display position, obtains display data of the virtual content, and transmits the display data to the head-mounted display device, where the display data instructs the head-mounted display device to display the virtual content. The head-mounted display device receives the display data sent by the terminal device and displays the virtual content according to the display data.
In a fifth aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; and one or more applications, where the one or more applications are stored in the memory and configured to be executed by the one or more processors to perform the virtual content interaction method provided in the first aspect.
In a sixth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, where the program code can be called by a processor to execute the virtual content interaction method provided in the first aspect.
In the solution provided by this application, first relative spatial position information between the head-mounted display device and the terminal device is acquired, a display position of virtual content is determined according to that information, the virtual content is rendered at the display position to obtain display data, and the display data is transmitted to the head-mounted display device, which displays the virtual content accordingly. The virtual content is thus displayed in the virtual space according to the spatial position of the terminal device, so the user observes the virtual content superimposed on the real world, realizing interaction between the terminal device and the head-mounted display device.
Drawings
To explain the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application scenario suitable for use in an embodiment of the present application.
Fig. 2 shows a schematic diagram of another application scenario applicable to the embodiments of the present application.
Fig. 3 shows a schematic diagram of another application scenario applicable to the embodiment of the present application.
FIG. 4 shows a flow diagram of a virtual content interaction method according to one embodiment of the present application.
Fig. 5 shows a schematic diagram of a display effect according to an embodiment of the present application.
Fig. 6 shows a schematic diagram of another display effect according to an embodiment of the present application.
Fig. 7 shows a schematic diagram of still another display effect according to an embodiment of the present application.
FIG. 8 shows a flow diagram of a virtual content interaction method according to another embodiment of the present application.
Fig. 9 shows a flowchart of step S210 in the virtual content interaction method according to the embodiment of the present application.
Fig. 10 shows a flowchart of step S220 in the virtual content interaction method according to the embodiment of the present application.
Fig. 11 shows a schematic diagram of a display effect according to an embodiment of the present application.
Fig. 12 shows a schematic diagram of another display effect according to an embodiment of the present application.
Fig. 13 shows a schematic diagram of still another display effect according to an embodiment of the present application.
Figs. 14A-14B show schematic diagrams of still another display effect according to an embodiment of the present application.
Figs. 15A-15C show schematic diagrams of sliding on the manipulation area of the terminal device according to an embodiment of the present application.
Fig. 16 shows a schematic diagram of still another display effect according to an embodiment of the present application.
Fig. 17 shows a schematic diagram of still another display effect according to an embodiment of the present application.
Figs. 18A-18B show schematic diagrams of still another display effect according to an embodiment of the present application.
Fig. 19 shows a schematic diagram of still another display effect according to an embodiment of the present application.
FIG. 20 shows a block diagram of a virtual content interaction device, according to one embodiment of the present application.
FIG. 21 shows a flow diagram of a virtual content interaction method according to yet another embodiment of the present application.
FIG. 22 shows a block diagram of a display system according to one embodiment of the present application.
Fig. 23 is a block diagram of a terminal device for executing a virtual content interaction method according to an embodiment of the present application.
Fig. 24 is a storage unit for storing or carrying program code for implementing a virtual content interaction method according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
With the development of Augmented Reality (AR) technology, AR-related electronic devices are gradually entering people's daily lives. AR is a technology that augments a user's perception of the real world with information provided by a computer system: computer-generated content objects such as virtual objects, scenes, or system prompt information are superimposed on the real scene to enhance or modify the perception of the real-world environment or of data representing it. In a conventional AR scene, the user typically wears an AR device such as a head-mounted display and scans a marker, and virtual content is displayed according to the marker's spatial position. In such display schemes, the marker is usually placed at a fixed position in real space and the displayed virtual content is fixed, which restricts interaction between the user and the virtual content.
In view of the above problems, the inventors propose the virtual content interaction method, apparatus, terminal device, and storage medium of the embodiments of the present application, which display virtual content according to the spatial position of the terminal device and thereby realize interaction between the terminal device and a head-mounted display device.
An application scenario of the virtual content interaction method provided by the embodiment of the present application is introduced below.
Referring to fig. 1, a schematic diagram of an application scenario of a virtual content interaction method provided in an embodiment of the present application is shown, where the application scenario includes a display system 10, and the display system 10 includes: a terminal device 100 and a head mounted display device 200 connected to the terminal device 100.
In this embodiment of the application, the terminal device 100 may be held and controlled by a user, and may be any electronic device capable of running applications, such as a mobile phone, a smart watch, a tablet computer, an electronic reader, a notebook computer, or a head-mounted display device.
In this embodiment of the application, the head-mounted display device 200 may be an external head-mounted display, that is, it may include only the components needed for display, such as a display module, a communication module, and a camera, while the processor and memory of the connected terminal device 100 control the displayed virtual content. The display module may include a display screen (or a projection device) and display lenses for presenting the virtual content.
The head-mounted display device 200 can exchange information and commands with the terminal device 100 to which it is connected. The exchanged information may include the virtual content displayed by the head-mounted display device 200. The terminal device 100 and the head-mounted display device 200 may be connected wirelessly, for example via Bluetooth, WiFi (Wireless Fidelity), or ZigBee, or through a wired connection such as a USB interface. For example, referring to Fig. 2, when the terminal device 100 is a mobile phone or a tablet computer, the head-mounted display device 200 is connected to it through a USB interface. Of course, the connection manner between the terminal device 100 and the head-mounted display device 200 is not limited in this embodiment of the application.
In some embodiments, a marker 101 is disposed on the terminal device 100. The marker 101 may include at least one sub-marker having one or more feature points. When the marker 101 is within the visual field of the head-mounted display device 200, the device 200 may treat it as a target marker, capture an image containing it, and transmit the image to the terminal device 100. From the image, the terminal device 100 may obtain spatial position information such as the relative position and orientation between the target marker and the head-mounted display device 200, and thereby the relative spatial position information between the terminal device 100 and the head-mounted display device 200. The terminal device 100 may then render a corresponding virtual object based on this spatial position information and transmit the virtual object's display frames to the head-mounted display device 200, which displays them through its display module. By acquiring images of the target marker in real time, the head-mounted display device 200 can continuously locate and track the terminal device 100. The specific form of the marker 101 is not limited in this embodiment of the application; it only needs to be trackable by the head-mounted display device 200.
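The marker-based positioning above can be sketched in code. The patent does not specify an algorithm, so the following is a minimal illustrative sketch under an assumed pinhole-camera model: the marker's known physical width and its measured pixel width give the depth, and the marker centre's offset from the image's principal point gives the lateral position. All names and parameter values below are hypothetical, not taken from the patent.

```python
def estimate_marker_position(marker_px_width, marker_cx, marker_cy,
                             marker_real_width=0.05, focal_px=800.0,
                             image_cx=320.0, image_cy=240.0):
    """Estimate a marker's 3D position (metres) in the camera frame.

    Pinhole-camera assumption: depth = focal * real_width / pixel_width;
    the x/y coordinates follow from the marker centre's offset from the
    principal point. Default values (5 cm marker, 800 px focal length,
    640x480 image) are purely illustrative.
    """
    z = focal_px * marker_real_width / marker_px_width   # depth along optical axis
    x = (marker_cx - image_cx) * z / focal_px            # horizontal offset
    y = (marker_cy - image_cy) * z / focal_px            # vertical offset
    return (x, y, z)
```

A marker imaged 100 px wide at the image centre would, under these assumed parameters, be estimated at 0.4 m straight ahead; a real system would refine this with full pose estimation over all marker feature points.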
In some embodiments, the head-mounted display device 200 may also track the shape of the terminal device 100 to determine the relative spatial position relationship between the two devices.
In some embodiments, the head-mounted display device 200 may also determine the relative spatial position relationship between the terminal device 100 and itself according to light spots provided on the terminal device 100.
For example, referring again to Fig. 1, the terminal device 100 is a mobile phone wirelessly connected to the head-mounted display device 200. By scanning the marker 101 on the phone through the worn head-mounted display device 200, the user sees a virtual space scene containing multiple virtual stars superimposed on real space, where the virtual scene corresponds to the scene displayed by the phone. This embodies the augmented reality display of virtual content and the information interaction between the terminal device and the head-mounted display device.
For another example, referring to Fig. 3, the terminal device 100 is a tablet computer connected to the head-mounted display device 200 by wire. By scanning the marker 101 on the tablet through the worn head-mounted display device 200, the user sees a virtual medical human body model superimposed on the tablet's surface in real space, again embodying the augmented reality display of virtual content and the information interaction between the terminal device and the head-mounted display device.
Based on the above display system, an embodiment of the present application provides a virtual content interaction method applied to the terminal device of the display system. The method is introduced in detail below.
Referring to Fig. 4, an embodiment of the present application provides a virtual content interaction method applied to a terminal device that is communicatively connected to an external head-mounted display device (which may be the head-mounted display device 200 described above). The virtual content interaction method may include the following steps.
step S110: first relative spatial position information of the head-mounted display device and the terminal device is obtained.
In this embodiment, when virtual content needs to be displayed in the virtual space, the terminal device may acquire the first relative spatial position information between the head-mounted display device and itself in order to obtain the display position of the virtual content in the virtual space. The first relative spatial position information may include the relative position between the head-mounted display device and the terminal device as well as posture information, where the posture information may be the orientation and rotation angle of the terminal device relative to the head-mounted display device.
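One way to represent this "first relative spatial position information" (relative position plus posture, i.e. orientation and rotation angles) is a small pose structure. The field names and units below are illustrative assumptions, not specified by the patent.

```python
from dataclasses import dataclass

@dataclass
class RelativePose:
    """Pose of the terminal device expressed in the head-mounted display's frame."""
    x: float      # relative position, metres
    y: float
    z: float
    yaw: float    # posture: rotation angles, degrees
    pitch: float
    roll: float

    def distance(self) -> float:
        """Straight-line distance between the two devices."""
        return (self.x ** 2 + self.y ** 2 + self.z ** 2) ** 0.5
```

Such a structure bundles everything the later steps need: the position fields drive the display position, and the posture fields drive the rendered orientation of the virtual content.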
In some embodiments, the terminal device includes an inertial measurement unit (IMU). The terminal device may therefore obtain the first relative spatial position information by reading the measurement data of its IMU and determining the relative spatial position between itself and the head-mounted display device from that data. Of course, the manner of acquiring the first relative spatial position information is not limited in this embodiment of the application; for example, it may also be obtained by recognizing a marker on the terminal device.
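The patent does not detail how IMU measurement data becomes a pose; one common ingredient is integrating angular rate over time. A minimal single-axis sketch (the function name and units are assumptions for illustration):

```python
def integrate_yaw(gyro_samples, dt):
    """Integrate angular-rate samples (deg/s) into a yaw angle (deg).

    gyro_samples: angular velocity about one axis at each timestep.
    dt: sample interval in seconds.
    Drift accumulates over time, which is why real systems typically fuse
    such integration with another source, e.g. the marker-based
    positioning described elsewhere in this document.
    """
    yaw = 0.0
    for omega in gyro_samples:
        yaw += omega * dt
    return yaw
```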
In some embodiments, light spots may also be disposed on the terminal device. The head-mounted display device collects an image of these light spots with its image acquisition device and sends the image to the terminal device, which identifies the light spots in the image and determines the first relative spatial position information between the head-mounted display device and itself from it. The light spots may be visible or infrared; when infrared light spots are used, the head-mounted display device may be equipped with an infrared camera to capture them. The terminal device may carry a single light spot or a sequence of multiple light spots.
In one embodiment, the light spots may be arranged on the housing of the terminal device, for example around the screen, or on a protective case that is fitted over the terminal device in use so that the device can be located and tracked. The light spots can be arranged in many ways, which are not limited here. For example, to obtain the posture of the terminal device in real time, different light spots may be placed on different sides of the screen, such as different numbers of spots or spots of different colors, so that the terminal device can determine its spatial position relative to the head-mounted display device from the distribution of the spots in the captured image.
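The idea that distinct spot counts per screen edge reveal orientation can be shown with a toy sketch. The edge-to-count mapping below is invented purely for illustration; the patent only says the distribution of spots distinguishes the sides.

```python
# Hypothetical layout: each edge of the terminal's screen carries a
# distinct number of light spots, so counting the spots in a detected
# cluster identifies which edge it is.
EDGE_SPOT_COUNTS = {1: "top", 2: "right", 3: "bottom", 4: "left"}

def identify_edges(spot_clusters):
    """Map detected spot clusters to screen edges.

    spot_clusters: list of clusters, one per edge visible in the image,
    each cluster being a list of spot positions.
    Returns an edge label per cluster, or 'unknown' for a count that
    matches no configured edge.
    """
    return [EDGE_SPOT_COUNTS.get(len(cluster), "unknown")
            for cluster in spot_clusters]
```

Once the edges are identified, the terminal knows which way it is facing relative to the camera, which is the posture component of the first relative spatial position information.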
Step S120: determining the display position of the virtual content according to the first relative spatial position information.
In some embodiments, after obtaining the first relative spatial position information of the head-mounted display device and the terminal device, the display position of the virtual content may be determined according to the first relative spatial position information.
Further, the terminal device obtains the first relative spatial position information between the head-mounted display device and itself in real space and converts it into position coordinates in the virtual space. Taking the head-mounted display device as the reference, it then combines these coordinates with the positional relationship between the virtual content to be displayed and the terminal device in the virtual space to obtain the spatial position of the virtual content relative to the head-mounted display device. This yields the display position of the virtual content in the virtual space for subsequent display, where the display position refers to the three-dimensional coordinates of the virtual content in a virtual space whose origin is the head-mounted display device (which can also be regarded as the human eye).
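The conversion described above, placing the head-mounted display at the origin and composing the terminal's pose with the content's offset from the terminal, can be sketched in 2D (yaw-only rotation, for brevity). This is an illustrative simplification, not the patent's actual mathematics.

```python
import math

def content_display_position(terminal_pos, terminal_yaw_deg, content_offset):
    """Compute a content position in the head-mounted display's frame.

    terminal_pos: (x, z) of the terminal relative to the HMD (the origin).
    terminal_yaw_deg: terminal's rotation about the vertical axis.
    content_offset: (x, z) of the virtual content relative to the terminal,
                    expressed in the terminal's own frame.
    Rotates the offset into the HMD frame by the terminal's yaw, then
    translates by the terminal's position.
    """
    yaw = math.radians(terminal_yaw_deg)
    ox, oz = content_offset
    rx = ox * math.cos(yaw) - oz * math.sin(yaw)   # rotate offset
    rz = ox * math.sin(yaw) + oz * math.cos(yaw)
    return (terminal_pos[0] + rx, terminal_pos[1] + rz)
```

A full implementation would use 3D rotation (quaternions or rotation matrices) over all three posture angles, but the composition step is the same.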
Step S130: rendering the virtual content according to the display position, and acquiring the display data of the virtual content.
After obtaining the display position of the virtual content, the terminal device may use the display position as the rendering coordinates of the virtual content and render the virtual content at that position.
In some embodiments, after obtaining the display position, the terminal device may acquire the data of the virtual content to be displayed, construct the virtual content from that data, and render it at the display position. The data corresponding to the virtual content may include model data, that is, the data used for rendering. For example, the model data may include the color data, vertex coordinate data, and contour data used to establish the corresponding virtual content.
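The model data just mentioned (color data, vertex coordinates, contour data) could be organized along these lines; the structure and field names are assumptions for illustration, since the patent does not fix a format.

```python
# Illustrative model data for one piece of virtual content. The patent
# mentions color data, vertex coordinate data, and contour data but
# does not specify how they are laid out.
virtual_model = {
    "name": "virtual_star",
    "vertices": [            # 3D vertex coordinates
        (0.0, 0.0, 0.0),
        (0.1, 0.0, 0.0),
        (0.05, 0.1, 0.0),
    ],
    "colors": [              # one RGB color per vertex
        (255, 255, 0),
        (255, 255, 0),
        (255, 200, 0),
    ],
    "contour": [0, 1, 2],    # vertex indices outlining the shape
}

def vertex_count(model):
    """Number of vertices in a model; colors must match one-to-one."""
    return len(model["vertices"])
```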
In some embodiments, the data of the virtual content may also be downloaded by the terminal device from a server, or obtained from other terminals.
Step S140: the display data is transmitted to the head mounted display device, the display data being used to instruct the head mounted display device to display the virtual content.
After rendering the virtual content, the terminal device may obtain the display data of the rendered content. The display data may include the RGB value and the corresponding coordinate of each pixel in the display frame, and can be used to generate that frame. The terminal device transmits the display data to the head-mounted display device, which generates the display frame from it and projects the frame onto its display lenses to present the virtual content. Through the lenses, the user sees the virtual content superimposed on the real world, achieving the augmented reality effect. By transmitting the display data to the head-mounted display device in this way, the head-mounted display device displays the corresponding virtual content, the user observes the content transmitted by the terminal device superimposed on the real world, and the virtual content is displayed according to the spatial position of the terminal device, embodying the interaction between the terminal device and the head-mounted display device.
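The display data as described (an RGB value plus a coordinate for each pixel) might be packaged for transmission roughly as follows. The wire format here is invented for illustration; the patent does not define one.

```python
import struct

def pack_display_data(pixels):
    """Serialize display data as fixed-size (x, y, r, g, b) records.

    pixels: iterable of (x, y, (r, g, b)) tuples, i.e. each pixel's
    coordinate and RGB value as described for the display data.
    Uses a little-endian record of two uint16 coordinates followed by
    three uint8 color channels, 7 bytes per pixel.
    """
    out = bytearray()
    for x, y, (r, g, b) in pixels:
        out += struct.pack("<HHBBB", x, y, r, g, b)
    return bytes(out)
```

In practice a full frame would more likely be sent as a compressed image over the USB or wireless link, but the per-pixel view matches the description above.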
In one embodiment, the virtual content may be the content currently displayed on the terminal device's screen; for example, in a virtual map scene it may be the partial map of China shown on the screen of the terminal device. In this case, when displaying content, the terminal device transmits the data of the content currently on its screen to the head-mounted display device. In another embodiment, the virtual content may be extended content corresponding to what is currently on the screen; for example, in a virtual game scene where grass, stones, and the like of the game map are shown on the screen, the virtual content may be corresponding extended content such as game players and game characters, and the terminal device transmits the data of this extended content to the head-mounted display device. In still other embodiments, the virtual content may include both the content currently on the screen and its corresponding extended content; for example, in the virtual map scene the virtual content may be the complete map of China, comprising the partial map displayed on the screen of the terminal device and the extended map corresponding to it.
In addition, in some embodiments, different buttons and other controls may be displayed on the screen of the terminal device, and the user may enter different modes by selecting and clicking different buttons; for example, the user may enter or exit an augmented reality mode. In the augmented reality mode, different display modes may be selected: for example, the same content as the screen of the terminal device may be displayed, extended content corresponding to the screen of the terminal device may be displayed, or preset content unrelated to the screen content of the terminal device may be displayed. When the terminal device selects a target mode from the different modes, the display data of the display content corresponding to the target mode can be transmitted to the head-mounted display device, so that the head-mounted display device displays the virtual content according to the display data, where the virtual content includes the display content corresponding to the target mode.
Therefore, the terminal device obtains data of the virtual content for rendering. The data of the virtual content may be the data of the display content on the current screen of the terminal device, the data of the extended content corresponding to that display content, or the data of the complete display content corresponding to that display content. The complete display content may include both the display content on the current screen of the terminal device and the corresponding extended content: the display content on the current screen is a part of the complete display content, and the extended content is the part of the complete display content other than the display content currently on the screen. For example, when a partial map of a Chinese map is displayed on the screen of the terminal device, the data of the virtual content may be the data of that partial map, the data of the other parts of the Chinese map besides the displayed partial map, or the data of the complete Chinese map, where the data of the Chinese map is the complete display content data corresponding to the display content on the current screen of the terminal device.
That is to say, the terminal device may construct the virtual content from the display content data on the current screen, from the extended content data corresponding to that display content, or from both the display content and the extended content data, and then render the virtual content according to the display position so as to display it through the head-mounted display device, thereby implementing augmented reality display of the display content and the extended content of the terminal device's screen. Of course, the above data of the virtual content are merely examples and do not limit the data of the virtual content in the embodiments of the present application.
For example, in a virtual map scene, please refer to fig. 5 and 6. The terminal device 100 is a mobile phone terminal, and the content currently displayed by the mobile phone terminal is a partial map of a Chinese map. If the mobile phone terminal transmits the data of the partial map, please refer to fig. 5: the virtual content 300 that the user sees through the worn head-mounted display device 200 is superimposed on the real space, where the virtual content 300 is the partial map of the Chinese map. If the mobile phone terminal transmits the data of the complete Chinese map, please refer to fig. 6: the virtual content 300 that the user sees through the head-mounted display device 200 is superimposed on the real space, where the virtual content 300 is the complete Chinese map. As another example, in a virtual game scene, please refer to fig. 7. The terminal device 100 is a tablet computer whose currently displayed content is grass or stones in a game map. The tablet computer transmits the data of the extended content (a game character) corresponding to the displayed grass or stones to the head-mounted display device, so that the virtual content 300 (the game character) seen by the user through the worn head-mounted display device is superimposed on the real space. This implements augmented reality display of the display content and the extended content of the terminal device's screen, embodies control of the display of virtual content through the terminal device, improves the interactivity between the mobile phone terminal and the head-mounted display device, and overcomes the limitation of the display content by the screen of the mobile phone terminal.
In one embodiment, the virtual content displayed by the head-mounted display device may not include the display content on the screen of the terminal device. That is, the terminal device serves only as the processor, memory, and the like of the head-mounted display device to determine the virtual content to be displayed; the displayed virtual content is unrelated to the content currently displayed by the terminal device. The data of the virtual content acquired by the terminal device may be generated in real time according to the real environment or may be pre-stored data, and while the head-mounted display device displays the virtual content, the terminal device may display nothing.
According to the virtual content interaction method provided by this embodiment of the application, first relative spatial position information of the head-mounted display device and the terminal device is obtained, the display position of the virtual content is determined according to the first relative spatial position information, the virtual content is then rendered according to the display position, and display data of the virtual content is obtained; the display data is transmitted to the head-mounted display device, where it is used to instruct the head-mounted display device to display the virtual content. The virtual content is thus displayed in the virtual space according to the spatial position of the terminal device, the user can observe the virtual content superimposed on the real world, interaction between the terminal device and the head-mounted display device is realized, and at the same time the manufacturing cost of the head-mounted display device is reduced.
Referring to fig. 8, another embodiment of the present application provides a virtual content interaction method applied to a terminal device in communication connection with an external head-mounted display device, where the head-mounted display device may be the head-mounted display apparatus described above. The virtual content interaction method may include:
step S210: first relative spatial position information of the head-mounted display device and the terminal device is obtained.
In some embodiments, the terminal device is provided with a marker. When the first relative spatial position information of the head-mounted display device and the terminal device needs to be acquired, it can be obtained by having the head-mounted display device capture the marker on the terminal device. The marker can be arranged on the housing of the terminal device, displayed as an image on the screen of the terminal device, or be an external marker that is plugged into the terminal device through a USB port, an earphone jack, or the like when in use, so that the terminal device can be positioned and tracked. The manner in which the marker is disposed is not limited herein.
Specifically, referring to fig. 9, the obtaining first relative spatial position information of the head mounted display device and the terminal device includes:
step S211: and receiving a marker image sent by the head-mounted display equipment, wherein the marker image is obtained when the head-mounted display equipment collects the marker.
In some embodiments, it is necessary to receive a marker image sent by the head-mounted display device, so as to obtain first relative spatial position information of the head-mounted display device and the terminal device by identifying the marker image. In some embodiments, the head-mounted display device may scan the terminal device in real time through the camera to acquire the marker on the terminal device, so as to obtain the marker image, and then transmit the marker image to the terminal device, so that the terminal device receives the marker image.
In some embodiments, the marker may include at least one sub-marker, and a sub-marker may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points, where the shape of a feature point is not limited and may be a dot, a ring, a triangle, or another shape. In addition, the distribution rules of the sub-markers differ between markers, so each marker may have different identity information. The terminal device may acquire the identity information corresponding to a marker by identifying the sub-markers included in the marker, and the identity information may be information that uniquely identifies the marker, such as a code, but is not limited thereto.
In one embodiment, the outline of the marker may be rectangular, although the marker may have other shapes; the rectangular region and the plurality of sub-markers within it constitute one marker. Of course, the marker may also be a light-emitting object composed of light spots; the light spot marker may emit light of different wavelength bands or colors, and the terminal device acquires the identity information corresponding to the marker by identifying information such as the wavelength band or color of the emitted light. Of course, the specific marker is not limited in the embodiments of the present application; the marker only needs to be recognizable by the terminal device.
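As an illustrative sketch only — the actual encoding is not specified in the text — the identity information could be derived from the sub-marker distribution by reading it as a binary occupancy grid:

```python
def marker_identity(grid):
    """Derive an identity code from the sub-marker distribution, modelled
    here as a grid of 0/1 cells (1 = sub-marker present). Cells are read
    row-major as a binary number; different distributions yield different
    codes. This encoding is an assumption for illustration."""
    code = 0
    for row in grid:
        for cell in row:
            code = (code << 1) | cell
    return code
```

Because different markers have different sub-marker distributions, two distinct grids map to distinct codes, which is what lets the terminal device tell markers apart.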
Step S212: a marker in the marker image is identified, and first relative spatial position information between the head mounted display device and the terminal device is acquired.
After the terminal device obtains the marker image, the marker in the marker image can be identified to obtain first relative spatial position information between the head-mounted display device and the terminal device.
It can be understood that after the terminal device identifies the marker in the marker image, the obtained identification result includes the spatial position information between the head-mounted display device and the marker. The spatial position information may include position information, posture information, and the like, where the posture information may include the rotation direction, rotation angle, and the like of the marker relative to the head-mounted display device. Therefore, taking the head-mounted display device as a reference, the first relative spatial position information between the head-mounted display device and the terminal device can be obtained from the position information of the marker on the terminal device, that is, from the positional relationship between the marker and the terminal device.
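A hedged sketch of this derivation follows, assuming the marker and the terminal share one orientation so the marker's known mounting offset on the terminal can simply be subtracted (real systems would compose full 6-DoF poses); coordinates are in millimetres and the function name is illustrative.

```python
def terminal_position(marker_pos_in_hmd, marker_offset_on_terminal):
    """Position of the terminal device in the head-mounted display's frame,
    given the identified marker's position in that frame and the marker's
    known position in the terminal's own frame. Assumes aligned orientations,
    so the offset subtracts component-wise."""
    return tuple(m - o for m, o in zip(marker_pos_in_hmd, marker_offset_on_terminal))

# Marker seen 100 mm right, 200 mm up, 500 mm ahead; marker mounted
# 50 mm above the terminal's origin.
pos = terminal_position((100, 200, 500), (0, 50, 0))
```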
Step S220: and determining the display position of the virtual content according to the first relative spatial position information.
In some embodiments, the terminal device needs to determine the display position of the virtual content before displaying the virtual content. As one way, referring to fig. 10, the determining the display position of the virtual content according to the first relative spatial position information includes:
step S221: and acquiring second relative spatial position information of the virtual content and the terminal equipment.
In the embodiment of the application, in order to obtain the display position of the virtual content, second relative spatial position information of the virtual content and the terminal device needs to be obtained. The second relative spatial position information may include the position information, posture information, and the like of the virtual content in the virtual space relative to the terminal device, where the posture information includes the orientation and rotation angle of the virtual content relative to the terminal device. The second relative spatial position information may also be understood as the relative spatial positional relationship, in the real world, between the virtual content viewed by the user through the head-mounted display device and the terminal device.
In this embodiment of the application, the second relative spatial position information of the virtual content and the terminal device may be relative spatial position information when the virtual content is overlapped with the terminal device, or may be relative spatial position information when the virtual content is located at an edge or around the terminal device, or may be relative spatial position information when the virtual content and the terminal device are located on different planes, such as relative spatial position information when a plane where the terminal device is located is perpendicular to a plane where the virtual content is located. Of course, the specific virtual content and the second relative spatial position information of the terminal device may not be limited in this embodiment of the application.
In some embodiments, the second relative spatial position information of the virtual content and the terminal device may be stored in the terminal device in advance, or may be set according to the display content currently displayed by the terminal device. It can be understood that different display content currently displayed by the terminal device corresponds to different second relative spatial position information. For example, please refer to fig. 5: in a virtual map display scene, the display content currently displayed by the terminal device 100 is a partial map, and when the virtual content 300 is a complete Chinese map, the spatial position of the complete Chinese map is at the upper right of the spatial position of the terminal device. As another example, please refer to fig. 11: in an album display scene, the display content currently displayed by the terminal device 100 is a photo, and when the virtual content 300 is multiple photos, the spatial positions of the multiple photos are at the upper right of the spatial position of the terminal device 100.
In some embodiments, the second relative spatial position information of the virtual content and the terminal device may be set according to an inclusion relationship between the display content currently displayed by the terminal device and the virtual content. Specifically, when the virtual content includes the display content displayed by the terminal device, the second relative spatial position information may be determined according to the position of the display content within the virtual content, so that the display content contained in the virtual content overlaps the display content displayed on the terminal device.
For example, referring to fig. 12, the terminal device 100 is a mobile phone terminal whose currently displayed content is the Hebei Province map 110, and the virtual content to be displayed is the Chinese map 300. The second relative spatial position information between the Chinese map and the mobile phone terminal can be set according to the position of Hebei Province in the Chinese map, so that the Hebei Province map in the Chinese map 300 viewed by the user through the worn head-mounted display device overlaps the Hebei Province map 110 currently displayed on the terminal device.
In some embodiments, the second relative spatial location information of the virtual content and the terminal device may also be set according to user requirements and user preferences.
Step S222: and determining the display position of the virtual content according to the first relative spatial position information and the second relative spatial position information.
It can be understood that, since the first relative spatial position information obtained by the terminal device includes the position, orientation, and rotation angle of the head-mounted display device relative to the terminal device, the spatial position coordinates of the terminal device in real space can be obtained and converted into spatial coordinates in the virtual space. Then, according to the second relative spatial position information of the virtual content and the terminal device in the virtual space, and taking the head-mounted display device as a reference, the spatial position of the virtual content relative to the head-mounted display device can be obtained, yielding the display coordinates of the virtual content in the virtual space, that is, the display position of the virtual content. The display position may be used as the rendering coordinates of the virtual content, so that the virtual content can be rendered at the display position.
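The coordinate composition in this step can be sketched as follows — a simplified 2D (x, z) version assuming the terminal's pose reduces to a position plus a yaw angle (real implementations compose full 6-DoF transforms); all names are illustrative.

```python
import math

def display_position(terminal_pos_hmd, terminal_yaw_deg, content_offset):
    """Display position of the virtual content in the head-mounted display's
    frame: rotate the content's offset relative to the terminal (second
    relative spatial position information) by the terminal's yaw, then
    translate by the terminal's position in the HMD frame (derived from the
    first relative spatial position information)."""
    yaw = math.radians(terminal_yaw_deg)
    ox, oz = content_offset
    rotated = (ox * math.cos(yaw) - oz * math.sin(yaw),
               ox * math.sin(yaw) + oz * math.cos(yaw))
    tx, tz = terminal_pos_hmd
    return (tx + rotated[0], tz + rotated[1])

# Content placed 0.5 m to the terminal's right, terminal facing the user.
pos = display_position((1.0, 2.0), 0.0, (0.5, 0.0))
```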
In some embodiments, the display location of the virtual content may be in a screen area of the terminal device. For example, referring to fig. 13, the user can see that the virtual medical human body model is displayed overlapping on the screen area of the terminal device through the head-mounted display device.
In other embodiments, the display location of the virtual content may be in a peripheral region of the terminal device. For example, referring to fig. 6, the terminal device 100 is a mobile phone terminal and the virtual content to be displayed is a Chinese map 300; the display position of the virtual content 300 can be determined to be at the upper right of the mobile phone terminal according to the first relative spatial position information between the head-mounted display device 200 and the mobile phone terminal and the second relative spatial position information between the Chinese map 300 and the mobile phone terminal.

Step S230: rendering the virtual content according to the display position.
In some embodiments, after the terminal device obtains the display position of the virtual content in the virtual space, whether the display position of the virtual content overlaps the terminal device may be determined according to the display position of the virtual content and the spatial position of the terminal device. The display position can be understood as the position at which the virtual content seen through the head-mounted display device appears in the real world, in which case the spatial position of the terminal device refers to the position of the terminal device in the real world; the display position may also be understood as the three-dimensional coordinates of the virtual content in the virtual space with the head-mounted display device as the origin, in which case the spatial position of the terminal device refers to its spatial position in the virtual space relative to the head-mounted display device. When the display position of the virtual content overlaps the terminal device, the terminal device can determine a first display area, contained in the virtual content, of the display content displayed by the terminal device, and perform specified display processing on the first display area. The specified display processing causes the display content displayed by the terminal device to occlude the processed portion of the virtual content when the head-mounted display device displays the virtual content, so that the user sees the content currently displayed by the terminal device in real space join seamlessly with the virtual content displayed by the head-mounted display device.
It can be understood that, after obtaining the display position of the virtual content in the virtual space, it is necessary to first determine whether there is an overlap between the display position and the terminal device, if there is no overlap, the terminal device may directly render the virtual content according to the display position, and if there is an overlap, it is necessary to determine a first display area of the display content displayed by the terminal device, which is included in the virtual content, so as to subsequently process the first display area.
The first display area may be an overlapping area between the display position of the virtual content and the terminal device when the display position of the virtual content overlaps the terminal device. The virtual content in the first display area is the display content displayed by the terminal device, that is, the virtual content in the first display area is the same as the display content displayed in the current screen of the terminal device.
In some embodiments, performing the specified display processing on the first display area may be adjusting the color of the first display area to a specified color, adjusting the color of the display content contained in the virtual content to a specified color, adjusting the transparency of the first display area to a specified transparency, or adjusting the transparency of the display content contained in the virtual content to a specified transparency, where the brightness value of each color component of the specified color is lower than a first threshold and the specified transparency is lower than a second threshold.
The first threshold is the maximum brightness value of each color component at which the user cannot observe the virtual content through the head-mounted display device. As one way, the first threshold may be set to a brightness of 13, i.e., 95% black, or to a brightness of 0, i.e., black. The second threshold is the maximum transparency value at which the user cannot observe the virtual content through the head-mounted display device. As one way, the second threshold may be set to 1, i.e., 90% transparent, or to 0, i.e., 100% transparent. Therefore, in the embodiment of the present application, the specified color may be set to black so that, after the specified display processing, the user cannot observe the processed display content in the virtual content when it is optically displayed by the head-mounted display device. Of course, the specified transparency may instead be set to 0 to achieve the same effect.
In the above manner, the specified display processing is performed on the first display area in the virtual content, so that the user cannot observe the virtual content in the first display area and instead observes the virtual content outside the first display area superimposed on the display content of the terminal device in real space. For example, referring to fig. 9, the user can see through the head-mounted display device 200 the virtual content 300 superimposed on the display content 110 displayed by the terminal device 100 in real space, which improves the display effect of the virtual content.
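As a minimal sketch of the specified display processing, assuming the rendered frame is a nested list of [r, g, b, a] pixels and the first display area is an axis-aligned rectangle (both assumptions; names are illustrative):

```python
# Per the thresholds above: color components at or below this brightness
# (and zero-alpha pixels) are invisible through an optical see-through display.
FIRST_THRESHOLD = 13

def apply_specified_display_processing(frame, region):
    """Set the first display area `region` = (x0, y0, x1, y1) to black,
    fully transparent pixels, so the head-mounted display renders nothing
    there and the terminal's own screen shows through unobstructed."""
    x0, y0, x1, y1 = region
    for y in range(y0, y1):
        for x in range(x0, x1):
            frame[y][x] = [0, 0, 0, 0]
    return frame

# 2x2 white frame; occlude only the top-left pixel.
frame = [[[255, 255, 255, 255] for _ in range(2)] for _ in range(2)]
apply_specified_display_processing(frame, (0, 0, 1, 1))
```

Pixels outside the region keep their original color, so the virtual content outside the first display area is still displayed and appears to join the terminal's screen content.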
Furthermore, the display content currently displayed by the terminal device and the display position of the virtual content have a corresponding relationship, that is, different display contents correspond to different virtual content display positions. Wherein the corresponding relation may be stored in the terminal device. And when the display content currently displayed by the terminal equipment is detected to be changed, updating the display position of the virtual content according to the changed display content.
It can be understood that, after the terminal device transmits the virtual content to the head-mounted display device for display, the display content currently displayed by the terminal device needs to be detected in real time, so that when it is detected that the display content currently displayed by the terminal device changes, the display position after updating the virtual content can be obtained according to the changed display content and the corresponding relationship.
In some embodiments, after the virtual content is displayed overlapping the terminal device, the user may change the display content currently displayed on the terminal device by sliding the display content on the terminal device. When the terminal device detects that the display content on the current screen changes, the display position of the virtual content can be obtained again according to the latest display content, so that the user sees the position of the displayed virtual content change through the head-mounted display device. Specifically, updating the display position of the virtual content according to the changed display content includes: determining a second display area of the changed display content within the virtual content according to the changed display content; and determining the updated display position of the virtual content according to the second display area.
It can be understood that different display content may correspond to the same virtual content. For example, in a map display scene, the terminal device may display only the map of Beijing due to screen limitations, and by sliding the display content of the terminal device, the map of Hebei Province may be displayed; both display contents of the terminal device correspond to the same virtual content (the Chinese map). Therefore, the display position of the virtual content can be determined according to the display area of the different display content within the virtual content. Specifically, when a change in the display content currently displayed by the terminal device is detected, the second display area of the changed display content within the virtual content is determined according to the changed display content.
The second display area is an overlapping area between the display position of the virtual content and the terminal device, and the virtual content in the second display area is the display content displayed by the terminal device, that is, the display content of the virtual content in the second display area is the same as the display content currently displayed by the terminal device.
After determining the second display area, the terminal device may determine an updated display position of the virtual content according to the second display area. That is, the updated display position of the virtual content may be determined so that the display content in the virtual content overlaps with the display content on the terminal device, according to the positional relationship of the changed display content in the virtual content.
And after the terminal equipment obtains the updated display position, re-rendering the virtual content according to the updated display position. Therefore, the user can observe that the display position of the virtual content changes along with the change of the display content of the terminal equipment, and the display effect of the virtual content is improved.
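The update step can be sketched as follows, assuming 2D pixel coordinates and that the virtual content's display position is anchored where the on-screen content begins within it (illustrative names; a simplification of the described behavior):

```python
def updated_display_position(content_anchor, second_display_area_origin):
    """When the screen content scrolls, the second display area moves within
    the virtual content; shifting the virtual content's display position by
    the opposite amount keeps that area overlapping the terminal's screen."""
    ax, ay = content_anchor
    ox, oy = second_display_area_origin
    return (ax - ox, ay - oy)

# Screen content scrolled 30 px to the right within the virtual content,
# so the virtual content's display position shifts 30 px to the left.
new_pos = updated_display_position((100, 50), (30, 0))
```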
For example, referring to fig. 14A and 14B, the terminal device 100 is a mobile phone terminal. When the display content 110 (fig. 14A) of the mobile phone terminal is slid to the right to become the display content 120 (fig. 14B), the user can see through the head-mounted display device 200 that the display position of the virtual content 300 moves accordingly, with the virtual content 300 always superimposed on the display content displayed by the terminal device 100 in real space, which improves the display effect of the virtual content.
Step S240: the virtual content is transmitted to the head mounted display device, and the virtual content is used for instructing the head mounted display device to display the virtual content.
In some embodiments, the content of step S240 may refer to the content of the foregoing embodiments, and is not described herein again.
Further, after the terminal device transmits the virtual content to the head-mounted display device for display, the terminal device may control a display state of the virtual content. Therefore, referring to fig. 5 again, after the virtual content is transmitted to the head-mounted display device and the virtual content is used to instruct the head-mounted display device to display the virtual content, the virtual content interaction method may further include:
step S250: and when the control operation is received, generating a control instruction according to the control operation.
In the embodiment of the application, after the terminal device displays the virtual content through the head-mounted display device, the display of the virtual content can be controlled according to the control operation of the user. Specifically, the terminal device may generate the control instruction according to the control operation of the user when receiving the control operation of the user. The received control operation includes one or more of:
in some embodiments, the control operation may be determined to be received when the manipulation area of the terminal device detects a manipulation operation.
As an embodiment, the terminal device includes a manipulation area, and thus it may be determined that the control operation is received when the manipulation area of the terminal device detects a manipulation operation of a user. As one way, the manipulation area may include at least one of a touch screen and a key, wherein the manipulation operation of the user may include, but is not limited to, a single-finger slide, a click, a press, a multi-finger fit slide, etc. acting on the manipulation area of the terminal device.
Through the method, the terminal equipment receives the control operation and then can generate the control instruction according to the control operation. The control instruction comprises a moving instruction, an amplifying instruction, a reducing instruction, a rotating instruction, a selecting instruction and the like so as to realize the display effects of controlling the movement, the scaling, the rotation and the selection of the virtual content. Of course, the above control commands are merely examples, and do not represent a limitation on the control commands in the embodiments of the present application.
Further, the generating the control instruction according to the control operation may include:
and generating a control instruction according to one or more of the detected number of fingers when the control operation is executed, the gesture action when the control operation is executed and the finger sliding track when the control operation is executed.
As an embodiment, the control instruction may be generated according to the number of fingers with which the user performs the control operation in the manipulation area of the terminal device. Specifically, the number of fingers used in the control operation can be detected in real time, so that different control instructions are generated for different numbers of fingers. For example, please refer to fig. 15A: when a single-finger sliding control operation is detected in the manipulation area of the terminal device, a control instruction for moving the virtual content is generated; for example, referring to fig. 14A and 14B, the control instruction controls the head-mounted display device to move the currently displayed virtual map to the right relative to the user's viewing angle. As another example, please refer to fig. 15B: when a control operation of pinching two fingers together is detected in the manipulation area of the terminal device, a control instruction for shrinking the currently displayed virtual content is generated; for example, referring to fig. 6 and 17, the control instruction controls the head-mounted display device to shrink the currently displayed virtual map from the user's perspective. As one way, the number of fingers and the control instruction have a correspondence, which may be stored in the terminal device in advance and set reasonably according to the user's specific usage, so that when the terminal device detects the number of fingers used in the control operation, the control instruction can be generated according to the correspondence.
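The finger-count correspondence described above can be sketched as follows; the instruction names, and the pinch-out → enlarge case, are assumptions added for illustration.

```python
def control_instruction(finger_count, pinch_in=False):
    """Map detected touch input to a control instruction, per the rule
    above: single-finger slide moves the virtual content; a two-finger
    pinch-in shrinks it (pinch-out → enlarge is an assumed extension)."""
    if finger_count == 1:
        return "MOVE"
    if finger_count == 2 and pinch_in:
        return "SHRINK"
    if finger_count == 2:
        return "ENLARGE"
    return "NONE"  # no correspondence stored for this input
```

In practice the correspondence table would be stored on the terminal device and be user-configurable, as the text notes.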
As another implementation, the control instruction may be generated according to the finger sliding track of the user when performing the control operation in the manipulation area of the terminal device. Specifically, the terminal device may detect the finger sliding track in real time, so as to generate different control instructions for different tracks. For example, when a leftward sliding gesture is detected in the manipulation area, a control instruction for rotating the virtual content to the left is generated; when an upward sliding gesture is detected, a control instruction for flipping the virtual content upward is generated. For example, referring to fig. 16, the terminal device 100 is a tablet computer and the virtual content 300 is a virtual medical human body; when the user slides a finger to the right in the touch screen area of the tablet computer, a control instruction for rotating the virtual medical human body to the right is generated.
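The trajectory-based branch can be sketched similarly, classifying a slide by its dominant axis of motion. Names are hypothetical, and screen coordinates are assumed to grow downward:

```python
from typing import Optional, Tuple

# Hypothetical direction-to-instruction table for finger sliding tracks.
TRAJECTORY_INSTRUCTIONS = {
    "left":  "ROTATE_LEFT",
    "right": "ROTATE_RIGHT",
    "up":    "FLIP_UP",
    "down":  "FLIP_DOWN",
}

def classify_swipe(start: Tuple[float, float],
                   end: Tuple[float, float]) -> str:
    """Classify a finger slide by its dominant axis of motion."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "up" if dy < 0 else "down"  # screen y grows downward

def instruction_for_swipe(start, end) -> Optional[str]:
    """Map a detected slide (start point, end point) to a control instruction."""
    return TRAJECTORY_INSTRUCTIONS.get(classify_swipe(start, end))
```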
In addition, in some embodiments, the control operation may also be determined to be received when the user gesture acquired by the head-mounted display device is a preset gesture. Specifically, the camera of the head-mounted display device can scan the user in real time and collect the user's gesture, and when the collected gesture matches a preset gesture, it is determined that a control operation has been received.
The preset gesture is the gesture motion that must be matched in order to trigger the corresponding display control of the virtual content. Preset gestures can be stored in the terminal device in advance and set according to the user's preferences and needs. For example, a preset gesture can be a hand raised, a hand lowered, a wave to the left or right, and the like.
Therefore, as an embodiment, the control instruction may be generated according to the gesture action of the user when performing the control operation. Specifically, the head-mounted display device can collect the user's gesture actions in real time, and different control instructions are generated for different gestures. For example, when a leftward hand wave is captured, a control instruction for rotating the virtual content to the left is generated; when an upward hand wave is captured, a control instruction for flipping the virtual content upward is generated. As one mode, the correspondence between gesture actions and control instructions may be stored in the terminal device in advance and set according to the user's specific usage habits. That is, when the control operation performed by the user matches any of the above cases, the control instruction may be generated.
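Taken together, the three detection sources can feed one dispatcher, with the first matching case producing the instruction. This is a sketch under assumed, hypothetical instruction names:

```python
from typing import Optional

def generate_control_instruction(finger_count: Optional[int] = None,
                                 gesture: Optional[str] = None,
                                 swipe_direction: Optional[str] = None) -> Optional[str]:
    """Derive one control instruction from any of the three detected inputs."""
    if gesture == "wave_left":
        return "ROTATE_LEFT"   # leftward hand wave rotates the content left
    if gesture == "wave_up":
        return "FLIP_UP"       # upward hand wave flips the content up
    if swipe_direction == "left":
        return "ROTATE_LEFT"   # leftward slide also rotates the content left
    if swipe_direction == "up":
        return "FLIP_UP"
    if finger_count == 1:
        return "MOVE_CONTENT"  # single-finger operation moves the content
    if finger_count == 2:
        return "SCALE_CONTENT" # two-finger operation scales the content
    return None
```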
In addition, the terminal device can be connected with a controller, and a control operation is determined to be received when a control operation of the user is detected in the manipulation area of the controller. The manipulation operation of the user may include, but is not limited to, a single-finger slide, a click, a press, a multi-finger coordinated slide, etc. acting on the manipulation area of the controller. In some embodiments, the same control operation may correspond to different control instructions for different virtual content. Therefore, when a control operation is received, the control instruction can be generated based on both the virtual content and the control operation. For example, when the virtual content is a map of China, if the manipulation action detected in the manipulation area of the terminal device is a single click, a control instruction for displaying the selected region of the map is generated; and when the detected manipulation action is the two fingers spreading apart (the distance between them increasing), a control instruction for magnifying the map is generated. For another example, when the virtual content is a 3D medical human body model and the detected manipulation action is a click, a control instruction for rotating the model is generated; and when the detected manipulation action is the two fingers spreading apart, a control instruction for disassembling the model is generated.

Step S260: controlling the display of the virtual content according to the control instruction.
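The content-dependent mapping in the examples above (map of China vs. 3D medical model) can be sketched as a table keyed by both the virtual content and the manipulation action. Content and instruction names here are hypothetical:

```python
from typing import Optional

# Hypothetical table keyed by (virtual content, manipulation action): the same
# action yields a different instruction depending on the content displayed.
CONTEXTUAL_INSTRUCTIONS = {
    ("china_map", "single_click"):     "SHOW_SELECTED_REGION",
    ("china_map", "pinch_out"):        "MAGNIFY_MAP",
    ("medical_model", "single_click"): "ROTATE_MODEL",
    ("medical_model", "pinch_out"):    "DISASSEMBLE_MODEL",
}

def instruction_for(content: str, action: str) -> Optional[str]:
    """Look up the instruction for this content/action pair, if defined."""
    return CONTEXTUAL_INSTRUCTIONS.get((content, action))
```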
In some embodiments, after the terminal device generates the control instruction, the display state of the virtual content may be adjusted according to the control instruction, so that the adjusted virtual content is transmitted to the head-mounted display device, and the head-mounted display device displays the adjusted virtual content.
In some embodiments, if the virtual content corresponds to the display content currently displayed by the terminal device, the display of that display content may also be controlled according to the control instruction. That is, the terminal device may simultaneously control the display of its own display content and the display of the virtual content according to the control instruction. For example, referring to fig. 18A, the display content of the terminal device is a map of regions such as Beijing and Shanxi province, and the virtual content corresponding to the display content is a map of China. If the control instruction is a magnification instruction, then, referring to fig. 18B, the magnified map of China viewed by the user through the head-mounted display device and the map displayed by the terminal device are both enlarged to show the map of Beijing.
Further, the display content of the terminal device may be changed according to the user's control operation on the virtual content. Specifically, the camera of the head-mounted display device can scan the user in real time and collect the user's gestures; the head-mounted display device transmits the collected gesture images to the terminal device, and the terminal device recognizes the gesture in each image to obtain the user's control operation on the virtual content, and can then change its display content accordingly. The user's gesture may be a hand raised, a hand lowered, a left-right wave, and the like, and the control operation on the virtual content may be selecting specified virtual content, moving the virtual content, enlarging the virtual content, and the like; the specific control operation on the virtual content is not limited in the embodiments of the present application.
For example, when the virtual content is a map of China, if the user selects the Beijing region of the map through a gesture, the head-mounted display device transmits the collected user gesture to the terminal device, and the terminal device changes the currently displayed display content to the map of Beijing according to the gesture. Referring to fig. 19, the terminal device 100 is a mobile phone; when the region selected by the user on the virtual content 300 is the map of Beijing, the user can see the map of Beijing displayed synchronously on the mobile phone.
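The synchronized update described above can be sketched as follows, with a hypothetical `TerminalDevice` class that keeps its on-screen display content in step with the virtual content rendered for the head-mounted display:

```python
# Hypothetical sketch of the synchronization above: when the terminal device
# recognizes a gesture selecting a region of the virtual map, it updates both
# the virtual content rendered for the HMD and its own on-screen content.
class TerminalDevice:
    def __init__(self) -> None:
        self.display_content = "china_map"  # shown on the terminal's screen
        self.virtual_content = "china_map"  # rendered for the HMD

    def on_region_selected(self, region: str) -> None:
        """Handle a recognized selection gesture forwarded by the HMD."""
        selected = f"{region}_map"
        self.virtual_content = selected     # the HMD view switches to the region
        self.display_content = selected     # the screen stays in sync (fig. 19)
```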
It is understood that the terminal device and the head-mounted display device may also be connected by wired communication. That is, the terminal device may transmit the display data of the virtual content to the head-mounted display device through a USB interface so that the head-mounted display device displays the virtual content. In addition, in some embodiments, power can be supplied to the head-mounted display device directly by the terminal device, which keeps the head-mounted display device lightweight and reduces its manufacturing cost.
In some embodiments, the virtual content interaction method provided in the embodiments of the present application may also be performed in the head-mounted display apparatus, that is, the terminal device is disposed in the head-mounted display apparatus.
In the virtual content interaction method provided by the embodiment of the present application, the second relative spatial position information between the virtual content and the terminal device is acquired to determine the display position of the virtual content. When the display position of the virtual content overlaps the terminal device, a specified display process is performed on the first display area, where the first display area is the area of the virtual content corresponding to the display content displayed by the terminal device, and the processed virtual content is transmitted to the head-mounted display device for display. In this way, the user observes only the virtual content outside the overlapping area, not the virtual content within it, producing the effect that the virtual content is superimposed on the display content of the terminal device in real space. The virtual content can thus be displayed in the virtual space according to the spatial position of the terminal device, so that the user observes the effect of the virtual content superimposed on the real world. Furthermore, the display of the virtual content can be controlled according to the control operation received by the terminal device, so that the display of the virtual content is controlled through the terminal device, improving the interactivity between the terminal device and the head-mounted display device.
Referring to fig. 20, a block diagram of a virtual content interaction apparatus 400 provided in an embodiment of the present application is shown, which is applied to a terminal device in a display system, where the display system includes the terminal device and a head-mounted display device, and the terminal device is communicatively connected to the head-mounted display device. The apparatus may include: a position acquisition module 410, a position determination module 420, a content rendering module 430, and a data sending module 440. The position acquisition module 410 is configured to acquire first relative spatial position information of the head-mounted display device and the terminal device; the position determination module 420 is configured to determine a display position of the virtual content according to the first relative spatial position information; the content rendering module 430 is configured to render the virtual content according to the display position and acquire display data of the virtual content; and the data sending module 440 is configured to transmit the display data to the head-mounted display device, where the display data is used to instruct the head-mounted display device to display the virtual content.
In this embodiment, the position determining module 420 may be specifically configured to: acquiring second relative spatial position information of the virtual content and the terminal equipment; and determining the display position of the virtual content according to the first relative spatial position information and the second relative spatial position information.
In this embodiment, the virtual content interaction apparatus 400 may further include: the display device comprises an instruction generation module and a display control module. The instruction generation module is used for generating a control instruction according to the control operation when the control operation is received; and the display control module is used for controlling the display of the virtual content according to the control instruction.
In some embodiments, receiving the control operation may include one or more of the following: a control operation is detected in the manipulation area of the terminal device; the user gesture collected by the head-mounted display device or the terminal device is a preset gesture; or the controller connected with the terminal device receives the control operation.
In some embodiments, the generating the control instruction according to the control operation may include:
and generating a control instruction according to one or more of the detected number of fingers when the control operation is executed, the gesture action when the control operation is executed and the finger sliding track when the control operation is executed.
In this embodiment, the position obtaining module 410 may be specifically configured to: receiving a marker image sent by head-mounted display equipment, wherein the marker image is obtained when the head-mounted display equipment collects a marker; a marker in the marker image is identified, and first relative spatial position information between the head mounted display device and the terminal device is acquired.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling between the modules may be electrical, mechanical or other type of coupling.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
To sum up, the virtual content interaction method and apparatus provided in the embodiments of the present application are applied to a terminal device in a display system. The method includes obtaining first relative spatial position information between a head-mounted display device and the terminal device, determining a display position of virtual content according to the first relative spatial position information, rendering the virtual content according to the display position to obtain display data of the virtual content, and transmitting the display data to the head-mounted display device, where the display data is used to instruct the head-mounted display device to display the virtual content. In this way, the virtual content is displayed in the virtual space according to the spatial position of the terminal device, the user can observe the effect of the virtual content superimposed on the real world, and interaction between the terminal device and the head-mounted display device is achieved.
Referring to fig. 21, an embodiment of the present application provides a virtual content interaction method, which is applied to a head-mounted display device, where the head-mounted display device is communicatively connected to a terminal device, and the virtual content interaction method may include:
step S310: collecting a marker image containing a marker, wherein the marker is arranged on the terminal device.
In this embodiment, the head-mounted display device may scan the terminal device in real time through the camera to acquire the marker on the terminal device, so as to obtain the marker image. It can be understood that when the marker image needs to be acquired, the spatial position of the terminal device can be adjusted so that the marker on the terminal device is within the visual field of the camera of the head-mounted display device, so that the camera acquires the marker image including the marker on the terminal device. The field of view of the camera may be determined by the size of the field of view angle.
Step S320: transmitting the marker image to the terminal device, wherein the marker image is used to instruct the terminal device to identify the marker in the marker image and acquire the first relative spatial position information between the head-mounted display device and the terminal device.
After the head-mounted display device collects the marker image, the marker image can be transmitted to the terminal device, so that the terminal device receives the marker image and identifies the marker in the marker image, and the terminal device obtains first relative spatial position information of the head-mounted display device and the terminal device.
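The patent does not spell out how the first relative spatial position information is computed from the marker; one simplified, assumed approach is the pinhole-camera relation between a marker's physical size and its apparent size in the marker image:

```python
# Assumed pinhole-camera sketch (not the patent's disclosed algorithm):
#   distance = physical marker size * focal length / apparent image size
def estimate_marker_distance(marker_size_m: float,
                             apparent_size_px: float,
                             focal_length_px: float) -> float:
    """Estimate camera-to-marker distance from the marker's image size."""
    return marker_size_m * focal_length_px / apparent_size_px
```

For example, a 5 cm marker imaged at 100 px by a camera with an 800 px focal length sits roughly 0.4 m from the HMD camera; a full implementation would also recover orientation, e.g. from the positions of the marker's corner points.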
Step S330: receiving display data sent by the terminal device, and displaying the virtual content according to the display data, wherein the display data is obtained by rendering performed by the terminal device according to the first relative spatial position information.
When the virtual content needs to be displayed, the head-mounted display device may receive display data sent by the terminal device, and display the virtual content according to the display data, where the display data is rendered by the terminal device according to the first relative spatial position information.
It can be understood that the head-mounted display device needs to receive the display data in real time and receive the changed display data sent by the terminal device so as to update the displayed virtual content.
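The real-time receive-and-update behavior can be sketched as a loop that re-renders only when the received display data changes. Function names are hypothetical, and `max_frames` bounds the loop for illustration only:

```python
# Hypothetical HMD-side receive loop: poll display data from the terminal
# device and re-render the virtual content only when the data has changed.
def run_display_loop(receive_display_data, render, max_frames=3):
    last_data = None
    for _ in range(max_frames):
        data = receive_display_data()  # blocking receive from the terminal
        if data != last_data:          # skip redundant re-renders
            render(data)
            last_data = data
```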
The virtual content interaction method provided by the embodiment of the present application is applied to a head-mounted display device. The method includes collecting a marker image containing the marker on the terminal device and transmitting the marker image to the terminal device, enabling the terminal device to obtain the first relative spatial position information between the head-mounted display device and the terminal device, and then displaying the virtual content according to the received display data sent by the terminal device, where the display data is rendered by the terminal device according to the first relative spatial position information. In this way, the user can observe the effect of the virtual content superimposed on the real world, and interaction between the terminal device and the head-mounted display device is realized.
Referring to fig. 22, which shows a schematic structural diagram of a display system provided in an embodiment of the present application, the display system 10 may include: terminal equipment 11 and with terminal equipment 11 communication connection's head mounted display device 12, wherein:
the terminal device 11 is configured to acquire first relative spatial position information of the head mounted display device 12 and the terminal device 11, determine a display position of the virtual content according to the first relative spatial position information, render the virtual content according to the display position, and acquire display data of the virtual content, and transmit the display data to the head mounted display device 12, the display data being used to instruct the head mounted display device 12 to display the virtual content.
The head-mounted display device 12 is configured to receive the display data sent by the terminal device 11 and display the virtual content according to the display data.
Referring to fig. 23, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be an electronic device capable of running an application, such as a smart phone, a tablet computer, an electronic book, a notebook computer, and a head-mounted display device. The terminal device 100 in the present application may include one or more of the following components: a processor 120, a memory 130, and one or more applications, wherein the one or more applications may be stored in the memory 130 and configured to be executed by the one or more processors 120, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 120 may include one or more processing cores. The processor 120 connects various parts within the terminal device 100 using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 130 and invoking data stored in the memory 130. Alternatively, the processor 120 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 120 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It is understood that the modem may not be integrated into the processor 120 but may instead be implemented by a separate communication chip.
The memory 130 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 130 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 130 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The stored data area may store data created by the terminal device 100 in use, and the like.
Referring to fig. 24, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer-readable storage medium 800 has stored therein program code that can be invoked by a processor to perform the methods described in the method embodiments above.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (9)

1. A virtual content interaction method is applied to a terminal device, the terminal device is in communication connection with an external head-mounted display device, the terminal device comprises a marker, the marker is arranged on a shell of the terminal device, and the method comprises the following steps:
receiving a marker image sent by the head-mounted display device, wherein the marker image is obtained when the head-mounted display device collects the marker;
identifying a marker in the marker image, and acquiring first relative spatial position information between the head-mounted display device and the terminal device;
acquiring virtual content and second relative spatial position information of the terminal device, wherein the virtual content comprises display content currently displayed on a screen by the terminal device and extended content corresponding to the display content, the display content is different from the marker, and the second relative spatial position information is determined according to the position of the display content in the virtual content, so that the display content in the virtual content is overlapped and displayed on the terminal device;
determining the display position of the virtual content according to the first relative spatial position information and the second relative spatial position information;
rendering the virtual content according to the display position, and acquiring display data of the virtual content;
transmitting the display data to the head-mounted display device, the display data being used to instruct the head-mounted display device to display the virtual content.
2. The method of claim 1, wherein after the transmitting of the display data to the head-mounted display device, the display data being used to instruct the head-mounted display device to display the virtual content, the method further comprises:
when receiving a control operation, generating a control instruction according to the control operation;
and controlling the display of the virtual content according to the control instruction.
3. The method of claim 2, wherein the received control operations comprise one or more of:
the control area of the terminal equipment detects control operation;
the user gestures collected by the head-mounted display equipment or the terminal equipment are preset gestures;
and the controller connected with the terminal equipment receives the control operation.
4. The method of claim 2, wherein generating control instructions according to the control operations comprises:
and generating a control instruction according to one or more of the detected number of fingers when the control operation is executed, the gesture action when the control operation is executed and the finger sliding track when the control operation is executed.
5. A virtual content interaction method is applied to a head-mounted display device, wherein the head-mounted display device is in communication connection with a terminal device, and comprises the following steps:
collecting a marker image containing a marker, wherein the marker is arranged on a shell of the terminal equipment;
transmitting the marker image to the terminal device, wherein the marker image is used for indicating the terminal device to identify a marker in the marker image, and acquiring first relative spatial position information of the head-mounted display device and the terminal device;
receiving display data sent by the terminal device, displaying virtual content according to the display data, wherein the display data is obtained by the terminal device obtaining second relative spatial position information of the virtual content and the terminal device, determining a display position of the virtual content according to the first relative spatial position information and the second relative spatial position information, and then rendering according to the display position, wherein the virtual content comprises display content currently displayed on a screen by the terminal device and extended content corresponding to the display content, and the second relative spatial position information is determined according to the position of the display content in the virtual content, so that the display content in the virtual content is displayed on the display content on the terminal device in an overlapping manner.
6. A virtual content interaction apparatus, applied to a terminal device, wherein the terminal device is communicatively connected to an external head-mounted display device, the terminal device comprises a marker, and the marker is arranged on a housing of the terminal device, the apparatus comprising:
the position acquisition module is used for receiving a marker image sent by the head-mounted display equipment, wherein the marker image is obtained when the head-mounted display equipment acquires the marker; identifying a marker in the marker image, and acquiring first relative spatial position information between the head-mounted display device and the terminal device;
the position determining module is used for acquiring second relative spatial position information of the virtual content and the terminal equipment; determining the display position of the virtual content according to the first relative spatial position information and the second relative spatial position information, wherein the virtual content comprises display content currently displayed on a screen by the terminal device and extended content corresponding to the display content, and the second relative spatial position information is determined according to the position of the display content in the virtual content, so that the display content in the virtual content is overlapped and displayed on the display content on the terminal device;
the content rendering module is used for rendering the virtual content according to the display position and acquiring display data of the virtual content;
and the data sending module is used for transmitting the display data to the head-mounted display equipment, and the display data is used for indicating the head-mounted display equipment to display the virtual content.
7. A display system, wherein the display system comprises a terminal device and an external head-mounted display device, the terminal device is communicatively connected to the head-mounted display device, the terminal device comprises a marker, and the marker is arranged on a housing of the terminal device, wherein:
the terminal device is configured to receive a marker image sent by the head-mounted display device, wherein the marker image is obtained when the head-mounted display device collects the marker; identify the marker in the marker image, acquire first relative spatial position information between the head-mounted display device and the terminal device, acquire second relative spatial position information between virtual content and the terminal device, determine a display position of the virtual content according to the first relative spatial position information and the second relative spatial position information, render the virtual content according to the display position, acquire display data of the virtual content, and transmit the display data to the head-mounted display device, the display data being used to instruct the head-mounted display device to display the virtual content, wherein the virtual content comprises display content currently displayed on a screen of the terminal device and extended content corresponding to the display content, the display content is different from the marker, and the second relative spatial position information is determined according to the position of the display content in the virtual content, so that the display content in the virtual content is displayed in a manner of overlapping the display content on the terminal device;
and the head-mounted display equipment is used for receiving the display data sent by the terminal equipment and displaying the virtual content according to the display data.
8. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of any one of claims 1-4.
9. A computer-readable storage medium, in which program code is stored, the program code being callable by a processor to perform the method according to any one of claims 1-4 and/or the method according to claim 5.
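The pipeline in claim 7 composes two relative spatial positions: the pose of the terminal device relative to the head-mounted display (recovered from the marker image) and the pose of the virtual content relative to the terminal device. A minimal sketch of that composition step, assuming each relative position is expressed as a 4x4 homogeneous transform; all function names and values below are illustrative and do not appear in the patent:

```python
# Illustrative sketch of the display-position computation described in claim 7.
# Assumes each relative spatial position is a 4x4 homogeneous transform
# (rotation + translation); the names here are hypothetical, not from the patent.

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def translation(x, y, z):
    """Homogeneous transform that only translates."""
    return [[1, 0, 0, x],
            [0, 1, 0, y],
            [0, 0, 1, z],
            [0, 0, 0, 1]]

def display_position(hmd_from_terminal, terminal_from_content):
    """Compose the two relative poses to find where the virtual content
    sits in the head-mounted display's coordinate frame."""
    hmd_from_content = mat_mul(hmd_from_terminal, terminal_from_content)
    # The translation column of the composed transform is the render position.
    return (hmd_from_content[0][3], hmd_from_content[1][3], hmd_from_content[2][3])

# Example: terminal is 0.5 m in front of the HMD; the extended content
# floats 0.1 m above the terminal's screen.
pose = display_position(translation(0.0, 0.0, -0.5), translation(0.0, 0.1, 0.0))
print(pose)  # (0.0, 0.1, -0.5)
```

In practice the first transform would come from a marker-tracking step (e.g. detecting the marker on the terminal's housing in the camera image), and the result would be handed to the renderer before the display data is sent back to the head-mounted display.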
CN201910005562.8A 2019-01-03 2019-01-03 Virtual content interaction method and device, terminal equipment and storage medium Active CN111399630B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910005562.8A CN111399630B (en) 2019-01-03 2019-01-03 Virtual content interaction method and device, terminal equipment and storage medium
PCT/CN2019/130646 WO2020140905A1 (en) 2019-01-03 2019-12-31 Virtual content interaction system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910005562.8A CN111399630B (en) 2019-01-03 2019-01-03 Virtual content interaction method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111399630A CN111399630A (en) 2020-07-10
CN111399630B true CN111399630B (en) 2022-05-31

Family

ID=71428346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910005562.8A Active CN111399630B (en) 2019-01-03 2019-01-03 Virtual content interaction method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111399630B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112965773B * 2021-03-03 2024-05-28 Shining Reality (Wuxi) Technology Co., Ltd. Method, apparatus, device and storage medium for information display

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103460256A (en) * 2011-03-29 2013-12-18 Qualcomm Incorporated Anchoring virtual images to real world surfaces in augmented reality systems
CN104076513A (en) * 2013-03-26 2014-10-01 Seiko Epson Corporation Head-mounted display device, control method of head-mounted display device, and display system
US9767613B1 (en) * 2015-01-23 2017-09-19 Leap Motion, Inc. Systems and method of interacting with a virtual object

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170061700A1 (en) * 2015-02-13 2017-03-02 Julian Michael Urbach Intercommunication between a head mounted display and a real world object
US10601950B2 (en) * 2015-03-01 2020-03-24 ARIS MD, Inc. Reality-augmented morphological procedure

Similar Documents

Publication Publication Date Title
US10698535B2 (en) Interface control system, interface control apparatus, interface control method, and program
CN111766937B (en) Virtual content interaction method and device, terminal equipment and storage medium
CN110456907A (en) Control method, device, terminal device and the storage medium of virtual screen
CN111158469A (en) Visual angle switching method and device, terminal equipment and storage medium
JP6801263B2 (en) Display control program, display control method and display control device
JP6177872B2 (en) I / O device, I / O program, and I / O method
US9933853B2 (en) Display control device, display control program, and display control method
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
JP6250024B2 (en) Calibration apparatus, calibration program, and calibration method
CN111383345B (en) Virtual content display method and device, terminal equipment and storage medium
CN111766936A (en) Virtual content control method and device, terminal equipment and storage medium
WO2014128751A1 (en) Head mount display apparatus, head mount display program, and head mount display method
CN111083463A (en) Virtual content display method and device, terminal equipment and display system
CN111813214B (en) Virtual content processing method and device, terminal equipment and storage medium
JP6250025B2 (en) I / O device, I / O program, and I / O method
CN111563966B (en) Virtual content display method, device, terminal equipment and storage medium
CN111818326B (en) Image processing method, device, system, terminal device and storage medium
CN111913564B (en) Virtual content control method, device, system, terminal equipment and storage medium
CN111913674A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN111399630B (en) Virtual content interaction method and device, terminal equipment and storage medium
CN111913639B (en) Virtual content interaction method, device, system, terminal equipment and storage medium
CN111651031B (en) Virtual content display method and device, terminal equipment and storage medium
CN111913560A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN111399631B (en) Virtual content display method and device, terminal equipment and storage medium
CN111913565B (en) Virtual content control method, device, system, terminal device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant