CN111459263B - Virtual content display method and device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN111459263B
CN111459263B (application CN201910060758.7A)
Authority
CN
China
Prior art keywords
shaking
virtual content
content
interaction
virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910060758.7A
Other languages
Chinese (zh)
Other versions
CN111459263A (en)
Inventor
卢智雄
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Virtual Reality Technology Co Ltd filed Critical Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910060758.7A
Publication of CN111459263A
Application granted
Publication of CN111459263B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/006: Mixed reality

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Graphics (AREA)
  • Computer Hardware Design (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the application discloses a virtual content display method and apparatus, a terminal device, and a storage medium. The display method is applied to a terminal device connected to an interaction device and comprises the following steps: acquiring position and posture information of the interaction device relative to the terminal device; displaying the virtual content according to the position and posture information; detecting a motion state of the interaction device according to change information of at least one of the position and the posture of the interaction device; acquiring shaking parameters of the interaction device when the interaction device is in a shaking state; and controlling the display of the virtual content according to the shaking parameters, so that the displayed virtual content corresponds to the shaking state of the interaction device. The method better realizes interaction with virtual content.

Description

Virtual content display method and device, terminal equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a method, an apparatus, a terminal device, and a storage medium for displaying virtual content.
Background
In recent years, with the advancement of technology, augmented reality (Augmented Reality, AR) has become a research hotspot at home and abroad. AR is a technology that augments a user's perception of the real world with information provided by a computer system: it superimposes computer-generated virtual objects, scenes, or content objects such as system prompt information onto the real scene, thereby enhancing or modifying the perception of the real-world environment or of data representing that environment. In augmented reality display technology, interaction with the displayed content is a key issue affecting the technology's application.
Disclosure of Invention
The embodiments of the present application provide a virtual content display method and apparatus, a terminal device, and a storage medium, so as to better realize interaction with displayed content.
In a first aspect, an embodiment of the present application provides a method for displaying virtual content, applied to a terminal device connected to an interaction device. The method includes: acquiring position and posture information of the interaction device relative to the terminal device; displaying the virtual content according to the position and posture information; detecting a motion state of the interaction device according to change information of at least one of the position and the posture of the interaction device; acquiring shaking parameters of the interaction device when the interaction device is in a shaking state; and controlling the display of the virtual content according to the shaking parameters, so that the displayed virtual content corresponds to the shaking state of the interaction device.
In a second aspect, an embodiment of the present application provides a virtual content display apparatus, applied to a terminal device connected to an interaction device. The apparatus includes a position acquisition module, a content display module, a state detection module, a parameter acquisition module, and a content control module. The position acquisition module is used for acquiring position and posture information of the interaction device relative to the terminal device; the content display module is used for displaying virtual content according to the position and posture information; the state detection module is used for detecting a motion state of the interaction device according to change information of at least one of the position and the posture of the interaction device; the parameter acquisition module is used for acquiring shaking parameters of the interaction device when the interaction device is in a shaking state; and the content control module is used for controlling the display of the virtual content according to the shaking parameters, so that the displayed virtual content corresponds to the shaking state of the interaction device.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, the one or more application programs configured to perform the method of displaying virtual content provided in the first aspect.
In a fourth aspect, an embodiment of the present application provides a storage medium having stored therein program code that is callable by a processor to perform the method for displaying virtual content provided in the first aspect.
The scheme provided by the application is applied to a terminal device connected to an interaction device. Virtual content is displayed according to the position and posture information of the interaction device relative to the terminal device, so that the user can observe the effect of the virtual content superimposed on the real world. When the interaction device is determined to be in a shaking state according to change information of at least one of its position and posture, the shaking parameters of the interaction device are acquired and the display of the virtual content is controlled according to those parameters, better realizing interaction with the displayed virtual content and improving interactivity.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the description of the embodiments will be briefly described below, it being obvious that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic diagram of an application scenario suitable for use in an embodiment of the present application.
Fig. 2 shows a schematic structural diagram of an interaction device according to an embodiment of the present application.
Fig. 3 shows a flow chart of a method of displaying virtual content according to an embodiment of the application.
Fig. 4 shows a display effect diagram according to an embodiment of the application.
Fig. 5 shows a flowchart of a method for displaying virtual contents according to another embodiment of the present application.
Fig. 6 shows a flowchart of a method of displaying virtual content according to still another embodiment of the present application.
Fig. 7 shows a display effect diagram according to still another embodiment of the present application.
Fig. 8 shows a display effect diagram according to still another embodiment of the present application.
Fig. 9 shows a display effect diagram according to still another embodiment of the present application.
Fig. 10 shows a display effect diagram according to still another embodiment of the present application.
Fig. 11 shows a display effect diagram according to still another embodiment of the present application.
Fig. 12 shows a display effect diagram according to still another embodiment of the present application.
Fig. 13 shows a display effect diagram according to still another embodiment of the present application.
Fig. 14 shows a flowchart of a method of displaying virtual content according to still another embodiment of the present application.
Fig. 15 shows a display effect diagram according to still another embodiment of the present application.
Fig. 16 shows a display effect diagram according to still another embodiment of the present application.
Fig. 17 shows a block diagram of a display device of virtual content according to an embodiment of the present application.
Fig. 18 is a block diagram of a terminal device for performing a display method of virtual contents according to an embodiment of the present application.
Fig. 19 is a storage unit for storing or carrying program code for implementing a display method of virtual contents according to an embodiment of the present application.
Detailed Description
In order to enable those skilled in the art to better understand the present application, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings.
With the development of display technology, augmented reality (Augmented Reality, AR) is gradually entering people's daily lives. AR technology superimposes computer-generated virtual objects, scenes, or content objects such as system prompts onto a real scene to enhance or modify the perception of the real-world environment or of data representing it. At present, virtual images can be displayed at corresponding positions on the display screen of a mobile terminal or the display component of a head-mounted display, so that the virtual image and the real scene are displayed in a superimposed manner and the user enjoys a science-fiction-like viewing experience.
The inventors found that in conventional AR display technology, interaction with virtual content is usually realized through an additional controller, or by changing the orientation of a device such as a head-mounted display by turning the head, which provides poor interactivity. In view of these problems, the inventors propose the virtual content display method, apparatus, terminal device, and storage medium of the embodiments of the present application, to better realize interaction with the displayed virtual content.
The application scenario of the virtual content display method provided by the embodiment of the application is described below.
Referring to fig. 1, an application scenario diagram of a virtual content display method provided by an embodiment of the present application is shown, where the application scenario includes a display system 10, and the display system 10 includes: a terminal device 100 and an interaction device 200, wherein the terminal device 100 is connected with the interaction device 200.
In the embodiment of the present application, the terminal device 100 may be a head-mounted display device, or may be a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, the head-mounted display device may be an integrated head-mounted display device. The terminal device 100 may be an intelligent terminal such as a mobile phone connected to an external/plug-in head-mounted display device, that is, the terminal device 100 may be used as a processing and storage device of the head-mounted display device, and plugged into or connected to the external head-mounted display device, so as to display virtual content in the head-mounted display device.
In an embodiment of the present application, the interaction device 200 may be a polyhedral marker structure, which may include a plurality of faces, edges, and vertices. The interaction device 200 includes a plurality of marking surfaces, of which at least two non-coplanar marking surfaces are provided with markers. In some embodiments, each marker may include at least one sub-marker having one or more feature points.
In the embodiment of the present application, the specific form of the interaction device 200 is not limited; it may be a polyhedron combining planar and curved surfaces, or a polyhedron composed of curved surfaces. In some implementations, the interaction device 200 may be any one or a combination of the following structures: pyramids, prisms, frustums, and other polyhedrons. It may also be a sphere, since a sphere can be understood as a polyhedron formed from an infinite number of facets.
In the embodiment of the present application, images of the above-described markers are stored in the terminal device 100. Each marker may include at least one sub-marker having one or more feature points. When a marker is within the field of view of the terminal device 100, the terminal device 100 may acquire an image containing that marker and identify it, obtaining spatial position information such as the position and orientation of the marker relative to the terminal device 100, as well as identification results such as the marker's identity information. The terminal device 100 can thereby locate and track the interaction device 200 according to the markers, and may display the corresponding virtual content based on information such as the spatial position of the interaction device 200 relative to the terminal device 100. It should be understood that the specific form of the interaction device 200 and the markers is not limited in this embodiment of the present application; they need only be identifiable and trackable by the terminal device 100.
In some embodiments, the markers on the interaction device 200 may rotate and/or displace within the field of view of the terminal device 100. The terminal device 100 can then identify the marker information on the interaction device 200 in real time, obtain the spatial position information of the interaction device 200, and display the corresponding virtual content according to that spatial position information.
Referring to fig. 2, fig. 2 shows a schematic diagram of an interaction device according to an embodiment of the present application. The interaction device 200 is a twenty-six-faced polyhedron comprising eighteen square surfaces and eight triangular surfaces, where the eighteen square surfaces are marking surfaces, each provided with a marker whose pattern differs from that on every other surface. As one example, the interaction device 200 has a first surface 220 provided with a first marker 211 and a second surface 230 provided with a second marker 212 distinct from the first marker 211. The terminal device 100 recognizes either or both of the first marker 211 and the second marker 212 to acquire the spatial position information of the interaction device 200.
In addition, the terminal device 100 may recognize changes in the spatial position information of the interaction device 200 according to the markers described above, thereby detecting a motion state (e.g., a shaking state, a moving state, etc.) of the interaction device 200. The terminal device 100 may also detect the motion state (e.g., a shaking state) of the interaction device 200 based on the six-degree-of-freedom information detected by an inertial measurement unit (Inertial Measurement Unit, IMU) of the interaction device 200.
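As a rough illustration of the IMU-based detection just mentioned, the following sketch classifies a window of accelerometer samples as shaking when both the oscillation frequency and the amplitude exceed thresholds. The function names, thresholds, and the single-axis simplification are illustrative assumptions, not the patent's actual algorithm.

```python
import math

def count_sign_reversals(samples):
    """Count sign changes in a 1-D acceleration sequence."""
    reversals = 0
    for prev, cur in zip(samples, samples[1:]):
        if prev * cur < 0:
            reversals += 1
    return reversals

def is_shaking(accel_x, sample_rate_hz, min_freq_hz=2.0, min_amp=0.5):
    """Classify a window of single-axis acceleration as shaking when the
    oscillation frequency and amplitude both exceed thresholds."""
    if not accel_x:
        return False
    duration_s = len(accel_x) / sample_rate_hz
    # Each full oscillation produces roughly two sign reversals.
    freq_hz = count_sign_reversals(accel_x) / (2 * duration_s)
    amplitude = (max(accel_x) - min(accel_x)) / 2
    return freq_hz >= min_freq_hz and amplitude >= min_amp

# A 4 Hz oscillation sampled at 40 Hz for one second:
window = [math.sin(2 * math.pi * 4 * t / 40) for t in range(40)]
print(is_shaking(window, 40))        # oscillating window: shaking
print(is_shaking([0.01] * 40, 40))   # near-constant window: not shaking
```

A production system would typically filter out gravity and sensor noise before applying such a test; that step is omitted here for brevity.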
Based on the above display system, an embodiment of the present application provides a virtual content display method applied to the terminal device of the display system. The method displays virtual content according to the position and posture information of the interaction device relative to the terminal device and, when the interaction device is determined to be in a shaking state, controls the display of the virtual content according to the shaking parameters of the interaction device, thereby better realizing interaction with the virtual content. A specific method for displaying virtual content is described below.
Referring to fig. 3, an embodiment of the present application provides a method for displaying virtual content, which may be applied to the terminal device, where the method for displaying virtual content may include:
step S110: and acquiring the position and posture information of the interaction equipment relative to the terminal equipment.
In the embodiment of the application, the terminal device can acquire position and posture information of the interaction device relative to the terminal device, so as to display virtual content according to that information. The posture information may include the orientation and rotation angle of the interaction device relative to the terminal device.
In the embodiment of the present application, the interaction device is a polyhedral marker structure, which may be tetrahedral, hexahedral, twenty-six-faced, or have any other number of faces; these are not listed here one by one. The polyhedral structure comprises a plurality of marking surfaces, at least one of which is provided with a marker.
Further, the marker on the interactive device can be identified by the terminal device, and the position and posture information of the interactive device relative to the terminal device can be obtained.
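The recognition step above can be illustrated with a minimal sketch: given one marker's estimated position in the camera frame and a pre-stored offset of that marker from the device center, the device's position follows. All names, offset values, and the translation-only model are hypothetical simplifications; a real system would estimate a full 6-DoF pose from the marker's feature points, for example with a PnP solver.

```python
# Pre-stored layout: offset of each marker's center from the device center,
# expressed in meters. (Hypothetical identifiers and values.)
MARKER_OFFSETS = {
    "marker_A": (0.0, 0.0, 0.03),   # on the front face
    "marker_B": (0.03, 0.0, 0.0),   # on the right face
}

def device_position(marker_id, marker_pos_in_camera):
    """Given a marker's estimated position in the camera frame, return the
    device center's position by subtracting the marker's known offset.
    (Assumes the device is axis-aligned with the camera for brevity.)"""
    off = MARKER_OFFSETS[marker_id]
    return tuple(p - o for p, o in zip(marker_pos_in_camera, off))

# Marker A observed 0.5 m in front of the camera:
pos = device_position("marker_A", (0.0, 0.0, 0.5))
print(pos)  # ~ (0.0, 0.0, 0.47)
```

When several markers are visible, their individual estimates could be averaged or fused for robustness, matching the multi-marker case discussed below in the text.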
In some embodiments, the above-described markers may include at least one sub-marker, and a sub-marker may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points, the shape of which is not limited: it may be a dot, a ring, a triangle, or another shape. In addition, the distribution rules of the sub-markers differ between markers, so each marker can carry different identity information. The terminal device may acquire the identity information corresponding to a marker by identifying the sub-markers it contains; the identity information may be, but is not limited to, a code that uniquely identifies the marker.
As an embodiment, the outline of the marker may be rectangular, although other shapes are possible and are not limited here; the rectangular area and the plurality of sub-markers within it form one marker. The specific shape, style, size, color, number of feature points, and distribution of the marker are not limited in this embodiment; the marker need only be identifiable and trackable by the terminal device.
In the embodiment of the application, the interaction device can be arranged at a position within the visual field of the terminal device. When the terminal device needs virtual content to be displayed, the image acquisition device can acquire images of the interaction device and at least one marker of the interaction device, which are located in the visual field of the terminal device, namely images containing the interaction device and at least one marker of the interaction device. The field of view of the terminal device refers to the field of view of an image acquisition device of the terminal device, and the field of view of the image acquisition device can be determined by the size of the field of view angle.
When actually acquiring an image containing the interaction device and at least one of its markers, the spatial position of the interaction device can be adjusted so that the device and at least one of its markers are within the field of view of the image acquisition device, allowing the terminal device to collect and identify such an image.
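The field-of-view test implied by the two paragraphs above can be sketched as a simple angular check on a point (such as a marker center) in the camera frame. The half-angle values and the symmetric-frustum model are illustrative assumptions.

```python
import math

def in_field_of_view(point_cam, h_fov_deg=60.0, v_fov_deg=45.0):
    """Return True if point_cam = (x, y, z), given in the camera frame with
    z pointing forward, lies inside the camera's angular field of view."""
    x, y, z = point_cam
    if z <= 0:                        # behind the camera
        return False
    h_angle = math.degrees(math.atan2(abs(x), z))
    v_angle = math.degrees(math.atan2(abs(y), z))
    return h_angle <= h_fov_deg / 2 and v_angle <= v_fov_deg / 2

print(in_field_of_view((0.1, 0.0, 1.0)))   # slightly off-center: True
print(in_field_of_view((2.0, 0.0, 1.0)))   # far to the side: False
```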
In some embodiments, the interactive device may include at least two different markers that are not coplanar. When an image containing the marker on a certain surface of the interactive device needs to be acquired, the orientation and the rotation angle of the interactive device relative to the terminal device can be changed by rotating the interactive device, so that the terminal device can acquire the marker on the certain surface of the interactive device. Likewise, by rotating the interactive device, markers on multiple sides of the interactive device may also be collected.
After the terminal device collects an image containing the interaction device and at least one of its markers, it can identify the image to acquire the position and posture information of the interaction device relative to the terminal device.
It will be appreciated that, since the interaction device has at least one marking surface provided with a marker, the number of markers in the image may be one or more. As one approach, when a single marker appears in the image, the position and posture information between the interaction device and the terminal device may be obtained by identifying the spatial position of that marker and using the pre-stored positions of that marker and the other markers of the interaction device. Alternatively, when multiple markers appear in the image, the position and posture information of the interaction device relative to the terminal device may be obtained from the identified spatial position of each marker. Naturally, collecting an image containing multiple markers on the interaction device allows the position and posture information to be acquired more accurately.
Step S120: and displaying the virtual content according to the position and posture information.
In some embodiments, after the position and posture information of the interaction device relative to the terminal device are obtained, a display position of the virtual content to be displayed may be obtained according to the position and posture information, and the virtual content to be displayed may be displayed. The display position may be a position of the virtual content that the user can see through the terminal device, that is, a rendering coordinate of the virtual content in the virtual space.
Further, the terminal device may obtain the display position of the virtual content according to the position relative relation between the virtual content to be displayed and the interactive device, and the position and posture information of the interactive device relative to the terminal device. It can be understood that when the virtual content is superimposed on the real world where the interactive device is located, the spatial coordinates of the interactive device in the real space may be obtained, where the spatial coordinates may be used to represent a positional relationship between the interactive device and the image capturing device on the head-mounted display device, or may be used to represent a positional relationship between the interactive device and the terminal device.
After the position and posture information of the interaction device relative to the terminal device are acquired, the spatial coordinates of the interaction device in real space can be obtained and converted into virtual coordinates in the virtual space. The rendering coordinates of the virtual content in the virtual space are then obtained according to the relative positional relationship between the virtual content to be displayed and the interaction device, giving the display position at which the virtual content is displayed.
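The conversion chain described above (real-space coordinates, then virtual-space coordinates, then rendering coordinates) can be sketched as follows. The identity mapping between the two spaces and the uniform scale factor are illustrative assumptions; a full implementation would use 4x4 homogeneous transforms including rotation.

```python
def to_virtual(real_pos, scale=1.0):
    """Map a real-space position into virtual-space coordinates.
    Here the two spaces are assumed aligned up to a uniform scale."""
    return tuple(scale * c for c in real_pos)

def rendering_coordinates(device_real_pos, content_offset, scale=1.0):
    """Rendering position = device position mapped into virtual space,
    plus the content's pre-defined offset relative to the device."""
    vx, vy, vz = to_virtual(device_real_pos, scale)
    ox, oy, oz = content_offset
    return (vx + ox, vy + oy, vz + oz)

# Content floats 0.1 units above a device that sits 0.5 m ahead:
print(rendering_coordinates((0.0, 0.0, 0.5), (0.0, 0.1, 0.0)))  # (0.0, 0.1, 0.5)
```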
After the display position of the virtual content is obtained, the virtual content can be rendered according to its data and the obtained display position. The data of the virtual content may include model data, that is, the data used to render the virtual content; for example, the model data may include the color data, vertex coordinate data, and contour data used to build the model corresponding to the virtual content. The data may be stored in the terminal device or acquired from another electronic device such as the interaction device or a server. In some embodiments, the data of the virtual content may be obtained according to the identity information of at least one marker of the interaction device; that is, the data of the corresponding virtual content may be read through the marker's identity information, so that the displayed virtual content corresponds to the identity information of a certain marker of the interaction device.
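Reading content data by marker identity, as described above, amounts to a registry lookup. The identifiers and model entries below are invented placeholders, not data from the patent.

```python
# Hypothetical registry mapping marker identity codes to model data.
CONTENT_BY_MARKER_ID = {
    "id_001": {"model": "crystal_ball", "color": (0.6, 0.8, 1.0)},
    "id_002": {"model": "virtual_menu", "color": (1.0, 1.0, 1.0)},
}

def content_for(marker_id):
    """Return the model data registered for a marker identity, or None
    when the identity is unknown."""
    return CONTENT_BY_MARKER_ID.get(marker_id)

print(content_for("id_001")["model"])  # crystal_ball
print(content_for("id_999"))           # None
```

In practice the registry could live on a server, matching the text's note that content data may be acquired from other electronic devices.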
In this way the virtual content is displayed in the virtual space, and the user sees the virtual content superimposed on the real world through the terminal device, realizing an augmented reality display effect and improving the display effect of the virtual content. For example, as shown in fig. 4, through the terminal device the user can see the interaction device 200 in the real world, and can also see the virtual content, a crystal ball 30, displayed superimposed at the corresponding position of the interaction device 200 in the virtual space.
Step S130: detecting a motion state of the interaction device according to change information of at least one of the position and the posture of the interaction device.
In the embodiment of the application, after the virtual content is displayed according to the position and posture information of the interaction device relative to the terminal device, the terminal device can detect the motion state of the interaction device, so as to control the display of the virtual content when that motion state is a target state. The terminal device may detect the motion state according to change information of at least one of the position and the posture of the interaction device. It can be understood that the change information of the position refers to changes in the position of the interaction device relative to the terminal device, and the change information of the posture refers to changes in the posture of the interaction device relative to the terminal device. The terminal device can obtain this change information by identifying and tracking the markers on the interaction device.
In some embodiments, the motion state of the interaction device may be detected according to the change information of its position, according to the change information of its posture, or according to both together.
When detecting the motion state in this way, the terminal device may determine that the interaction device is in motion upon detecting a change in its position and/or posture. According to the specific changes of the position and/or posture, the specific motion state of the interaction device can be determined, such as uniform motion, shaking, acceleration, or deceleration. That is, motion parameters such as the movement speed and movement direction of the interaction device can be determined from those changes, and the specific motion state can then be determined from the motion parameters. Of course, the specific manner of detecting the motion state of the interaction device is not limited here.
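One possible way to classify the motion state from tracked positions, in the spirit of the paragraphs above, is sketched below. The thresholds and the reversal-counting rule are illustrative assumptions, not the patent's actual criteria.

```python
def classify_motion(positions):
    """positions: list of (x, y, z) samples taken at a fixed interval.
    Returns 'static', 'shaking', or 'moving'."""
    if len(positions) < 3:
        return "static"
    # Per-step displacement along x, used as a 1-D proxy for the motion.
    steps = [b[0] - a[0] for a, b in zip(positions, positions[1:])]
    if all(abs(s) < 1e-4 for s in steps):
        return "static"
    reversals = sum(1 for p, c in zip(steps, steps[1:]) if p * c < 0)
    # Frequent direction reversals suggest shaking rather than translation.
    return "shaking" if reversals >= 2 else "moving"

still = [(0.0, 0.0, 0.5)] * 5
back_and_forth = [(0.0, 0, 0), (0.05, 0, 0), (0.0, 0, 0),
                  (0.05, 0, 0), (0.0, 0, 0)]
drift = [(0.01 * i, 0, 0) for i in range(5)]
print(classify_motion(still))           # static
print(classify_motion(back_and_forth))  # shaking
print(classify_motion(drift))           # moving
```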
Step S140: and when the interaction equipment is in a shaking state, obtaining shaking parameters of the interaction equipment.
In the embodiment of the application, by detecting the motion state of the interaction equipment, when the interaction equipment is determined to be in a shaking state, the triggering condition of the terminal equipment for controlling the display of the virtual content can be triggered, and the display of the virtual content can be controlled. The shaking state refers to shaking movement of the interaction device according to certain movement parameters such as movement frequency, movement amplitude, movement direction and the like, for example, the interaction device performs reciprocating movement according to certain frequency within a certain distance in the horizontal direction.
Further, when the interaction device is detected to be in the shaking state, the shaking parameters of the interaction device can be acquired, so that the terminal device controls the display of the virtual content according to the shaking parameters, making the control of the virtual content match the shaking parameters of the interaction device. The shaking parameters may include at least one of a shaking frequency, a shaking direction, and a shaking amplitude. The shaking frequency may refer to the number of times the interaction device shakes within a certain time, for example, the number of shakes within 1 s; the shaking direction refers to the moving direction of the interaction device during shaking; the shaking amplitude refers to the range of position change, the range of posture-angle change, and the like, when the interaction device shakes.
In the embodiment of the present application, the shaking parameters may be determined according to the change information of the position and/or posture of the interaction device over a certain period of time, for example, over a set period of time before the interaction device is detected to be in a shaking state. Of course, the specific manner of acquiring the shaking parameters of the interaction device is not limited.
Step S150: controlling the display of the virtual content according to the shaking parameters, so that the displayed virtual content corresponds to the shaking state of the interaction device.
After the terminal device acquires the shaking parameters of the interaction device, it can control the display of the virtual content according to the shaking parameters, so as to correspondingly control the displayed virtual content through the shaking of the interaction device.
In the embodiment of the present application, different shaking parameters correspond to different control effects on the virtual content, and these control effects can make the virtual content display different effects. For example, the moving direction of the virtual content may be controlled according to the shaking direction, so that the moving direction of the virtual content remains the same as the shaking direction; the speed at which the virtual content is updated may be controlled according to the shaking frequency, so that the update speed keeps pace with the shaking frequency; different display effects of the virtual content may be triggered according to the shaking amplitude; and a shaking effect may be displayed on the virtual content according to the shaking direction, shaking frequency, shaking amplitude, and the like. Of course, the control content of the virtual content corresponding to a specific shaking parameter is not limited in the embodiment of the present application.
The virtual content display method provided by the embodiment of the present application is applied to the terminal device. The virtual content is displayed according to the position and posture information of the interaction device relative to the terminal device, so that a user can observe the effect of the virtual content superimposed on the real world. When the interaction device is determined to be in a shaking state according to at least one of the change information of the position and the posture of the interaction device, the shaking parameters of the interaction device are acquired, and the display of the virtual content is controlled according to the shaking parameters. The virtual content is thus controlled by the shaking of the interaction device, achieving good interaction with the displayed virtual content and improving the interactivity of virtual content display.
Referring to fig. 5, another method for displaying virtual content is provided in an embodiment of the present application, which may be applied to the terminal device, and the method for displaying virtual content may include:
Step S210: acquiring the position and posture information of the interaction device relative to the terminal device.
Step S220: displaying the virtual content according to the position and posture information.
In the embodiment of the present application, the step S210 and the step S220 may refer to the content of the above embodiment, and are not described herein.
Step S230: judging whether the change frequency of the position and/or posture of the interaction device within a specified duration is greater than a frequency threshold.
In the embodiment of the present application, after the terminal device displays the virtual content according to the position and posture information of the interaction device relative to the terminal device, the motion state of the interaction device can be detected according to at least one of the change information of the position and the posture of the interaction device, so as to determine whether the interaction device is in a shaking state.
In some embodiments, whether the position and posture of the interaction device change at a certain frequency may be determined from the acquired change information of the position and posture of the interaction device. When the position and posture are determined to change at a certain frequency, that frequency can be obtained and used as the change frequency of the position and posture of the interaction device.
Further, it may be determined whether the change frequency of the position of the interaction device is greater than the frequency threshold, so as to determine whether the interaction device is in a shaking state. It will be appreciated that when the interaction device is in a shaking state, it moves at a frequency greater than the frequency threshold. Thus, when the change frequency of the position of the interaction device is greater than the frequency threshold, it may be determined that the interaction device is in a shaking state; when the change frequency of the position is not greater than the frequency threshold, the interaction device is not in a shaking state.
Judging whether the change frequency of the posture of the interaction device is greater than the frequency threshold is similar to judging whether the change frequency of the position is greater than the frequency threshold: when the change frequency of the posture of the interaction device is greater than the frequency threshold, the interaction device is in a shaking state, and when it is not greater than the frequency threshold, the interaction device is not in a shaking state.
In addition, the shaking state of the interaction device can be determined by combining the change frequency of the position and the change frequency of the posture of the interaction device, that is, the interaction device is determined to be in a shaking state when both the change frequency of the position and the change frequency of the posture are greater than the frequency threshold.
In the embodiment of the present application, the specific value of the frequency threshold may not be limited, and for example, the frequency threshold may be 1 time per second, 2 times per second, 3 times per second, or the like.
Step S240: when the change frequency of the position and/or posture is greater than the frequency threshold, determining that the interaction device is in a shaking state.
In the embodiment of the present application, when it is detected in step S230 that the frequency of the change in the position and/or posture of the interaction device is greater than the frequency threshold, it may be determined that the interaction device is in a shaking state.
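The frequency judgment of steps S230 and S240 can be sketched as follows, assuming one-dimensional position (or posture-angle) samples over the specified duration; counting velocity-sign reversals is one possible way to measure the back-and-forth frequency, and all names are illustrative:

```python
def count_reversals(samples):
    """Count sign reversals of the per-interval velocity in a 1-D sequence."""
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    reversals = 0
    prev_sign = 0
    for d in deltas:
        sign = (d > 0) - (d < 0)
        if sign != 0 and prev_sign != 0 and sign != prev_sign:
            reversals += 1
        if sign != 0:
            prev_sign = sign
    return reversals

def is_shaking(samples, duration_s, freq_threshold_hz):
    """The device is taken to be shaking when its back-and-forth frequency
    over the window exceeds the frequency threshold (e.g. 2 per second)."""
    # Two reversals make one full back-and-forth cycle.
    freq = (count_reversals(samples) / 2) / duration_s
    return freq > freq_threshold_hz
```

The same check can be applied to position samples, posture samples, or both, matching the combinations described above.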
In some embodiments, when it is determined that the change frequency of the position and/or posture of the interaction device is greater than the frequency threshold, it may be further determined whether the change range of the position of the interaction device is within a certain range, whether the change direction of the position is a preset direction, and whether the change range of the posture (for example, the change range of the posture angle) is within a certain range. When the change of the position and/or posture of the interaction device satisfies at least one of the above conditions, it may be determined that the interaction device is in a shaking state, so that the shaking state can be detected more accurately.
Step S250: when the interaction device is in a shaking state, acquiring the posture parameters of the interaction device within a preset time period, and determining the variation range of the posture parameters of the interaction device.
When the interaction device is detected to be in a shaking state, the shaking parameters of the interaction device can be acquired, so that the terminal device can control the display of the virtual content according to the shaking parameters, making the control of the virtual content match the shaking parameters of the interaction device.
In the embodiment of the present application, the posture parameters of the interaction device within a preset time period can be acquired, and the variation range of the posture parameters can be determined, so as to determine the shaking parameters of the interaction device. The preset time period may be a period of specified duration after the interaction device is detected to be in a shaking state, for example, a period of 2 s, 3 s, or 5 s after the time point at which the interaction device is detected to be in the shaking state.
In some embodiments, acquiring the posture parameters of the interaction device within the preset time period includes: acquiring, within the preset time period, a marker image containing at least one marker provided on the interaction device, and obtaining the posture parameters of the interaction device according to the marker image. The posture parameters may include parameters such as the posture angle, rotation direction, angular velocity, and acceleration of the interaction device; the specific posture parameters are not limited in the embodiment of the present application.
It can be understood that the terminal device can identify the marker on the interaction device in real time within the preset time period, so as to obtain real-time spatial position information of the interaction device within that period, where the spatial position information includes the posture parameters, the position, and other information of the interaction device; the posture parameters of the interaction device within the preset time period can thus be obtained. The manner in which the terminal device identifies the marker on the interaction device may refer to the content of the foregoing embodiments, and is not repeated here.
In some embodiments, acquiring the posture parameters of the interaction device within the preset time period includes: receiving the posture parameters detected and sent by the interaction device within the preset time period.
It will be appreciated that the interaction device may include an inertial measurement unit (IMU), which can detect six-degrees-of-freedom information of the interaction device. The six degrees of freedom may include the translational and rotational degrees of freedom of the interaction device along three orthogonal coordinate axes (the X, Y, and Z axes) in space, and may constitute the posture parameters of the interaction device. Therefore, the posture parameters of the interaction device within the preset time period acquired by the terminal device may be the posture parameters detected by the IMU within the preset time period and sent by the interaction device.
Of course, in the embodiment of the present application, the specific manner of acquiring the gesture parameters of the interaction device may not be limited.
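A minimal sketch of how the IMU-reported six-degrees-of-freedom samples described above might be represented and turned into posture-change rates; the packet layout, field names, and units are assumptions for illustration, not part of the embodiment:

```python
from dataclasses import dataclass

@dataclass
class PosturePacket:
    """Hypothetical six-degrees-of-freedom sample from the device's IMU:
    translation along and rotation about the X, Y, and Z axes."""
    tx: float  # translation along X
    ty: float  # translation along Y
    tz: float  # translation along Z
    rx: float  # rotation about X, in degrees
    ry: float  # rotation about Y, in degrees
    rz: float  # rotation about Z, in degrees

def angular_speed(packets, dt):
    """Approximate angular speed (degrees per second) about each axis
    from consecutive samples taken dt seconds apart."""
    return [
        ((b.rx - a.rx) / dt, (b.ry - a.ry) / dt, (b.rz - a.rz) / dt)
        for a, b in zip(packets, packets[1:])
    ]
```

Marker-image tracking would fill the same structure from visual pose estimation instead of IMU readings.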
After the posture parameters of the interaction device within the preset time period are acquired, the variation range of the posture parameters can be determined. For example, the variation range of the posture angle (such as a range of 20° to 170°), the variation range of the posture direction (that is, the posture of the interaction device changing from one orientation to another), and the change of the marker surface (from one marker surface to another) can be determined. Of course, the variation range of the acceleration and the like may also be determined, and the specific posture parameters whose variation ranges are determined are not limited.
Step S260: determining the shaking parameters of the interaction device based on the variation range of the posture parameters.
After the variation range of the posture parameters is obtained, the terminal device can determine the shaking parameters of the interaction device accordingly. In some embodiments, the variation range of the posture angle may be used as the shaking amplitude of the interaction device, and the shaking direction may be determined according to the variation range of the posture direction. In addition, the shaking frequency of the interaction device can be determined according to the number of posture changes within the preset time period, so that shaking parameters such as the shaking frequency, shaking amplitude, and shaking direction of the interaction device can be obtained. Of course, the specific manner of determining the shaking parameters is not limited in the embodiment of the present application; the posture parameters of the interaction device may also be used directly as its shaking parameters.
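The derivation just described (angle range as amplitude, reversal count as frequency, overall drift as direction) can be sketched as follows for posture angles about a single axis; the dictionary keys and the "positive"/"negative" direction labels are illustrative assumptions:

```python
def shake_parameters(angles, window_s):
    """Derive shaking amplitude, frequency, and direction from posture
    angles (degrees about one axis) sampled over window_s seconds."""
    amplitude = max(angles) - min(angles)  # posture-angle variation range
    # One shake cycle per pair of velocity-sign reversals.
    deltas = [b - a for a, b in zip(angles, angles[1:])]
    reversals = sum(1 for a, b in zip(deltas, deltas[1:]) if a * b < 0)
    frequency = (reversals / 2) / window_s
    direction = "positive" if angles[-1] >= angles[0] else "negative"
    return {"amplitude": amplitude, "frequency": frequency,
            "direction": direction}
```

In practice this would run per axis, with the dominant axis giving the shaking direction.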
Step S270: generating a control instruction corresponding to the shaking parameters according to the correspondence between shaking parameters and control instructions, and controlling the display of the virtual content according to the control instruction, so that the displayed virtual content matches the position and/or posture change of the physical object.
After the terminal device acquires the shaking parameters of the interaction device, it can control the display of the virtual content according to the shaking parameters, so as to correspondingly control the displayed virtual content through the shaking of the interaction device.
In the embodiment of the present application, the terminal device can generate a control instruction corresponding to the shaking parameters and control the virtual content according to that instruction. Specifically, the correspondence between shaking parameters and control instructions is pre-stored in the terminal device; the correspondence may be set by the user, may be a factory default of the terminal device, or may be acquired by the terminal device from a server.
After a control instruction is generated according to the acquired shaking parameters, the display of the virtual content can be controlled according to the control instruction. Different control instructions correspond to different control effects, and these control effects can make the virtual content display different effects. For example, a control instruction generated at a higher shaking frequency makes the virtual content update faster, and control instructions generated at different shaking amplitudes add different rendering effects to the virtual content. Of course, the above control effects are merely examples, and the control effect of the virtual content corresponding to a specific control instruction is not limited in the embodiment of the present application.
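The pre-stored correspondence between shaking parameters and control instructions could be modeled as a small rule table mapped over the acquired parameters; the rules, thresholds, and instruction names below are purely hypothetical placeholders:

```python
# Hypothetical correspondence table: each rule maps a predicate over the
# shaking parameters to a named control instruction.
CONTROL_RULES = [
    (lambda p: p["frequency"] >= 3.0, "fast_update"),
    (lambda p: p["amplitude"] >= 45.0, "strong_render_effect"),
    (lambda p: p["direction"] == "horizontal", "move_horizontal"),
]

def control_instruction(params):
    """Return the first control instruction whose rule matches, standing in
    for the terminal device's stored parameter-to-instruction correspondence."""
    for predicate, instruction in CONTROL_RULES:
        if predicate(params):
            return instruction
    return "no_op"
```

A user-settable or server-provided correspondence would simply replace the contents of the table.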
The virtual content display method provided by the embodiment of the present application is applied to the terminal device. The virtual content is displayed according to the position and posture information of the interaction device relative to the terminal device, so that a user can observe the effect of the virtual content superimposed on the real world. When the interaction device is determined to be in a shaking state by comparing the change frequency of its position and/or posture with the frequency threshold, the shaking parameters of the interaction device are determined from the variation range of its posture, a control instruction is generated according to the shaking parameters, and the display of the virtual content is controlled according to the control instruction. The virtual content is thus controlled by the shaking of the interaction device, achieving good interaction with the displayed virtual content and improving the interactivity of virtual content display.
Referring to fig. 6, an embodiment of the present application provides a method for displaying virtual content, which may be applied to the terminal device, where the method for displaying virtual content may include:
Step S310: acquiring the position and posture information of the interaction device relative to the terminal device.
Step S320: displaying the virtual content according to the position and posture information.
Step S330: detecting the motion state of the interaction device according to at least one of the change information of the position and the posture of the interaction device.
Step S340: when the interaction device is in a shaking state, obtaining the shaking parameters of the interaction device.
In the embodiment of the present application, the steps S310 to S340 may refer to the content of the above embodiment, and are not described herein.
Step S350: controlling the virtual content to perform at least one of content interaction, content addition, movement, rotation, content selection, and scaling adjustment according to the shaking parameters.
After the shaking parameters of the interaction device are obtained, the virtual content can be controlled to perform at least one of content interaction, content addition, movement, rotation, content selection, and scaling adjustment according to the shaking parameters. Of course, other controls may also be performed on the virtual content, such as copying and splitting the virtual content.
Controlling the virtual content to perform content interaction may refer to interaction between virtual contents in terms of display effects. When the virtual content includes multiple parts, content interaction means that different parts of the virtual content interact with each other in display effect, for example, one part of the virtual content applies a display effect to another part. Of course, the interaction in display effect may also be between the virtual content displayed by the terminal device and virtual content displayed by other terminal devices.
In some embodiments, the virtual content displayed by the terminal device may include first virtual content and second virtual content. It is understood that the virtual content displayed by the terminal device may consist of two parts, that is, the first virtual content and the second virtual content. Of course, the virtual content may also include other content.
In addition, the shaking parameters of the interaction device may at least include a shaking direction, where the shaking direction is the moving direction of the interaction device during shaking, for example, the horizontal direction or the vertical direction.
As one way, controlling the virtual content to perform content interaction according to the shaking parameters includes:
according to the shaking parameters, controlling the virtual content to display shaking effects according to shaking directions, wherein the first virtual content corresponds to a first shaking effect, and the second virtual content corresponds to a second shaking effect; and controlling the first virtual content and the second virtual content to execute interactive operation according to the first shaking effect and the second shaking effect.
It can be understood that when the virtual content is controlled to perform content interaction according to the shaking parameters, the virtual content can be controlled to display a shaking effect according to the shaking direction, that is, the direction of the shaking effect is consistent with the shaking direction, so that the user sees the virtual content shake in the same direction as the interaction device. In addition, the shaking effect may correspond to the specific virtual content, that is, when the first virtual content and the second virtual content are different, their corresponding shaking effects are also different. Therefore, the first virtual content and the second virtual content may display different shaking effects: specifically, the first virtual content may display a first shaking effect according to the shaking direction, and the second virtual content may display a second shaking effect according to the shaking direction. For example, referring to fig. 7, the displayed virtual content includes first virtual content, second virtual content, and third virtual content, where the first virtual content is a virtual sea 31, the second virtual content is a virtual boat 32 located on the virtual sea 31, and the third virtual content is a virtual crystal ball 30; the virtual sea 31 and the virtual boat 32 are located in the virtual crystal ball 30. Through the terminal device, the user can see the virtual crystal ball 30 displayed superimposed on the interaction device, and when the user holds the interaction device to interact, the user's visual feeling is that of holding the virtual crystal ball 30, because the virtual crystal ball 30 is superimposed on the interaction device. As shown in fig. 8, when the interaction device is in a shaking state, the terminal device may control the first virtual content and the second virtual content to display shaking effects according to the shaking parameters, that is, the virtual sea 31 and the virtual boat 32 may each be controlled to display a shaking effect. Referring to fig. 9, the first shaking effect corresponding to the first virtual content (the virtual sea 31) may be waves generated on the water surface, and the second shaking effect corresponding to the second virtual content (the virtual boat 32) may be the boat rocking with the water surface of the virtual sea 31. Of course, the above is merely an example, and the application scenario is not limited thereto.
Of course, the terminal device may also determine the shake effect corresponding to the virtual content by combining other parameters in the shake parameters. For example, the shaking effect may be determined based on the shaking frequency and/or the shaking amplitude.
In addition, the terminal device may control the first virtual content and the second virtual content to execute corresponding interactive operations according to the first shaking effect and the second shaking effect. It is understood that after the first virtual content and the second virtual content display their different shaking effects, a corresponding interactive operation may occur between them. The interactive operation may be that the first virtual content applies a corresponding display effect to the second virtual content, and the second virtual content likewise applies a corresponding display effect to the first virtual content, where each applied display effect may correspond to the two virtual contents and to the first and second shaking effects. For example, in the application scenario shown in fig. 9, the first virtual content is the virtual sea 31 and the second virtual content is the virtual boat 32; after the water surface of the virtual sea 31 is controlled to display the wave shaking effect and the virtual boat 32 is controlled to rock with the water surface, the virtual sea 31 can spray water onto the hull of the virtual boat 32, and at the same time the rocking of the virtual boat 32 can make the sea water ripple.
For another example, in an application scenario simulating a chemical experiment, the first virtual content is chemical solid 1 and the second virtual content is chemical liquid 1; according to the first shaking effect of chemical solid 1 and the second shaking effect of chemical liquid 1, the two can undergo a chemical reaction, and the display effect of the chemical reaction is shown. Of course, the above is merely an example, and the application scenario is not limited thereto.
Controlling content addition of the virtual content according to the shaking parameters refers to adding other virtual content in the virtual space on the basis of the virtual content currently displayed by the terminal device, so that the currently displayed virtual content and the added virtual content are displayed together.
In some embodiments, the added virtual content may be extended content corresponding to the virtual content currently displayed by the terminal device, where the extended content is related to the currently displayed virtual content; the data corresponding to the added virtual content may be stored in the terminal device in advance, or may be acquired from another device (for example, a server). For example, in an application scenario simulating cooking, when the displayed virtual content is virtual dish ingredients, the added virtual content may be seasonings, other ingredients, and the like. For another example, when the displayed virtual content is a map of a certain place, the added content may be the map around the current map, and the current map may be displayed together with the expanded map, thereby increasing the content of the displayed map. For yet another example, when the displayed virtual content is chemical solid 2, the added virtual content may be chemical liquid 2, so that chemical solid 2 can subsequently react chemically with chemical liquid 2, achieving the effect of a simulated chemical experiment. Of course, the above is merely an example, and the application scenario is not limited thereto.
Controlling the virtual content to move according to the shaking parameters may refer to moving the virtual content, or a part of it, in any direction. For example, in a game scene, the movement of a game object can be controlled according to the shaking parameters, so that by shaking the interaction device the game object can be placed at different positions in the virtual space and the virtual object can be controlled to execute different operations. Of course, the above is merely an example, and the application scenario is not limited thereto.
In some embodiments, the shaking parameter may include at least a shaking frequency, where the shaking frequency may refer to a number of times the interaction device shakes within a certain time.
Controlling the virtual content to move according to the shaking parameters may include: moving the virtual content at a speed corresponding to the shaking frequency according to the shaking parameters; and when the shaking frequency reaches a preset threshold, controlling the virtual content to change from a current first display state to a second display state.
It is understood that when the virtual content is controlled to move according to the shaking parameters, a speed corresponding to the shaking frequency in the shaking parameters may be determined and used as the moving speed of the virtual content. The terminal device may then control the virtual content to move at the determined speed. The correspondence between the shaking frequency and the moving speed may be a proportional relationship, that is, the higher the shaking frequency, the higher the moving speed of the virtual content. Thus, the user can control the moving speed of the virtual content by controlling the shaking frequency of the interaction device.
In addition, whether the shaking frequency reaches a preset threshold can be detected, so that the display state of the virtual content is changed when the shaking frequency reaches that threshold. Specifically, the virtual content may be controlled to change from a current first display state to a second display state. Changing to the second display state may be adding virtual content: for example, referring to fig. 9 and 10, in the application scenario shown in fig. 9, virtual rainwater 33, a virtual cloud 34, virtual lightning 35, and the like may be added to display a raining effect. It may also be changing the color, size, or the like of the virtual content, or controlling the virtual content to display a specific display effect, such as a fire effect or a smoke effect. Of course, the display state to which the virtual content changes is not limited in the embodiment of the present application.
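The two mappings above (speed proportional to shaking frequency, and a state switch once the frequency reaches a preset threshold) can be sketched as follows; the gain, threshold, and state names are illustrative assumptions only:

```python
def move_speed(freq_hz, k=0.5):
    """Moving speed proportional to the shaking frequency; k is a
    hypothetical gain (distance units per second per hertz)."""
    return k * freq_hz

def display_state(freq_hz, threshold_hz=3.0, first="calm", second="storm"):
    """Switch from the first display state to the second once the
    shaking frequency reaches the preset threshold."""
    return second if freq_hz >= threshold_hz else first
```

In the crystal-ball scenario, the second state would correspond to adding the rain-effect content described above.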
Controlling the virtual content to rotate according to the shaking parameters may refer to rotating the virtual content in a specified direction (for example, a horizontal direction, a vertical direction, or a free direction) in a two-dimensional plane or a three-dimensional space, that is, rotating the virtual content about a rotation axis in the specified direction, so as to change the posture (orientation, etc.) of the displayed virtual content. In some embodiments, the rotation direction of the virtual content may be set to a specified direction, that is, after the interaction device is detected to be in a shaking state, the virtual content may be controlled to rotate in that specified direction. In some embodiments, the rotation direction of the virtual content may correspond to the shaking direction, that is, after the interaction device is detected to be in a shaking state, the virtual content may be controlled to rotate in the direction corresponding to the shaking direction. For example, when the displayed virtual content is an earth model, the rotation of the earth model can be controlled according to the shaking parameters, so as to display different views of the earth model.
Controlling the virtual content to be selected according to the shake parameters may mean that the virtual content, or a part of it, is selected in a two-dimensional plane, so that the virtual content or the part of the virtual content enters a selected state. In one application scenario, when the virtual content is a plurality of virtual option contents of a menu to be selected by a user, a virtual option content at a designated position may be controlled to enter the selected state when the shake parameters are designated parameters, for example, when the shaking direction is a designated direction and the shaking frequency is higher than a set frequency.
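The option-selection rule described above can be sketched as follows; the designated direction and the set frequency are assumed values for illustration:

```python
# Hedged sketch of menu selection driven by shake parameters. The
# designated direction and set frequency below are illustrative
# assumptions, not values from the patent.

SET_FREQUENCY_HZ = 2.0
DESIGNATED_DIRECTION = "down"

def select_option(options, focused_index, shake_direction, shake_frequency_hz):
    """Return the focused option when the shake matches the designated parameters."""
    if shake_direction == DESIGNATED_DIRECTION and shake_frequency_hz > SET_FREQUENCY_HZ:
        return options[focused_index]  # option enters the selected state
    return None

menu = ["start", "settings", "quit"]
print(select_option(menu, 1, "down", 2.5))  # "settings" is selected
print(select_option(menu, 1, "left", 2.5))  # None: direction does not match
```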
Controlling the virtual content to perform scaling adjustment according to the shake parameters may refer to adjusting the model of the virtual content by an enlargement ratio or a reduction ratio, where the enlargement ratio and the reduction ratio are ratios of the size of the displayed virtual content relative to the original size of the virtual content. In some embodiments, whether the model of the virtual content is enlarged or reduced may be determined by the shake direction, e.g., the model is enlarged when the shake direction is a first direction and reduced when the shake direction is a second direction. In addition, the ratio by which the model is enlarged or reduced may be determined according to the shake frequency and/or the shake amplitude, e.g., the higher the shake frequency, the larger the ratio; similarly, the larger the shake amplitude, the larger the ratio.
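A hedged sketch of the scaling rule above; the direction names and the step-per-frequency factor are illustrative assumptions:

```python
# Sketch (assumed mapping): shake direction picks enlarge vs. reduce,
# and shake frequency scales how strong the zoom step is. The "up"/"down"
# directions and step_per_hz factor are assumptions for demonstration.

def adjust_scale(scale, shake_direction, shake_frequency_hz, step_per_hz=0.05):
    """Return a new scale factor relative to the model's original size."""
    step = shake_frequency_hz * step_per_hz  # stronger shaking -> larger step
    if shake_direction == "up":      # first direction: enlarge
        return scale * (1.0 + step)
    if shake_direction == "down":    # second direction: reduce
        return scale / (1.0 + step)
    return scale                     # other directions: unchanged

print(round(adjust_scale(1.0, "up", 2.0), 2))    # 1.1
print(round(adjust_scale(1.0, "down", 2.0), 3))  # 0.909
```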
In some embodiments, the specific controls on the virtual content described above may also be combined. In one application scenario, referring to fig. 7, the virtual content may include a virtual crystal ball 30, a virtual sea 31, and a virtual boat 32. Referring to fig. 8 and 9, when the terminal device detects that the interaction device is in a shaking state, the terminal device can control the virtual sea 31 and the virtual boat 32 to perform content interaction, so that the water surface of the virtual sea 31 produces a wave effect that follows the shaking direction, and the virtual boat 32 sways along with the water surface. Referring to fig. 10, when the shaking frequency reaches a preset threshold, content may be added, for example, virtual content such as a virtual cloud 34, virtual rainwater 33, and virtual lightning 35 may be added to produce the display effect of a raining event. As shown in fig. 11, after the raining event is triggered, content addition may continue: a virtual monster 36 may be added in the virtual sea 31, displaying the effect of the virtual monster 36 emerging from the sea surface and attacking the virtual boat 32 with its tentacles. Referring to fig. 12 and 13, the user may shake the interaction device, so that the terminal device moves the virtual content according to the detected current shake parameters of the interaction device; specifically, the virtual boat 32 may be controlled to move in the virtual sea 31 so as to avoid the attack of the virtual monster 36.
Of course, the control of the virtual content according to the shake parameter is not limited to the above, and for example, the virtual content may be controlled to be copied according to the shake parameter. In addition, the terminal device can play audio corresponding to the virtual content according to the shaking parameters.
According to the virtual content display method provided by the embodiment of the application, the terminal device displays the virtual content according to the position and posture information of the interaction device relative to the terminal device, so that a user can observe the effect of the virtual content superimposed on the real world. When the interaction device is determined to be in a shaking state according to the position and/or posture change information of the interaction device, the shake parameters of the interaction device are determined using the variation range of the posture of the interaction device, and the virtual content is controlled to perform content interaction, content addition, movement, rotation, content selection, scaling, and the like according to the shake parameters, thereby achieving good interaction with the displayed virtual content and improving the interactivity of virtual content display.
Referring to fig. 14, an embodiment of the present application provides a method for displaying virtual content, which may be applied to the terminal device, where the method for displaying virtual content may include:
Step S410: and acquiring the position and posture information of the interaction equipment relative to the terminal equipment.
Step S420: and displaying the virtual content according to the position and posture information.
Step S430: and detecting the motion state of the interactive equipment according to at least one of the position and the posture change information of the interactive equipment when a control trigger instruction sent by the interactive equipment is received, wherein the control trigger instruction is generated by the interactive equipment according to the control operation detected by the control area.
In an embodiment of the present application, the interaction device may be provided with at least one manipulation area, and the manipulation area may include at least one of a key and a touch screen. The control area of the interactive device can detect a control operation, and the control operation can be a key operation of a user on a key or a touch operation on a touch screen.
When the control operation is detected by the control area of the interactive device, the interactive device can generate a control trigger instruction according to the control operation. The control trigger instruction is used for triggering the terminal equipment to control the virtual content according to the shaking parameter, that is, the terminal equipment can control the virtual content according to the shaking parameter when the shaking state is detected after receiving the control trigger instruction sent by the interaction equipment.
Therefore, when the terminal device receives the control trigger instruction sent by the interaction device, the terminal device may detect the motion state of the interaction device according to at least one of the position and posture change information of the interaction device. For the specific manner of detecting the motion state, reference may be made to the foregoing embodiment, which is not repeated here.
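The gating behavior described above (motion-state detection starts only after a control trigger instruction is received) can be sketched as follows; the class and method names are illustrative:

```python
# Illustrative sketch of the gating logic: the terminal only starts
# evaluating the shake state after receiving a control trigger
# instruction generated by the interaction device's manipulation area.

class ShakeController:
    def __init__(self):
        self.triggered = False

    def on_trigger_instruction(self):
        # called when a key press / touch is detected in the manipulation area
        self.triggered = True

    def should_detect_motion(self) -> bool:
        return self.triggered

ctrl = ShakeController()
print(ctrl.should_detect_motion())  # False: no trigger instruction yet
ctrl.on_trigger_instruction()
print(ctrl.should_detect_motion())  # True: detection is now enabled
```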
Step S440: and when the interaction equipment is in a shaking state, obtaining shaking parameters of the interaction equipment.
In the embodiment of the present application, step S440 may refer to the content of the above embodiment, and is not described herein.
Step S450: and receiving a control instruction sent by the interaction equipment according to the control operation detected by the control area.
In the embodiment of the application, the terminal device may also receive a control instruction sent by the interaction device. After the interaction device detects a control operation, it generates a control instruction according to the detected control operation and sends the control instruction to the terminal device, and the control instruction is used by the terminal device to correspondingly control the virtual content. After receiving the control instruction, the terminal device can then control the virtual content accordingly.
Step S460: and performing first control on the first content according to the shaking parameters, and performing second control on the second content according to the control instruction.
When the terminal equipment acquires the shaking parameters of the interactive equipment and receives the control instruction sent by the interactive equipment, the terminal equipment can control the virtual content according to the shaking parameters and the control instruction.
In the embodiment of the present application, the virtual content displayed by the terminal device may include a first content and a second content. That is, the virtual content may be composed of the first content and the second content. Of course, other contents may also be included in the virtual contents displayed by the terminal device.
When the terminal device controls the virtual content according to the shake parameters and the control instruction, it may control the first content and the second content separately. Specifically, the terminal device may perform a first control on the first content according to the shake parameters, and perform a second control on the second content according to the control instruction. The first control and the second control may be the same control or different controls on the virtual content; the specific control content is not limited in the embodiment of the present application. In this way, the virtual content can be controlled jointly through the shaking of the interaction device and the control operation detected by the interaction device, achieving a better interaction effect.
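A minimal illustration of the joint control in step S460; the action names and the instruction value are assumptions for demonstration, not from the patent:

```python
# Sketch of step S460 (names are illustrative): shake parameters drive
# the first content while the control instruction drives the second
# content, so both inputs act on the scene at the same time.

def control_virtual_content(shake_params, control_instruction):
    """Return the actions applied to the first and second content."""
    first_action = None
    second_action = None
    if shake_params is not None:
        # first control, e.g. the sea surface sways with the shake direction
        first_action = ("sway", shake_params["direction"])
    if control_instruction == "attack":
        # second control, e.g. lightning attacks the monster
        second_action = ("lightning_strike", "monster")
    return first_action, second_action

print(control_virtual_content({"direction": "left"}, "attack"))
```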
In one application scenario, as shown in fig. 15, when the control trigger instruction of the interaction device is received, the terminal device may detect the motion state of the interaction device, display virtual seawater being poured into the virtual crystal ball 30, and form a virtual sea 31. As shown in fig. 7, after the virtual sea 31 is displayed, a virtual boat 32 may be displayed on the virtual sea 31. Referring to fig. 8 and 9, when the interaction device is detected to be in a shaking state, a wave effect can be produced on the water surface of the virtual sea 31 according to the shake parameters, and the virtual boat 32 can be controlled to sway along with the water surface of the virtual sea 31. As shown in fig. 10, when the shaking frequency is greater than the preset threshold, virtual content such as virtual clouds 34, virtual rainwater 33, and virtual lightning 35 is displayed, producing the display effect of a raining event. As shown in fig. 11, after the virtual clouds 34, virtual rainwater 33, virtual lightning 35, and the like are displayed, a virtual monster 36 may be added to the virtual sea 31, displaying the effect of the virtual monster 36 emerging from the sea surface and attacking the virtual boat 32 with its tentacles. In addition, as shown in fig. 16, the terminal device may receive a control instruction sent by the interaction device according to the detected control operation, and control the virtual lightning 35 to attack the virtual monster 36 according to the control instruction. Of course, the application scenario is not limited thereto, and other scenarios are also possible.
In another application scenario, the virtual content may be chemical objects, for example a displayed chemical liquid 3 and chemical solid 3, which may be controlled jointly by the control instruction sent by the interaction device and the shake parameters of the interaction device. For example, the chemical solid 3 can be controlled to display a shaking effect according to the shake parameters, and the chemical liquid 3 can be simulated to be heated according to the control instruction, thereby achieving the effect of simulating a chemical experiment.
Of course, the application scenario of the virtual content display method provided by the embodiment of the present application is not limited to this, and may be other application scenarios.
According to the virtual content display method provided by the embodiment of the application, the terminal device displays the virtual content according to the position and posture information of the interaction device relative to the terminal device, so that a user can observe the effect of the virtual content superimposed on the real world. After receiving a control trigger instruction sent by the interaction device, the terminal device detects the shaking state of the interaction device, determines the shake parameters of the interaction device when the interaction device is determined to be in a shaking state, receives a control instruction sent by the interaction device, and controls the virtual content jointly according to the shake parameters and the control instruction, thereby improving the interaction effect with the displayed virtual content and the interest of virtual content display.
Referring to fig. 17, a block diagram of a virtual content display apparatus 400 according to the present application is shown. The display apparatus 400 of virtual contents is applied to a terminal device connected to an interactive device. The display device 400 of the virtual content includes: a location acquisition module 410, a content display module 420, a status detection module 430, a parameter acquisition module 440, and a content control module 450. The position obtaining module 410 is configured to obtain position and posture information of the interaction device relative to the terminal device; the content display module 420 is configured to display virtual content according to the position and posture information; the state detection module 430 is configured to detect a motion state of the interaction device according to at least one of the change information of the position and the gesture of the interaction device; the parameter obtaining module 440 is configured to obtain shake parameters of the interaction device when the interaction device is in a shake state; the content control module 450 is configured to control display of the virtual content according to the shake parameter, so that the displayed virtual content corresponds to a shake state of the interactive device.
In the embodiment of the application, the shake parameters are obtained according to at least one of the position and posture change information of the interaction device. The content control module 450 may be specifically configured to: generate a control instruction corresponding to the shake parameters according to the correspondence between shake parameters and control instructions, and control the display of the virtual content according to the control instruction, so that the displayed virtual content matches the position and/or posture change of the physical object.
In an embodiment of the present application, the content control module 450 may be specifically configured to: and controlling the virtual content to perform at least one of content interaction, content addition, movement, rotation, content selection and scaling adjustment according to the shaking parameters.
In some implementations, the shake parameters include a shake direction, and the virtual content includes a first virtual content and a second virtual content. The content control module 450 controls the virtual content to perform content interaction according to the shake parameters, including: controlling, according to the shake parameters, the virtual content to display shaking effects along the shaking direction, wherein the first virtual content corresponds to a first shaking effect and the second virtual content corresponds to a second shaking effect; and controlling the first virtual content and the second virtual content to execute an interactive operation according to the first shaking effect and the second shaking effect.
In some embodiments, the shake parameters include a shake frequency. The content control module 450 controls the virtual content to move according to the shake parameters, including: moving the virtual content at a speed corresponding to the shake frequency according to the shake parameters; and when the shake frequency reaches a preset threshold, controlling the virtual content to change from the current first display state to the second display state.
In an embodiment of the present application, the status detection module 430 may be specifically configured to: judging whether the change frequency of the position and/or the gesture of the interactive equipment in the appointed duration is greater than a frequency threshold value or not; and when the change frequency of the position and/or the gesture is greater than the frequency threshold, determining that the interaction device is in a shaking state.
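The window-based shaking check performed by the state detection module can be sketched as follows; the threshold and window length are assumed values:

```python
# Hedged sketch of the state detection check: count position/posture
# changes inside a specified duration and compare the count against a
# frequency threshold. The concrete values below are assumptions.

FREQUENCY_THRESHOLD = 4      # changes per window (assumed)
SPECIFIED_DURATION_S = 1.0   # sliding window length in seconds (assumed)

def is_shaking(change_timestamps, now):
    """True when the pose changed more often than the threshold within the window."""
    recent = [t for t in change_timestamps if now - t <= SPECIFIED_DURATION_S]
    return len(recent) > FREQUENCY_THRESHOLD

stamps = [0.1, 0.2, 0.35, 0.5, 0.7, 0.9]
print(is_shaking(stamps, 1.0))        # True: 6 changes within the last second
print(is_shaking([0.1, 0.9], 1.0))    # False: only 2 changes
```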
In an embodiment of the present application, the parameter obtaining module 440 may be specifically configured to: acquiring attitude parameters of the interactive equipment in a preset time period, and determining the variation range of the attitude parameters of the interactive equipment; and determining the shaking parameters of the interaction equipment based on the variation range of the gesture parameters.
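An illustrative sketch of deriving shake parameters from the variation range of posture samples collected over a preset time period; the reversal-counting frequency estimate is an assumption, not the patented method:

```python
# Sketch (assumptions noted): derive shake parameters from posture-angle
# samples. The amplitude is the variation range of the samples; the
# direction-reversal count is a crude, assumed stand-in for frequency.

def shake_parameters(yaw_samples):
    """Compute shake amplitude and a reversal count from yaw-angle samples (degrees)."""
    amplitude = max(yaw_samples) - min(yaw_samples)  # variation range
    reversals = 0
    for a, b, c in zip(yaw_samples, yaw_samples[1:], yaw_samples[2:]):
        if (b - a) * (c - b) < 0:  # the motion changed direction here
            reversals += 1
    return {"amplitude_deg": amplitude, "reversals": reversals}

params = shake_parameters([0, 10, -8, 12, -5])
print(params["amplitude_deg"])  # 20
print(params["reversals"])      # 3
```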
In the embodiment of the present application, the parameter obtaining module 440 obtains the gesture parameters of the interactive apparatus in the preset time period, including: acquiring a marker image containing at least one marker set by the interactive equipment in a preset time period, and acquiring attitude parameters of the interactive equipment according to the marker image; or receiving gesture parameters detected by the interaction equipment, which are sent by the interaction equipment in a preset time period.
In an embodiment of the present application, the status detection module 430 may be specifically configured to: and detecting the motion state of the interactive equipment according to at least one of the position and the posture change information of the interactive equipment when a control trigger instruction sent by the interactive equipment is received, wherein the control trigger instruction is generated by the interactive equipment according to the control operation detected by the control area.
In an embodiment of the present application, the content control module 450 may be specifically configured to: receiving a control instruction sent by the interaction equipment according to the control operation detected by the control area; and performing first control on the first content according to the shaking parameters, and performing second control on the second content according to the control instruction.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the apparatus and modules described above may refer to the corresponding process in the foregoing method embodiment, which is not repeated herein. In several embodiments provided by the present application, the coupling of the modules to each other may be electrical, mechanical, or other. In addition, each functional module in each embodiment of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules may be integrated into one module. The integrated modules may be implemented in hardware or in software functional modules.
In summary, the scheme provided by the application is applied to a terminal device connected to an interaction device. The virtual content is displayed by acquiring the position and posture information of the interaction device relative to the terminal device, so that a user can observe the effect of the virtual content superimposed on the real world. When the interaction device is determined to be in a shaking state according to at least one of the position and posture change information of the interaction device, the shake parameters of the interaction device are acquired, and the display of the virtual content is controlled according to the shake parameters, thereby better realizing interaction with the displayed virtual content and improving interactivity.
Referring to fig. 18, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a smart phone, a tablet computer, a head mounted display device, or the like capable of running an application program. The terminal device 100 in the present application may include one or more of the following components: processor 110, memory 120, image capture device 130, and one or more application programs, wherein the one or more application programs may be stored in memory 120 and configured to be executed by the one or more processors 110, the one or more program(s) configured to perform the methods as described in the foregoing method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the overall terminal device 100 using various interfaces and lines, and performs various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware in at least one of digital signal processing (Digital Signal Processing, DSP), field programmable gate array (Field-Programmable Gate Array, FPGA), and programmable logic array (Programmable Logic Array, PLA). The processor 110 may integrate one or a combination of a central processing unit (Central Processing Unit, CPU), a graphics processing unit (Graphics Processing Unit, GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem is used to handle wireless communication. It will be appreciated that the modem may also not be integrated into the processor 110 and may instead be implemented by a separate communication chip.
The memory 120 may include a random access memory (Random Access Memory, RAM) or a read-only memory (Read-Only Memory, ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The stored data area may also store data created by the terminal device 100 in use, and the like.
In an embodiment of the present application, the image capturing device 130 is configured to capture an image of the marker. The image capturing device 130 may be an infrared camera or a color camera, and the specific camera type is not limited in the embodiment of the present application.
Referring to fig. 19, a block diagram of a computer readable storage medium according to an embodiment of the present application is shown. The computer readable medium 800 has stored therein program code which is callable by a processor to perform the method described in the method embodiments described above.
The computer readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Optionally, the computer readable storage medium 800 comprises a non-volatile computer readable medium (non-transitory computer-readable storage medium). The computer readable storage medium 800 has storage space for program code 810 that performs any of the method steps described above. The program code can be read from or written to one or more computer program products. Program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will appreciate that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents, and such modifications and substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (10)

1. A method for displaying virtual content, applied to a terminal device, the terminal device being connected to an interactive device, the method comprising:
acquiring position and posture information of the interaction equipment relative to the terminal equipment;
displaying the virtual content according to the position and posture information;
detecting a motion state of the interaction equipment according to at least one of the change information of the position and the gesture of the interaction equipment;
when the interaction equipment is in a shaking state, shaking parameters of the interaction equipment are obtained;
controlling the virtual content to perform at least one of content interaction, content addition, movement, rotation, content selection and scaling adjustment according to the shaking parameters so that the displayed virtual content corresponds to the shaking state of the interaction equipment;
the shaking parameters comprise shaking directions, the virtual contents comprise first virtual contents and second virtual contents, the virtual contents are controlled to perform content interaction according to the shaking parameters, and the method comprises the following steps:
according to the shaking parameters, controlling the virtual content to display shaking effects according to the shaking direction, wherein the first virtual content corresponds to a first shaking effect, and the second virtual content corresponds to a second shaking effect; and controlling the first virtual content and the second virtual content to execute interactive operation according to the first shaking effect and the second shaking effect.
2. The method of claim 1, wherein the shaking parameter is obtained according to at least one of a position and a posture change information of the interactive device;
the controlling the display of the virtual content according to the shaking parameter includes:
and generating a control instruction corresponding to the shaking parameter according to the corresponding relation between the shaking parameter and the control instruction, and controlling the display of the virtual content according to the control instruction so as to enable the displayed virtual content to match the position and/or posture change of the physical object.
3. The method of claim 1, wherein the shake parameter comprises a shake frequency, and wherein the controlling the virtual content to move according to the shake parameter comprises:
according to the shaking parameters, moving the virtual content according to the speed corresponding to the shaking frequency;
and when the shaking frequency reaches a preset threshold value, controlling the virtual content to change from the current first display state to the second display state.
4. The method of claim 1, wherein detecting the motion state of the interactive device based on at least one of the change information of the position and the posture of the interactive device comprises:
Judging whether the change frequency of the position and/or the gesture of the interactive equipment in the appointed duration is greater than a frequency threshold value or not;
and when the change frequency of the position and/or the gesture is greater than the frequency threshold, determining that the interaction equipment is in a shaking state.
5. The method of claim 1, wherein the obtaining the shake parameters of the interaction device comprises:
acquiring attitude parameters of the interactive equipment in a preset time period, and determining the variation range of the attitude parameters of the interactive equipment;
and determining the shaking parameters of the interaction equipment based on the variation range of the gesture parameters.
6. The method of claim 5, wherein the obtaining the gesture parameters of the interaction device over the preset time period comprises:
a marker image comprising at least one marker set by the interactive device is acquired over a preset period of time,
acquiring attitude parameters of the interaction equipment according to the marker image; or alternatively
And receiving gesture parameters detected by the interaction equipment, wherein the gesture parameters are sent by the interaction equipment in a preset time period.
7. The method of claim 1, wherein the interactive device comprises a manipulation zone, wherein the virtual content comprises a first content and a second content, wherein controlling the display of the virtual content according to the shake parameter comprises:
Receiving a control instruction sent by the interaction equipment according to the control operation detected by the control area;
and performing first control on the first content according to the shaking parameters, and performing second control on the second content according to the control instruction.
8. A display apparatus of virtual contents, applied to a terminal device connected to an interactive device, comprising: a position acquisition module, a content display module, a state detection module, a parameter acquisition module and a content control module, wherein,
the position acquisition module is used for acquiring position and posture information of the interaction equipment relative to the terminal equipment;
the content display module is used for displaying virtual content according to the position and posture information;
the state detection module is used for detecting the motion state of the interaction equipment according to at least one of the position and the change information of the gesture of the interaction equipment;
the parameter acquisition module is used for acquiring shaking parameters of the interaction equipment when the interaction equipment is in a shaking state;
the content control module is used for controlling the virtual content to perform at least one of content interaction, content addition, movement, rotation, content selection and scaling adjustment according to the shaking parameters so that the displayed virtual content corresponds to the shaking state of the interaction equipment;
The content control module is further configured to control the virtual content to display a shaking effect according to the shaking direction according to the shaking parameter, wherein the first virtual content corresponds to a first shaking effect, and the second virtual content corresponds to a second shaking effect; and controlling the first virtual content and the second virtual content to execute interactive operation according to the first shaking effect and the second shaking effect.
9. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications configured to perform the method of any of claims 1-7.
10. A computer-readable storage medium having program code stored therein, the program code being callable by a processor to execute the method according to any one of claims 1-7.
CN201910060758.7A 2019-01-21 2019-01-21 Virtual content display method and device, terminal equipment and storage medium Active CN111459263B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910060758.7A CN111459263B (en) 2019-01-21 2019-01-21 Virtual content display method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910060758.7A CN111459263B (en) 2019-01-21 2019-01-21 Virtual content display method and device, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111459263A CN111459263A (en) 2020-07-28
CN111459263B true CN111459263B (en) 2023-11-03

Family

ID=71682283

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910060758.7A Active CN111459263B (en) 2019-01-21 2019-01-21 Virtual content display method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111459263B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112346580B (en) * 2020-10-27 2024-01-12 努比亚技术有限公司 Motion gesture detection method, device and computer readable storage medium
CN113359988B (en) * 2021-06-03 2022-11-29 北京市商汤科技开发有限公司 Information display method and device, computer equipment and storage medium
CN113687717A (en) * 2021-08-10 2021-11-23 青岛小鸟看看科技有限公司 VR (virtual reality) interaction method and system based on position change
CN114764327B (en) * 2022-05-09 2023-05-05 北京未来时空科技有限公司 Method and device for manufacturing three-dimensional interactive media and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103019569A (en) * 2012-12-28 2013-04-03 西安Tcl软件开发有限公司 Interactive device and interactive method thereof
US9383895B1 (en) * 2012-05-05 2016-07-05 F. Vinayak Methods and systems for interactively producing shapes in three-dimensional space
CN205644439U (en) * 2016-04-22 2016-10-12 邻元科技(北京)有限公司 Human machine interactive device of dice shape
CN106662926A (en) * 2014-05-27 2017-05-10 厉动公司 Systems and methods of gestural interaction in a pervasive computing environment
CN108269307A (en) * 2018-01-15 2018-07-10 歌尔科技有限公司 A kind of augmented reality exchange method and equipment
CN108958471A (en) * 2018-05-17 2018-12-07 中国航天员科研训练中心 The emulation mode and system of virtual hand operation object in Virtual Space
CN109240484A (en) * 2017-07-10 2019-01-18 北京行云时空科技有限公司 Exchange method, device and equipment in a kind of augmented reality system

Also Published As

Publication number Publication date
CN111459263A (en) 2020-07-28

Similar Documents

Publication Publication Date Title
CN111459263B (en) Virtual content display method and device, terminal equipment and storage medium
CN108027653B (en) Haptic interaction in virtual environments
JP2022521324A (en) Game character control methods, devices, equipment and storage media
EP2371434B1 (en) Image generation system, image generation method, and information storage medium
EP4057109A1 (en) Data processing method and apparatus, electronic device and storage medium
US8882593B2 (en) Game processing system, game processing method, game processing apparatus, and computer-readable storage medium having game processing program stored therein
CN107890664A (en) Information processing method and device, storage medium, electronic equipment
US11513657B2 (en) Method and apparatus for controlling movement of virtual object, terminal, and storage medium
CN111383345B (en) Virtual content display method and device, terminal equipment and storage medium
CN111198608A (en) Information prompting method and device, terminal equipment and computer readable storage medium
CN111223187A (en) Virtual content display method, device and system
CN111771180A (en) Hybrid placement of objects in augmented reality environment
US20190362559A1 (en) Augmented reality method for displaying virtual object and terminal device therefor
CN111813214B (en) Virtual content processing method and device, terminal equipment and storage medium
CN112313605A (en) Object placement and manipulation in augmented reality environments
CN110737414B (en) Interactive display method, device, terminal equipment and storage medium
CN108553895A (en) User interface element and the associated method and apparatus of three-dimensional space model
CN111273777A (en) Virtual content control method and device, electronic equipment and storage medium
CN110737326A (en) Virtual object display method and device, terminal equipment and storage medium
Bikos et al. An interactive augmented reality chess game using bare-hand pinch gestures
US11100723B2 (en) System, method, and terminal device for controlling virtual image by selecting user interface element
CN110908508B (en) Control method of virtual picture, terminal device and storage medium
CN114341773A (en) Method and apparatus for adaptive augmented reality anchor generation
CN111913564A (en) Virtual content control method, device and system, terminal equipment and storage medium
CN111857364B (en) Interaction device, virtual content processing method and device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant