CN111766937A - Virtual content interaction method and device, terminal equipment and storage medium


Publication number
CN111766937A
Authority
CN
China
Prior art keywords
virtual, wearable device, terminal device, picture, content
Legal status
Pending
Application number
CN201910263441.3A
Other languages
Chinese (zh)
Inventor
卢智雄
戴景文
贺杰
Current Assignee
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910263441.3A
Publication of CN111766937A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

Embodiments of the present application disclose a virtual content interaction method and apparatus, a terminal device, and a storage medium, relating to the field of display technology. The interaction method for virtual content is applied to a terminal device and includes: displaying a virtual picture; acquiring first position and posture information of a wearable device relative to the terminal device; acquiring position information of the virtual picture relative to the terminal device; acquiring, according to the position information and the first position and posture information, a target area selected by the wearable device in the virtual picture; and performing a processing operation corresponding to the target area. The method can use the wearable device to control displayed virtual content, improving the interactivity between the user and the virtual content.

Description

Virtual content interaction method and device, terminal equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to an interaction method and apparatus for virtual content, a terminal device, and a storage medium.
Background
With the development of science and technology, machine intelligence and information intelligence are increasingly widespread, and technologies that recognize user images through image acquisition devices such as machine vision or virtual vision to realize human-computer interaction are becoming ever more important. Augmented Reality (AR) constructs virtual content that does not exist in the real environment by means of computer graphics and visualization technology, accurately fuses the virtual content into the real environment through image recognition and positioning technology, merges the virtual content and the real environment into a whole with the aid of a display device, and presents the result to the user for a realistic sensory experience. The first technical problem that augmented reality must solve is how to accurately fuse virtual content into the real world, that is, how to make the virtual content appear at the correct position in the real scene with the correct angular pose, so as to produce a strong sense of visual realism. In conventional technology, augmented reality or mixed reality display is performed by superimposing virtual content on an image of a real scene, and interactive control of that virtual content is an important research direction for augmented reality and mixed reality.
Disclosure of Invention
In view of the foregoing problems, embodiments of the present application provide a virtual content interaction method, apparatus, terminal device, and storage medium, which can control displayed virtual content by using a wearable device, and improve interactivity between a user and the virtual content.
In a first aspect, an embodiment of the present application provides an interaction method for virtual content, which is applied to a terminal device, where the terminal device is in communication connection with a wearable device, and the method includes: displaying the virtual picture; acquiring first position and posture information of the wearable device relative to the terminal device; acquiring the position information of the virtual picture relative to the terminal equipment; acquiring a target area selected by the wearable device in the virtual picture according to the position information, the first position and the posture information; and carrying out processing operation corresponding to the target area.
In a second aspect, an embodiment of the present application provides an interaction device for virtual content, which is applied to a terminal device, where the terminal device is in communication connection with a wearable device, and the device includes: the system comprises a display control module, a relative position acquisition module, a position information acquisition module, a target area acquisition module and a processing execution module, wherein the display control module is used for displaying a virtual picture; the relative position acquisition module is used for acquiring first position and posture information of the wearable device relative to the terminal device; the position information acquisition module is used for acquiring the position information of the virtual picture relative to the terminal equipment; the target area acquisition module is used for acquiring a target area selected by the wearable device in the virtual picture according to the position information, the first position and the posture information; the processing execution module is used for carrying out processing operation corresponding to the target area.
In a third aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; and one or more applications, wherein the one or more applications are stored in the memory, configured to be executed by the one or more processors, and configured to perform the interaction method for virtual content provided in the first aspect above.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where a program code is stored in the computer-readable storage medium, and the program code may be called by a processor to execute the interaction method for virtual content provided in the first aspect.
The scheme provided by the embodiments of the present application is applied to a terminal device. After a virtual picture is displayed, first position and posture information of the wearable device relative to the terminal device is obtained, position information of the virtual picture relative to the terminal device is obtained, a target area selected by the wearable device in the virtual picture is obtained according to the position information and the first position and posture information, and a processing operation corresponding to the target area is performed. In this way, a target area in the virtual picture is selected according to the spatial position of the wearable device and operated upon, realizing interaction between the wearable device and the terminal device and improving both the interactivity and the convenience of interaction between the user and the virtual content.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained by those skilled in the art from these drawings without creative effort.
Fig. 1 shows a schematic diagram of an application environment suitable for the embodiment of the present application.
Fig. 2 shows a schematic structural diagram of a wearable device suitable for use in an embodiment of the present application.
FIG. 3 shows a flow diagram of an interaction method for virtual content according to one embodiment of the present application.
Fig. 4 shows a schematic diagram of a display effect according to an embodiment of the application.
Fig. 5 shows another display effect diagram according to an embodiment of the application.
Fig. 6 shows a schematic diagram of another display effect according to an embodiment of the application.
FIG. 7 shows a flow diagram of an interaction method for virtual content according to another embodiment of the present application.
Fig. 8 shows a flowchart of step S210 in the interaction method of virtual content according to the embodiment of the present application.
Fig. 9 shows a flowchart of step S211 in the interaction method of virtual content according to the embodiment of the present application.
Fig. 10 shows a flowchart of step S240 in the interaction method of virtual content according to an embodiment of the present application.
Fig. 11 shows a schematic diagram of a display effect according to an embodiment of the application.
Fig. 12 shows another display effect diagram according to an embodiment of the application.
Fig. 13 shows a flowchart of step S250 in the interaction method of the virtual content according to the embodiment of the present application.
Fig. 14 shows another flowchart of step S250 in the interaction method of the virtual content according to the embodiment of the present application.
Fig. 15 is a schematic diagram illustrating still another display effect according to an embodiment of the application.
Fig. 16 is a schematic diagram illustrating a further display effect according to an embodiment of the application.
Fig. 17 shows a schematic diagram of still another display effect according to an embodiment of the application.
Fig. 18 shows yet another display effect diagram according to an embodiment of the application.
Fig. 19 shows a schematic diagram of yet another display effect according to an embodiment of the application.
Fig. 20 shows a schematic diagram of yet another display effect according to an embodiment of the application.
Fig. 21 shows yet another display effect diagram according to an embodiment of the application.
Fig. 22 shows a schematic diagram of still another display effect according to an embodiment of the application.
Fig. 23 shows a flowchart of step S254 in the interaction method of virtual content according to the embodiment of the present application.
Fig. 24 is a schematic diagram illustrating still another display effect according to an embodiment of the application.
Fig. 25 shows yet another display effect diagram according to an embodiment of the application.
Figs. 26A-26B illustrate yet another display effect schematic according to an embodiment of the application.
FIG. 27 shows a block diagram of an interactive device for virtual content, according to one embodiment of the present application.
Fig. 28 is a block diagram of a terminal device for executing an interaction method of virtual content according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
In recent years, with the development of Augmented Reality (AR) technology, AR-related electronic devices have gradually entered into people's daily lives. AR is a technology for increasing the user's perception of the real world through information provided by a computer system, and superimposes computer-generated content objects such as virtual objects, scenes, or system cues into a real scene to enhance or modify the perception of the real world environment or data representing the real world environment. At present, a virtual image can be displayed at a corresponding position on a display screen of a mobile terminal or a display component of a head-mounted display, so that the virtual image and a real scene are displayed in an overlapping manner, and a user can enjoy a science-fiction type viewing experience.
Through long-term research, the inventors have found that in conventional AR display technology, interaction with virtual content generally requires a handheld controller to interact with an AR head-mounted display, so the interactivity is poor, the AR head-mounted display is inconvenient to carry around, and its use occasions are limited. Based on the above problems, the inventors propose the virtual content interaction method, apparatus, terminal device, and storage medium of the embodiments of the present application, so as to conveniently realize interaction with displayed virtual content.
An application scenario of the interaction method for virtual content provided in the embodiment of the present application is introduced below.
Referring to fig. 1, a schematic diagram of an application scenario of an interaction method of virtual content provided in an embodiment of the present application is shown, where the application scenario includes an interaction system 10. The interactive system 10 includes: terminal device 100 and wearable device 200.
In the embodiment of the present application, the terminal device 100 may be a head-mounted display device, or may be a mobile device such as a mobile phone or a tablet. When the terminal device 100 is a head-mounted display device, the head-mounted display device may be an integrated (stand-alone) head-mounted display device. The terminal device 100 may also be an intelligent terminal, such as a mobile phone, connected to an external head-mounted display device; that is, the terminal device 100 may be plugged into or connected to the external head-mounted display device as the processing and storage device of the head-mounted display device, and display virtual content on the head-mounted display device.
In this embodiment of the application, the wearable device 200 may be an intelligent wearable device such as an intelligent watch or an intelligent bracelet, or may be a conventional wearable device such as a conventional watch that has only a data display function.
In some embodiments, wearable device 200 may include a wireless communication module, and wearable device 200 may establish a communication connection with terminal device 100 through the wireless communication module. The wireless communication module may be a Bluetooth, Wi-Fi (Wireless Fidelity), or ZigBee module, among others. The wearable device 200, in communication connection with the terminal device 100, can exchange information and instructions with the terminal device 100.
In some embodiments, markers 201 are provided on the wearable device 200. Wherein the marker 201 may comprise at least one sub-marker having one or more feature points. In some embodiments, the marker 201 may be integrated into the wearable device 200, may be attached to the wearable device 200, or may be displayed on the display screen of the wearable device 200. When the marker 201 is within the visual field of the terminal device 100, the terminal device 100 may use the marker 201 within the visual field as a target marker, recognize an image of the target marker, and obtain spatial position information such as a position and an orientation of the target marker with respect to the terminal device 100, thereby obtaining relative spatial position information between the terminal device 100 and the wearable device 200. The terminal device 100 may display a corresponding virtual object based on the spatial position information of the target marker relative to the terminal device 100, and may realize the positioning and tracking of the wearing device 200 according to the target marker. It is to be understood that the specific marker 201 is not limited in the embodiment of the present application, and only needs to be identifiable and traceable by the terminal device 100.
In some embodiments, the terminal device 100 may also track the shape of the wearable device 200, and determine the position and posture information of the wearable device 200 relative to the terminal device 100.
In some embodiments, terminal device 100 may further determine position and posture information of wearable device 200 relative to terminal device 100 according to a light spot provided on wearable device 200.
In some embodiments, wearable device 200 includes an Inertial Measurement Unit (IMU). The terminal device 100 may accurately obtain the position and posture information of the wearable device 200 relative to the terminal device 100 according to the measurement data of the IMU of the wearable device 200 together with the tracked outline information of the wearable device 200, or according to the measurement data of the IMU together with the light spots provided on the wearable device 200.
In some embodiments, at least one of a key, a touch screen, and a dial may be provided on the wearable device 200. For example, a rotatable dial may be provided on the watch face of a smart watch, a key may be provided on the side of the watch face, and a touch screen may be provided on the surface of the watch face. The wearable device 200 may generate manipulation action parameters according to the user's manipulation of the key, the touch screen, or the dial, and transmit the manipulation action parameters to the terminal device 100. The terminal device 100 may control the display of the virtual content (e.g., control the virtual content to rotate, move, or zoom) according to the manipulation action parameters, as illustrated in the sketch below.
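The following is a minimal Python sketch of how the terminal device side might map such manipulation action parameters onto display control of the virtual content. The message format and the methods on the virtual content object are assumptions made for illustration; the embodiment does not prescribe a concrete protocol.

```python
# Hypothetical handling of a manipulation action parameter received from the wearable device.
# The message fields and the virtual_content methods are illustrative assumptions only.
import json

def handle_manipulation_event(raw_message: bytes, virtual_content) -> None:
    """Apply a control action reported by the wearable device's key, touch screen, or dial."""
    event = json.loads(raw_message)  # e.g. {"source": "dial", "action": "rotate", "value": 15.0}
    source, action, value = event["source"], event["action"], event["value"]

    if source == "dial" and action == "rotate":
        virtual_content.rotate(degrees=value)                    # rotate the displayed content
    elif source == "touch_screen" and action == "drag":
        virtual_content.translate(dx=value[0], dy=value[1])      # displace the displayed content
    elif source == "key" and action == "press":
        virtual_content.scale(factor=1.2 if value > 0 else 0.8)  # zoom the displayed content
```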
In some embodiments, a camera is provided on the wearable device 200 and may be used to capture images of the user or the surrounding environment. The camera may be rotatable and may be turned through a full circle to capture images in different fields of view. For example, a smart watch may include a camera arranged on the side of the watch face; by rotating the watch face, the position of the camera can be changed so that the camera captures images in different fields of view.
For example, referring again to fig. 1, the terminal device 100 is a head-mounted display device and the wearable device 200 is a smart watch. The user can scan the marker 201 on the wearable device 200 in real time through the worn head-mounted display device, and can see the virtual automobile model 401 superimposed and displayed on the wearable device 200 in real space, which embodies the augmented reality display effect of the virtual content and the interaction between the terminal device and the wearable device.
Referring to fig. 2, fig. 2 shows a schematic diagram of a wearable device provided in an embodiment of the present application. The wearable device 200 is a smart watch and includes a main body 220 and a watch band 240, and the main body 220 includes a dial 221. A key 222 is provided on the side of the dial 221, a touch screen 223 on its surface, and a turntable 224 on the periphery of the dial 221. The marker 201 is provided on the outer periphery of the dial 221 and is fixed to the dial. It is understood that the wearable device is not limited to the configuration shown in fig. 2 and may have other configurations, which are not limited herein.
Based on the interactive system, the embodiment of the application provides an interactive method of virtual content, which is applied to terminal equipment and wearable equipment of the interactive system. A specific interaction method of the virtual content will be described below.
Referring to fig. 3, an embodiment of the present application provides an interaction method for virtual content, which is applicable to the terminal device, where the terminal device is in communication connection with a wearable device, and the interaction method for virtual content may include:
step S110: and displaying the virtual picture.
In conventional augmented reality display technology, interaction with virtual content usually requires a handheld controller to interact with the AR head-mounted display, so the interactivity is poor, the equipment is inconvenient to carry, and the use occasions are limited. Therefore, in this embodiment the wearable device worn by the user is used to interact with the displayed virtual content, improving both the interactivity between the user and the displayed virtual content and the convenience of carrying.
In some embodiments, the displaying of the virtual screen may be that the terminal device first obtains a relative spatial position relationship between the wearable device and the terminal device, and then displays the virtual screen according to the relative spatial position relationship. The relative spatial position relationship may include relative position information between the wearable device and the terminal device, posture information, and the like, and the posture information may be an orientation and a rotation angle of the wearable device relative to the terminal device.
In some embodiments, a marker may be disposed on the wearable device, such as on a dial of a smart watch. Therefore, when the relative spatial position relationship between the wearable device and the terminal device needs to be acquired, the terminal device can obtain the relative spatial position relationship between the wearable device and the terminal device by identifying the marker on the wearable device. Specifically, the terminal device may acquire an image including the marker through the image acquisition device, and then identify the marker in the image, so as to obtain spatial position information of the marker with respect to the terminal device, where the spatial position information may include position information, a rotation direction, a rotation angle, and the like of the marker with respect to the terminal device. Therefore, the terminal device can acquire the relative spatial position relationship between the wearable device and the terminal device according to the specific position of the marker on the wearable device and the spatial position information of the marker relative to the terminal device. In some modes, the spatial position information of the marker relative to the terminal device can also be directly used as the relative spatial position relationship between the wearable device and the terminal device.
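As a concrete illustration of the marker-identification step, the following is a minimal sketch assuming an OpenCV-style pinhole camera on the terminal device, a calibrated camera matrix, and a marker whose feature points have known coordinates in the marker's own frame; all variable names are illustrative and not part of the embodiment.

```python
# Minimal marker pose estimation sketch: recover the marker's position and rotation
# relative to the terminal device's camera from one captured image.
import cv2
import numpy as np

def marker_pose(image_points, marker_points_3d, camera_matrix, dist_coeffs):
    """image_points: Nx2 pixel coordinates of detected marker feature points.
    marker_points_3d: Nx3 coordinates of the same feature points in the marker frame."""
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(marker_points_3d, dtype=np.float64),
        np.asarray(image_points, dtype=np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation: marker frame -> camera (terminal) frame
    return rotation, tvec               # tvec: marker origin expressed in the camera frame

# The wearable device's pose then follows by composing this result with the known,
# fixed transform between the marker and the wearable device.
```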
When the terminal device captures the image containing the marker, the spatial position of the terminal device in real space may be adjusted, or the spatial position of the wearable device in real space may be adjusted, so that the marker on the wearable device falls within the field of view of the image acquisition device of the terminal device, allowing the terminal device to capture and recognize an image of the marker. The field of view of the image acquisition device may be determined by its field-of-view size.
In some embodiments, the marker may include several sub-markers, and each sub-marker may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points, where the shape of the feature points is not limited and may be a dot, a ring, a triangle, or another shape. In addition, the distribution rules of the sub-markers within different markers are different, so each marker can have different identity information. The terminal device may acquire the identity information corresponding to the marker by identifying the sub-markers included in the marker, and the identity information may be information that can uniquely identify the marker, such as a code, but is not limited thereto.
As an implementation, the outline of the marker may be rectangular, or may also be circular, and may also conform to the actual shape of the wearable device, such as a circle that conforms to the actual shape of the dial of the smart watch, or a square that conforms to the actual shape of the smart bracelet. Of course, the shape of the marker may be other shapes, and is not limited herein, and the shape region and the plurality of sub-markers in the region constitute one marker. Of course, the marker may also be an object which is composed of a light spot and can emit light, the light spot marker may emit light of different wavelength bands or different colors, and the terminal device acquires the identity information corresponding to the marker by identifying information such as the wavelength band or the color of the light emitted by the light spot marker. It should be noted that the shape, style, size, color, number of feature points, and distribution of the specific marker are not limited in this embodiment, and only the marker needs to be recognized and tracked by the terminal device.
In some embodiments, the number of markers on the wearable device may be multiple, and each marker is fixed with respect to the position information, the rotation direction, and the rotation angle of the wearable device, that is, there may be a rigid structural relationship between the marker and the wearable device. In some embodiments, the number of markers in the image acquired by the terminal device may be one. In one embodiment, the spatial position information of one marker in the image with respect to the terminal device is recognized, and the relative spatial position relationship between the wearable device and the terminal device may be obtained based on the position information, the rotation direction, and the rotation angle of the marker with respect to the wearable device, which are stored in advance. In other embodiments, the number of markers in the image acquired by the terminal device may be multiple. As one mode, the relative spatial position relationship between the wearable device and the terminal device may be obtained by identifying spatial position information of each of the plurality of markers with respect to the terminal device, and based on the position information, the rotation direction, and the rotation angle of each marker with respect to the wearable device.
In some embodiments, the marker may be disposed on a housing of the wearable device, such as on an outer ring of a dial plate of the smart watch (fixed to the dial plate), or may be displayed on a screen of the wearable device in the form of an image, so as to locate and track the wearable device. Further, when the marker is disposed on the shell of the wearable device, a filter layer may be disposed on the marker to hide the marker.
Further, in order to increase the tracking range of the terminal device over the wearable device, so that the relative spatial position relationship between the wearable device and the terminal device can still be acquired when the user turns over the wrist, a marker may be provided on the part of the wearable device close to the user, for example on the part of the smart watch's band that is close to the user. Markers may also be provided around the wearable device, such as, but not limited to, on the left or right side relative to the user when the wearable device is worn.
In some embodiments, the wearable device may include an inertial measurement sensor, i.e., an Inertial Measurement Unit (IMU). The IMU may detect six-degree-of-freedom information of the wearable device, or may detect only three-degree-of-freedom information of the wearable device. The three-degree-of-freedom information may include the rotational degrees of freedom of the wearable device about three orthogonal coordinate axes (X, Y, and Z axes) in space, and the six-degree-of-freedom information may include both the translational and rotational degrees of freedom of the wearable device along and about the three orthogonal coordinate axes; the translational degrees of freedom can constitute the position information of the wearable device, and the rotational degrees of freedom can constitute its posture information. Therefore, by receiving the sensing data of the inertial measurement sensor sent by the wearable device, the terminal device can obtain the posture information, or the position and posture information, of the wearable device detected by the IMU, and further obtain the relative spatial position relationship between the wearable device and the terminal device.
In some embodiments, in order to accurately acquire the position and posture information of the wearable device, the terminal device may acquire an image including the wearable device and sensing data of the inertial measurement sensor, so as to obtain the position and posture information of the wearable device according to the identification data of the image and the detection data of the IMU. The terminal device acquires an image containing the wearable device, may acquire the image containing the wearable device through an image acquisition device, and may also acquire the image containing the wearable device through other sensor devices, for example, a sensor device having an image acquisition function, such as an image sensor and an optical sensor.
After the terminal device obtains the image containing the wearable device, it can identify the wearable device in the image to obtain contour data of the wearable device, that is, the contour state of the wearable device relative to the terminal device, from which the position information of the wearable device relative to the terminal device can be obtained. The terminal device can thus acquire the position information of the wearable device from the contour data and the posture information of the wearable device from the sensing data of the IMU, and thereby obtain the relative spatial position relationship between the wearable device and the terminal device, as sketched below. In some embodiments, if the sensing data of the IMU includes both the translational and rotational degrees of freedom of the wearable device along the three orthogonal coordinate axes in space, the relative spatial position relationship between the wearable device and the terminal device may also be obtained directly from the sensing data of the IMU.
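A simplified sketch of such a combination, assuming the contour-based step yields a 3D position and the IMU reports an attitude quaternion, is shown below; timestamp synchronization and frame alignment are omitted and all names are illustrative.

```python
# Fuse position from the contour-based estimate with orientation from the IMU into a
# single 4x4 pose of the wearable device relative to the terminal device.
import numpy as np

def quaternion_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) into a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def fuse_pose(position_from_contour, imu_quaternion):
    """Return the relative spatial position relationship as a homogeneous transform."""
    pose = np.eye(4)
    pose[:3, :3] = quaternion_to_matrix(imu_quaternion)   # posture information from the IMU
    pose[:3, 3] = position_from_contour                   # position information from the contour data
    return pose
```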
In some embodiments, to enable real-time tracking of the spatial position of the body of the wearable device, the contour data may be the contour data of the body of the wearable device. For example, when the wearable device is a smart watch, the contour data may be the contour data of the dial; when the wearable device is a smart bracelet, the contour data is the contour data of the bracelet body.
In some embodiments, the wearable device may further be provided with light spots, and the terminal device may capture a light spot image of the wearable device through the image acquisition device, identify the light spots in the light spot image, and determine the relative spatial position relationship between the wearable device and the terminal device from the light spot image, so as to position and track the wearable device. The light spots provided on the wearable device may be visible light spots or infrared light spots; when the light spots are infrared, an infrared camera may be provided on the terminal device to capture the light spot image of the infrared light spots. The light spots provided on the wearable device may be a single light spot or a light spot sequence consisting of a plurality of light spots.
In one embodiment, the light spot may be disposed on a housing of the wearable device, for example around a dial when the wearable device is a smart watch. The arrangement of the light spots may be various, and is not limited herein. For example, in order to obtain the posture information of the wearable device in real time, different light spots may be respectively arranged around the main body of the wearable device, for example, different numbers of light spots may be arranged around the dial of the smart watch, or light spots of different colors may be arranged around the dial, so that the terminal device determines the relative spatial position relationship between the wearable device and the terminal device according to the distribution of each light spot in the light spot image.
In some embodiments, the terminal device may also accurately acquire a relative spatial position relationship between the wearable device and the terminal device according to the light spot image and the measurement data of the IMU.
Of course, the manner of obtaining the relative spatial position relationship between the wearable device and the terminal device may not be limited in this embodiment of the application.
Further, in some embodiments, when the terminal device displays the virtual screen, the terminal device needs to acquire content data of the virtual screen to be displayed. The content data may include model data of a virtual screen to be displayed, where the model data is data for rendering the virtual screen. For example, the model data may include color data, vertex coordinate data, contour data, and the like for establishing correspondence of the virtual picture. In addition, the model data of the virtual screen to be displayed may be stored in the terminal device, or may be acquired from other electronic devices such as a wearable device and a server.
In some embodiments, the content data of the virtual image to be displayed may be obtained according to the identity information of the marker on the wearable device, that is, the content data of the corresponding virtual image to be displayed may be read according to the identity information of the marker, so that the displayed virtual image corresponds to the identity information of the marker on the wearable device.
In other embodiments, the content data of the virtual picture to be displayed may be obtained according to a virtual object that is already displayed in the virtual space by the terminal device, that is, the content data of the corresponding virtual picture to be displayed may be obtained by a virtual object that is currently displayed in the virtual space by the terminal device, where the virtual picture to be displayed may be an extended content of the virtual object. For example, in order not to interfere with the field of view of the user, the virtual notification information displayed in the AR glasses is relatively simple, and if the user needs to view detailed notification information, the terminal device may acquire the virtual detailed notification information according to the currently displayed virtual notification information, and use the data of the virtual detailed notification information as the content data of the virtual screen to be displayed.
Of course, the manner of acquiring the content data of the virtual screen to be displayed is only an example, and the specific content data of the virtual screen may not be limited in this embodiment of the application. For example, the content data of the virtual picture to be displayed may be acquired according to the captured scene image of the environment in which the terminal device is located.
In some embodiments, after the terminal device acquires the content data, the virtual picture to be displayed may be generated according to the content data and the relative spatial position relationship. That is, generating the virtual picture according to the content data and the relative spatial position relationship may consist of constructing the virtual picture from the content data, acquiring the rendering position of the virtual picture according to the relative spatial position relationship between the wearable device and the terminal device, and rendering the virtual picture at that rendering position.
In some embodiments, since the terminal device has already obtained the relative spatial position relationship between the wearable device and the terminal device, it can obtain the spatial position coordinates of the wearable device in real space and convert them into spatial coordinates in the virtual space. The virtual space may include a virtual camera that simulates the user's eyes, and the position of the virtual camera in the virtual space can be regarded as the position of the terminal device in the virtual space. According to the positional relationship between the virtual picture to be displayed and the wearable device in the virtual space, and taking the virtual camera as the reference, the spatial position of the virtual picture relative to the virtual camera can be obtained; this gives the rendering coordinates of the virtual picture in the virtual space, i.e., the rendering position at which the virtual picture is rendered. The rendering coordinates may be three-dimensional coordinates of the virtual picture in the virtual space with the virtual camera (which may also be regarded as the human eye) as the origin.
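The rendering-coordinate computation described above can be sketched as a simple transform composition, assuming the virtual camera coincides with the terminal device (human-eye origin); the offset value and names are illustrative assumptions rather than part of the embodiment.

```python
# Compute where in the virtual space (virtual-camera frame) the virtual picture should be rendered.
import numpy as np

def rendering_position(wearable_pose_4x4, picture_offset_in_wearable_frame):
    """wearable_pose_4x4: pose of the wearable device relative to the terminal device.
    picture_offset_in_wearable_frame: where the picture sits relative to the wearable device,
    e.g. np.array([0.0, 0.05, 0.0]) for a picture just above the dial."""
    offset_h = np.append(picture_offset_in_wearable_frame, 1.0)   # homogeneous coordinates
    return (wearable_pose_4x4 @ offset_h)[:3]                     # rendering coordinates in the virtual space
```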
It can be understood that after the terminal device obtains rendering coordinates for rendering a virtual picture in a virtual space, the terminal device may construct a three-dimensional virtual picture according to content data corresponding to the obtained virtual picture to be displayed, and render the virtual picture according to the rendering coordinates, where the rendering of the virtual picture may obtain RGB values and corresponding coordinates of each vertex in the three-dimensional virtual picture.
In some embodiments, after the terminal device generates the virtual picture, the virtual picture may be displayed. Specifically, after the terminal device constructs and renders the virtual picture, the rendered virtual picture can be converted into a display picture to obtain corresponding display data, which may include the RGB value of each pixel point in the display picture, the corresponding pixel point coordinates, and the like. The terminal device can generate the display picture from the display data and project it onto the display lens through a display screen or projection module, thereby displaying the virtual picture. Through the display lens of the head-mounted display device, the user can see the three-dimensional virtual picture superimposed and displayed on the wearable device in the real world, achieving the effect of augmented reality. In this way, the corresponding virtual picture is displayed in virtual space according to the spatial position of the marker on the wearable device worn by the user, so that the user observes the virtual picture superimposed on the real world, and the display effect of the virtual picture is improved.
For example, referring to fig. 1, the terminal device 100 is a head-mounted display device, the wearable device 200 is a smart watch, and the user can scan the marker 201 on the wearable device 200 in real time through the head-mounted display device worn by the user, and can see that the three-dimensional virtual automobile model 401 is superimposed on the wearable device 200 displayed in real space. For another example, referring to fig. 4, in order not to interfere with the visual field of the user, the virtual notification information 300 displayed by the terminal device is relatively simple, and when the user needs to view more detailed mail information, the user can scan the marker on the smart watch in real time through the head-mounted display device worn by the user, and can see that the detailed virtual mail information 402 is superimposed and displayed on the smart watch in real space, thereby embodying the augmented reality effect of the virtual content.
In some embodiments, the position relationship between the virtual screen and the wearable device may be fixed, for example, the virtual screen is fixedly displayed in a predetermined area on the wearable device, or may be related to specific virtual content in the virtual screen, for example, when the virtual screen is a User Interface (UI), the virtual screen is displayed in front of the wearable device, and when the virtual screen is mail information, the virtual screen is displayed above the wearable device, which is not limited herein.
Step S120: the first position and the posture information of the wearable device relative to the terminal device are obtained.
In the embodiment of the application, after the terminal device displays the virtual picture and needs to interact with the virtual picture, the terminal device can realize interaction with the virtual picture by acquiring the first position and posture information of the wearable device relative to the terminal device in real time and according to the position and posture information of the wearable device. The posture information may be an orientation and a rotation angle of the wearable device relative to the terminal device.
In some embodiments, the terminal device obtains the first position and the posture information of the wearable device relative to the terminal device, which may refer to the above-mentioned manner of obtaining the relative spatial position relationship between the wearable device and the terminal device, and is not described herein again.
Step S130: and acquiring the position information of the virtual picture relative to the terminal equipment.
When the terminal device realizes interaction with virtual content, the virtual content to be interacted with needs to be determined. Therefore, when interaction with the virtual picture is required, the terminal device can acquire the position information of the virtual picture relative to the terminal device, so as to determine, according to that position information, that the virtual content to be interacted with is the virtual picture, thereby realizing interaction between the user and the virtual picture.
In some embodiments, when the terminal device displays the virtual picture, it needs to acquire the position information of the virtual picture relative to the terminal device in order to render and generate the virtual picture according to that position information. Therefore, when the terminal device needs to interact with the displayed virtual picture, it can directly use the previously acquired position information of the virtual picture relative to the terminal device.
Step S140: and acquiring a target area selected by the wearable device in the virtual picture according to the position information, the first position and the posture information.
In the embodiment of the present application, the terminal device may determine the target area selected by the wearable device in the virtual picture according to the position information of the virtual picture relative to the terminal device and the first position and posture information of the wearable device relative to the terminal device. When the wearable device faces the virtual picture, the target area is the area of the virtual picture pointed to by the wearable device, and corresponds to the position and posture information of the wearable device. The target area can be determined according to the user's intention; that is, the user can determine the target area selected by the wearable device in the virtual picture by changing the position and posture information of the wearable device.
In the embodiment of the present application, the angle between the direction in which the wearable device points and the plane in which the wearable device lies is fixed. When the terminal device obtains the first position and posture information of the wearable device relative to the terminal device, it obtains the spatial position information of the wearable device; the current pointing direction of the wearable device can then be obtained from the fixed angular relation between the pointing direction and the wearable device together with the spatial position information of the wearable device, and the target area selected by the wearable device in the virtual picture can be obtained from the position information of the virtual picture relative to the terminal device and the current pointing direction of the wearable device, for example by the ray-casting sketch below.
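One way to realize this, purely as an illustrative geometric sketch (the embodiment does not prescribe a specific algorithm), is to cast a ray from the wearable device along its pointing direction and intersect it with the plane of the virtual picture; all quantities are expressed relative to the terminal device and all names and the grid size are assumptions.

```python
# Ray-cast from the wearable device's pointing direction onto the virtual picture plane
# and report which cell of the picture (the target area) was hit.
import numpy as np

def select_target_area(ray_origin, ray_direction, plane_point, plane_normal,
                       plane_u, plane_v, grid=(4, 4)):
    """plane_point: one corner of the picture; plane_u / plane_v: its edge vectors.
    Returns a (row, col) cell index, or None if the ray misses the picture."""
    denom = np.dot(plane_normal, ray_direction)
    if abs(denom) < 1e-6:                    # ray parallel to the picture plane
        return None
    t = np.dot(plane_normal, plane_point - ray_origin) / denom
    if t < 0:                                # the picture lies behind the wearable device
        return None
    hit = ray_origin + t * ray_direction
    local = hit - plane_point                # express the hit point in picture coordinates
    u = np.dot(local, plane_u) / np.dot(plane_u, plane_u)
    v = np.dot(local, plane_v) / np.dot(plane_v, plane_v)
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return None                          # pointing outside the virtual picture
    row = min(grid[0] - 1, int(v * grid[0]))
    col = min(grid[1] - 1, int(u * grid[1]))
    return row, col                          # the selected target area
```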
Step S160: processing operations corresponding to the target area are performed.
In the embodiment of the application, after the terminal device acquires the target area selected by the wearable device in the virtual screen, the terminal device can respond to the target area and perform processing operation corresponding to the target area. Therefore, different areas in the virtual picture can be selected according to the position and posture information of the wearable device, the interaction between the wearable device and the virtual content is embodied, and the convenience of interaction between a user and the virtual content is improved. For example, referring to fig. 5, the user changes the position and posture information of the smart watch by rotating the wrist, thereby changing the selected target area 410 in the virtual screen 400.
In some embodiments, after the terminal device displays the virtual screen, it may detect whether a selected target area exists in the displayed virtual screen in real time, so that when the target area is detected, the terminal device may perform a relevant processing operation on the target area, thereby achieving a purpose that a user selects the area to perform a relevant processing. The processing operation may be a content selection operation, a content switching operation, a content moving operation, a content rotation operation, and the like, and may be set reasonably according to a specific application scenario, which is not limited herein.
Performing the processing operation corresponding to the target area may mean performing different processing operations according to the different virtual content in the target area. For example, referring to fig. 6, the displayed virtual picture 400 is a virtual UI interface of a smart desk lamp and the selected target area 410 is the brightness setting, so the terminal device can slidably display different brightness setting values; for another example, when the virtual content in the target area is "return to the previous level", the terminal device may switch the virtual picture, as in the dispatch sketch below. Of course, the above execution of processing operations corresponding to the target area is merely an example and does not limit how such processing operations are performed.
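A trivial dispatch sketch corresponding to these examples is given below; the content labels and the methods on the virtual screen object are illustrative assumptions, not part of the embodiment.

```python
# Perform the processing operation corresponding to the selected target area.
def perform_operation(target_area, virtual_screen):
    content = virtual_screen.content_at(target_area)    # hypothetical content lookup
    if content == "brightness":
        virtual_screen.show_slider(target_area)         # slidable brightness setting values
    elif content == "return_previous_level":
        virtual_screen.switch_screen()                   # content switching operation
    else:
        virtual_screen.select(target_area)               # default: content selection operation
```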
According to the interaction method for virtual content described above, after the virtual picture is displayed, the first position and posture information of the wearable device relative to the terminal device and the position information of the virtual picture relative to the terminal device are obtained, the target area selected by the wearable device in the virtual picture is obtained according to the position information and the first position and posture information, and the processing operation corresponding to the target area is performed. In this way, a target area in the virtual picture is selected according to the spatial position of the wearable device and operated upon, realizing interaction between the wearable device and the terminal device and improving both the interactivity and the convenience of interaction between the user and the virtual content.
Referring to fig. 7, another embodiment of the present application provides an interaction method for virtual content, which is applicable to a terminal device, where the terminal device is in communication connection with a wearable device, and the interaction method for virtual content may include:
step S210: and displaying the virtual picture.
In the embodiment of the application, when the terminal device displays the virtual picture, the terminal device needs to acquire content data of the virtual picture to be displayed.
In some embodiments, the content data of the virtual picture to be displayed may be associated with a real object, that is, when the terminal device identifies a different real object, the displayed virtual picture is different. Specifically, referring to fig. 8, the displaying the virtual frame includes:
step S211: and identifying the entity object in the real space to obtain the picture data corresponding to the entity object.
In some embodiments, when the terminal device needs to interact with a real object, the terminal device may identify an entity object in a real space, and obtain picture data corresponding to the entity object based on the identification result. Wherein the entity object can be any physical entity in real space.
In some embodiments, the picture data may include model data (e.g., color data, vertex coordinate data, contour data, etc.) of a virtual picture to be displayed, the model data being data for rendering the virtual picture. The screen data may be stored in the terminal device or may be acquired from another electronic device such as a server. In some embodiments, the screen data corresponding to the entity object may be UI data of the entity object, or may be data related to the entity object, such as video data, image data, and the like, which is not limited herein.
In some embodiments, the identifying the entity object in the real space may be that an image including the entity object is collected by a camera of the wearable device, the wearable device sends the image to the terminal device, and the terminal device identifies the entity object in the image. The terminal device may also directly acquire an image containing the physical object, and then identify the physical object in the image, which is not limited herein.
Step S212: generating a virtual picture according to the picture data;
when the terminal device needs to interact with the entity object in the real space, a virtual picture corresponding to the entity object can be generated according to the acquired picture data. The virtual screen may be a UI of the entity object or related information of the entity object. After the terminal device obtains the picture data, a virtual picture can be constructed according to the picture data, and the virtual picture can be rendered according to the position information of the virtual picture relative to the terminal device. The rendering of the virtual picture can obtain the RGB value of each pixel point in the virtual picture, the corresponding pixel point coordinates, and the like.
Step S213: and displaying the virtual picture.
In this embodiment, after the terminal device generates the virtual picture corresponding to the entity object, the virtual picture may be displayed. Specifically, after the terminal device constructs and renders the virtual picture, display data of the rendered virtual picture may be acquired, where the display data may include the RGB value of each pixel point in the display picture and the corresponding pixel point coordinates; the terminal device may generate the display picture according to the display data and project it onto the display lens through the display screen or the projection module, thereby displaying the virtual picture. Through the display lens of the head-mounted display device, the user can see the virtual picture corresponding to the real object superimposed and displayed on the wearable device in the real world, realizing the effect of augmented reality. This realizes interaction between the terminal device and a real object and improves the display effect of the virtual picture. For example, referring to fig. 5, the terminal device identifies the smart desk lamp in real space and displays a virtual picture 400 in virtual space, where the virtual picture 400 is the virtual UI interface of the smart desk lamp; through the display lens of the head-mounted display device, the user can see the virtual picture 400 corresponding to the real object superimposed and displayed on the smart watch in the real world.
Further, when there are multiple entity objects in the real space, the terminal device needs to select one entity object for interaction. In some embodiments, referring to fig. 9, the identifying the physical object in the real space and obtaining the frame data corresponding to the physical object may include:
step S2111: the method comprises the steps of collecting a scene image of an environment where the terminal equipment is located, wherein the environment comprises at least one entity object.
In some embodiments, the environment in which the terminal device is located includes at least one entity object, and the terminal device may acquire a scene image of the environment in which the terminal device is located through the image acquisition device, so as to select the entity object to be interacted according to the scene image. Wherein, at least one entity object can be included in the scene image.
In some embodiments, the terminal device collects the scene image, and may collect the scene image through an image collecting device of the terminal device, or acquire the collected scene image from the wearable device after collecting the scene image by using a camera of the wearable device.
Step S2112: and acquiring second position and posture information of the wearable device relative to the terminal device.
In some embodiments, the terminal device may select the entity object by using the wearable device. Therefore, the terminal device needs to acquire the second position and posture information of the wearable device relative to the terminal device, so that the entity object can be selected by using the position and posture information of the wearable device. The terminal device obtains the second position and the posture information of the wearable device relative to the terminal device, and the manner of obtaining the relative spatial position relationship between the wearable device and the terminal device by the terminal device in the above embodiments may be referred to, which is not described herein again.
Step S2113: and acquiring the selected entity object in the scene image according to the second position and the posture information.
When the terminal device acquires the scene image, the spatial position information between each entity object in the scene image and the terminal device can be obtained, so that the spatial position information between each entity object in the scene image and the wearing device can be obtained according to the second position and the posture information of the wearing device relative to the terminal device. Because the direction in which the wearable device is pointed is fixed relative to the wearable device, the selected entity object in the scene image can be determined according to the pointing direction of the wearable device and the spatial position information between each entity object and the wearable device.
In some embodiments, the wearable device is provided with a light emitting device whose emission direction is fixed relative to the wearable device, so that the emission direction can be determined from the position and posture information of the wearable device and the fixed relationship between the emission direction and the wearable device. By moving or rotating the wearable device, the user changes its position and posture information and thus the direction of the light, so that a light spot falls on the entity object to be selected. The terminal device may determine whether an entity object in the scene image is the selected entity object by determining whether there is a light spot on it. In some embodiments, the light emitted by the light emitting device may be visible light or infrared light, which is not limited herein.
Step S2114: and identifying the selected entity object to obtain the picture data corresponding to the selected entity object.
When the terminal device acquires the selected entity object in the scene image, the entity object can be identified to obtain the picture data corresponding to the selected entity object, so that the virtual picture can be generated according to the picture data.
In some embodiments, the screen data corresponding to the entity object may be stored in a server or a terminal device. When the picture data is stored in the server, the terminal device obtains the picture data, and may download the corresponding picture data from the server based on the identity information after identifying the identity information of the entity object.
Further, the virtual picture may be displayed near the entity object to reduce the amount of eye movement required of the user. Therefore, in some embodiments, the generating of a virtual picture from the picture data includes:
acquiring a first spatial position relation between the virtual picture and the entity object and a second spatial position relation between the entity object and the terminal equipment; and generating a virtual picture according to the picture data, the first spatial position relation and the second spatial position relation.
In some embodiments, the first spatial position relationship between the virtual picture and the entity object is fixed, i.e., the virtual picture is fixedly displayed in the vicinity of the entity object. The terminal device may therefore obtain the first spatial position relationship between the virtual picture and the entity object and the second spatial position relationship between the entity object and the terminal device, derive the spatial position of the virtual picture relative to the terminal device from these two relationships, and thereby obtain the rendering coordinates of the virtual picture in the virtual space. The terminal device can then construct the virtual picture according to the picture data and render it at the rendering coordinates, so that the virtual picture is displayed near the entity object.
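As a minimal sketch of how the two spatial position relationships can be chained, assuming each relationship is expressed as a 4x4 homogeneous transform (the names below are illustrative, not part of the embodiment):

```python
import numpy as np

def compose_pose(T_terminal_object, T_object_screen):
    """Chain two rigid transforms expressed as 4x4 homogeneous matrices.

    T_terminal_object: pose of the entity object in the terminal device frame
        (the second spatial position relationship).
    T_object_screen: pose of the virtual picture in the entity object frame
        (the first spatial position relationship).
    Returns the pose of the virtual picture in the terminal device frame, from
    which the rendering coordinates in the virtual space can be derived.
    """
    return T_terminal_object @ T_object_screen

# Example: the virtual picture is fixed 0.2 m above the entity object.
T_object_screen = np.eye(4)
T_object_screen[2, 3] = 0.2
```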
In other embodiments, the content data of the virtual content to be displayed may correspond to the wearable device. Specifically, the acquiring content data of the virtual image to be displayed may include:
acquiring an identity corresponding to the wearable device; accessing the stored data corresponding to the identity according to the identity; and acquiring the content data of the virtual picture to be displayed from the stored data.
The identity identifier uniquely identifies the wearable device; its specific form is not limited herein. For example, the identity identifier may be a string composed of numbers and/or English letters, or a pattern of a certain shape such as a two-dimensional code or a barcode. In some embodiments, the identity identifier may be stored in the wearable device; when the wearable device establishes a communication connection with the terminal device, it sends the identity identifier to the terminal device, so that the terminal device obtains the identity identifier corresponding to the wearable device.
In some embodiments, the identity identifier may correspond to a marker on the wearable device, so that the terminal device may acquire the identity identifier corresponding to the wearable device by recognizing the marker on the wearable device. Specifically, after the terminal device identifies the marker on the wearable device, it can obtain the identity identifier corresponding to that marker from the correspondence between identity identifiers and markers. In some embodiments, this correspondence may be stored in the terminal device; it may be set by the user, be a factory default of the terminal device, or be acquired by the terminal device from a server.
In some embodiments, when terminal devices (e.g., AR devices such as head-mounted display devices) are few in number, multiple wearable devices may be configured so that different users can each access their own data when using the same terminal device. Specifically, the terminal device may access the stored data corresponding to the identity identifier, where the identity identifier can be understood as information such as the account number and password required to access the data. Thus, when different users use the same terminal device, each user can log in to the terminal device through their wearable device to access data related to that user.
The stored data may be data related to the wearable device, such as its battery level, function information, and specification parameters; data related to the user wearing the wearable device, such as the user's step count, real-time heart rate, incoming call data, and short message or mail information; or data saved by the user, such as files and audio. In some embodiments, the stored data may reside in the terminal device or in the server.
In some embodiments, the terminal device may be connected to the cloud server, and the terminal device may log in the cloud server through the identity identifier corresponding to the wearable device to access the storage data in the corresponding storage space in the cloud server.
In some embodiments, when the terminal device needs to display the virtual screen, the content data of the virtual screen to be displayed may be acquired from the accessed stored data, so as to display the data related to the terminal device. The content data of the virtual screen to be displayed, which is acquired from the stored data, may be selected by the user or acquired by default by the terminal device.
Further, the terminal device may display the virtual screen only when the user performs a wrist-lifting motion. Specifically, the terminal device may determine whether the content data needs to be acquired according to an execution action of the user, and generate and display a virtual screen according to the content data when it is determined that the content data needs to be acquired. Therefore, before the above-mentioned acquiring the content data of the virtual screen to be displayed, the method for interacting the virtual content may further include:
judging whether the wearable equipment is in a specified action state or not according to the relative spatial position relation; and when the virtual screen is in the designated action state, executing the step of acquiring the content data of the virtual screen to be displayed.
When the user's wrist is lowered, the wearable device leaves the field of view of the terminal device's image acquisition device, so the terminal device cannot acquire the spatial position information of the wearable device and therefore cannot display the virtual picture according to that information. Accordingly, when the user's wrist is lowered, the wearable device is in a lowered state and the terminal device may refrain from displaying the virtual content.
In some embodiments, the terminal device may display the virtual content only when the wearable device is in the lifted state. Specifically, the terminal device may determine, from the relative spatial position relationship, whether the wearable device is in a designated action state. When the wearable device is in the designated action state, the terminal device executes the step of acquiring the content data of the virtual picture to be displayed and then displays the virtual picture; when it is not, the terminal device skips this step, so the virtual picture is not displayed. Because the content data is acquired only in the designated action state, the terminal device displays the virtual picture only while the wearable device is lifted, and the user can raise the wrist to put the wearable device into the lifted state, thereby controlling the display of the virtual picture and improving the interactivity between the user and the virtual picture.
In this embodiment of the application, the specified action state is a state that the wearable device needs to maintain when the terminal device acquires the content data, and may be implemented by a specified action of a user. The designated motion may be a wrist raising motion, such as a user raising the wrist from the body side to the front, raising the wrist from the body side to the chest, and the like. It can be understood that when the user performs a wrist-lifting action, the wearable device is in a lifted state, and the main body part of the wearable device is located in the visual field range of the image acquisition device of the terminal device.
Because the relative spatial position relationship obtained by the terminal device may include the position information, the rotation direction, the rotation angle, and the like of the wearable device relative to the terminal device, in some embodiments, the terminal device may determine whether the main body portion of the wearable device is within the visual field of the image acquisition device of the terminal device according to the position information, the rotation direction, and the rotation angle of the wearable device relative to the terminal device, so as to determine whether the wearable device is in the lifted state. It can be understood that, when the main body portion of the wearable device is within the visual field of the image capture device of the terminal device, the terminal device may determine that the wearable device is in the lifted state, so that the above step of acquiring the content data of the virtual picture to be displayed may be performed.
In other embodiments, the terminal device may directly determine whether the wearable device is in the lifted state according to the sensing data of the sensor of the wearable device. In some embodiments, the sensor may be an acceleration sensor, a gravity sensor, or the like, that is, it may be determined whether the wearable device is in the lifted state by determining whether an acceleration value of the wearable device is greater than a preset threshold, or it may be determined whether the wearable device is in the lifted state by determining whether a change in gravity of the wearable device satisfies a preset condition.
The preset threshold is a value greater than 9.8 (the acceleration of gravity), such as 12 or 14; it is the minimum acceleration value the wearable device must reach to be judged as being in the lifted state. It can be set according to the user's actual usage, for example by collecting and recording the acceleration produced when the user first performs the wrist-raising action with the wearable device and setting the preset threshold from that recording. It can be understood that the larger the preset threshold is set, the faster the wrist-raising action must be to trigger the judgment. Similarly, the preset condition describes the range of gravity variation of the wearable device in the lifted state and can likewise be set according to the user's actual usage, for example by collecting and recording the change in gravity when the user first performs the wrist-raising action and setting the preset condition from that recording.
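A minimal sketch of the acceleration-based judgment, assuming the wearable device reports an acceleration magnitude in m/s²; the default value below is illustrative and would in practice be calibrated as described above:

```python
def is_wrist_raised(accel_magnitude, threshold=12.0):
    """Judge the lifted state from the acceleration sensor of the wearable device.

    threshold is the preset threshold described above (a value greater than
    9.8 m/s^2); 12.0 is only an illustrative default.
    """
    return accel_magnitude > threshold
```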
In some embodiments, when the user performs the designated action, the terminal device may acquire the content data of the virtual picture to be displayed, construct the virtual picture according to the content data, obtain a rendering position of the virtual picture according to the relative spatial position relationship between the wearable device and the terminal device, render the virtual picture at that position, and display it in the virtual space. In this way the user sees the virtual picture displayed on the wearable device upon lifting the wrist, which improves the display effect of the virtual content and the interactivity between the user and the virtual content. For example, referring to fig. 1, when the user lifts the wrist in front of the chest, the head-mounted display device worn by the user may scan the marker 201 on the wearable device 200 worn on the wrist in real time, and the user can see the three-dimensional virtual automobile model 401 superimposed on the wearable device 200 in real space, which embodies both the augmented reality display effect of the virtual content and the interaction between the terminal device and the wearable device.
Step S220: the first position and the posture information of the wearable device relative to the terminal device are obtained.
Further, in some embodiments, a lock function may be provided for the displayed virtual picture. Because the virtual picture is displayed only while the user raises the wrist, keeping the wrist raised for a long time makes the arm sore. Therefore, when the user has raised the wrist to view the virtual picture for more than a certain time, the displayed virtual picture can be locked, so that even after the wrist is lowered, the terminal device still displays the virtual picture and the position of the virtual picture relative to the terminal device remains fixed. Specifically, the terminal device may measure the duration for which the variation of the first position and posture information stays below a preset threshold, use this duration as the wrist-raising duration during which the user views the virtual content, and determine whether the virtual content may be locked by checking whether this duration reaches a preset duration. When it does, the terminal device may fix the virtual content at the current display position, thereby locking its position.
It can be understood that when the user holds the wrist-raising action for a long time, the first position and posture information of the wearable device relative to the terminal device remains stable and changes little. A preset threshold can therefore be set and used to judge, after the user has made the wrist-raising action, whether the user is still holding it. The preset threshold is the maximum change in the first position and posture information allowed while the user holds the wrist-raising action. After the user performs the wrist-raising action, if the variation of the first position and posture information is smaller than the preset threshold, it can be determined that the user is holding the action; if it is larger than the preset threshold, it can be determined that the user is no longer holding the action.
Furthermore, a preset duration can be set, so that when the user has held the wrist-raising action for the preset duration, the virtual picture is locked. The preset duration is the length of time the user must hold the wrist-raising action before the virtual picture is locked. In other words, the user needs to hold the wrist-raising action throughout the preset duration for the terminal device to lock the virtual picture; that is, when the time during which the variation of the first position and posture information stays below the preset threshold reaches the preset duration, the terminal device fixes the virtual content at the current display position.
In some embodiments, the terminal device fixes the virtual picture at the current display position by obtaining the rendering coordinates of the current virtual picture and using them as the rendering coordinates of all subsequent virtual pictures to be displayed, so that the virtual picture is always rendered and displayed at the current display position. Thus, even when the user lowers the wrist, the terminal device still displays the virtual picture, and the display position of the virtual picture relative to the terminal device is fixed. The current display position refers to the position of the virtual content in the virtual space; the fixed display position may be the position of the virtual content relative to the virtual camera (which may also be regarded as the human eye) or relative to the world coordinate origin of the virtual space.
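The locking behaviour can be sketched as follows, assuming the terminal device updates the lock state once per frame with the measured pose change and the current rendering coordinates; the class name, parameters, and threshold values are illustrative assumptions:

```python
class ScreenLock:
    """Lock the virtual picture at its current rendering coordinates once the
    change in the first position and posture information stays below a preset
    threshold for a preset duration."""

    def __init__(self, pose_change_threshold=0.02, hold_seconds=3.0):
        self.pose_change_threshold = pose_change_threshold
        self.hold_seconds = hold_seconds
        self.steady_time = 0.0
        self.locked_coords = None

    def update(self, pose_change, dt, current_render_coords):
        if self.locked_coords is not None:
            return self.locked_coords           # keep rendering at the locked position
        if pose_change < self.pose_change_threshold:
            self.steady_time += dt
            if self.steady_time >= self.hold_seconds:
                self.locked_coords = current_render_coords
        else:
            self.steady_time = 0.0              # wrist moved, restart the timer
        return current_render_coords
```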
Step S230: and acquiring the position information of the virtual picture relative to the terminal equipment.
Step S240: and acquiring a target area selected by the wearable device in the virtual picture according to the position information, the first position and the posture information.
In some embodiments, the content of step S230 and step S240 may refer to the content of the above embodiments, and is not described herein again.
In some embodiments, the terminal device may obtain the target area selected by the wearable device in the virtual screen through the virtual guide of the wearable device. Specifically, referring to fig. 10, the obtaining a target area selected by the wearable device in the virtual screen according to the position information, the first position, and the posture information may include:
step S241: and generating the virtual guide according to the relative position relation between the virtual guide and the wearable device and the first position and posture information.
In the embodiment of the present application, the virtual guide may be a virtual ray or a virtual curve, which is not limited herein and is used for indicating a direction. The relative position relation between the virtual guide and the wearable device is fixed, and the virtual guide can be adjusted according to the operation habits of the user. In some embodiments, the pointing direction of the virtual guide may be parallel to the plane of the wearable device, or may be at an angle, such as 45 ° obliquely upward, and is not limited herein.
The terminal device can obtain the spatial position coordinates of the wearable device in real space from the first position and posture information, convert them into spatial coordinates in the virtual space, and render the virtual guide according to those coordinates.
Step S242: the virtual guide is displayed.
After the terminal device renders the virtual guide, the virtual guide can be displayed. Through the display lens of the head-mounted display device, the user sees the virtual guide superimposed on the wearable device in the real world, achieving an augmented reality effect.
Step S243: and acquiring an intersection region intersected with the virtual guide in the virtual picture according to the position information and the virtual guide, and taking the intersection region as a target region selected by the wearable device in the virtual picture.
In the embodiment of the application, the terminal device may obtain an intersection area in the virtual screen, which is intersected with the virtual guide, according to the position information of the virtual screen relative to the terminal device and the displayed virtual guide, and use the intersection area as a target area selected by the wearable device in the virtual screen.
In some embodiments, acquiring the intersection region of the virtual picture with the virtual guide may mean acquiring the coordinate point region of the virtual picture that shares coordinates with the virtual guide. That coordinate point region may be used directly as the intersection region, or the region occupied by the corresponding virtual content may be obtained from the coordinate point region and used as the intersection region, i.e., as the target area selected by the wearable device in the virtual picture. The region of the corresponding virtual content may, for example, be the region of the virtual content closest to the coordinate point region. The user can therefore change the position of the virtual guide by adjusting the position and posture information of the wearable device, so that the displayed virtual guide intersects the virtual content to be selected and the virtual content is accurately selected in the virtual picture. For example, referring to fig. 11, through the head-mounted display device the user sees a virtual ray 420 emitted obliquely upward from the smart watch; by rotating the wrist, the user can change the pointing direction of the virtual ray 420, so that the target area 410 pointed at by the virtual ray 420 becomes the target area selected by the wearable device in the virtual picture.
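Assuming the virtual guide is a virtual ray and the virtual picture lies on a plane in the virtual space, the intersection region can be located with a standard ray-plane intersection; this sketch uses illustrative names and is not the only way the embodiment could obtain the intersection:

```python
import numpy as np

def intersect_guide_with_screen(ray_origin, ray_dir, plane_point, plane_normal):
    """Intersect the virtual guide (a ray from the wearable device) with the
    plane of the virtual picture; returns the intersection point or None.

    All quantities are expressed in the terminal device's virtual-space
    coordinates; ray_dir and plane_normal are unit vectors.
    """
    denom = np.dot(plane_normal, ray_dir)
    if abs(denom) < 1e-6:
        return None                      # guide is parallel to the picture plane
    t = np.dot(plane_normal, np.asarray(plane_point) - np.asarray(ray_origin)) / denom
    if t < 0:
        return None                      # picture lies behind the wearable device
    return np.asarray(ray_origin) + t * np.asarray(ray_dir)
```

The returned point can then be treated as the coordinate point region, or mapped to the nearest region of virtual content, as described above.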
In a game scene, to enhance the game experience, a virtual sight can be displayed on the wearable device, and the wearable device as a whole can serve as a launcher for shooting operations. In some embodiments, the pointing direction of the virtual guide may be aligned with the center of the virtual sight, so that when virtual content in the virtual picture needs to be selected, the user can align the center of the virtual sight with that content. For example, referring to fig. 12, the terminal device displays the virtual sight 600 on the smart watch with its center aligned with the pointing direction of the virtual ray 420, and through the head-mounted display device the user sees the virtual sight 600 aligned with the target area 410 in the virtual picture 400. In some embodiments, the virtual guide may be hidden so that the user cannot see it through the head-mounted display device, further improving the game experience.
In some embodiments, after the target area selected by the wearable device in the virtual screen is acquired according to the position information, the first position and the posture information, the method for interacting the virtual content may further include:
when the change of the first position and the posture information of the wearable device relative to the terminal device is detected, a new target area selected by the wearable device in the virtual picture is obtained according to the changed first position and posture information.
It can be understood that after the target area selected by the wearable device in the virtual picture is obtained, the first position and posture information of the wearable device relative to the terminal device can be detected in real time, so that when this information changes, the target area selected by the wearable device in the virtual picture is updated. That is, when a change in the first position and posture information is detected, the target area is re-determined by the target-area determination method described above according to the changed first position and posture information, and the processing operation corresponding to the new target area is performed. The user can therefore change the spatial position of the wearable device relative to the terminal device to move or otherwise adjust the area selected in the virtual picture, and the wearable device can also act as a controller for selecting virtual content.
Step S250: processing operations corresponding to the target area are performed.
In the above steps, the terminal device may display the related data of the entity object (such as UI data) in augmented reality to realize interaction with the real object; through the control of the virtual content described above, the terminal device may further operate on the virtual content corresponding to the entity object to realize deeper interaction with the real object.
In some embodiments, when the entity object is an intelligent home device, the terminal device may further set the state of the intelligent home device through the wearable device. Specifically, referring to fig. 13, the performing the processing operation corresponding to the target area may include:
step S251: and acquiring virtual content corresponding to the target area.
In the embodiment of the application, the user can interact with the smart home device through the wearable device. Specifically, the terminal device can identify the smart home device to display a virtual interactive interface (virtual UI) of the smart home device in a virtual space, and then the user can select different virtual contents in the virtual interactive interface by adjusting the position and posture information of the wearable device to set the state of the smart home device.
In some embodiments, when the terminal device displays the virtual interaction interface of the smart home device, the selected target area may be obtained through the obtaining manner of the target area in the above embodiment, so that the corresponding selected virtual content may be obtained according to the target area. Wherein the virtual content is a part of the displayed virtual interactive interface. For example, referring to fig. 6, a displayed virtual frame 400 is a brightness adjustment page of the intelligent desk lamp, a target area 410 is a brightness setting, and virtual content corresponding to the target area 410 is a specific brightness value. Specifically, when the terminal device acquires the selected target area, the spatial position of the target area may be acquired, and according to the spatial position of the target area, the virtual content corresponding to the spatial position is acquired from the virtual interactive interface.
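One possible way to map the spatial position of the target area to the corresponding virtual content is a simple containment test over the regions of the virtual interactive interface; the bounding-box representation and all names below are assumptions of this sketch:

```python
def content_at_position(target_pos, ui_elements):
    """Find the virtual content of the virtual interactive interface whose region
    contains the spatial position of the target area.

    ui_elements maps a content name to an axis-aligned bounding box given as
    (min_corner, max_corner) in virtual-space coordinates.
    """
    for name, (mn, mx) in ui_elements.items():
        if all(mn[i] <= target_pos[i] <= mx[i] for i in range(3)):
            return name
    return None

# e.g. content_at_position((0.1, 0.25, 0.6),
#                          {"brightness_50": ((0.05, 0.2, 0.55), (0.15, 0.3, 0.65))})
```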
Step S252: and generating an execution instruction according to the virtual content.
In some embodiments, the terminal device may adjust the state of the smart home device according to the obtained virtual content. Specifically, the terminal device may generate a corresponding execution instruction according to the virtual content corresponding to the target area, where the execution instruction is used to adjust the state of the smart home device to a state corresponding to the virtual content. For example, when the virtual content is the brightness 50, the terminal device may generate an execution instruction for adjusting the brightness of the intelligent desk lamp to be 50.
In some embodiments, the virtual content may correspond to the execution instruction, that is, when the terminal device acquires the virtual content corresponding to the target area, the execution instruction corresponding to the virtual content may be generated according to the correspondence between the virtual content and the execution instruction. The corresponding relationship between the virtual content and the execution instruction may be stored in the terminal device, or may be acquired from the server.
Step S253: and transmitting an execution instruction to the intelligent household equipment, wherein the execution instruction is used for indicating the intelligent household equipment to execute the setting operation.
After the terminal device generates the execution instruction, it can transmit the execution instruction to the smart home device, where the instruction instructs the smart home device to perform the setting operation. Upon receiving the execution instruction, the smart home device performs the setting operation accordingly, adjusting its current state to the state set by the user, i.e., the state corresponding to the virtual content. Interaction between the wearable device and the smart home device is thus realized, and the intelligence level of the wearable device is improved.
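Purely as an illustration, the execution instruction could be serialized and transmitted to the smart home device roughly as follows; the JSON fields, the address, and the TCP transport are assumptions of this sketch, not a protocol defined by the embodiment:

```python
import json
import socket

def send_execute_instruction(device_addr, field, value):
    """Generate an execution instruction from the selected virtual content
    (e.g. brightness 50) and transmit it to the smart home device."""
    instruction = json.dumps({"action": "set", "field": field, "value": value})
    with socket.create_connection(device_addr, timeout=2.0) as conn:
        conn.sendall(instruction.encode("utf-8"))

# e.g. send_execute_instruction(("192.168.1.20", 8888), "brightness", 50)
```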
In other embodiments, after the terminal device selects the target area through the wearable device, the wearable device may be further utilized to control and display the virtual content of the target area. Specifically, referring to fig. 14, the performing the processing operation corresponding to the target area may include:
step S251: and acquiring virtual content corresponding to the target area.
Step S254: and controlling the display of the virtual content according to at least one of the control action parameters detected by the wearable device, the user gestures detected by the terminal device, and the change information of the first position and the gesture.
In some embodiments, when the terminal device acquires the selected target area, the terminal device may acquire virtual content corresponding to the target area in the virtual screen, and may control display of the virtual content according to at least one of a control action parameter detected by the wearable device, a user gesture detected by the terminal device, and change information of the first position and the gesture. Therefore, when the wearable device aims at the target area, the terminal device can utilize the wearable device to control and display the virtual content of the target area. For example, in a game scenario, a user aims with a wearing device worn by the left hand and the right hand presses a key on the wearing device to shoot. For another example, when the user aims with the wearable device worn by the left hand, the aiming area may be subjected to relevant processing operations (selection, shooting, etc.) by changing the six-degree-of-freedom information of the wearable device (e.g., rotating the left wrist).
The terminal device controls the displayed virtual content according to at least one of the control action parameters detected by the wearable device, the user gestures detected by the terminal device, and the change information of the first position and posture. That is, the terminal device may control the displayed virtual content according to any one of these three inputs alone, or according to any combination of them. Likewise, when the wearable device detects multiple control action parameters, the terminal device may control the displayed virtual content according to one of them or according to several of them together.
In this embodiment of the application, a control action parameter refers to operation information generated when the user acts on a control area of the wearable device, i.e., it represents the specific action the user performs in that control area. For example, when a button is arranged in the control area of the wearable device and the user presses it, the control action parameter can be the pressure signal generated by the press, from which the wearable device knows that the button was pressed. Of course, this is only an example; the specific control action parameters are not limited in the embodiments of the present application, as long as they correspond to the control area of the wearable device. Besides the button in the example, the control area of the wearable device may include other controls, such as a touch screen or a dial, and the specific control area of the wearable device is likewise not limited in this embodiment.
In some embodiments, the terminal device may generate a control instruction corresponding to the control action parameter according to the control action parameter detected by the wearable device, and control the displayed virtual content according to the control instruction. In some embodiments, the corresponding relationship between the manipulation action parameter and the manipulation instruction may be stored in the terminal device in advance, and the corresponding relationship may be set by the user, may be default when the terminal device leaves a factory, or may be acquired by the terminal device from the server.
When receiving the control action parameters sent by the wearable device, the terminal device can control the displayed virtual content according to the corresponding relation between the control action parameters and the control instructions. Wherein, different control commands correspond to different control effects, and the control effects can control the virtual content to display different effects. For example, referring to fig. 15, when the user rotates the dial on the dial of the smart watch, the terminal device may generate a control instruction according to the rotation parameter of the dial, and perform option switching on the virtual content in the control target area 410, and for example, when the user operates different gestures on the touch screen of the dial, the terminal device may generate different control instructions according to different gestures on the touch screen, so as to control the virtual content to display different effects. Of course, the above control effects are only examples, and the control effect on the virtual content corresponding to the specific manipulation instruction may not be limited in the embodiment of the present application.
In other embodiments, the wearable device may also generate a control instruction according to the detected control action parameter, and then send the control instruction to the terminal device, and the terminal device may control the virtual content according to the received control instruction.
In this embodiment of the application, the user gesture detected by the terminal device may be captured by scanning the user in real time with the image acquisition device of the terminal device, or with the camera of the wearable device. In some embodiments, the terminal device may generate a control instruction corresponding to the user's gesture and control the displayed virtual content accordingly. The gesture may be raising or lowering the hand, waving the hand left or right, or simply sliding a finger up and down. For example, referring to fig. 16, when the user selects the target area 400 through the smart watch, the terminal device may switch the virtual option 410 according to the up-and-down motion of the user's index finger.
In this embodiment of the application, the change information of the first position and the posture may include change information of a position and a posture of the wearable device relative to the terminal device, and may be obtained by the terminal device identifying and tracking the wearable device, for example, the terminal device identifies and tracks the wearable device by acquiring an image including a marker on the wearable device.
In some embodiments, the terminal device may generate a control instruction corresponding to the change information of the first position and posture and control the displayed virtual content accordingly. Specifically, the terminal device can determine motion parameters such as the movement distance and movement direction of the wearable device from the specific changes in its position and/or posture, and from these parameters determine which motion state the wearable device is in, such as turning or moving. The displayed virtual content is then controlled according to the correspondence between motion states and control instructions. For example, referring to fig. 17, when the user rotates the wrist, the terminal device may generate a control instruction based on the turning state of the wearable device worn on the wrist to make the virtual content 400 scroll; similarly, when the user rotates the wrist to different angles, the terminal device may generate different control instructions according to the different turning angles to make the virtual content display different effects.
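A minimal sketch of classifying the motion state from the change information and looking up the corresponding control instruction; the thresholds and the correspondence table are illustrative assumptions:

```python
def motion_state_from_pose_change(delta_position, delta_rotation_deg,
                                  move_threshold=0.03, turn_threshold=10.0):
    """Classify the wearable device's motion from the change in the first
    position and posture information."""
    if abs(delta_rotation_deg) >= turn_threshold:
        return "turning"
    if any(abs(d) >= move_threshold for d in delta_position):
        return "moving"
    return "static"

# A simple correspondence between motion state and control instruction.
CONTROL_TABLE = {"turning": "scroll_content", "moving": "move_content"}
```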
In some embodiments, the terminal device may further control the displayed virtual content by means of a combination key, that is, the terminal device may control the displayed virtual content according to different control action parameters (such as a key pressing parameter and a dial rotation parameter) detected by the wearable device, or may control the displayed virtual content according to the control action parameters detected by the wearable device and the change information of the first position and the posture. For example, referring to fig. 1 and 18, a user may zoom in on the three-dimensional virtual automobile model 401 by pressing a button on a smart watch while rotating a dial on the watch face. For another example, the user may move the finger up and down while holding down a key on the smart watch, so as to move the selected virtual content up and down.
In some embodiments, the terminal device may perform at least one of content switching, moving, rotating, and scaling on the displayed virtual content according to the change information of the first position and the posture.
Specifically, the terminal device may obtain the first position and posture information of the wearable device relative to the terminal device in real time, so that when the first position and posture information of the wearable device relative to the terminal device changes, the displayed virtual content is controlled to perform at least one of content switching, moving, rotating, and scaling adjustment according to the change information of the first position and posture. Of course, other control may also be performed on the virtual content, such as splitting the virtual content, and the like, which is not limited herein. The first position and the posture of the wearable device relative to the terminal device can be changed by rotating the wrist (e.g., clockwise rotation or counterclockwise rotation) or waving the wrist (e.g., waving up and down, waving left and right, waving back and forth).
As an embodiment, the terminal device may switch the content of the displayed virtual content according to the change information of the first position and the posture. In some application scenarios, when the number of virtual contents that the user needs to view is large, the terminal device may only display a part of the virtual contents, and therefore, according to the change information of the relative spatial position relationship, the part of the virtual contents that are not displayed may be gradually displayed, that is, the displayed virtual contents are switched. Therefore, the virtual content is displayed in a rolling mode according to the change of the position and posture information of the wearable device. For example, referring to fig. 17, when the user changes the position and posture information of the smart watch by rotating the wrist, the user can see the scroll display of the virtual content 400 on the smart watch in real space through the head-mounted display device. Of course, the above is merely an example, and the application scenario is not limited thereto.
As another embodiment, the terminal device may perform content movement on the displayed virtual content according to the change information of the first position and the posture. The content movement may be a movement in a horizontal direction, a vertical direction, or a free direction. In some implementations, the virtual content movement direction can correspond to a direction in which the wearable device is moving. For example, referring to fig. 19, when the user changes the position and posture information of the smart watch by lifting the wrist upward, the user can see, through the head-mounted display device, that the virtual automobile model 401 is always superimposed on the smart watch displayed in the real space, and moves along with the moving direction of the smart watch.
As still another embodiment, the terminal device may perform content rotation on the displayed virtual content according to the change information of the first position and orientation. In some application scenarios, when a user needs to view virtual content in all directions, the content of the virtual content may be rotated according to the change information of the first position and the orientation, where the content rotation may refer to rotating the virtual content in a specified direction (for example, a horizontal direction, a vertical direction, or a free direction) in a two-dimensional plane or a three-dimensional space, that is, rotating the virtual content along a rotation axis of the specified direction to change the orientation (facing direction, etc.) of the displayed virtual content. For example, referring to fig. 1 and fig. 20, when the user changes the position and posture information of the smart watch by rotating the wrist, the user can see the rotated virtual automobile model 401 overlaid on the smart watch displayed in the real space by wearing the display device, so as to display different viewing angles of the virtual automobile model 401.
Further, in some embodiments, the direction in which the virtual content rotates may be set to be a designated direction, and may also correspond to the direction in which the wearable device rotates, that is, the virtual content may be controlled to rotate in the direction corresponding to the posture of the wearable device. For example, referring to fig. 21, when the user changes the position and posture information of the smart watch by rotating the wrist, the rotation of the virtual automobile model 401 may be controlled according to the rotation direction of the smart watch.
As a further embodiment, the terminal device may perform scaling adjustment on the displayed virtual content according to the change information of the first position and the posture. The scaling adjustment may refer to adjustment in which a model of the virtual content is scaled up or down, where the scaling up and down is a ratio of the size of the displayed virtual content to the original size of the virtual content. In some embodiments, the scaling up or scaling down of the model of the virtual content may be determined by the direction of rotation of the wearable device, for example, referring to fig. 22, the scaling up of the virtual automobile model 401 may be controlled according to the clockwise direction of rotation of the wearable device, and the scaling down of the virtual automobile model 401 may be controlled according to the counterclockwise direction of rotation of the wearable device. In addition, the scale of enlarging or reducing the model of the virtual content may be determined according to the rotation angle size of the wearable device, for example, the larger the rotation angle of the wearable device, the larger the scale of enlarging or reducing.
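For instance, the scaling adjustment driven by rotation direction and angle could be computed as follows, where the sign convention (clockwise positive) and the scale-per-degree constant are assumptions of this sketch:

```python
def scale_from_rotation(angle_deg, scale_per_degree=0.01):
    """Map the wearable device's rotation to a scaling factor: clockwise
    (positive angle) enlarges the virtual content, counterclockwise shrinks it,
    and a larger angle gives a larger change."""
    return max(0.1, 1.0 + scale_per_degree * angle_deg)
```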
Because the user can change the position and posture information of the wearable device by rotating the wrist, in some embodiments different rotation angles may correspond to different control effects. Specifically, referring to fig. 23, the controlling of the display of the virtual content by the terminal device according to the change information of the first position and posture includes:
step S2541: and judging whether the wearable equipment is in a rotating state or not according to the change information of the first position and the posture.
When the user rotates the wrist, the wearable device worn on the wrist can also rotate, namely, the first position and posture information of the wearable device relative to the terminal device can also be changed. Therefore, the terminal device can determine the movement parameters such as the movement distance and the movement direction of the wearable device according to the change information of the first position and the posture, and therefore whether the wearable device is in a rotating state or not can be judged according to the movement parameters.
Step S2542: when in the rotating state, the rotating angle of the wearable device is acquired.
When the terminal device judges that the wearable device is in a rotating state according to the change information of the first position and the posture, the rotation angle of the wearable device can be obtained, namely the turning angle of the wearable device is obtained. It can be understood that the turning angle of the wearable device can be obtained according to the change information of the first position and the posture.
Step S2543: and generating a control command corresponding to the rotation angle according to the corresponding relation between the rotation angle and the control command.
In some embodiments, the rotation angle corresponds to a control instruction; that is, when the wearable device is turned to different angles, the corresponding control instruction differs, and so does the resulting display effect of the virtual content. After the terminal device obtains the rotation angle of the wearable device, it can generate the control instruction corresponding to the current rotation angle according to the pre-stored correspondence between rotation angles and control instructions. This correspondence can be pre-stored in the terminal device or downloaded from the server.
For example, when the wearable device is rotated by 90° (i.e., when the wrist is rotated by 90°), the control instruction may be to select the virtual content in the target area pointed to by the virtual guide; when the rotation angle of the wearable device is in the range of -30° to 30°, the control instruction may be to click the virtual content in the target area pointed to by the virtual guide. Further, when the rotation angle is positive, the control instruction may be to right-click the virtual content in the target area pointed to by the virtual guide and display the next operable commands; when the rotation angle is negative, the control instruction may be to left-click the virtual content in the target area pointed to by the virtual guide and directly perform the default operation.
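The correspondence in this example can be sketched as a simple lookup; the overlapping angle ranges above are resolved here in a fixed order, and both the thresholds and the instruction names are illustrative:

```python
def instruction_from_rotation(angle_deg):
    """Illustrative correspondence between rotation angle and control
    instruction, following the examples above."""
    if angle_deg >= 90:
        return "select_target_content"        # wrist rotated by 90 degrees
    if -30 <= angle_deg <= 30:
        return "click_target_content"
    if angle_deg > 30:
        return "show_next_operable_commands"  # positive angle, right-click-like
    return "perform_default_operation"        # negative angle below -30 degrees
```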
Step S2544: and controlling the display of the virtual content according to the control instruction.
After generating the control instruction, the terminal device may control display of the virtual content in the target area pointed by the virtual guide according to the control instruction. The display of the virtual content in the target area pointed to by the virtual guide may be controlled by, for example, switching, moving, rotating, scaling the virtual content in the target area pointed to by the virtual guide, and the like, and is not limited herein.
In some embodiments, the control action parameters detected by the wearable device may include:
the wearable device comprises one or more of pressing parameters detected by keys of the wearable device, touch parameters detected by a touch screen of the wearable device and rotation parameters detected by a turntable of the wearable device.
In some embodiments, the wearable device may include at least one physical key, and the control action parameters detected by the wearable device may include pressing parameters detected by the key. Because a key produces a pressing signal only when pressed, and the wearable device detects one pressing signal per press, the terminal device can control the displayed virtual content according to the pressing parameters detected through the key. The pressing parameter may be the pressing signal itself or the number of presses. In some embodiments, the terminal device may control the virtual content to display different effects according to different numbers of presses: for example, pressing once selects the virtual content, and pressing twice zooms in on it.
In some embodiments, the wearable device may include a touch screen, and the control action parameter detected by the wearable device may include a touch parameter detected by the touch screen. The touch parameter may include a touch operation (e.g., a click, a slide, a long press, etc.) performed by a user on the touch screen. The terminal device can control the displayed virtual content according to the touch parameters detected by the touch screen. For example, referring to fig. 24, a user may rotate virtual automobile model 401 while swiping left or right on the touch screen of the smart watch; for another example, when two fingers swipe in opposite directions on the touch screen, the virtual content may be amplified; when the touch key is clicked on the touch screen, the virtual content can be selected, switched and the like.
In some embodiments, when the same key of the wearable device corresponds to multiple functions, the current function of the key can be switched by clicking the touch key on the touch screen, that is, the current control effect of the key is switched. For example, when a touch key is clicked on the touch screen, the function of the key is to switch virtual options, that is, to press the key of the wearable device, and the selected virtual option can be switched. As an embodiment, each time the touch key on the touch screen is clicked, one function of the key may be switched, so as to implement multiple functions of the key switched through the touch key on the touch screen.
In some embodiments, the wearable device may include a dial, and the detected manipulation motion parameter may include a rotation parameter detected by the dial. The rotation parameter may be a rotation angle or a rotation direction. Therefore, the terminal equipment can control the displayed virtual content according to the rotation parameters detected by the turntable. For example, rotation of the virtual content is adjusted by rotating the dial, the selected virtual option is switched by rotating the dial, and the scaling of the virtual content is adjusted by rotating the dial.
Similarly, the current function of the turntable, that is, the current corresponding control effect of the turntable, can also be switched by clicking the touch key on the touch screen. For example, when the touch key is clicked on the touch screen, the dial may be used to rotate the virtual content, and when the touch key is double-clicked, the dial may be used to switch the displayed virtual content.
When the control action parameter detected by the wearable device includes the rotation parameter detected by the turntable, as an implementation manner, the controlling the displayed virtual content according to the control action parameter detected by the wearable device may include:
generating a control instruction corresponding to the rotation parameter according to the corresponding relation between the rotation parameter and the control instruction; and controlling the displayed virtual content according to the control instruction.
In some embodiments, when the dial of the wearable device detects a rotation parameter, the wearable device may send the rotation parameter to the terminal device; after receiving it, the terminal device generates the control instruction corresponding to that rotation parameter according to the correspondence between rotation parameters and control instructions, and controls the displayed virtual content accordingly. Different control instructions correspond to different control effects, which cause the virtual content to display different effects.
In some embodiments, for some ways like a scroll bar, a slider bar, etc. that need to be moved for content switching, the movement of the scroll bar or the slider bar may also be performed by a dial.
In some embodiments, the correspondence between rotation parameters and control instructions may be that different rotation angles correspond to different control instructions; for example, when the dial is rotated to different angles, the terminal device may rotate the virtual content by different angles. The virtual content may be enlarged when the dial is rotated clockwise and reduced when it is rotated counterclockwise. A combination of rotation angle and rotation direction may also correspond to different control instructions; for example, as the clockwise rotation angle of the dial increases, the magnification of the virtual content increases as well. Of course, the correspondence between rotation parameters and control instructions may take other forms, for example different numbers of turns corresponding to different control instructions, which is not limited herein.
In some embodiments, the touch parameters of the touch screen detected by the wearable device may include a touch trajectory, and the terminal device may implement text input of the virtual content according to the touch trajectory of the user. Specifically, the controlling the displayed virtual content according to the control action parameter detected by the wearable device may include:
generating at least one virtual character matched with the touch track according to the touch track; displaying at least one virtual character in an overlapping manner on the virtual content; and when any virtual character is in a selected state and a confirmation instruction of the virtual character in the selected state is received, adding the virtual character in the selected state into the virtual content.
The touch trajectory may be the finger-sliding track detected by the touch screen when the user handwrites on it. The virtual character can be rendered by the terminal device according to character data in an existing character database, and the character data of the virtual character can be stored in the terminal device or downloaded from a server. The virtual character may include character symbols (e.g., Chinese or English characters), punctuation symbols, operator symbols, or other special symbols such as !, @, #, ?, $, %, and &, which are not limited herein.
In some embodiments, the terminal device may obtain the touch trajectory of the user according to the touch parameters detected by the touch screen of the wearable device, search a database for at least one character whose trajectory is similar to the touch trajectory by recognizing the touch trajectory, render the corresponding virtual character(s) according to the data of the found character(s), and display the virtual character(s) in an overlapping manner on the displayed virtual content. For example, when a character is handwritten on the touch screen, after recognition, candidate virtual characters with similar trajectories are displayed in sequence on the displayed virtual content for the user to select. For another example, referring to fig. 25, when the user handwrites the character "big" on the touch screen, the user can see, through the head-mounted display device worn by the user, a plurality of candidate characters 413 superimposed on the virtual editing page 412.
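As an illustration of the trajectory-matching step, the sketch below compares a touch trajectory against a small, hypothetical template database and returns the closest candidates. The template data, the distance measure, and the function names are assumptions; a real recognizer would use a proper handwriting-recognition model and the character database mentioned above.

```python
import math

# Hypothetical template database: candidate character -> resampled stroke points.
TEMPLATE_DB = {
    "big": [(0.0, 0.0), (0.5, 0.5), (1.0, 1.0)],
    "dog": [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)],
}

def trajectory_distance(traj_a, traj_b):
    """Mean point-to-point distance between two trajectories of equal sampling."""
    n = min(len(traj_a), len(traj_b))
    return sum(math.dist(traj_a[i], traj_b[i]) for i in range(n)) / n

def candidate_characters(touch_trajectory, top_k=3):
    """Return the characters whose stored trajectories best match the input,
    ordered from most to least similar."""
    ranked = sorted(TEMPLATE_DB,
                    key=lambda ch: trajectory_distance(touch_trajectory, TEMPLATE_DB[ch]))
    return ranked[:top_k]

# The terminal device would render the returned candidates as virtual characters
# (candidate characters 413) superimposed on the virtual editing page (412).
print(candidate_characters([(0.0, 0.1), (0.5, 0.6), (1.0, 0.9)]))
```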
As an embodiment, when the at least one virtual character is displayed in a superimposed manner on the virtual content, the color of the virtual content may be dimmed or its transparency increased, so that the user does not observe the virtual content through the head-mounted display device worn by the user, reducing the interference of the virtual content with the virtual characters. As another embodiment, only the color of the area where the virtual content overlaps the virtual characters may be dimmed or its transparency increased, so that the user does not observe the content of that overlapping area, likewise reducing the interference of the virtual content with the virtual characters.
Furthermore, after the terminal device displays at least one virtual character in an overlapping manner on the virtual content, the selected virtual character can be inserted into the virtual content, so that the content addition of the virtual content is realized. Specifically, when any one of the virtual characters is in the selected state and a confirmation instruction for the virtual character in the selected state is received, the terminal device may add the virtual character in the selected state to the virtual content.
The virtual character being in the selected state may mean that the virtual character is within a preset display range, or that a selection operation on the virtual character has been obtained. The selection operation may be performed by pointing the virtual guide of the wearable device at the virtual character, and the virtual character may be brought into the preset display range by changing its display position through the control action parameters of the wearable device. For example, when the virtual characters corresponding to the dial are the 26 English letters, an English letter may be selected by rotating the dial; when the virtual characters corresponding to a key are the candidate words of a character, the selected character may be switched by pressing the key. In some embodiments, the preset display range may be the target area pointed to by the virtual guide, or may be set by the user. Of course, the specific preset display range is not limited in the embodiments of the present application.
After the terminal device obtains a virtual character in the selected state, the terminal device may receive a confirmation instruction for that virtual character, and when the confirmation instruction for the virtual character in the selected state is received, the terminal device may add the virtual character in the selected state to the virtual content. For example, referring to fig. 26A, the user handwrites the character "big" on the touch screen and selects the selected word 414 ("big") from the plurality of candidate characters 413; when the user confirms the candidate word, referring to fig. 26B, the user can see the added content 415 ("big") in the virtual editing page 412 through the head-mounted display device worn by the user, thereby realizing content addition to the virtual content.
As an implementation manner, when adding the virtual character in the selected state to the virtual content, the terminal device may generate new virtual content from the virtual character in the selected state and the displayed virtual content and display the new virtual content, which includes both the previously displayed virtual content and the virtual character in the selected state, thereby implementing content addition to the virtual content.
In some embodiments, the confirmation instruction may be generated by the wearable device according to a detected control action parameter, for example, when the wearable device detects a double-click on the touch screen or a key press. The wearable device may also send the detected control action parameter to the terminal device, and the terminal device generates the confirmation instruction according to the control action parameter; alternatively, the terminal device may generate the confirmation instruction when it detects that the virtual character has stayed within the preset display range for a specified duration, which is not limited herein.
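A minimal sketch of the selection-and-confirmation flow described above is given below, assuming the candidates are plain strings, the selection index is driven by dial rotation, and the confirmation comes from a double-click or key press; the EditingState structure and function names are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EditingState:
    content: List[str] = field(default_factory=list)      # characters already in the virtual content
    candidates: List[str] = field(default_factory=list)   # candidate virtual characters on display
    selected: Optional[int] = None                         # index of the candidate in the selected state

def select_candidate(state: EditingState, index: int) -> None:
    """Put a candidate into the selected state, e.g. after the dial scrolls to it."""
    if 0 <= index < len(state.candidates):
        state.selected = index

def confirm_selection(state: EditingState) -> None:
    """On a confirmation instruction (e.g. a double-click on the touch screen or
    a key press), add the selected candidate to the virtual content."""
    if state.selected is not None:
        state.content.append(state.candidates[state.selected])
        state.candidates = []
        state.selected = None

state = EditingState(content=["hello"], candidates=["big", "dog"])
select_candidate(state, 0)    # dial rotation highlights the first candidate
confirm_selection(state)      # double-click confirms it
print(state.content)          # ['hello', 'big']
```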
Further, the terminal device may select the insertion position of the selected virtual character in the virtual content. In some embodiments, the insertion position may be the position pointed to by the virtual guide of the wearable device, and a virtual cursor may be displayed on the virtual content so that the user can see the insertion position. By changing the position and posture information of the wearable device, the terminal device can change the display position of the virtual cursor in the virtual content, thereby changing the insertion position of the selected virtual character and realizing selection of the position at which content is added to the virtual content.
It is to be understood that the above control of the displayed virtual content may be any one of, or a combination of, the above embodiments, and is not limited herein. For example, scaling of the virtual content may be realized by rotating the dial while holding down a key, and the selected virtual option in a virtual list may be moved up and down by rotating the wrist while holding down a key.
In some embodiments, the terminal device may further implement a shortcut of virtual content display according to a touch gesture of the user on a touch screen of the wearable device. Specifically, the method for controlling virtual content may further include:
acquiring a touch track detected by the touch screen of the wearable device; judging whether the touch track matches a preset track; and when the touch track matches the preset track, accessing the application program corresponding to the preset track according to the correspondence between the preset track and the application program, and displaying the virtual interface of the application program.
In some embodiments, when the user's finger slides on the touch screen, the touch screen of the wearable device may detect the corresponding touch trajectory, and the wearable device may transmit the data of the touch trajectory to the terminal device, so that the terminal device obtains the touch trajectory. The terminal device can decide whether to perform the application shortcut by judging whether the touch trajectory matches a preset trajectory. Specifically, when the terminal device determines that the touch trajectory matches the preset trajectory, the terminal device may access the application program corresponding to the preset trajectory according to the correspondence between preset trajectories and application programs, and display the virtual interface of that application program. This realizes a shortcut for displaying virtual content and improves the interactivity between the wearable device and the terminal device. For example, when the user enters an "S" gesture on the wearable device, the terminal device may directly display a virtual search page in the virtual space.
The preset track may be factory default of the terminal device, may also be set by the user, may be stored in the terminal device, and may also be downloaded from the server, which is not limited herein. Similarly, the corresponding relationship between the preset track and the application program may be factory default of the terminal device, or may be set by the user, and the corresponding relationship may be stored in the terminal device, and may be downloaded from the server, which is not limited herein.
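The shortcut logic can be sketched as follows, assuming the touch trajectory has already been recognized as a gesture label (e.g. "S") and that the correspondence between preset trajectories and application programs is a simple lookup table; the table contents and function names are hypothetical.

```python
from typing import Callable, Optional

# Hypothetical correspondence between preset trajectories (represented here by an
# already-recognized gesture label) and application programs.
PRESET_TRAJECTORY_TO_APP = {
    "S": "search",
    "M": "music",
}

def handle_gesture(gesture_label: Optional[str],
                   display_virtual_interface: Callable[[str], None]) -> bool:
    """If the recognized gesture matches a preset trajectory, access the
    corresponding application program and display its virtual interface."""
    app = PRESET_TRAJECTORY_TO_APP.get(gesture_label)
    if app is None:
        return False                    # no shortcut registered for this trajectory
    display_virtual_interface(app)      # e.g. a virtual search page in the virtual space
    return True

# Example: an "S" gesture on the touch screen opens the search application.
handle_gesture("S", lambda app: print(f"displaying virtual interface of {app}"))
```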
In some embodiments, after the terminal device displays the virtual interface of the application program in the virtual space according to the touch gesture of the user on the touch screen of the wearable device, the terminal device may control the displayed virtual interface according to at least one of the control action parameters detected by the wearable device and the change information of the relative spatial position relationship between the wearable device and the terminal device. The step of controlling the displayed virtual interface may refer to the corresponding step of controlling the displayed virtual content in the above embodiment.
In some embodiments, after the terminal device displays the virtual content, the virtual interface of an application program can be quickly displayed according to the touch gesture of the user on the touch screen of the wearable device, and the virtual interface can be displayed in an overlapping manner on the displayed virtual content. In one embodiment, the virtual interface may be superimposed on the displayed virtual content; alternatively, the rendering and display of the virtual content may be cancelled and the virtual interface displayed at the display position of the virtual content, achieving an effect of switching the displayed content. Of course, displaying the virtual interface in an overlapping manner on the displayed virtual content may also follow the manner of overlaying virtual characters on the virtual content described above.
In addition, in some embodiments, a camera may be disposed on the wearable device and used to collect images of the user. When the wearable device is used in a scenario such as chatting, the images collected by the camera may be transmitted to the terminal device, and the terminal device transmits them to the chat partner in real time, realizing a video call function of the AR device. In some embodiments, the chat partner may also be equipped with an AR device, and after receiving the images transmitted by the terminal device, that AR device may display them in virtual space, realizing a virtual video call between AR devices. In other embodiments, the chat partner may receive the images transmitted by the terminal device through a mobile terminal such as a mobile phone or a computer, and the mobile terminal generates a video from the images, realizing a video call function between the AR device and another mobile terminal.
In addition, the camera on the wearable device may also be applied to a face payment scenario. In a payment scenario, when a face image needs to be collected for verification, the user's image can be collected directly through the camera for face verification, realizing a face payment function on the wearable device.
In the method for interacting with virtual content provided by the embodiment of the application, after the virtual picture is displayed, the first position and posture information of the wearable device relative to the terminal device and the position information of the virtual picture relative to the terminal device are obtained, the target area selected by the wearable device in the virtual picture is obtained according to the position information and the first position and posture information, and a processing operation corresponding to the target area is performed. Further, the terminal device may control the virtual content in the target area according to at least one of the control action parameters detected by the wearable device, the user gestures detected by the terminal device, and the change information of the first position and posture. In this way, the target area in the virtual picture is selected according to the spatial position of the wearable device and operated on, realizing interaction between the wearable device and the terminal device and improving the interactivity and convenience of interaction between the user and the virtual content.
Referring to fig. 27, a block diagram of a virtual content interaction apparatus 500 according to an embodiment of the present application is shown, where the apparatus is applied to a terminal device, and the apparatus may include: a display control module 510, a relative position acquisition module 520, a position information acquisition module 530, a target area acquisition module 540, and a process execution module 550. The display control module 510 is configured to display a virtual image; the relative position obtaining module 520 is configured to obtain a first position and posture information of the wearable device relative to the terminal device; the position information acquiring module 530 is configured to acquire position information of the virtual image relative to the terminal device; the target area obtaining module 540 is configured to obtain a target area selected by the wearable device in the virtual picture according to the position information, the first position and the posture information; the processing execution module 550 is configured to perform a processing operation corresponding to the target area.
In some embodiments, the target area acquisition module 540 may include: a guide generation unit, a guide display unit, and an intersection region acquisition unit. The guide generation unit is used for generating the virtual guide according to the relative position relationship between the virtual guide and the wearable device and the first position and posture information; the guide display unit is used for displaying the virtual guide; the intersection region acquisition unit is used for acquiring, according to the position information and the virtual guide, an intersection region where the virtual guide intersects the virtual picture, and taking the intersection region as the target area selected by the wearable device in the virtual picture.
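For the intersection region acquisition unit, one plausible geometric formulation is to treat the virtual guide as a ray starting at the wearable device and the virtual picture as a plane in the terminal device's coordinate system, and compute the ray-plane intersection. The sketch below illustrates this under those assumptions; the embodiments do not mandate this particular representation, and the variable names are illustrative.

```python
import numpy as np

def guide_picture_intersection(guide_origin, guide_direction,
                               picture_point, picture_normal):
    """Intersect the virtual guide (a ray starting at the wearable device) with
    the plane of the virtual picture; all quantities are expressed in the
    terminal device's coordinate system. Returns the intersection point, or None
    when the guide is parallel to the picture plane or points away from it."""
    o = np.asarray(guide_origin, dtype=float)
    d = np.asarray(guide_direction, dtype=float)
    d = d / np.linalg.norm(d)
    p = np.asarray(picture_point, dtype=float)
    n = np.asarray(picture_normal, dtype=float)
    denom = np.dot(n, d)
    if abs(denom) < 1e-6:
        return None                 # guide runs parallel to the picture plane
    t = np.dot(n, p - o) / denom
    if t < 0:
        return None                 # the picture lies behind the wearable device
    return o + t * d                # point defining the intersection (target) region

# Example: wearable device at the origin pointing along +z, picture plane at z = 2.
print(guide_picture_intersection([0, 0, 0], [0, 0, 1], [0, 0, 2], [0, 0, 1]))
```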
In some embodiments, the display control module 510 may include: the device comprises a data acquisition unit, a picture generation unit and a picture display control unit. The data acquisition unit is used for identifying an entity object in a real space and obtaining picture data corresponding to the entity object; the picture generation unit is used for generating a virtual picture according to the picture data; the picture display control unit is used for displaying the virtual picture.
In some embodiments, the data obtaining unit may be specifically configured to: acquiring a scene image of an environment where the terminal equipment is located, wherein the environment comprises at least one entity object; acquiring second position and posture information of the wearable device relative to the terminal device; acquiring a selected entity object in the scene image according to the second position and posture information; and identifying the selected entity object to obtain the picture data corresponding to the selected entity object.
In some embodiments, the screen generating unit may be specifically configured to: acquiring a first spatial position relation between the virtual picture and the entity object and a second spatial position relation between the entity object and the terminal equipment; and generating a virtual picture according to the picture data, the first spatial position relation and the second spatial position relation.
In some embodiments, the entity object includes a smart home device, and the processing execution module 550 may be specifically configured to: acquiring virtual content corresponding to the target area; generating an execution instruction according to the virtual content; and transmitting the execution instruction to the smart home device, wherein the execution instruction is used for instructing the smart home device to perform a set operation.
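As an illustration of this configuration, the sketch below maps the virtual content of a selected target area (for example, a virtual "power on" control) to an execution instruction and hands it to a stand-in transmission function; the command names, mapping table, and device identifier are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ExecutionInstruction:
    device_id: str
    command: str

# Hypothetical mapping from displayed virtual controls to device commands.
COMMAND_MAP = {"power on": "ON", "power off": "OFF", "brightness up": "BRIGHTNESS+"}

def instruction_from_virtual_content(device_id: str, virtual_content: str) -> ExecutionInstruction:
    """Map the virtual content of the selected target area (e.g. a virtual
    'power on' control) to an execution instruction for the smart home device."""
    return ExecutionInstruction(device_id=device_id,
                                command=COMMAND_MAP.get(virtual_content, "NOOP"))

def send_to_device(instruction: ExecutionInstruction) -> None:
    """Stand-in for the link between the terminal device and the smart home
    device (e.g. Wi-Fi or Bluetooth); here the payload is simply printed."""
    print(f"sending {instruction.command} to {instruction.device_id}")

send_to_device(instruction_from_virtual_content("lamp-01", "power on"))
```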
In some embodiments, the process execution module 550 may include: a target content acquisition unit and a target content control unit. The target content acquisition unit is used for acquiring the virtual content corresponding to the target area; the target content control unit is used for controlling the display of the virtual content according to at least one of the control action parameters detected by the wearable device, the user gestures detected by the terminal device, and the change information of the first position and posture.
In some embodiments, the target content control unit may be specifically configured to: judging whether the wearable equipment is in a rotating state or not according to the change information of the first position and the posture; when the wearable device is in a rotating state, acquiring a rotating angle of the wearable device; generating a control instruction corresponding to the rotation angle according to the corresponding relation between the rotation angle and the control instruction; and controlling the display of the virtual content according to the control instruction.
In some embodiments, the parameters of the manipulation action detected by the wearable device include: the wearable device comprises one or more of pressing parameters detected by keys of the wearable device, touch parameters detected by a touch screen of the wearable device and rotation parameters detected by a turntable of the wearable device.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described apparatuses and modules may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, the coupling or direct coupling or communication connection between the modules shown or discussed may be through some interfaces, and the indirect coupling or communication connection between the devices or modules may be in an electrical, mechanical or other form.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each of the modules may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode.
To sum up, the virtual content interaction apparatus provided in the embodiment of the present application is applied to a terminal device. After a virtual picture is displayed, the first position and posture information of the wearable device relative to the terminal device and the position information of the virtual picture relative to the terminal device are obtained, the target area selected by the wearable device in the virtual picture is obtained according to the position information and the first position and posture information, and a processing operation corresponding to the target area is performed. In this way, the target area in the virtual picture is selected according to the spatial position of the wearable device and operated on, realizing interaction between the wearable device and the terminal device and improving the interactivity and convenience of interaction between the user and the virtual content.
Referring to fig. 28, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a terminal device capable of running an application, such as a smart phone, a tablet computer, a head-mounted display device, and the like. The terminal device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, an image acquisition apparatus 130, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects the various parts of the entire terminal device 100 using various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 120 and calling the data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communications. It is understood that the modem may also not be integrated into the processor 110 but be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, wherein the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The data storage area may also store data created by the terminal device 100 in use, and the like.
In the embodiment of the present application, the image capturing device 130 is used for capturing an image of a physical object and capturing a scene image of a target scene. The image capturing device 130 may be an infrared camera or a color camera, and the specific type of the camera is not limited in the embodiment of the present application.
A block diagram of a computer-readable storage medium provided in an embodiment of the present application is shown. The computer-readable storage medium 800 stores program code that can be invoked by a processor to perform the methods described in the above method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (12)

1. The interaction method of the virtual content is applied to a terminal device, wherein the terminal device is in communication connection with a wearable device, and the method comprises the following steps:
displaying the virtual picture;
acquiring first position and posture information of the wearable device relative to the terminal device;
acquiring the position information of the virtual picture relative to the terminal equipment;
acquiring a target area selected by the wearable device in the virtual picture according to the position information, the first position and the posture information;
and carrying out processing operation corresponding to the target area.
2. The method according to claim 1, wherein the obtaining a target area selected by the wearable device in the virtual screen according to the position information, the first position and the posture information includes:
generating the virtual guide according to the relative position relation between the virtual guide and the wearable device and the first position and posture information;
displaying the virtual guide;
and acquiring an intersection region which is intersected with the virtual guide in the virtual picture according to the position information and the virtual guide, and taking the intersection region as a target region selected by the wearable device in the virtual picture.
3. The method of claim 1, wherein the displaying the virtual screen comprises:
identifying an entity object in a real space to obtain picture data corresponding to the entity object;
generating a virtual picture according to the picture data;
and displaying the virtual picture.
4. The method according to claim 3, wherein the identifying the physical object in the real space and obtaining the picture data corresponding to the physical object comprises:
acquiring a scene image of an environment where the terminal equipment is located, wherein the environment comprises at least one entity object;
acquiring second position and posture information of the wearable device relative to the terminal device;
acquiring the selected entity object in the scene image according to the second position and posture information;
and identifying the selected entity object to obtain picture data corresponding to the selected entity object.
5. The method of claim 3, wherein generating a virtual picture from the picture data comprises:
acquiring a first spatial position relation between a virtual picture and the entity object and a second spatial position relation between the entity object and the terminal equipment;
and generating a virtual picture according to the picture data, the first spatial position relation and the second spatial position relation.
6. The method according to any one of claims 3-5, wherein the entity object comprises a smart device, and the performing the processing operation corresponding to the target area comprises:
acquiring virtual content corresponding to the target area;
generating an execution instruction according to the virtual content;
and transmitting the execution instruction to the smart device, wherein the execution instruction is used for instructing the smart device to perform a set operation.
7. The method according to any one of claims 1-5, wherein said performing a processing operation corresponding to said target region comprises:
acquiring virtual content corresponding to the target area;
and controlling the display of the virtual content according to at least one of the control action parameters detected by the wearable device, the user gestures detected by the terminal device, and the change information of the first position and posture.
8. The method according to claim 7, wherein the controlling the display of the virtual content according to the change information of the first position and the posture comprises:
judging whether the wearable equipment is in a rotating state or not according to the change information of the first position and the posture;
when the wearable device is in the rotating state, the rotating angle of the wearable device is obtained;
generating a control instruction corresponding to the rotation angle according to the corresponding relation between the rotation angle and the control instruction;
and controlling the display of the virtual content according to the control instruction.
9. The method of claim 7, wherein the detected parameters of the manipulation action by the wearable device comprise:
the wearable device comprises one or more of a pressing parameter detected by a key of the wearable device, a touch parameter detected by a touch screen of the wearable device, and a rotation parameter detected by a turntable of the wearable device.
10. The interaction apparatus of virtual content is applied to a terminal device, wherein the terminal device is in communication connection with a wearable device, and the apparatus comprises:
the display control module is used for displaying the virtual picture;
the relative position acquisition module is used for acquiring first position and posture information of the wearable device relative to the terminal device;
the position information acquisition module is used for acquiring the position information of the virtual picture relative to the terminal equipment;
the target area acquisition module is used for acquiring a target area selected by the wearable device in the virtual picture according to the position information, the first position and the posture information;
and the processing execution module is used for carrying out processing operation corresponding to the target area.
11. A terminal device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more programs configured to perform the method of any of claims 1-9.
12. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 9.
CN201910263441.3A 2019-04-02 2019-04-02 Virtual content interaction method and device, terminal equipment and storage medium Pending CN111766937A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910263441.3A CN111766937A (en) 2019-04-02 2019-04-02 Virtual content interaction method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910263441.3A CN111766937A (en) 2019-04-02 2019-04-02 Virtual content interaction method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN111766937A true CN111766937A (en) 2020-10-13

Family

ID=72718239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910263441.3A Pending CN111766937A (en) 2019-04-02 2019-04-02 Virtual content interaction method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111766937A (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0757122A (en) * 1993-08-20 1995-03-03 Matsushita Electric Ind Co Ltd Image generating device
CN104199556A (en) * 2014-09-22 2014-12-10 联想(北京)有限公司 Information processing method and device
CN107250891A (en) * 2015-02-13 2017-10-13 Otoy公司 Being in communication with each other between head mounted display and real-world objects
CN104866103A (en) * 2015-06-01 2015-08-26 联想(北京)有限公司 Relative position determining method, wearable electronic equipment and terminal equipment
CN106095102A (en) * 2016-06-16 2016-11-09 深圳市金立通信设备有限公司 The method of a kind of virtual reality display interface process and terminal
JP2018041009A (en) * 2016-09-09 2018-03-15 日本精機株式会社 Display device
CN107223271A (en) * 2016-12-28 2017-09-29 深圳前海达闼云端智能科技有限公司 A kind of data display processing method and device

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114637545A (en) * 2020-11-30 2022-06-17 华为终端有限公司 VR interaction method and device
CN114637545B (en) * 2020-11-30 2024-04-09 华为终端有限公司 VR interaction method and device
CN112650390A (en) * 2020-12-22 2021-04-13 科大讯飞股份有限公司 Input method, related device and input system
CN113515192A (en) * 2021-05-14 2021-10-19 闪耀现实(无锡)科技有限公司 Information processing method and device for wearable equipment and wearable equipment
CN113413585A (en) * 2021-06-21 2021-09-21 Oppo广东移动通信有限公司 Interaction method and device of head-mounted display equipment and electronic equipment
CN113413585B (en) * 2021-06-21 2024-03-22 Oppo广东移动通信有限公司 Interaction method and device of head-mounted display equipment and electronic equipment
US20220414990A1 (en) * 2021-06-25 2022-12-29 Acer Incorporated Augmented reality system and operation method thereof
CN113687721A (en) * 2021-08-23 2021-11-23 Oppo广东移动通信有限公司 Device control method and device, head-mounted display device and storage medium
CN115086548A (en) * 2022-04-13 2022-09-20 中国人民解放军火箭军工程大学 Double-spectrum virtual camera synthesis method and device

Similar Documents

Publication Publication Date Title
CN111766937A (en) Virtual content interaction method and device, terminal equipment and storage medium
US10698535B2 (en) Interface control system, interface control apparatus, interface control method, and program
CN110310288B (en) Method and system for object segmentation in a mixed reality environment
US20190129607A1 (en) Method and device for performing remote control
CN110456907A (en) Control method, device, terminal device and the storage medium of virtual screen
CN111766936A (en) Virtual content control method and device, terminal equipment and storage medium
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
CN103827780A (en) Methods and systems for a virtual input device
US10372229B2 (en) Information processing system, information processing apparatus, control method, and program
CN102945091B (en) A kind of man-machine interaction method based on laser projection location and system
US10621766B2 (en) Character input method and device using a background image portion as a control region
CN111383345B (en) Virtual content display method and device, terminal equipment and storage medium
CN111736691A (en) Interactive method and device of head-mounted display equipment, terminal equipment and storage medium
Matulic et al. Phonetroller: Visual representations of fingers for precise touch input with mobile phones in vr
CN111083463A (en) Virtual content display method and device, terminal equipment and display system
CN111813214B (en) Virtual content processing method and device, terminal equipment and storage medium
CN111913674A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN110866940A (en) Virtual picture control method and device, terminal equipment and storage medium
CN111913639B (en) Virtual content interaction method, device, system, terminal equipment and storage medium
CN111913560A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN111399630B (en) Virtual content interaction method and device, terminal equipment and storage medium
CN111913565B (en) Virtual content control method, device, system, terminal device and storage medium
CN111913564B (en) Virtual content control method, device, system, terminal equipment and storage medium
CN111651031B (en) Virtual content display method and device, terminal equipment and storage medium
JP7287172B2 (en) Display control device, display control method, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination