CN111913639B - Virtual content interaction method, device, system, terminal equipment and storage medium - Google Patents


Info

Publication number
CN111913639B
CN111913639B (application CN201910377227.0A)
Authority
CN
China
Prior art keywords
content, data, interactive, area, virtual
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910377227.0A
Other languages
Chinese (zh)
Other versions
CN111913639A (en)
Inventor
卢智雄
戴景文
贺杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Virtual Reality Technology Co Ltd
Original Assignee
Guangdong Virtual Reality Technology Co Ltd
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201910377227.0A
Publication of CN111913639A
Application granted
Publication of CN111913639B

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0487Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F3/0488Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures

Abstract

The embodiments of the present application disclose a virtual content interaction method, apparatus, system, terminal device, and storage medium. The interaction method is applied to a terminal device connected to an interaction device, and includes the following steps: receiving operation data sent by the interaction device, the operation data being generated by the interaction device according to a touch operation detected in its interaction area; when the touch operation includes a setting operation, acquiring at least part of the content corresponding to the setting operation from the display content corresponding to the interaction area, and acquiring the content data of that content; acquiring a processing instruction matched with the triggered edge area; performing a processing operation corresponding to the edge area on the content according to the content data and the processing instruction; and acquiring display data matched with the triggered edge area, generating virtual content corresponding to the content according to the content data and the display data, and displaying the virtual content. The method enables better interaction with displayed content.

Description

Virtual content interaction method, device, system, terminal equipment and storage medium
Technical Field
The present application relates to the field of display technologies, and in particular, to a method, an apparatus, a system, a terminal device, and a storage medium for virtual content interaction.
Background
In recent years, with the advancement of science and technology, more and more users view displayed content on electronic devices and interact with it through touch screens, touch pads, keys, and the like. However, these traditional interaction modes offer limited interactivity, resulting in a poor interaction experience between the user and the displayed content.
Disclosure of Invention
The embodiments of the present application provide a virtual content interaction method, apparatus, system, terminal device, and storage medium, which can improve the interactivity between a user and displayed content.
In a first aspect, an embodiment of the present application provides an interaction method for virtual content, which is applied to a terminal device, where the terminal device is connected to an interaction device, the interaction device includes an interaction area, and the method includes: receiving operation data sent by the interactive equipment, wherein the operation data is generated by the interactive equipment according to the touch operation detected by the interactive area; when it is determined that the touch operation includes a setting operation according to the operation data, acquiring at least part of content corresponding to the setting operation from display content corresponding to the interaction area, and acquiring content data of the at least part of content, wherein the setting operation includes a trigger operation corresponding to an edge area of the interaction area; acquiring a processing instruction matched with the triggered edge area; processing operation corresponding to the edge area is carried out on at least part of the content according to the content data and the processing instruction; acquiring display data matched with the triggered edge area; and generating virtual content corresponding to at least part of the content according to the content data and the display data, and displaying the virtual content.
In a second aspect, an embodiment of the present application provides an interaction method for virtual content, which is applied to an interaction device, where the interaction device is connected to a terminal device, the interaction device includes an interaction area, and the method includes: detecting touch operation through the interaction area; when the touch operation is determined to comprise a setting operation according to the touch operation detected in the interactive area, acquiring at least part of content corresponding to the setting operation from display content corresponding to the interactive area, and acquiring content data of the at least part of content, wherein the setting operation comprises a triggering operation corresponding to an edge area of the interactive area; processing operation corresponding to the edge area is carried out on at least part of the content; acquiring display data matched with the triggered edge area; and sending the content data and the display data to the terminal equipment, wherein the content data and the display data are used for indicating the terminal equipment to generate virtual content corresponding to at least part of the content and displaying the virtual content.
In a third aspect, an embodiment of the present application provides an interaction apparatus for virtual content, applied to a terminal device, where the terminal device is connected to an interaction device and the interaction device includes an interaction area. The apparatus includes a data receiving module, a first data acquisition module, an instruction acquisition module, a processing execution module, a second data acquisition module, and a content display module. The data receiving module is configured to receive operation data sent by the interaction device, the operation data being generated by the interaction device according to a touch operation detected in the interaction area. The first data acquisition module is configured to, when it is determined from the operation data that the touch operation includes a setting operation, acquire at least part of the content corresponding to the setting operation from the display content corresponding to the interaction area, and acquire content data of that content, where the setting operation includes a trigger operation corresponding to an edge area of the interaction area. The instruction acquisition module is configured to acquire a processing instruction matched with the triggered edge area. The processing execution module is configured to perform a processing operation corresponding to the edge area on the at least part of the content according to the content data and the processing instruction. The second data acquisition module is configured to acquire display data matched with the triggered edge area. The content display module is configured to generate virtual content corresponding to the at least part of the content according to the content data and the display data, and display the virtual content.
In a fourth aspect, an embodiment of the present application provides an interactive system for virtual content, where the system includes a terminal device and an interactive device, the terminal device is connected to the interactive device, and the interactive device includes an interactive area, where the interactive device is configured to generate operation data according to a touch operation detected in the interactive area, and send the operation data to the terminal device; the terminal device is configured to receive operation data sent by the interactive device, and when it is determined that the touch operation includes a setting operation according to the operation data, obtain at least part of content corresponding to the setting operation from display content corresponding to the interactive region, and obtain content data of the at least part of content, where the setting operation includes a trigger operation corresponding to an edge region of the interactive region, obtain a processing instruction matched with the edge region, and perform a processing operation corresponding to the edge region on the at least part of content according to the content data and the processing instruction; the terminal device is further configured to acquire display data matched with the triggered edge area, generate virtual content corresponding to at least part of the content according to the content data and the display data, and display the virtual content.
In a fifth aspect, an embodiment of the present application provides a terminal device, including: one or more processors; a memory; and one or more applications, where the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the virtual content interaction method provided by the first aspect above.
In a sixth aspect, an embodiment of the present application provides an interaction device, including: one or more processors; a memory; and one or more applications, where the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the virtual content interaction method provided by the second aspect above.
In a seventh aspect, an embodiment of the present application provides a computer-readable storage medium storing program code, where the program code can be invoked by a processor to execute the virtual content interaction method provided by the first aspect or the second aspect above.
According to the scheme provided by the present application, the terminal device receives operation data sent by the interaction device, the operation data being generated by the interaction device according to a touch operation detected in the interaction area. When it is determined from the operation data that the touch operation includes a setting operation, at least part of the content corresponding to the setting operation is acquired from the display content corresponding to the interaction area, along with its content data, where the setting operation includes a trigger operation corresponding to an edge area of the interaction area. A processing instruction matched with the edge area is then acquired, and a processing operation corresponding to the edge area is performed on the content according to the content data and the processing instruction. In this way, a trigger operation on the edge area of the interaction area of the interaction device can apply the corresponding processing operation to the operated display content and display the corresponding virtual content. The operation is simple and convenient, and the interactivity between the user and the displayed content is improved.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application scenario suitable for use in an embodiment of the present application.
FIG. 2 shows a flow diagram of an interaction method for virtual content according to one embodiment of the present application.
Fig. 3 is a schematic diagram illustrating a display effect according to an embodiment of the present application.
Fig. 4 is a schematic diagram illustrating another display effect provided according to an embodiment of the application.
Fig. 5 is a schematic diagram illustrating still another display effect provided according to an embodiment of the application.
FIG. 6 shows a flow diagram of an interaction method for virtual content according to another embodiment of the present application.
Fig. 7 shows a flowchart of step S220 in the interaction method of the virtual content according to another embodiment of the present application.
Fig. 8 is a schematic diagram illustrating a display effect according to another embodiment of the present application.
Fig. 9 is a schematic diagram illustrating another display effect provided according to another embodiment of the present application.
Fig. 10 is a schematic diagram illustrating still another display effect provided according to another embodiment of the present application.
Fig. 11 is a schematic diagram illustrating still another display effect provided according to another embodiment of the present application.
Fig. 12 is a schematic diagram illustrating still another display effect provided according to another embodiment of the present application.
Fig. 13 shows a flowchart of step S260 in the interaction method of virtual content according to another embodiment of the present application.
FIG. 14 shows a flow diagram of an interaction method for virtual content according to yet another embodiment of the present application.
Fig. 15 is a schematic diagram illustrating a display effect according to another embodiment of the present application.
FIG. 16 shows a flow diagram of an interaction method for virtual content according to yet another embodiment of the present application.
Fig. 17 is a schematic diagram illustrating a display effect according to still another embodiment of the present application.
Fig. 18 is a schematic diagram illustrating a display effect provided according to an embodiment of the present application.
FIG. 19 shows a block diagram of an interactive device for virtual content according to one embodiment of the present application.
FIG. 20 shows a flow diagram of an interaction method for virtual content according to yet another embodiment of the present application.
Fig. 21 is a block diagram of a terminal device for executing an interaction method of virtual content according to an embodiment of the present application.
Fig. 22 is a block diagram of an interaction apparatus for performing an interaction method of virtual content according to an embodiment of the present application.
Fig. 23 is a storage unit for storing or carrying program code implementing an interactive method of virtual content according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
An application scenario of the interaction method for virtual content provided in the embodiment of the present application is introduced below.
Referring to fig. 1, a schematic diagram of an application scenario of an interaction method of virtual content provided by an embodiment of the present application is shown, where the application scenario includes an interaction system 10 of virtual content. The interactive system 10 for virtual content includes: the terminal device 100 and the interactive device 200, wherein the terminal device 100 is connected with the interactive device 200.
In the embodiment of the present application, the terminal device 100 may be a head-mounted display device, or a mobile device such as a mobile phone or tablet. When the terminal device 100 is a head-mounted display device, it may be an integrated (standalone) head-mounted display device. The terminal device 100 may also be an intelligent terminal, such as a mobile phone, connected to an external head-mounted display device; that is, the terminal device 100 may be plugged into or connected to the external head-mounted display device to serve as its processing and storage unit, with the head-mounted display device displaying the virtual content.
In the present embodiment, the interactive device 200 may be an electronic device provided with a marker 201. The number of markers 201 provided on the interactive device 200 is not limited and may be one or more. The shape of the interactive device 200 is not limited either; it may be square, circular, or another shape, for example a flat-panel-shaped electronic device. The interactive device 200 may be a smart mobile device such as a mobile phone or a tablet.
In some embodiments, the marker 201 may be attached to or integrated with the interactive device 200, or may be disposed on a protective cover of the interactive device 200, or may be an external marker, and may be inserted into the interactive device 200 through a USB (Universal Serial Bus) or an earphone hole when in use. If the interactive device 200 is provided with a display screen, the marker 201 may also be displayed on the display screen of the interactive device 200.
The terminal device 100 and the interactive device 200 may be connected through communication modes such as bluetooth, WiFi (Wireless-Fidelity), ZigBee (ZigBee technology), and the like, or may be connected through wired communication such as a data line. Of course, the connection mode between the terminal device 100 and the interactive device 200 may not be limited in the embodiment of the present application.
When the terminal device 100 and the interactive device 200 are used together, the marker 201 can be located within the visual range of the terminal device 100. The terminal device 100 can then capture an image containing the marker 201, identify and track the marker 201, and obtain spatial position information such as the position and posture of the marker 201 relative to the terminal device 100, as well as recognition results such as the identity information of the marker 201. From this, the terminal device 100 obtains spatial position information such as the position and posture of the interactive device 200 relative to itself, realizing positioning and tracking of the interactive device 200. The terminal device 100 may display corresponding virtual content according to this relative position and posture information.
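The pose recovery described above amounts to composing two rigid transforms: the tracked pose of the marker relative to the terminal device, and the known mounting of the marker on the interactive device. The sketch below is illustrative only; the 4x4 homogeneous-matrix representation and the function names are assumptions, not the patent's implementation.

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous translation matrix."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

def device_pose_from_marker(marker_to_terminal, marker_to_device):
    """Given the tracked pose of the marker relative to the terminal device
    (from image recognition) and the known mounting transform of the marker
    on the interactive device, recover the interactive device's pose relative
    to the terminal device by composing the two transforms."""
    return marker_to_terminal @ np.linalg.inv(marker_to_device)

# Marker tracked 0.5 m in front of the terminal; marker mounted 0.1 m
# along the interactive device's x axis (both values made up).
pose = device_pose_from_marker(translation(0, 0, 0.5), translation(0.1, 0, 0))
print(pose[:3, 3])  # device position relative to the terminal
```

The same composition works for any rigid transform, so a full rotation-plus-translation pose from a marker tracker could be substituted for the pure translations used here.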
In some embodiments, the marker 201 is a pattern having a topology, which refers to the connectivity between sub-markers and feature points, etc. in the marker.
In some embodiments, the marker 201 may also be a light-spot marker, and the terminal device tracks the light spot to obtain spatial position information such as relative position and posture. In a specific embodiment, a light spot and an inertial measurement unit (IMU) may be disposed on the interactive device 200. The terminal device may capture an image of the light spot on the interactive device 200 through an image sensor, acquire measurement data through the IMU, and determine the relative spatial position information between the interactive device 200 and the terminal device 100 from the light-spot image and the measurement data, thereby positioning and tracking the interactive device 200. The light spots disposed on the interactive device 200 may be visible or infrared, and there may be a single light spot or a sequence of multiple light spots.
In some embodiments, the interactive device 200 is provided with at least one interactive area 202, and the user can perform related control and interaction through the interactive area 202. The interactive area 202 may include a key, a touch pad, a touch screen, or the like. The interactive device 200 may generate a control instruction corresponding to the control operation through the control operation detected in the interactive region 202, and perform related control. Moreover, the interactive device 200 may further send the control instruction to the terminal device 100, or the interactive device 200 may generate operation data according to the operation detected by the interactive region, send the operation data to the terminal device 100, and when the terminal device 100 receives the control instruction sent by the interactive device 200, may control the display of the virtual content (e.g., control the rotation, displacement, etc. of the virtual content) according to the control instruction.
For example, referring again to fig. 1, the terminal device 100 is a head-mounted display device. Through the worn head-mounted display device, the user can observe virtual document content 301 superimposed in augmented reality on the interaction region 202 of the interactive device 200 in real space, and can perform a touch operation in the interaction region 202 to move the virtual document content 301 toward the lower edge of the interaction region 202. When the lower edge of the interaction region 202 is triggered, the document data corresponding to the virtual document content 301 can be deleted, and a corresponding deletion effect can be displayed in augmented reality at the same time.
A specific interaction method of the virtual content will be described below.
Referring to fig. 2, an embodiment of the present application provides an interaction method for virtual content, which is applicable to a terminal device, and the interaction method for virtual content may include:
step S110: and receiving operation data sent by the interactive equipment, wherein the operation data is generated by the interactive equipment according to the touch operation detected in the interactive area.
In the embodiment of the application, the terminal device is in communication connection with the interactive device, and the interactive device comprises an interactive area. The interactive area may include a touch pad or a touch screen, such that the interactive area may detect a touch operation (e.g., a single-finger click, a single-finger slide, a multi-finger click, a multi-finger slide, etc.) made by a user in the interactive area. When the interaction area of the interaction device detects the touch operation of the user, the interaction device may generate operation data according to the touch operation detected by the interaction area. The operation data may include operation parameters of the touch operation detected by the interaction area.
In some embodiments, the operation data may include parameters such as the touch position corresponding to the touch operation, the type of the touch operation, the number of fingers involved, the pressing pressure of the fingers, and the duration of the touch operation. The touch position refers to the position of the touched area on the interaction area, for example a touch coordinate in the plane coordinate system of the interaction area. The type of touch operation may include a click operation, a slide operation, a long-press operation, and the like. The number of fingers refers to how many fingers perform the touch operation, that is, the number of areas pressed when the sensor of the interaction area detects the touch operation, for example 1 or 2. The finger pressing pressure refers to the pressure with which the touch operation is performed, that is, the pressure detected by the sensor in the interaction area, for example 0.5 N (newtons). The duration of the touch operation is the time during which the detected finger is in contact with the interaction area, for example 1 s (second). Of course, the specific operation data is not limited in this embodiment, and the operation data may also include other touch parameters, such as a sliding track or the click frequency of a click operation.
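The operation-data parameters listed above can be modelled as a simple record; the field names and the toy classifier below are illustrative assumptions, not the packet format the patent actually uses.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class OperationData:
    """Hypothetical operation-data record the interaction device sends."""
    touch_position: Tuple[float, float]  # coordinate in the interaction area's plane
    operation_type: str                  # e.g. "click", "slide", "long_press"
    finger_count: int                    # number of touch points detected
    pressure: float                      # pressing pressure in newtons
    duration: float                      # contact time in seconds

def classify(op: OperationData) -> str:
    """Toy classifier over the parameters described above."""
    if op.operation_type == "click" and op.finger_count > 1:
        return "multi-finger click"
    if op.operation_type == "click" and op.duration >= 1.0:
        return "long press"
    return op.operation_type

op = OperationData((120.0, 48.5), "click", 2, 0.5, 0.2)
print(classify(op))  # multi-finger click
```

In a real system such a record would be serialized over the Bluetooth/WiFi link mentioned earlier; the terminal device would deserialize it before deciding whether the touch operation includes a setting operation.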
The interactive device can send the operation data to the terminal device after generating the operation data according to the touch operation detected in the interactive area. Correspondingly, the terminal device may receive the operation data sent by the interactive device, so that the terminal device determines the operated display content according to the operation data, and performs related display control.
In some embodiments, the display content may be display content displayed by the terminal device, and the display content may also be display content displayed by the interactive device.
As an implementation, the terminal device may obtain the relative spatial position relationship between the interaction region and the terminal device from the relative spatial position information between the terminal device and the interaction device, together with the relative positional relationship between the interaction region and the interaction device. Virtual display content may then be generated and displayed according to this relationship, so that the user sees the display content superimposed on the interaction region, achieving an augmented reality effect. After receiving the operation data, the terminal device may determine, according to the operation data, the operated display content among the display content corresponding to the interaction area, and perform the related display control. The display content corresponding to the interaction region may be the display content matched with the spatial position of the interaction region in the virtual space.
As another embodiment, the interaction device may display the display content in the interaction area, that is, when the interaction area includes the touch screen, the touch screen displays the display content. And the interaction area can detect the touch operation of the user on the touch screen, determine the operated display content, generate operation data according to the specific operation parameters of the touch operation and the operated display content, and send the operation data to the terminal equipment. The terminal device may perform display control related to the operated display content according to the operation data.
Step S120: and when the touch operation is determined to comprise the setting operation according to the operation data, acquiring at least part of content corresponding to the setting operation from the display content corresponding to the interactive area, and acquiring content data of the at least part of content, wherein the setting operation comprises the trigger operation corresponding to the edge area of the interactive area.
In this embodiment of the application, when the terminal device receives the operation data, it may determine from the operation data whether the touch operation includes a setting operation, so as to decide whether to perform a processing operation on the manipulated display content. The setting operation is an operation performed on the display content corresponding to the interaction region, and may include a trigger operation corresponding to an edge area of the interaction area. The edge area is the region along the periphery of the interaction area of the interactive device.
In some embodiments, when the terminal device displays the display content in a real scene in an overlapping manner, the terminal device may determine the display content corresponding to the interaction region in the display content, and determine whether at least part of the content in the display content corresponding to the interaction region is subjected to the setting operation according to the operation data. The display content corresponding to the interaction area may be display content corresponding to a spatial position of the interaction area in the virtual space. The terminal device can determine a touch position corresponding to the touch operation according to the operation data, can convert a touch coordinate of the touch position into a space coordinate in a virtual space, and then obtains a display content corresponding to the space coordinate from a display content corresponding to the interactive area. The obtained display content is the operated display content in the display content corresponding to the interactive area. When there is no display content matching the spatial coordinates of the touch position in the virtual space, it may be determined that at least part of the content is not operated in the display content corresponding to the interaction area, that is, the touch operation is not an operation on the display content.
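The touch-coordinate-to-virtual-space conversion described above can be sketched as a homogeneous transform of a point lying on the interaction area's plane; the 4x4-matrix representation is an assumption for illustration, since the patent does not specify the exact math.

```python
import numpy as np

def touch_to_virtual(touch_xy, area_to_virtual):
    """Map a 2-D touch coordinate in the interaction area's plane coordinate
    system to a 3-D coordinate in the virtual space, given the area's tracked
    pose as a 4x4 homogeneous matrix (an assumed representation)."""
    # The touch lies on the area's plane, so its local z component is 0.
    p_local = np.array([touch_xy[0], touch_xy[1], 0.0, 1.0])
    return (area_to_virtual @ p_local)[:3]

# Example: interaction area lying 0.3 m in front of the virtual-space origin.
pose = np.eye(4)
pose[2, 3] = 0.3
point = touch_to_virtual((0.10, 0.05), pose)
```

The resulting spatial coordinate is what the terminal device would compare against its displayed content to find the operated portion; if no content matches the coordinate, the touch is not an operation on the display content.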
In some embodiments, when the interactive device itself displays the content, that is, when the interaction region displays the content, the terminal device may determine from the operation data whether at least part of the content displayed in the interaction region has been operated. The interactive device may also determine the operated display content according to the touch position. In this case, the operation data sent by the interactive device may include both the data of the operated display content and the operation parameters of the touch operation, or the terminal device may determine directly from the operation data whether at least part of the displayed content has been operated.
The at least part of the content may be understood as the operated display content within the display content corresponding to the interaction area, that is, the display content determined according to the touch position. For example, the display content corresponding to the interaction area may be the interface of an application program, and the operated at least part of the content may be a control in that interface. As another example, the display content corresponding to the interaction area may include document content, and the operated at least part of the content may be the entire document content.
In this embodiment of the application, when the terminal device determines that at least part of the display content corresponding to the interaction area has been operated, it may determine from the operation data whether the operation performed on that part of the content is the setting operation, and thereby decide whether to perform a corresponding processing operation. When the operation performed on the part of the content is determined to be the setting operation, a processing operation related to that part of the content is subsequently performed.
In this embodiment of the application, the setting operation is a triggering operation corresponding to an edge area of the interaction area, that is, an operation performed on a part of content in the display content corresponding to the interaction area is related to the edge area, and the operation may trigger a subsequent processing operation on the part of content.
As an optional implementation, the display content corresponding to the edge area of the interaction area includes control content, that is, the display position of the control content corresponds to the edge area, and the control content is used to trigger a related processing operation on the display content corresponding to the interaction area. In this case, the setting operation may be a click operation on the control content, which then triggers a subsequent processing operation on the at least part of the content. For example, referring to fig. 3, the display content corresponding to the interaction area 202 includes document content 301 and a control 302 in an edge area; the setting operation may be a click operation performed on the control 302, so that when the terminal device detects the click operation on the control 302, a subsequent related processing operation on the document content 301 may be triggered.
As another alternative, the setting operation may be to move at least part of the content in the display content corresponding to the interaction area to the edge area in a sliding operation manner, and the setting operation may trigger a subsequent processing operation on at least part of the content. In addition, different edge regions may correspond to different processing operations. The sliding operation may be a single-finger sliding operation or a multi-finger sliding operation, and the type of the sliding operation may not be limited. For example, referring to fig. 4 and fig. 5, the display content corresponding to the interaction area 202 includes video content 303, and the setting operation may be a sliding operation performed on the video content 303, so that when the terminal device detects that the video content 303 is moved to an edge area of the interaction area through the sliding operation, a subsequent related processing operation on the video content 303 may be triggered.
As a specific implementation, when the interaction area of the interactive device displays the corresponding display content, the interactive device may determine, according to the touch operation detected in the interaction area, whether at least part of the content is to be slid. If a sliding operation on at least part of the content is detected, the interactive device may control the sliding of that content according to the sliding direction of the sliding operation and display the sliding effect, so that the sliding of the content is controlled by the sliding operation. In addition, when at least part of the content is slid to the edge area of the interaction area, the interactive device may generate operation data and transmit it to the terminal device; for example, the operation data may be generated from the sliding parameters of the sliding operation, the operated content, and the position to which the content was slid. From the operation data, the terminal device can thus determine that at least part of the display content corresponding to the interaction area has been moved to the edge area of the interaction area by a sliding operation.
As another specific implementation, when the terminal device overlays the display content on a real scene, the terminal device may determine, according to the operation parameters sent by the interactive device, the at least part of the content corresponding to the touch operation, and determine that the touch operation is a sliding operation as well as its sliding track. Based on this information, the terminal device may control the at least part of the content to slide along the sliding track, move it to the end point of the track, and display the sliding effect, so that the sliding of the content is controlled by the sliding operation.
Therefore, the terminal device can determine whether at least part of the content exists in the display content corresponding to the interaction area and slides to the edge area of the interaction device, and further determine whether to trigger the processing operation of the at least part of the content.
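The device-side detection just described — deciding whether a slide ended in an edge band and packaging the operation data to send to the terminal device — can be sketched as follows. The interaction-area size, the edge-band width `margin`, and the field names of the operation-data dictionary are all assumptions for illustration:

```python
AREA_W, AREA_H = 800, 480  # assumed interaction-area size in touch units

def detect_edge_region(point, margin=20):
    """Return which edge band of the interaction area (if any) the
    point lies in; `margin` is the assumed width of the edge band."""
    x, y = point
    if x <= margin:
        return "left"
    if x >= AREA_W - margin:
        return "right"
    if y <= margin:
        return "top"
    if y >= AREA_H - margin:
        return "bottom"
    return None

def build_operation_data(content_id, slide_track):
    """Package the operation data the interactive device would send:
    the operated content, the slide track, and the edge region (if any)
    reached at the end point of the slide."""
    end = slide_track[-1]
    return {"content": content_id,
            "track": slide_track,
            "end": end,
            "edge": detect_edge_region(end)}
```

From such a packet, the terminal device can read the `edge` field to learn that part of the content was moved to an edge area by a sliding operation.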
As yet another alternative, the setting operation may be as follows: after a long-press operation on at least part of the display content corresponding to the interaction area, the terminal device generates a virtual control and displays it overlaid on the edge area, and a click operation is then performed on that control.
Of course, the above embodiments are only examples and do not limit the setting operation in the embodiments of the present application; the setting operation may be set according to specific scenarios and requirements.
In this embodiment of the application, when the terminal device determines that the touch operation detected in the interaction area includes the setting operation, the terminal device may obtain at least part of content corresponding to the setting operation, that is, the at least part of content on which the setting operation is performed. And, the content data of the at least part of content may be acquired, so that the terminal device may subsequently perform processing operations related to the part of content according to the content data.
In one mode, the content data of the at least part of the content may be three-dimensional model data of that content, and the three-dimensional model data may include the color, model vertex coordinates, model contour data, and the like used to construct the corresponding three-dimensional model.
In some embodiments, when the terminal device itself displays the content, the terminal device may directly acquire the content data of the at least part of the content according to the part of the content determined to have been operated.
In some embodiments, the terminal device may obtain the content data of the at least part of the content from the interactive device. It can be understood that, when the interactive device displays the content, the terminal device may send a data request to the interactive device, thereby obtaining the content data of the at least part of the content.
Of course, a specific manner of the terminal device acquiring the content data of the at least part of content may not be limited in this embodiment, for example, when the operation data carries the content data of the at least part of content, the terminal device may also directly obtain the content data of the at least part of content according to the operation data.
Step S130: and acquiring a processing instruction matched with the triggered edge area.
In this embodiment, when it is determined that the touch operation detected by the interactive area of the interactive device includes a setting operation, because the setting operation is a trigger operation corresponding to an edge area of the interactive area, the subsequent processing operation performed on the at least part of content may correspond to the edge area. That is to say, different edge areas may correspond to different processing operations, that is, performing touch operations corresponding to different edge areas on the at least part of content may trigger different processing operations on the at least part of content.
In some embodiments, the terminal device may obtain the processing instruction corresponding to the edge area, so as to perform a processing operation on the at least part of the content according to that instruction. Different processing instructions correspond to different processing operations performed by the terminal device on the part of the content.
In this embodiment of the present application, a correspondence between edge areas and processing instructions may be stored in the terminal device; the correspondence may be set by a user, may be a factory default of the terminal device, or may be acquired by the terminal device from a server. The terminal device may obtain the processing instruction corresponding to the edge area according to this correspondence. For example, when the interaction area is a rectangular area, the first side edge area may correspond to an instruction for a copy operation, the second side edge area to an instruction for displaying virtual extended content of the part of the content, the third side edge area to an instruction for a delete operation, and the fourth side edge area to an instruction for a scale operation. Of course, the above correspondences between edge areas, instructions, and processing operations are only examples and do not limit the embodiments of the present application.
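Such a stored correspondence can be sketched as a simple lookup table, mirroring the rectangular-area example above; the key names and instruction names are illustrative, not mandated by the method:

```python
# Hypothetical correspondence between the four edge areas of a
# rectangular interaction area and processing instructions.
EDGE_TO_INSTRUCTION = {
    "first":  "COPY",
    "second": "SHOW_VIRTUAL_EXTENDED_CONTENT",
    "third":  "DELETE",
    "fourth": "SCALE",
}

def instruction_for_edge(edge):
    """Look up the processing instruction matched with the triggered
    edge area; an unknown edge yields no instruction."""
    return EDGE_TO_INSTRUCTION.get(edge)
```

In practice the table could equally be user-configured or fetched from a server, as the text notes.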
Step S140: and performing processing operation corresponding to the edge area on at least part of the content according to the content data and the processing instruction.
In this embodiment, after acquiring the content data and the processing instruction, the terminal device may perform corresponding processing operation on at least part of the content according to a specific processing instruction and the content data. Since the processing instruction corresponds to the edge area, the terminal device may perform different processing operations on at least some of the content according to different processing instructions, that is, the processing operations correspond to the edge area.
In some embodiments, the terminal device may cancel the display of the at least part of the content, may keep displaying it in the interaction area, may control its display, or may delete the file corresponding to it or uninstall the application corresponding to it, and the like. For example, the terminal device may generate corresponding virtual extended content for display according to the content data of the at least part of the content and the processing instruction, such as generating and displaying the next-level menu content of the part of the content as the virtual extended content. The terminal device may also, according to the content data and the processing instruction, delete the file corresponding to the at least part of the content or uninstall the corresponding application, or control the display of the content, for example moving it to a designated position or reducing its display scale. Of course, the specific processing operation performed by the terminal device on the at least part of the content is not limited here.
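The dispatch from processing instruction to processing operation can be sketched as below. The instruction names and the dictionary fields standing in for content state are illustrative assumptions, not part of the original method:

```python
def perform_processing(instruction, content):
    """Dispatch a processing operation according to the instruction;
    `content` is a dict standing in for the content's state. The
    branches mirror the example operations in the text."""
    if instruction == "DELETE":
        content["visible"] = False       # cancel display
        content["file_deleted"] = True   # delete the corresponding file
    elif instruction == "SCALE":
        # reduce the display scale of the content
        content["scale"] = content.get("scale", 1.0) * 0.5
    elif instruction == "SHOW_VIRTUAL_EXTENDED_CONTENT":
        content["visible"] = False           # cancel display in the area
        content["extended_visible"] = True   # show virtual extended content
    return content
```

Because the instruction was looked up from the triggered edge area, the processing operation performed here effectively corresponds to that edge area.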
Step S150: and acquiring display data matched with the triggered edge area.
In some embodiments, the terminal device may acquire the display data matched with the triggered edge region so as to generate virtual content corresponding to at least part of the content for display. The display data may be data for generating virtual content corresponding to at least part of the content. As an embodiment, the display data may include position information that the virtual content needs to be displayed, at least part of content data corresponding to the virtual content, and the like. At least part of the content data may be content data corresponding to part of the content in the virtual content. Of course, the specific display data may not be limiting. The content data may include model data and the like, and the model data may include data such as a color of the model, coordinates of vertices of the model, contour data of the model, and the like.
Step S160: and generating virtual content corresponding to at least part of the content according to the content data and the display data, and displaying the virtual content.
In some embodiments, the terminal device may further generate corresponding virtual content to display while performing processing operation corresponding to the edge region on at least part of the content. The terminal device can generate virtual content corresponding to at least part of the content according to the content data corresponding to at least part of the content and the display data.
In some embodiments, the terminal device may determine, according to the display data, a position where the virtual content needs to be displayed, and determine, according to the position, a rendering coordinate of the virtual content in the virtual space. And then constructing virtual content according to the content data and the display data, and rendering the virtual content according to the rendering coordinates. After rendering the virtual content, the virtual content may then be displayed. Therefore, the display of the related virtual content is realized while at least part of the content is processed according to the triggering operation of the edge area of the interactive area, and the interactivity of the user and the displayed content is improved.
According to the interaction method of virtual content described above, when the interactive device detects a trigger operation on the edge area of the interaction area performed on the display content corresponding to that area, the terminal device performs the processing operation corresponding to the edge area on the operated display content and displays the corresponding virtual content.
Referring to fig. 6, another embodiment of the present application provides an interaction method for virtual content, which is applicable to the terminal device, and the interaction method for virtual content may include:
step S210: and receiving operation data sent by the interactive equipment, wherein the operation data is generated by the interactive equipment according to the touch operation detected in the interactive area.
In the embodiment of the present application, step S210 may refer to the contents of the above embodiments, and is not described herein again.
Step S220: and when the touch operation is determined to comprise the setting operation according to the operation data, acquiring at least part of content corresponding to the setting operation from the display content corresponding to the interactive area, and acquiring content data of the at least part of content, wherein the setting operation comprises the trigger operation corresponding to the edge area of the interactive area.
In this embodiment of the application, when it is determined that the touch operation on the display content corresponding to the interaction area includes a setting operation, the terminal device may acquire content data of at least part of content corresponding to the setting operation, that is, acquire content data of at least part of the content that is operated.
In some embodiments, the terminal device may display the content; that is, the terminal device may generate the display content according to the relative spatial position information between the terminal device and the interactive device and the relative positional relationship between the interaction area and the interactive device, so that the user sees the display content overlaid on the interaction area. Referring to fig. 7, the acquiring, by the terminal device, of the content data of the at least part of the content may include:
step S221: and acquiring the touch position detected by the interactive area according to the operation data.
In some embodiments, the operation data sent by the interaction device may include a touch position corresponding to the touch operation, where the touch position may be a touch coordinate of the touch operation in the interaction area.
Step S222: and acquiring the display position of at least part of content corresponding to the executed setting operation according to the touch position.
In some embodiments, the terminal device may acquire a display position of at least part of the operated content according to the touch position. The terminal device can acquire a first space coordinate of the touch coordinate corresponding to the real space, and then convert the first space coordinate into a second space coordinate of the virtual space. The second spatial coordinates acquired by the terminal device may be rendering coordinates of the operated partial content, that is, a display position of the partial content on which the setting operation is performed.
Step S223: and acquiring content data matched with the display position based on the virtual content displayed by the terminal equipment.
After the terminal device acquires the display position of the part of the content on which the setting operation is performed, the terminal device may determine, according to the displayed display content, content data of the display content corresponding to the display position. Specifically, the content matched with the rendering coordinates corresponding to the display position may be obtained according to the rendering coordinates of all the content of the display content, so as to obtain the display content matched with the display position. According to the obtained display content matched with the display position, the content data of the display content can be obtained, and the obtained content data is used as the content data of at least part of the content. The content data may include model data of the display content, and the model data may include data such as a color of the model, coordinates of vertices of the model, and contour data of the model.
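Steps S221 to S223 above can be sketched as a two-stage coordinate conversion followed by a lookup against the rendered content. The planar pose, the plain-translation mapping into virtual space, and the function names are deliberate simplifications; a real system would use full 6-DoF transforms:

```python
def touch_to_first_space(touch_xy, area_pose):
    """First spatial coordinate: place the 2D touch coordinate on the
    interaction area's plane in real space. `area_pose` is a simplified
    planar pose (origin_x, origin_y, scale)."""
    tx, ty = touch_xy
    ox, oy, s = area_pose
    return (ox + tx * s, oy + ty * s)

def first_to_second_space(p_real, virtual_offset):
    """Second spatial coordinate: map the real-space point into the
    virtual space (a plain translation here, for illustration)."""
    return (p_real[0] + virtual_offset[0], p_real[1] + virtual_offset[1])

def content_data_at(p_virtual, rendered_items, tol=0.5):
    """Return the model data of the rendered item whose rendering
    coordinate matches the display position, or None when nothing
    matches."""
    for coord, data in rendered_items:
        if (abs(coord[0] - p_virtual[0]) <= tol
                and abs(coord[1] - p_virtual[1]) <= tol):
            return data
    return None
```

The second spatial coordinate plays the role of the rendering coordinate of the operated content, and the matched `data` is taken as the content data of the at least part of the content.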
Step S230: and acquiring a processing instruction matched with the triggered edge area.
In some embodiments, the setting operation may be a trigger operation corresponding to a first setting area in all edge areas of the interaction area, where the first setting area is used to trigger generation and display of virtual extended content corresponding to the partial content. As one mode, the setting operation may be that at least a part of the content in the display content corresponding to the interactive region is moved to a first setting region in an edge region of the interactive region, for example, the at least a part of the content is moved to the first setting edge region in the edge region of the interactive region by a sliding operation. As another mode, the setting operation may be to trigger a control in the display content corresponding to the first setting area in the edge area after the at least part of the content is selected, for example, the at least part of the content may be selected by a click operation, and then the control in the display content corresponding to the first setting area is triggered by the click operation. Of course, the setting operation for specifically triggering generation and display of the virtual extension content corresponding to the partial content may not be limited.
In this embodiment of the application, the terminal device obtains the processing instruction matched with the edge area, specifically, the processing instruction is a first processing instruction matched with the first setting area, that is, the first processing instruction is used for instructing the terminal device to cancel display of at least part of the content in the interaction area, and generate and display the virtual extended content corresponding to the part of the content.
Step S240: and canceling the display of at least part of the content in the interactive area according to the first processing instruction.
After the terminal device obtains the first processing instruction, the terminal device may perform corresponding processing operation on the part of the content according to the first processing instruction, where the processing operation corresponding to the first processing instruction is that the terminal device cancels display of at least part of the content in the interaction area.
In some embodiments, when the at least part of the content is virtual content that is displayed in a real scene by the terminal device in an overlapping manner, the terminal device may directly cancel the display of the at least part of the content. When at least part of the content is the content displayed in the interactive area of the interactive device, the terminal device may send an instruction for instructing the interactive device to cancel displaying the at least part of the content to the interactive device, so that the interactive device may cancel displaying the at least part of the content according to the instruction.
Step S250: and acquiring display data matched with the triggered edge area.
In the embodiment of the application, the display data, which is acquired by the terminal device and matched with the triggered edge area, includes a first relative position relationship between a first designated position where the virtual content needs to be displayed in an overlapping manner and the interactive device. The first relative position relation is used for the terminal equipment to generate virtual extension content which is displayed at the first specified position in an overlapping mode.
In some embodiments, the terminal device may obtain a first designated position at which the virtual extension content needs to be displayed, and obtain the first relative positional relationship between that position and the interactive device, so that the virtual extension content generated in a subsequent step is displayed overlaid at the required display position. The first designated position may be outside the interaction area and correspond to the first setting area, for example adjacent to it. The display position refers to the position in the real environment at which the terminal device overlays the virtual extension content, which may also be understood as the position of the virtual extension content in the real environment as seen by the user through the terminal device.
In some embodiments, obtaining the first relative positional relationship between the first specified location and the interactive device may include: and reading a first relative position relation between the first designated position outside the pre-stored interaction area and the interaction equipment, wherein the first designated position corresponds to the first set area. That is, the first relative positional relationship between the first designated position where the virtual extension content needs to be displayed and the interaction device may be fixed, for example, the first designated position may be adjacent to the first set area of the interaction device and at a fixed position outside the interaction area.
Of course, a manner of specifically acquiring the first relative positional relationship between the first designated location and the interactive device may not be limited in this embodiment of the application.
Step S260: and acquiring relative spatial position information between the terminal equipment and the interactive equipment.
In the embodiment of the application, the terminal device may obtain the relative spatial position information between itself and the interactive device, so as to generate and display the virtual extension content accordingly. The terminal device can identify the marker on the interactive device and obtain the relative spatial position information from the identification result. The identification result includes at least the position information and posture information of the marker relative to the terminal device, so the terminal device can acquire the relative spatial position information according to the position and size of the marker on the interactive device and the identification result. The relative spatial position information between the terminal device and the interactive device may include relative position information and posture information, where the posture information may be the orientation, rotation angle, and the like of the interactive device relative to the terminal device. The size of the marker can be adjusted as required, and the specific size is not limited.
In some embodiments, the marker may include at least one sub-marker, and the sub-marker may be a pattern having a shape. In one embodiment, each sub-marker may have one or more feature points, wherein the shape of the feature points is not limited, and may be a dot, a ring, a triangle, or other shapes. In addition, the distribution rules of the sub-markers within different markers are different, and thus, each marker may have different identity information. The terminal device may acquire identity information corresponding to the tag by identifying the sub-tag included in the tag, and the identity information may be information that can be used to uniquely identify the tag, such as a code, but is not limited thereto.
In one embodiment, the outline of the marker may be rectangular, although other shapes are possible; the rectangular region and the plurality of sub-markers within it constitute one marker. In some embodiments, the marker may also be a light spot marker formed by a light spot; the light spot marker may emit light of different wavelength bands or different colors, and the terminal device acquires the identity information corresponding to the marker by identifying the wavelength band, color, or similar attributes of the emitted light. It should be noted that the shape, style, size, color, number of feature points, and distribution of the marker are not limited in this embodiment, provided only that the marker can be recognized and tracked by the terminal device.
In some embodiments, the step of identifying the marker on the interactive device may be that the terminal device first acquires an image containing the marker through the image acquisition device, and then identifies the marker in the image. The terminal device collects the image containing the marker, and the image containing the marker can be collected and identified by adjusting the spatial position of the terminal device in the real space or by adjusting the spatial position of the interactive device in the real space, so that the marker on the interactive device is positioned in the visual field range of the image collecting device of the terminal device. The field of view of the image capturing device may be determined by the size of the field of view.
Of course, the specific manner of acquiring the relative spatial location information between the terminal device and the interactive device may not be limited in this embodiment of the application.
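One ingredient of such marker-based positioning can be illustrated with a pinhole-camera sketch: given the marker's known physical size, its apparent size in the captured image yields the distance, and its offset from the image centre yields the lateral displacement. The focal length and sizes below are illustrative assumptions, and a production system would recover the full pose (position and posture) from the marker's feature points, e.g. via perspective-n-point estimation, rather than this simplification:

```python
def marker_distance(marker_size_m, marker_size_px, focal_px):
    """Pinhole-camera estimate of the distance (metres) to a marker of
    known physical size from its apparent size in the image."""
    return focal_px * marker_size_m / marker_size_px

def marker_lateral_offset(center_px, image_center_px, distance_m, focal_px):
    """Lateral (x, y) offset of the marker relative to the camera's
    optical axis, back-projected at the estimated distance."""
    dx = (center_px[0] - image_center_px[0]) * distance_m / focal_px
    dy = (center_px[1] - image_center_px[1]) * distance_m / focal_px
    return (dx, dy)
```

Together these give a rough relative position of the interactive device with respect to the terminal device's camera.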
Step S270: and generating virtual extended content corresponding to the partial content according to the first relative position relation, the relative spatial position information and the content data, and displaying the virtual extended content.
In this embodiment of the application, after the terminal device acquires the first relative position relationship between the first specified position and the interactive device, the relative spatial position information between the terminal device and the interactive device, and the content data of the at least part of content, the terminal device may generate the virtual extended content corresponding to the at least part of content according to the first relative position relationship, the relative spatial position information, and the content data, so as to display the virtual extended content in the subsequent process.
In some embodiments, the virtual extension content to be displayed may be the at least part of content, and in this case, the content data of the at least part of content acquired may be three-dimensional model data of the at least part of content. The terminal device may obtain a rendering position of the virtual extension content according to the relative spatial position information between the terminal device and the interactive device and the first relative position relationship between the first designated position and the interactive device, and render the three-dimensional virtual content according to the rendering position.
Specifically, the terminal device may obtain a spatial position coordinate of the first designated position in the real space according to the relative spatial position information between the terminal device and the interactive device and the first relative position relationship between the first designated position and the interactive device, and convert the spatial position coordinate into a spatial coordinate in the virtual space. The virtual space can include a virtual camera, the virtual camera is used for simulating human eyes of a user, and the position of the virtual camera in the virtual space can be regarded as the position of the terminal equipment in the virtual space. The terminal device can obtain the spatial position of the virtual extended content relative to the virtual camera by taking the virtual camera as a reference according to the position relation between the virtual extended content and the interactive device in the virtual space, so that the rendering coordinate of the virtual extended content in the virtual space is obtained, and the rendering position of the virtual extended content is obtained. Wherein the rendering position can be used as a rendering coordinate of the virtual extension content to realize that the virtual extension content is rendered at the rendering position. The rendering coordinates refer to three-dimensional space coordinates of the virtual extended content in a virtual space with a virtual camera as an origin (which can be regarded as human eyes as the origin).
It can be understood that, after the terminal device obtains the rendering coordinates for rendering the virtual expanded content in the virtual space, the terminal device may obtain the content data (i.e., the three-dimensional model data) corresponding to the virtual expanded content, construct the virtual expanded content according to the content data, and render it according to the rendering coordinates; rendering the virtual expanded content may obtain the vertex coordinates, color values, and the like of each vertex in the virtual expanded content. Since the content data may include three-dimensional model data, the rendered virtual extended content may be three-dimensional virtual content.
In some embodiments, the virtual extension content to be generated and displayed may be display content related to the at least part of content, and the terminal device may acquire content data of that related display content according to the content data of the part of content, so as to generate three-dimensional virtual extension content. In the embodiment of the application, after the terminal device generates the three-dimensional virtual extension content, it can be displayed. Specifically, after the terminal device constructs and renders the three-dimensional virtual extension content, the virtual extension content can be converted into a virtual picture, and display data of the virtual picture is obtained; the display data may include the RGB values and corresponding pixel coordinates of each pixel in the display picture. The terminal device can generate the display picture according to the display data and project it onto a display lens through a display screen or a projection module, thereby displaying the three-dimensional virtual extension content.
When the terminal device generates the virtual extended content according to the first relative position relationship between the first designated position and the interactive device, the relative spatial position information, and the content data, the display position of the virtual extended content is the first designated position, that is, a position outside the interactive region corresponding to the first set region. Through a display lens of the head-mounted display device, the user can thus see the virtual extension content superimposed, in the real world, outside the interactive region of the interactive device at the position corresponding to the first set region, achieving an augmented reality display effect.
When the virtual extension content is displayed, its superimposed display position is outside the interactive area and corresponds to the first setting area (for example, a position outside the interactive area and adjacent to the first setting area), so it does not conflict with the display content corresponding to the interactive area. The user can therefore operate the display content corresponding to the interactive area through the interactive device while seeing the virtual extension content corresponding to the operated display content superimposed on the set area outside the interactive area. Since the two displays do not conflict, the user can view the virtual extension content and the display content corresponding to the interactive area simultaneously, which improves the interaction effect and the display effect of the display content. For example, referring to fig. 8 and fig. 9, in a scene of viewing a document, the display content corresponding to the interaction area 202 includes a plurality of document contents 301. Through the interactive device 200, a user can slide the document content 312 (document B) among the plurality of document contents toward the edge area on the upper side of the interaction area 202 in an upward moving direction; the user can then see, through the terminal device 100, the operated document content displayed outside the interaction area 202 at a position corresponding to the upper edge area and at a larger scale, which is convenient for viewing.
For another example, referring to fig. 10 and fig. 11, in a chat application scenario, the display content corresponding to the interactive area 202 includes chat content 304 and an input keyboard 305. The user can slide the chat content 304 within the interactive area toward the edge area on its upper side in an upward moving direction, so that, through the terminal device 100, the user can see the chat content 304 displayed outside the interactive area 202 at a position corresponding to the upper edge area, while the input keyboard 305 remains displayed in the interactive area. The user can thus see both the chat content 304 at a larger scale and the input keyboard 305 at the same time, which is convenient for chatting.
In this embodiment of the application, when the terminal device subsequently displays the virtual extended content, if the displayed content is of a certain kind and the area where its display position is located is parallel to the interactive area, viewing the virtual extended content may be inconvenient for the user. Therefore, when generating the virtual extension content, the terminal device can adjust the area in which the virtual extension content is displayed so that a certain included angle exists between the area where the first specified position is located and the interactive device, making the virtual extension content convenient to view once displayed.
In this embodiment of the application, after the posture information is obtained, a first relative position relationship between the area where the first designated position is located and the interaction device may be adjusted according to the posture information of the interaction device, so that an included angle between the area where the first designated position is located and the interaction device is a preset included angle. The terminal device may determine an included angle between the area where the current first designated position is located and the interaction area of the interaction device according to the posture information of the interaction device and the first relative position relationship, and adjust the position of the first designated position to make the included angle between the area where the first designated position is located and the interaction area be a preset included angle, so as to obtain the adjusted position of the first designated position and the adjusted first relative position relationship.
In some embodiments, the specific size of the preset included angle is not limited; for example, the preset included angle may be 45° to 70°, or 65° to 90°.
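The angle adjustment can be illustrated with a Rodrigues rotation of the content plane about the top edge of the interaction area; the axis convention and the function below are hypothetical sketches, assuming the content plane initially lies in the interaction plane and the preset angle tilts it back about the shared edge:

```python
import math
import numpy as np

def tilt_about_edge(points, edge_origin, edge_axis, angle_deg):
    """Rotate content-plane points about the top edge of the interaction
    area by the preset included angle (Rodrigues' rotation formula)."""
    k = np.asarray(edge_axis, float)
    k = k / np.linalg.norm(k)                      # unit rotation axis along the edge
    a = math.radians(angle_deg)
    pts = np.asarray(points, float) - np.asarray(edge_origin, float)
    rotated = (pts * math.cos(a)
               + np.cross(k, pts) * math.sin(a)
               + np.outer(pts @ k, k) * (1.0 - math.cos(a)))
    return rotated + np.asarray(edge_origin, float)
```

For a preset included angle of, say, 65°, every point of the display area would be rotated 65° about the device's upper edge, so the displayed plane leans away from the interaction area by that amount.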
After the first relative positional relationship is adjusted, the virtual extended content may be generated and displayed based on the relative spatial positional information, the adjusted first relative positional relationship, and the content data. The first relative position relation is adjusted, so that a preset included angle is formed between the area where the display position of the displayed virtual extension content is located and the interaction equipment, a user can see that the virtual extension content is located outside the interaction area and forms a certain included angle with the interaction equipment, and the display effect of the virtual extension content is improved.
For example, referring to fig. 12, the display content corresponding to the interactive area includes content of the video list 306, and the user may perform the above-mentioned setting operation on the video options in the video list 306 through the interactive device, so that the terminal device 100 can see that the video content 307 corresponding to the operated video option is displayed outside the interactive area, and a certain included angle exists between the video content 307 and the interactive device, which is convenient for the user to view the video content.
In this embodiment of the application, the virtual extension content that needs to be generated and displayed is the at least part of content, and when the at least part of content includes the interactive content and the non-interactive content at the same time, a position where the interactive content needs to be superimposed and displayed in the real scene may be in the interactive region, and a position where the non-interactive content needs to be superimposed and displayed in the real scene may be outside the interactive region, so that a user may operate the interactive content through the interactive region, and further control over the at least part of content is achieved.
In some embodiments, referring to fig. 13, the generating, by the terminal device, a virtual extended content corresponding to at least a part of the content according to the first relative position relationship, the relative spatial position information, and the content data, and displaying the virtual extended content may include:
step S261: when at least part of the content comprises interactive content and non-interactive content, generating virtual extended content corresponding to the non-interactive content according to the first relative position relation, the content data and the relative spatial position information, and displaying the virtual extended content corresponding to the non-interactive content;
step S262: acquiring a second relative position relation between the first set area and the interactive equipment;
step S263: and generating interactive content according to the second relative position relation, the relative spatial position information and the content data, and displaying the interactive content.
In some embodiments, the interactive content may be understood as content in which interface interaction can be performed in the part of content, and the non-interactive content may be understood as content in which interface interaction cannot be performed. As one way, the interactive content may be control content in the part of content, and the control content may trigger display control of the non-interactive content. For example, if at least part of the content is a video playing interface, the interactive content is control content used for pausing playing, fast forwarding playing, closing playing and the like in the video playing interface, and the non-interactive content is video content in the video playing interface.
When the terminal device generates and displays the virtual extended content corresponding to at least part of the content, it may generate a virtual non-interactive content according to the first relative position relationship, the relative spatial position information, and content data of a non-interactive content in the content data, and display the virtual non-interactive content as a virtual extended content. The terminal device also acquires a second relative position relationship between the first set area and the interactive device, generates virtual interactive content according to the second relative position relationship between the first set area and the interactive device, the relative spatial position information and the content data of the interactive content in the content data, and displays the virtual interactive content. It should be noted that the generation and display of the virtual extension content and the generation and display of the virtual interactive content may be performed simultaneously. Therefore, the user can see that the virtual non-interactive content is displayed outside the interactive area of the interactive equipment in the real world in an overlapping manner through the display lens of the head-mounted display device, and the virtual interactive content is displayed in the edge area of the interactive equipment in the real world in an overlapping manner, so that the display effect of augmented reality is realized, and the user can conveniently control the non-interactive content through the operation of the virtual interactive content.
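A minimal sketch of this partitioning step, assuming a hypothetical item format in which each piece of the operated content is tagged as interactive or not; the region names are illustrative, standing in for the edge area of the interactive device and the area outside the interaction area:

```python
def partition_content(items):
    """Assign each piece of the operated content a display region: interactive
    content (e.g. playback controls) is superimposed on the edge area of the
    interactive device, non-interactive content (e.g. the video itself) is
    superimposed outside the interaction area."""
    placements = []
    for item in items:
        region = "edge_area" if item.get("interactive") else "outside_area"
        placements.append({"id": item["id"], "region": region})
    return placements
```

For a video playing interface, the video content would be placed outside the interaction area while the pause/fast-forward/close controls land on the edge area, matching the behavior described above.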
Further, the interaction method of the virtual content may further include: and when the operation on the interactive content is detected according to the operation data sent by the interactive equipment, controlling the display of the non-interactive content based on the operation data.
In some embodiments, after the terminal device displays the virtual extension content and the interactive content, the interactive area of the interactive device may detect a touch operation, and generate operation data according to the touch operation detected in the interactive area, and send the operation data to the terminal device. After receiving the operation data, the terminal device may detect whether the displayed interactive content is operated according to the operation data sent by the interactive device.
Further, if an operation on the interactive contents is detected, display control may be performed on the non-interactive contents. Specifically, the terminal device may control the display of the non-interactive content according to the operation data and the function of the interactive content. For example, in a video playing scene, the non-interactive content is video content, when the interactive content includes control content for pausing playing, fast-forwarding playing, and closing playing, if a click operation on the control content for pausing playing is determined according to operation data, the video content may be controlled to pause playing, if a click operation on the control content for fast-forwarding playing is determined according to the operation data, the video content may be controlled to fast-forward playing, and if a click operation on the control content for closing playing is determined according to the operation data, the terminal device may cancel display of the video content, thereby achieving an effect of closing playing of the video content. Of course, the above scenarios and the specific display control of the non-interactive content are only examples, and do not represent limitations on specific application scenarios and specific display control of the non-interactive content. Therefore, the user can perform touch operation in the edge area of the interactive area to realize the operation on the virtual interactive content, and further realize the control on the non-interactive content.
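The display control dispatch described in the video example might look like the following sketch; the control identifiers and the `VideoContent` stand-in are hypothetical, not part of the embodiment:

```python
class VideoContent:
    """Minimal stand-in for the non-interactive video content."""
    def __init__(self):
        self.state = "playing"
        self.position = 0.0   # playback position in seconds
        self.visible = True

def handle_operation(video, control_id):
    """Dispatch a click on a virtual interactive control to display control
    of the non-interactive video content (hypothetical control ids)."""
    if control_id == "pause":
        video.state = "paused"
    elif control_id == "fast_forward":
        video.position += 10.0        # skip ahead 10 s
    elif control_id == "close":
        video.visible = False         # cancel display -> playback is closed
    return video
```

Each branch corresponds to one of the control contents named in the text: pausing, fast-forwarding, or cancelling the display of the video content.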
According to the virtual content interaction method provided by this embodiment of the application, based on a touch operation on the first set area in the edge area of the interaction area, detected by the interactive device on the display content corresponding to the interaction area, the terminal device cancels the display of the touched display content and displays the corresponding virtual extension content in the virtual space. The user can thus operate the display content corresponding to the interaction area through the interactive device and see the corresponding virtual extension content superimposed outside the interaction area at a position corresponding to the first set area, so that the virtual extension content and the display content corresponding to the interaction area can be viewed simultaneously, improving the interactivity between the user and the display content.
Referring to fig. 14, another embodiment of the present application provides an interaction method of virtual content, which is applicable to the terminal device, and the interaction method of virtual content may include:
step S310: and receiving operation data sent by the interactive equipment, wherein the operation data is generated by the interactive equipment according to the touch operation detected in the interactive area.
Step S320: and when the touch operation is determined to comprise the setting operation according to the operation data, acquiring at least part of content corresponding to the setting operation from the display content corresponding to the interactive area, and acquiring content data of the at least part of content, wherein the setting operation comprises the trigger operation corresponding to the edge area of the interactive area.
In the embodiment of the present application, step S310 and step S320 may refer to the content of the above embodiments, and are not described herein again.
Step S330: and acquiring a processing instruction matched with the edge area.
In some embodiments, the setting operation may be a trigger operation corresponding to a second setting area in all edge areas of the interaction area, where the second setting area may be used to trigger generation of virtual content corresponding to the partial content for display.
As an embodiment, the setting operation may be that at least a part of the content in the display content corresponding to the interactive region is moved to a second setting region in an edge region of the interactive region, for example, the at least a part of the content is moved to the second setting region in the edge region of the interactive region by a sliding operation. As a specific implementation manner, the display content corresponding to the interaction area may be displayed by a touch screen of the interaction device, and the interaction device may detect a touch operation on the touch screen and control at least part of the content to move along a sliding track of the sliding operation when the sliding operation on at least part of the content is detected. When the interactive device controls at least part of the content to slide to the second setting area along with the sliding operation, the interactive device can generate operation data and send the operation data to the terminal device, and therefore the terminal device can determine that at least part of the content slides to the second setting area according to the operation data. Therefore, the terminal device can determine whether at least part of the content slides to the second setting area in the display content displayed in the interaction area, so as to further determine whether to trigger generation of virtual content corresponding to the at least part of the content for display.
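Determining whether the slide ended in the second setting area reduces to a point-in-rectangle test on the end of the sliding track; the axis-aligned rectangle format `(x, y, w, h)` below is an assumption for illustration:

```python
def slid_into_region(track, region_rect):
    """Return True when the sliding track's end point falls inside the
    second setting area, given as an axis-aligned rect (x, y, w, h)."""
    x, y = track[-1]                  # final touch point of the slide
    rx, ry, rw, rh = region_rect
    return rx <= x <= rx + rw and ry <= y <= ry + rh
```

The interactive device (or the terminal device, from the operation data) can apply this test to decide whether at least part of the content was slid to the second setting area and thus whether to trigger the virtual content display.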
In this embodiment of the application, when it is determined that at least part of the content in the display content corresponding to the interaction area is moved to the second setting area, the terminal device may obtain a processing instruction matched with the edge area, specifically, obtain a second processing instruction matched with the second setting area, where the second processing instruction is used to instruct the terminal device to maintain display of the first content still in the interaction area in at least part of the content.
Step S340: and when at least part of the content still has the first content displayed corresponding to the interactive area, keeping the display of the first content according to the second processing instruction.
In this embodiment of the application, since the display content moved toward the second setting area is displayed by the touch screen of the interactive device, when the at least part of the content reaches the second setting area in the edge area of the interaction area and continues to move toward that edge, some of it remains within the interaction area; that is, first content displayed in the interaction area still exists in the at least part of the content. At this time, the display of the content still in the interactive area may be maintained. The terminal device may determine, according to the operation data sent by the interactive device, whether first content in the interaction area still exists in the at least part of the content; for example, it may determine from the sliding track in the operation data the destination position to which the whole at least part of the content has been moved, and thereby determine whether first content remains in the interaction area.
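The split between the first content (still displayed inside the interaction area) and the hidden portion can be sketched as follows, assuming a screen-style coordinate system in which y grows downward and y = 0 is the upper edge of the interaction area; the function and its conventions are illustrative:

```python
def split_at_edge(content_top, content_height):
    """Split vertically slid content at the upper edge of the interaction
    area (y grows downward, y = 0 is the edge). Returns the heights of the
    first content (still inside) and the hidden second content (moved out)."""
    hidden = min(max(0.0, -content_top), content_height)
    return content_height - hidden, hidden
```

For example, content 100 units tall whose top has been slid 30 units past the edge keeps 70 units displayed as first content, while the 30 hidden units become the second content to be regenerated as virtual content outside the interaction area.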
Step S350: and acquiring display data matched with the triggered edge area.
When at least part of the content reaches the edge area of the interactive area and continues to move toward that edge, some of the content disappears: the portion beyond the interactive area is hidden, while the first content still within the interactive area remains displayed. The hidden portion cannot be viewed by the user, so the user cannot view the complete at least part of the content, resulting in a poor display effect and interaction effect.
Therefore, if the terminal device determines that the first content in the interaction region still exists in at least part of the content, the terminal device may generate the virtual content corresponding to the hidden content, and determine the position where the virtual content needs to be displayed in an overlapping manner in the real scene as a first adjacent region adjacent to the second set region outside the interaction region, so that when the virtual content is displayed, the virtual content can be adjacent to the first content, and the user can see the complete at least part of the content. If the terminal device determines that the first content in the interaction area does not exist in the at least part of the content, the terminal device may generate a virtual content corresponding to the whole at least part of the content to display.
In this embodiment of the application, the terminal device may further determine whether other virtual content is already displayed, in an overlapping manner, in the adjacent area outside the interaction area that adjoins the second setting area, so as to decide whether to acquire the display data used to generate the virtual content corresponding to the second content, where the second content is the at least part of the content except the first content. Therefore, before acquiring the second content, the interaction method of the virtual content may further include: judging whether the first adjacent area correspondingly displays other virtual content.
Further, if the first adjacent area does not correspondingly display other virtual content, display data for generating the virtual extended content may be acquired. It can be understood that if other virtual content is already displayed at the position where the virtual content corresponding to the second content needs to be superimposed, displaying the latter would affect the display of the former. Therefore, the display data is acquired only when no other virtual content is superimposed in the first adjacent area, so that the virtual content corresponding to the second content can subsequently be generated and displayed.
In this embodiment of the application, the display data, which is acquired by the terminal device and matched with the triggered edge area, includes a third relative position relationship between a display position where the virtual content needs to be displayed in an overlapping manner and the interactive device. The display position of the virtual content to be displayed can be a first adjacent area which is adjacent to the second set area outside the interaction area. And the third relative position relation is used for the terminal equipment to generate virtual content which is displayed in the first adjacent area in an overlapping mode.
Step S360: and acquiring relative spatial position information between the terminal equipment and the interactive equipment.
The terminal device may obtain the relative spatial position information between the terminal device and the interactive device, and the specific manner of obtaining the relative spatial position information may refer to the contents of the above embodiments, which is not described herein again.
Step S370: and generating virtual content corresponding to the second content according to the content data, the third relative position relation and the relative spatial position information of the second content, and displaying the virtual content, wherein the second content is other contents except the first content in at least part of contents.
After the terminal device acquires the content data of the second content, the third relative position relationship, and the relative spatial position information, it may determine, according to the third relative position relationship and the relative spatial position information, the spatial coordinate in the virtual space corresponding to the set area, thereby obtaining the rendering coordinate, that is, the display position, of the virtual content. The terminal device then renders the virtual content corresponding to the second content according to the content data and the rendering coordinate; rendering may obtain the vertex coordinates, color values, and the like of each vertex in the virtual content, so that three-dimensional virtual content is obtained and displayed.
Therefore, the user can simultaneously observe the first content in the interaction area and the second content outside the interaction area through the head-mounted display device, and the first content is adjacent to the second content, namely the complete at least partial content is seen. In addition, in the process of moving the at least part of content, the user can always view the at least part of content, and the display effect and the interaction effect are improved.
For example, referring to fig. 15, in an application scenario of chat, display contents corresponding to an interaction area include chat contents 304 and an input keyboard 305, a user may move the chat contents 304 in the interaction area in an upward direction, and in the moving process, when the chat contents 304 still have first contents 3041 in the interaction area, the terminal device 100 displays second contents 3042 in the chat contents 304 except for the first contents 3041 outside the interaction area in a form of virtual contents and is adjacent to the first contents 3041, so that the user sees the complete chat contents 304, and the display effect is improved.
According to the interaction method of virtual content provided by this embodiment, based on a touch operation on the second set area in the edge area of the interaction area, detected by the interactive device on the display content corresponding to the interaction area, the terminal device maintains the display of the first content that remains in the interaction area while generating virtual content from the hidden portion of the at least part of the content and displaying it in the virtual space. The user can thus simultaneously see the content inside and outside the interaction area, and therefore the complete at least part of the content, which improves the interactivity between the user and the display content.
Referring to fig. 16, a further embodiment of the present application provides an interaction method of virtual content, which is applicable to the terminal device, and the interaction method of virtual content may include:
step S410: and receiving operation data sent by the interactive equipment, wherein the operation data is generated by the interactive equipment according to the touch operation detected in the interactive area.
Step S420: and when the touch operation is determined to comprise the setting operation according to the operation data, acquiring at least part of content corresponding to the setting operation from the display content corresponding to the interactive area, and acquiring content data of the at least part of content, wherein the setting operation comprises the trigger operation corresponding to the edge area of the interactive area.
Step S430: and acquiring a processing instruction matched with the triggered edge area.
In this embodiment of the application, the setting operation may be a trigger operation corresponding to a third setting area in the edge areas of the interaction area, where the third setting area may be used to trigger an editing operation on the operated at least part of the content. The editing operations may include deleting, uninstalling, copying, cutting, scaling, and the like, of the above-mentioned at least part of the content; the specific editing operation is not limited and may also include other operations, for example, closing the display of at least part of the content.
As an embodiment, the setting operation may be that at least a part of the display content corresponding to the interactive region is moved to a third setting region in an edge region of the interactive region. For details, the content of the above embodiment may be referred to for a manner of moving the third setting area, and details are not described herein. That is to say, when at least a part of the content in the display content corresponding to the interactive region is moved to the third setting region, the terminal device may be triggered to perform an editing operation on the at least a part of the content.
As a specific implementation manner, the display content corresponding to the interaction area includes an editing control in the third setting area, and the setting operation may be that at least part of the content is moved to a position of the editing control. When detecting that the at least part of content is moved to the position of the editing control, the terminal device may trigger an editing operation on the at least part of content. Of course, the editing control may also be displayed when the at least part of the content is touch-controlled, so that the user can observe the editing control and move the at least part of the content to the position where the editing control is located when the user performs touch-control on the editing control. Further, the editing control may include a plurality of editing controls, each of the plurality of editing controls corresponds to a different editing operation, that is, moving the at least part of the content to a position of a different editing control in the third setting area may trigger a different editing operation on the at least part of the content.
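Mapping the drop position of the dragged content onto one of several editing controls can be sketched as a hit test; the control ids, the action names, and the rectangle format are hypothetical, not part of the embodiment:

```python
# Hypothetical mapping from editing-control id to the edit it triggers.
EDIT_ACTIONS = {"trash": "delete", "copy": "copy", "cut": "cut"}

def edit_action_for_drop(drop_pos, controls):
    """Return the editing operation triggered by moving content onto an
    editing control in the third setting area; each control has an
    axis-aligned rect (x, y, w, h) and corresponds to a different edit."""
    x, y = drop_pos
    for ctrl in controls:
        cx, cy, w, h = ctrl["rect"]
        if cx <= x <= cx + w and cy <= y <= cy + h:
            return EDIT_ACTIONS.get(ctrl["id"])
    return None   # dropped outside every editing control: no edit triggered
```

Moving the content onto the "copy" control would thus trigger a copy, while the "trash" control would trigger deletion, matching the per-control behavior described above.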
Of course, the specific setting operation used to trigger the editing operation on at least part of the content is not limited.
In this embodiment of the application, when it is determined that the setting operation has been performed on at least part of the content in the display content corresponding to the interaction area, the terminal device may obtain a third processing instruction matched with the edge area, specifically an instruction instructing the terminal device to perform the related editing operation.
Step S440: performing an editing operation on the at least part of the content according to the third processing instruction and the content data, where the editing operation includes deleting, uninstalling, copying, cutting, or scaling.
In this embodiment, after the third processing instruction and the content data of the at least part of the content are acquired, an editing operation may be performed on the at least part of the content, and the type of the editing operation is determined by the specific processing instruction.
In some embodiments, the terminal device may perform a deletion operation according to the third processing instruction and the content data of the at least part of the content. Specifically, the file corresponding to the at least part of the content may be deleted, or the at least part of the content may be deleted from the display content corresponding to the current interaction area. For example, the display content corresponding to the interaction area includes a plurality of document contents, and after the setting operation is performed on one of the document contents, the terminal device may delete the operated document content. For another example, the display content corresponding to the interaction area includes drawing interface content, and after the setting operation is performed on one piece of drawing content in the drawing interface content, that drawing content may be deleted.
In some embodiments, the terminal device may perform an uninstalling operation according to the third processing instruction and the content data of the at least part of the content. Specifically, the terminal device may uninstall the application program corresponding to the at least part of the content. For example, if the at least part of the content is an icon of an application program, after the setting operation is performed on the icon, the terminal device may uninstall the corresponding application program.
In some embodiments, the terminal device may perform a copy operation or a cut operation according to the third processing instruction and the content data of the at least part of the content. Specifically, the terminal device may copy or cut the file corresponding to the at least part of the content, or copy or cut the at least part of the content in the display content corresponding to the current interaction area. For example, the display content corresponding to the interaction area includes a plurality of document contents, and after the setting operation is performed on one of the document contents, the terminal device may copy or cut the file of the operated document content. For another example, the display content corresponding to the interaction area includes a plurality of picture contents, and after the setting operation is performed on one of the picture contents, the terminal device may copy or cut the operated picture content.
In some embodiments, the terminal device may perform a scaling operation according to the third processing instruction and the content data of the at least part of the content. Specifically, the terminal device may enlarge or reduce the at least part of the content in the display content corresponding to the current interaction area.
Of course, the specific processing operation performed by the terminal device on at least part of the content may not be limited in the embodiment of the present application.
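As an illustrative sketch only (all names are hypothetical and this is not part of the claimed embodiments), the dispatch of the editing operations enumerated above — delete, uninstall, copy, cut, scale — by the third processing instruction could look like:

```python
# Hypothetical dispatch of editing operations; not the claimed implementation.
# `content` is a plain dict standing in for a displayed item; `clipboard` is a
# list standing in for the (virtual) clipboard of the embodiment.

def apply_edit(op, content, clipboard, scale_factor=1.0):
    """Apply one editing operation; return the content as it should remain in
    the interaction area afterwards, or None if it is removed."""
    if op == "delete":
        return None                      # removed from the interaction area
    if op == "uninstall":
        return None                      # app icon: uninstall app, remove icon
    if op == "copy":
        clipboard.append(dict(content))  # keep original, put a copy on clipboard
        return content
    if op == "cut":
        clipboard.append(dict(content))  # move to clipboard, remove from display
        return None
    if op == "scale":
        content = dict(content)
        content["width"] *= scale_factor
        content["height"] *= scale_factor
        return content
    raise ValueError(f"unknown editing operation: {op}")
```

A copy leaves the item in place while a cut removes it, matching the clipboard scenario described below for fig. 17.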
Step S450: and acquiring display data matched with the triggered edge area.
In some embodiments, the obtaining, by the terminal device, display data matched with the triggered edge region may include: and acquiring effect content data corresponding to the editing operation and a fourth relative position relation between a second adjacent area and the interactive equipment, wherein the second adjacent area is an area outside the interactive area and adjacent to the third set area.
In some embodiments, when the terminal device performs an editing operation on the at least part of the content, the terminal device may further display related effect content to enhance the interaction effect. Specifically, the terminal device may obtain the effect content data corresponding to the editing operation, and the fourth relative position relationship between the second adjacent area and the interactive device. The second adjacent area serves as the display position of the effect content, and is outside the interaction area and adjacent to the third setting area.
Step S460: and generating virtual effect content according to the content data, the effect content data, the fourth relative position relation and the relative spatial position information, and displaying the virtual effect content.
In some embodiments, after the terminal device acquires the relative spatial position information between the terminal device and the interactive device, the fourth relative position relationship, the content data, and the effect content data, the virtual effect content may be generated. The specific manner of generating the virtual effect content may refer to the manner of generating the virtual extended content in the foregoing embodiments, and details are not described herein again.
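The placement of the virtual effect content can be illustrated with a simplified, translation-only composition of the two positional relationships. This is a hedged sketch with hypothetical names, not the claimed implementation; a real system would also apply the devices' relative rotation.

```python
# Illustrative sketch: translation-only pose composition (rotation omitted).
# Composes the fourth relative position relationship (second adjacent area
# w.r.t. the interactive device) with the relative spatial position information
# (interactive device w.r.t. the terminal device).

def compose(area_offset_in_device, device_position_in_terminal):
    """Position of the second adjacent area — and thus of the virtual effect
    content — in terminal-device coordinates."""
    return tuple(a + b for a, b in zip(area_offset_in_device,
                                       device_position_in_terminal))

# e.g. the second adjacent area sits 1 unit to the right of the interactive
# device, which itself sits at (10, 20, 30) in terminal coordinates
effect_position = compose((1, 0, 0), (10, 20, 30))
```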
After generating the virtual effect content, the terminal device can display it. In this way, when the user observes, through the head-mounted display device, the editing operation performed on the at least part of the content, the virtual effect content corresponding to the editing operation is displayed outside the interaction area, which improves the user's interactive experience when operating the displayed content.
For example, in a scenario in which display content is copied or cut, please refer to fig. 17. The display content corresponding to the interaction area 202 includes picture content 308. When the terminal device 100 performs a copy or cut operation on the picture content 308, the terminal device 100 may generate and display a virtual clipboard 309, where the display position of the virtual clipboard 309 is outside the interaction area 202. In addition, the terminal device 100 may generate virtual picture content 310 corresponding to the picture content 308 and display it on the virtual clipboard 309, so that the user can see that the virtual picture content 310 corresponding to the copied or cut picture content 308 is displayed on the virtual clipboard 309.
In the above scenario, the terminal device may further generate a virtual control corresponding to the virtual content in the virtual clipboard, and display the virtual control in the interaction area, where the virtual control is used to operate the virtual content in the clipboard. Therefore, the virtual content in the clipboard can be moved into the interaction area by operating the virtual control.
Of course, the above virtual effect content and the display mode thereof are only examples, and do not represent the limitation of the specific virtual effect content and the display mode thereof.
According to the virtual content interaction method provided in this embodiment of the application, the terminal device performs, according to the trigger operation on the third setting region in the edge area of the interaction area detected by the interactive device, a corresponding editing operation on the operated display content, improving the interactivity between the user and the display content. In addition, the related effect content is displayed while the operated display content is edited, which can further improve the interactive experience.
In some implementations, different edge regions can trigger different operations on, and different displays of, the display content. Referring to fig. 18, when the interface of a first application program is moved to the upper edge area of the interaction area 202, the user may observe that a virtual first application interface 311 (virtual content corresponding to part of the interface content of the first application program) is displayed outside the interaction area 202 in a superimposed manner in augmented reality, while a control of the first application program remains displayed in the interaction area 202; at this time, the user can still control the first application program through the control in the interaction area 202. When the interface of a second application program is moved to the right edge area of the interaction area 202, the user may observe that a virtual second application interface 312 (virtual content corresponding to all the interface content of the second application program) is displayed outside the interaction area 202 in a superimposed manner in augmented reality; at this time, the second application program may be in a suspended state, and the user cannot operate it. When a picture or a document is moved to the left edge area of the interaction area 202, the picture or document can be copied or cut, and the user may observe that virtual document content 301 and virtual picture content 308 are displayed outside the interaction area 202 in a superimposed manner in augmented reality.
When a long document is moved to the lower edge area of the interaction area 202, the user may observe that part of the document is displayed within the interaction area while the remaining content 313 of the document is displayed outside the interaction area 202 in a superimposed manner in the form of virtual content, which is convenient for viewing the long document. In addition, a deletion corner 314 may be displayed at the lower edge of the interaction area 202, and the user may delete content by moving it to the deletion corner 314.
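The four edge-region behaviors just described (upper, right, left, lower) can be summarized as a small dispatch table plus an edge classifier. This is an illustrative sketch with hypothetical names, assuming a top-left coordinate origin; it is not part of the claimed embodiments.

```python
# Hypothetical mapping of edge regions to the behaviors described for fig. 18.
EDGE_BEHAVIOR = {
    "top":    "show_virtual_interface_keep_controls",   # interface outside, controls stay inside
    "right":  "show_virtual_interface_suspend_app",     # whole interface outside, app suspended
    "left":   "copy_or_cut_to_virtual_display",         # pictures/documents copied or cut
    "bottom": "extend_long_document_show_delete_corner" # overflow outside + deletion corner
}

def classify_edge(x, y, w, h, margin=0.05):
    """Return which edge region (if any) a point in a w-by-h interaction area
    falls in, with the origin at the top-left corner."""
    if y < margin * h:
        return "top"
    if y > (1 - margin) * h:
        return "bottom"
    if x < margin * w:
        return "left"
    if x > (1 - margin) * w:
        return "right"
    return None
```

Dropping content at a point then reduces to `EDGE_BEHAVIOR[classify_edge(...)]` when the classifier returns a region, and to an ordinary in-area operation when it returns `None`.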
Referring to fig. 19, a block diagram of a virtual content interaction apparatus 400 provided in the present application is shown. The virtual content interaction apparatus 400 is applied to a terminal device, the terminal device is connected to an interaction device, and the interaction device includes an interaction area. The virtual content interaction apparatus 400 includes: a data receiving module 410, a first data obtaining module 420, an instruction obtaining module 430, a processing executing module 440, a second data obtaining module 450, and a content display module 460. The data receiving module 410 is configured to receive operation data sent by the interactive device, where the operation data is generated by the interactive device according to a touch operation detected in the interaction area; the first data obtaining module 420 is configured to, when it is determined according to the operation data that the touch operation includes a setting operation, obtain at least part of the content corresponding to the setting operation from the display content corresponding to the interaction area, and obtain content data of the at least part of the content, where the setting operation includes a trigger operation corresponding to an edge area of the interaction area; the instruction obtaining module 430 is configured to obtain a processing instruction matched with the edge area; the processing executing module 440 is configured to perform, according to the content data and the processing instruction, a processing operation corresponding to the edge area on the at least part of the content; the second data obtaining module 450 is configured to obtain display data matched with the triggered edge area; and the content display module 460 is configured to generate virtual content corresponding to the at least part of the content according to the content data and the display data, and display the virtual content.
In some embodiments, the edge region includes a first setting region, and the processing instruction is a first processing instruction. The process execution module 440 may be specifically configured to: and canceling the display of at least part of the content in the interactive area according to the first processing instruction. The display data comprises a first relative position relation between a first designated position of the virtual content to be displayed and the interactive equipment, and the first designated position is located outside the interactive area and corresponds to the first set area. The content display module 460 may be specifically configured to: acquiring relative spatial position information between the terminal equipment and the interactive equipment; and generating virtual extended content corresponding to at least part of the content according to the first relative position relation, the relative spatial position information and the content data, and displaying the virtual extended content.
Further, the content display module 460 generates a virtual extended content corresponding to at least a part of the content according to the first relative position relationship, the relative spatial position information, and the content data, and displays the virtual extended content, which may include: when at least part of the content comprises interactive content and non-interactive content, generating virtual extended content corresponding to the non-interactive content according to the first relative position relation, the content data and the relative spatial position information, and displaying the virtual extended content corresponding to the non-interactive content; acquiring a second relative position relation between the first set area and the interactive equipment; and generating interactive content according to the second relative position relation, the relative spatial position information and the content data, and displaying the interactive content.
In some embodiments, the virtual content interaction apparatus 400 may further include a display control module. The display control module is configured to, when an operation on the interactive content is detected according to the operation data sent by the interactive device, control the display of the non-interactive content based on the operation data.
In some embodiments, the edge region includes a second setting region, and the processing instruction is a second processing instruction. The processing execution module 440 may be specifically configured to: and when at least part of the content still has the first content displayed corresponding to the interactive area, keeping the display of the first content according to the second processing instruction. The display data comprises a third relative position relation between the first adjacent area and the interactive equipment, and the first adjacent area is an area outside the interactive area and adjacent to the second set area. The content display module 460 may be specifically configured to: acquiring relative spatial position information between the terminal equipment and the interactive equipment; and generating virtual content corresponding to the second content according to the content data, the third relative position relation and the relative spatial position information of the second content, and displaying the virtual content, wherein the second content is other contents except the first content in at least part of contents.
Further, the virtual content interaction device 400 may further include a content determination module. The content judgment module is used for judging whether the first adjacent area correspondingly displays other virtual content. If no other virtual content exists, the content display module 460 generates virtual content corresponding to at least part of the content according to the content data and the display data, and displays the virtual content.
In some embodiments, the edge area is a third setting area and the processing instruction is a third processing instruction. The process execution module 440 may be specifically configured to: and editing the part of the content according to the third processing instruction and the content data, wherein the editing operation comprises deleting, uninstalling, copying, cutting or scaling. The display data comprises a fourth relative position relation between a second adjacent area and the interactive equipment and effect content data corresponding to the editing operation, and the second adjacent area is an area outside the interactive area and adjacent to the third set area. The content display module 460 may be specifically configured to: acquiring relative spatial position information between the terminal equipment and the interactive equipment; and generating virtual effect content according to the content data, the effect content data, the fourth relative position relation and the relative spatial position information, and displaying the virtual effect content.
Referring to fig. 20, a further embodiment of the present application provides an interaction method for virtual content, which is applicable to the interaction device, where the interaction device is connected to a terminal device, the interaction device includes an interaction area, and the interaction method for virtual content may include:
step S510: and detecting the touch operation through the interaction area.
Step S520: and when the touch operation is determined to comprise the setting operation according to the touch operation detected in the interactive area, acquiring at least part of content corresponding to the setting operation from the display content corresponding to the interactive area, and acquiring content data of at least part of the content, wherein the setting operation comprises the triggering operation corresponding to the edge area of the interactive area.
Step S530: and carrying out processing operation corresponding to the edge area on at least part of the content.
Step S540: and acquiring display data matched with the triggered edge area.
Step S550: and sending the content data and the display data to the terminal equipment, wherein the content data and the display data are used for indicating the terminal equipment to generate virtual content corresponding to at least part of the content and displaying the virtual content.
Referring to fig. 1 again, an embodiment of the present application provides an interactive system 10 for virtual content, where the interactive system 10 for virtual content includes a terminal device 100 and an interactive device 200, the terminal device 100 is connected to the interactive device 200, and the interactive device 200 includes an interactive area 202, where the interactive device 200 is configured to generate operation data according to a touch operation detected by the interactive area 202, and send the operation data to the terminal device 100; the terminal device 100 is configured to receive operation data sent by the interactive device 200, and when it is determined that the touch operation includes a setting operation according to the operation data, obtain at least part of content corresponding to the setting operation from display content corresponding to the interactive region 202, and obtain content data of the at least part of content, where the setting operation includes a trigger operation corresponding to an edge region of the interactive region 202, obtain a processing instruction matching the edge region, and perform a processing operation corresponding to the edge region on the at least part of content according to the content data and the processing instruction; the terminal device is further configured to acquire display data matched with the triggered edge area, generate virtual content corresponding to at least part of the content according to the content data and the display data, and display the virtual content.
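As a non-limiting sketch of the message flow in the interactive system 10 — the interactive device generating operation data from a detected touch and the terminal device reacting to a setting operation — consider the following. All class names are hypothetical, and real operation data would carry far richer touch information than shown here.

```python
# Illustrative sketch of the system-10 message flow; hypothetical names only.

class TerminalDevice:
    """Stands in for terminal device 100: receives operation data and, on a
    setting operation, generates virtual content outside the interaction area."""
    def __init__(self):
        self.displayed_virtual = []

    def receive(self, operation_data):
        # a drag ending in the edge region counts as the setting operation
        if operation_data["kind"] == "drag_to_edge":
            self.displayed_virtual.append(("virtual_content", operation_data["pos"]))

class InteractiveDevice:
    """Stands in for interactive device 200: turns touches into operation data."""
    def __init__(self, terminal):
        self.terminal = terminal

    def on_touch(self, touch_event):
        operation_data = {"pos": touch_event["pos"], "kind": touch_event["kind"]}
        self.terminal.receive(operation_data)

terminal = TerminalDevice()
device = InteractiveDevice(terminal)
device.on_touch({"pos": (0.02, 0.5), "kind": "drag_to_edge"})
```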
To sum up, according to the solution provided in the present application, the terminal device receives operation data sent by the interactive device, where the operation data is generated by the interactive device according to a touch operation detected in the interaction area. When it is determined according to the operation data that the touch operation includes a setting operation, at least part of the content corresponding to the setting operation is obtained from the display content corresponding to the interaction area, along with the content data of that content, where the setting operation includes a trigger operation corresponding to an edge area of the interaction area. A processing instruction matched with the edge area is then obtained, and a processing operation corresponding to the edge area is performed on the at least part of the content according to the content data and the processing instruction. In this way, a processing operation corresponding to the edge area can be performed on the operated display content according to a trigger operation on the edge area of the interaction area of the interactive device, and the corresponding virtual content can be displayed. The operation is simple and convenient, and the interactivity between the user and the display content is effectively improved.
Referring to fig. 21, a block diagram of a terminal device according to an embodiment of the present application is shown. The terminal device 100 may be a terminal device capable of running an application, such as a smart phone, a tablet computer, a head-mounted display device, and the like. The terminal device 100 in the present application may include one or more of the following components: a processor 110, a memory 120, an image acquisition apparatus 130, and one or more applications, wherein the one or more applications may be stored in the memory 120 and configured to be executed by the one or more processors 110, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Processor 110 may include one or more processing cores. The processor 110 connects various parts within the entire terminal device 100 using various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory 120 and invoking data stored in the memory 120. Optionally, the processor 110 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 110 may integrate one or a combination of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing display content; and the modem handles wireless communication. It can be understood that the modem may also not be integrated into the processor 110 but instead be implemented by a separate communication chip.
The memory 120 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 120 may be used to store instructions, programs, code sets, or instruction sets. The memory 120 may include a program storage area and a data storage area, where the program storage area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, or an image playing function), instructions for implementing the foregoing method embodiments, and the like. The data storage area may also store data created by the terminal device 100 in use, and the like.
In the embodiment of the present application, the image capturing device 130 is used to capture an image of a marker. The image capturing device 130 may be an infrared camera or a color camera, and the specific type of the camera is not limited in the embodiment of the present application.
Referring to fig. 22, a block diagram of an interaction device according to an embodiment of the present disclosure is shown. The interactive device may be an electronic device such as a smart phone or a tablet computer having an interactive area, and the interactive area may include a touch pad or a touch screen. The interaction device 200 may include one or more of the following components: a processor 210, a memory 220, and one or more applications, wherein the one or more applications may be stored in the memory 220 and configured to be executed by the one or more processors 210, the one or more programs configured to perform a method as described in the aforementioned method embodiments.
Referring to fig. 23, a block diagram of a computer-readable storage medium according to an embodiment of the present application is shown. The computer readable medium 800 has stored therein a program code that can be called by a processor to execute the method described in the above method embodiments.
The computer-readable storage medium 800 may be an electronic memory such as a flash memory, an EEPROM (electrically erasable programmable read only memory), an EPROM, a hard disk, or a ROM. Alternatively, the computer-readable storage medium 800 includes a non-volatile computer-readable storage medium. The computer readable storage medium 800 has storage space for program code 810 to perform any of the method steps of the method described above. The program code can be read from or written to one or more computer program products. The program code 810 may be compressed, for example, in a suitable form.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not necessarily depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.

Claims (12)

1. An interaction method of virtual content is applied to a terminal device, the terminal device is connected with an interaction device, the interaction device comprises an interaction area, and the method comprises the following steps:
receiving operation data sent by the interactive equipment, wherein the operation data is generated by the interactive equipment according to the touch operation detected by the interactive area;
when it is determined that the touch operation includes a setting operation according to the operation data, acquiring at least part of content corresponding to the setting operation from display content corresponding to the interaction area, and acquiring content data of the at least part of content, wherein the setting operation includes a trigger operation corresponding to an edge area of the interaction area;
acquiring a processing instruction matched with the triggered edge area;
processing operation corresponding to the triggered edge area is carried out on at least part of the content according to the content data and the processing instruction;
acquiring display data matched with the triggered edge area;
and generating virtual content corresponding to at least part of the content according to the content data and the display data, and displaying the virtual content.
2. The method of claim 1, wherein the triggered edge region comprises a first set region, and the processing instruction is a first processing instruction;
the processing operation corresponding to the triggered edge area is performed on the at least part of the content according to the content data and the processing instruction, and the processing operation includes:
canceling the display of the at least part of the content in the interaction area according to the first processing instruction;
the display data comprises a first relative position relation between a first designated position of virtual content to be displayed and the interactive equipment, and the first designated position is located outside the interactive area and corresponds to the first set area;
the generating virtual content corresponding to at least part of the content according to the content data and the display data and displaying the virtual content includes:
acquiring relative spatial position information between the terminal equipment and the interactive equipment;
and generating virtual extended content corresponding to at least part of the content according to the first relative position relation, the relative spatial position information and the content data, and displaying the virtual extended content.
3. The method according to claim 2, wherein the generating virtual extended content corresponding to the at least part of content according to the first relative positional relationship, the relative spatial position information, and the content data and displaying the virtual extended content includes:
when the at least part of content comprises interactive content and non-interactive content, generating virtual extended content corresponding to the non-interactive content according to the first relative position relationship, the content data and the relative spatial position information, and displaying the virtual extended content corresponding to the non-interactive content;
acquiring a second relative position relation between the first set area and the interactive equipment;
and generating the interactive content according to the second relative position relationship, the relative spatial position information and the content data, and displaying the interactive content.
4. The method of claim 3, further comprising:
and when the operation on the interactive content is detected according to the operation data sent by the interactive equipment, controlling the display of the non-interactive content based on the operation data.
5. The method of claim 1, wherein the triggered edge region comprises a second setting region, and the processing instruction is a second processing instruction;
the processing operation corresponding to the triggered edge area is performed on the at least part of the content according to the content data and the processing instruction, and the processing operation includes:
when the at least part of the content still has the first content displayed corresponding to the interactive area, the first content is kept to be displayed according to the second processing instruction;
the display data comprises a third relative position relation between a first adjacent area and the interactive equipment, wherein the first adjacent area is an area outside the interactive area and adjacent to the second set area;
the generating virtual content corresponding to at least part of the content according to the content data and the display data and displaying the virtual content includes:
acquiring relative spatial position information between the terminal equipment and the interactive equipment;
and generating virtual content corresponding to second content according to content data of the second content, a third relative position relation and the relative spatial position information, and displaying the virtual content, wherein the second content is other content except the first content in at least part of the content.
6. The method according to claim 5, wherein before the generating of the virtual content corresponding to the at least part of the content according to the content data and the display data and the displaying of the virtual content, the method comprises:
judging whether the first adjacent area correspondingly displays other virtual contents;
and when the first adjacent area does not have other correspondingly displayed virtual contents, executing the generation of the virtual contents corresponding to at least part of the contents according to the content data and the display data, and displaying the virtual contents.
7. The method of claim 1, wherein the triggered edge region comprises a third set region, and the processing instruction is a third processing instruction;
the performing, on the at least part of the content according to the content data and the processing instruction, the processing operation corresponding to the triggered edge region comprises:
performing an editing operation on the at least part of the content according to the third processing instruction and the content data, wherein the editing operation comprises deleting, unloading, copying, cutting, or scaling;
the display data comprises a fourth relative positional relationship between a second adjacent region and the interactive device, and effect content data corresponding to the editing operation, wherein the second adjacent region is a region outside the interactive region and adjacent to the third set region; and
the generating and displaying virtual content corresponding to the at least part of the content according to the content data and the display data comprises:
acquiring relative spatial position information between the terminal device and the interactive device; and
generating and displaying virtual effect content according to the content data, the effect content data, the fourth relative positional relationship, and the relative spatial position information.
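Claim 7 can be read as a dispatch: the third processing instruction names an editing operation, and the terminal produces both the edited content and effect-content data describing the visual feedback to render at the second adjacent region. The sketch below assumes a simple table-driven dispatch; the operation names and data shapes are not prescribed by the patent.

```python
# Hypothetical sketch of claim 7: dispatch the editing operation named
# by the third processing instruction over the selected content items,
# and produce effect-content data for the terminal to animate.

EDIT_OPS = {
    "delete": lambda items: [],
    "copy":   lambda items: items + [dict(i) for i in items],
    "cut":    lambda items: [],
    "scale":  lambda items: [dict(i, scale=i.get("scale", 1.0) * 2.0)
                             for i in items],
}

def apply_edit(instruction, items):
    op = EDIT_OPS.get(instruction)
    if op is None:
        raise ValueError(f"unsupported edit: {instruction}")
    edited = op(items)
    # Effect content data: a description of the visual effect (e.g. a
    # shrink-and-fade on delete) rendered alongside the edited result.
    effect = {"op": instruction, "affected": [i["id"] for i in items]}
    return edited, effect
```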
8. An interaction method for virtual content, applied to an interactive device, wherein the interactive device is connected with a terminal device and comprises an interactive region, the method comprising:
detecting a touch operation through the interactive region;
when it is determined according to the touch operation detected in the interactive region that the touch operation comprises a setting operation, acquiring at least part of content corresponding to the setting operation from display content corresponding to the interactive region, and acquiring content data of the at least part of the content, wherein the setting operation comprises a trigger operation corresponding to an edge region of the interactive region;
performing, on the at least part of the content, a processing operation corresponding to the triggered edge region;
acquiring display data matched with the triggered edge region; and
sending the content data and the display data to the terminal device, wherein the content data and the display data are used for instructing the terminal device to generate and display virtual content corresponding to the at least part of the content.
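The device-side flow of claim 8 reduces to: classify a touch as a setting operation when it lands in an edge region, then package the selected content data with the display data matched to that edge and send both to the terminal. A minimal sketch, assuming normalized touch coordinates, a two-edge layout, and an injected send() transport (none of which the claim specifies):

```python
# Hypothetical sketch of claim 8's device-side flow: edge detection,
# then packaging content data + display data for the terminal.

EDGE_REGIONS = {"top": (0.0, 0.05), "bottom": (0.95, 1.0)}

def classify_touch(y_norm):
    """Map a normalized touch y-coordinate to an edge region, if any."""
    for name, (lo, hi) in EDGE_REGIONS.items():
        if lo <= y_norm <= hi:
            return name
    return None

def handle_touch(touch, selected_content, display_data_by_edge, send):
    edge = classify_touch(touch["y"])
    if edge is None:
        return False  # not a setting operation; handled locally
    payload = {
        "content_data": selected_content,
        "display_data": display_data_by_edge[edge],
    }
    send(payload)  # terminal generates and displays the virtual content
    return True
```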
9. An interaction apparatus for virtual content, applied to a terminal device, wherein the terminal device is connected with an interactive device, the interactive device comprises an interactive region, and the apparatus comprises: a data receiving module, a first data acquisition module, an instruction acquisition module, a processing execution module, a second data acquisition module, and a content display module, wherein
the data receiving module is configured to receive operation data sent by the interactive device, the operation data being generated by the interactive device according to a touch operation detected in the interactive region;
the first data acquisition module is configured to, when it is determined according to the operation data that the touch operation comprises a setting operation, acquire at least part of content corresponding to the setting operation from display content corresponding to the interactive region and acquire content data of the at least part of the content, wherein the setting operation comprises a trigger operation corresponding to an edge region of the interactive region;
the instruction acquisition module is configured to acquire a processing instruction matched with the triggered edge region;
the processing execution module is configured to perform, on the at least part of the content according to the content data and the processing instruction, a processing operation corresponding to the triggered edge region;
the second data acquisition module is configured to acquire display data matched with the triggered edge region; and
the content display module is configured to generate and display virtual content corresponding to the at least part of the content according to the content data and the display data.
10. An interactive system for virtual content, wherein the system comprises a terminal device and an interactive device, the terminal device is connected with the interactive device, and the interactive device comprises an interactive region, wherein
the interactive device is configured to generate operation data according to a touch operation detected in the interactive region and send the operation data to the terminal device;
the terminal device is configured to receive the operation data sent by the interactive device; when it is determined according to the operation data that the touch operation comprises a setting operation, acquire at least part of content corresponding to the setting operation from display content corresponding to the interactive region and acquire content data of the at least part of the content, wherein the setting operation comprises a trigger operation corresponding to an edge region of the interactive region; acquire a processing instruction matched with the triggered edge region; and perform, on the at least part of the content according to the content data and the processing instruction, a processing operation corresponding to the triggered edge region; and
the terminal device is further configured to acquire display data matched with the triggered edge region, generate virtual content corresponding to the at least part of the content according to the content data and the display data, and display the virtual content.
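The terminal side of claim 10 mirrors the device side: receive operation data, decide whether it encodes a setting operation, look up the processing instruction matched to the triggered edge region, and select the affected content before processing and display. The sketch below assumes an illustrative edge-to-instruction table and flat operation-data shape, neither of which the claim fixes.

```python
# Hypothetical sketch of the terminal-side flow of claim 10: map the
# triggered edge region to its matched processing instruction and
# gather the content the setting operation selected.

INSTRUCTIONS = {"first": "pin", "second": "split", "third": "edit"}

def on_operation_data(op_data, display_content):
    edge = op_data.get("triggered_edge")
    if edge is None:
        return None  # operation data does not encode a setting operation
    content = [c for c in display_content
               if c["id"] in op_data["selected_ids"]]
    instruction = INSTRUCTIONS[edge]
    return {"instruction": instruction, "content": content}
```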
11. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, the one or more applications being configured to perform the method of any one of claims 1 to 8.
12. A computer-readable storage medium having program code stored thereon, the program code being invocable by a processor to perform the method of any one of claims 1 to 8.
CN201910377227.0A 2019-05-07 2019-05-07 Virtual content interaction method, device, system, terminal equipment and storage medium Active CN111913639B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910377227.0A CN111913639B (en) 2019-05-07 2019-05-07 Virtual content interaction method, device, system, terminal equipment and storage medium


Publications (2)

Publication Number Publication Date
CN111913639A CN111913639A (en) 2020-11-10
CN111913639B true CN111913639B (en) 2022-01-28

Family

ID=73242723

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910377227.0A Active CN111913639B (en) 2019-05-07 2019-05-07 Virtual content interaction method, device, system, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111913639B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023024871A1 (en) * 2021-08-24 2023-03-02 亮风台(上海)信息科技有限公司 Interface interaction method and device
CN115942022A (en) * 2021-08-27 2023-04-07 中移(苏州)软件技术有限公司 Information preview method, related equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9395917B2 (en) * 2013-03-24 2016-07-19 Sergey Mavrody Electronic display with a virtual bezel
EP3050030B1 (en) * 2013-09-24 2020-06-24 Apple Inc. Method for representing points of interest in a view of a real environment on a mobile device and mobile device therefor
CN103543879A (en) * 2013-10-28 2014-01-29 陕西高新实业有限公司 Virtual touch screen system
CN105487673B (en) * 2016-01-04 2018-01-09 京东方科技集团股份有限公司 A kind of man-machine interactive system, method and device
CN108776544B (en) * 2018-06-04 2021-10-26 网易(杭州)网络有限公司 Interaction method and device in augmented reality, storage medium and electronic equipment

Also Published As

Publication number Publication date
CN111913639A (en) 2020-11-10

Similar Documents

Publication Publication Date Title
CN111158469A (en) Visual angle switching method and device, terminal equipment and storage medium
US9324305B2 (en) Method of synthesizing images photographed by portable terminal, machine-readable storage medium, and portable terminal
US20140300542A1 (en) Portable device and method for providing non-contact interface
CN110456907A (en) Control method, device, terminal device and the storage medium of virtual screen
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
US9691179B2 (en) Computer-readable medium, information processing apparatus, information processing system and information processing method
US10372229B2 (en) Information processing system, information processing apparatus, control method, and program
CN111766937A (en) Virtual content interaction method and device, terminal equipment and storage medium
US10621766B2 (en) Character input method and device using a background image portion as a control region
CN110737414B (en) Interactive display method, device, terminal equipment and storage medium
US10095940B2 (en) Image processing apparatus, image processing method and non-transitory computer readable medium
CN111766936A (en) Virtual content control method and device, terminal equipment and storage medium
CN111383345A (en) Virtual content display method and device, terminal equipment and storage medium
CN111913639B (en) Virtual content interaction method, device, system, terminal equipment and storage medium
KR102061867B1 (en) Apparatus for generating image and method thereof
US20150261385A1 (en) Picture signal output apparatus, picture signal output method, program, and display system
CN111913674A (en) Virtual content display method, device, system, terminal equipment and storage medium
CN111813214B (en) Virtual content processing method and device, terminal equipment and storage medium
CN111913560A (en) Virtual content display method, device, system, terminal equipment and storage medium
WO2019150430A1 (en) Information processing device
JP4871226B2 (en) Recognition device and recognition method
EP3974949A1 (en) Head-mounted display
CN111913565B (en) Virtual content control method, device, system, terminal device and storage medium
US20150339538A1 (en) Electronic controller, control method, and control program
CN111913564B (en) Virtual content control method, device, system, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: Interactive method, device, system, terminal device and storage medium of virtual content

Effective date of registration: 20221223

Granted publication date: 20220128

Pledgee: Shanghai Pudong Development Bank Limited by Share Ltd. Guangzhou branch

Pledgor: GUANGDONG VIRTUAL REALITY TECHNOLOGY Co.,Ltd.

Registration number: Y2022980028733
