WO2020135719A1 - Virtual content interaction method and system - Google Patents

Virtual content interaction method and system

Info

Publication number
WO2020135719A1
Authority
WO
WIPO (PCT)
Prior art keywords
terminal
virtual content
content
virtual
interactive device
Prior art date
Application number
PCT/CN2019/129222
Other languages
English (en)
French (fr)
Inventor
戴景文
贺杰
卢智雄
Original Assignee
Guangdong Virtual Reality Technology Co., Ltd. (广东虚拟现实科技有限公司)
Priority date
Filing date
Publication date
Priority claimed from CN201811652926.3A (granted as CN111383345B)
Priority claimed from CN201811641778.5A (granted as CN111381670B)
Priority claimed from CN201910082681.3A (granted as CN111563966B)
Application filed by Guangdong Virtual Reality Technology Co., Ltd. (广东虚拟现实科技有限公司)
Publication of WO2020135719A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer

Definitions

  • the present application relates to the field of augmented reality, and in particular, to a virtual content interaction method and system.
  • Augmented reality is a technology that increases the user's perception of the real world through information provided by computer systems. It superimposes computer-generated content objects, such as virtual objects, scenes, or prompt information, on the real scene to enhance or modify the perception of the real-world environment, or of the data representing that environment.
  • the interaction of displayed content between terminals is a key issue affecting the application of the technology.
  • the embodiments of the present application provide a method and system for interacting with virtual content.
  • An interactive system for virtual content includes a first terminal, a first interactive device, a second terminal, and a second interactive device; the first terminal is connected to at least one second terminal, the first terminal is connected to at least one first interactive device, and the second terminal is connected to at least one second interactive device. The first terminal is used to display the first virtual content according to a first relative spatial position relationship between the first terminal and the connected first interactive device, acquire second virtual content based on the first virtual content, and send content data corresponding to the second virtual content to the connected second terminal; the second terminal is used to receive the content data sent by the first terminal and display the second virtual content according to the content data and a second relative spatial position relationship between the second terminal and the connected second interactive device; the second interactive device is used to send a control instruction to the second terminal; and the second terminal is also used to control the displayed second virtual content according to the control instruction sent by the second interactive device.
  • a virtual content interaction method is applied to a first terminal, where the first terminal is connected to a first interactive device, the first terminal is also connected to at least one second terminal, and the second terminal corresponds to at least one second interactive device. The method includes: displaying first virtual content according to a first relative spatial position relationship between the first terminal and the first interactive device; acquiring second virtual content based on the first virtual content; and sending content data corresponding to the second virtual content to the second terminal, where the content data is used to instruct the second terminal to display the second virtual content and to control the displayed second virtual content according to the control instruction sent by the corresponding second interactive device.
  • a virtual content interaction method is applied to a second terminal, where the second terminal is connected to a second interactive device, the second terminal is also connected to at least one first terminal, and the first terminal corresponds to at least one first interactive device. The method includes: receiving content data corresponding to second virtual content sent by the first terminal, where the second virtual content is virtual content obtained by the first terminal according to the displayed first virtual content; determining a second relative spatial position relationship between the second terminal and the second interactive device; displaying the second virtual content according to the content data and the second relative spatial position relationship; and receiving the control instruction sent by the second interactive device and controlling the displayed second virtual content according to the control instruction.
  • a virtual content display method is applied to a terminal device, and the terminal device is connected to an interactive device.
  • the method includes: displaying virtual content according to a relative spatial position between the terminal device and the interactive device; receiving a gesture parameter sent by the interactive device, the gesture parameter being obtained by the interactive device according to the detected gesture control operation; and generating a control instruction according to the gesture parameter and controlling the displayed virtual content according to the control instruction.
  • a virtual content display method includes: identifying a target marker, and acquiring position and posture information of the target marker relative to the terminal device; acquiring virtual content to be displayed, and acquiring reflection content of the virtual content relative to a specified plane, where the specified plane is the horizontal plane on which the bottom of the virtual content is located in the virtual space; acquiring the rendering positions of the virtual content and the reflection content in the virtual space based on the position and posture information; and rendering the virtual content and the reflection content according to the rendering positions.
  • a terminal device includes: one or more processors; and a memory storing one or more computer programs which, when executed by the one or more processors, cause the processors to perform the method described above.
  • the computer-readable storage medium stores a computer program, and the computer program can be called by a processor to perform the method described above.
  • FIG. 1 is a schematic diagram of an augmented reality system according to an embodiment of the present application.
  • FIG. 2 is a frame diagram of a first terminal according to an embodiment of this application.
  • FIG. 3 is a flowchart of a virtual content interaction method according to an embodiment of the application.
  • FIG. 4 is a schematic diagram of a display scenario according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another display scenario provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another display scenario provided by an embodiment of the present application.
  • FIG. 7 is a flowchart of a virtual content interaction method according to another embodiment of this application.
  • FIG. 8 is a schematic diagram of a display scene provided by another embodiment of the present application.
  • FIG. 9 is a flowchart of a virtual content interaction method according to another embodiment of this application.
  • FIG. 10 is a flowchart of a virtual content interaction method according to still another embodiment of this application.
  • FIG. 11 is a schematic diagram of a display effect according to still another embodiment of the present application.
  • FIGS. 12a-12b are schematic diagrams of a gesture operation according to an embodiment of the present application.
  • FIGS. 13a-13b are schematic diagrams of another display effect according to an embodiment of the present application.
  • FIGS. 14a-14b are schematic diagrams of a gesture operation according to an embodiment of the present application.
  • FIGS. 15a-15b are schematic diagrams of a gesture operation according to an embodiment of the present application.
  • FIG. 16 is a flowchart of a method for virtual reflection display according to an embodiment of the application.
  • FIG. 17 is a schematic diagram of model data according to an embodiment of the present application.
  • FIG. 18 is a schematic diagram of a display effect according to an embodiment of the present application.
  • FIG. 19 is a schematic diagram of another display effect according to an embodiment of the present application.
  • FIG. 20 is a flowchart of a method for virtual reflection display in another embodiment of the present application.
  • FIG. 21 is a schematic diagram of model data according to an embodiment of the present application.
  • FIG. 22 is a flowchart of step S2030 in FIG. 20.
  • FIGS. 23a-23b are schematic diagrams of a display effect according to an embodiment of the present application.
  • FIG. 24 is a schematic diagram of another display effect according to an embodiment of the present application.
  • FIG. 25 is a flowchart of step S2040 in FIG. 20.
  • FIG. 26 is a schematic diagram of a display effect according to an embodiment of the present application.
  • FIG. 27 is a schematic diagram of a display effect according to an embodiment of the present application.
  • FIG. 28 is a schematic diagram of a display effect according to an embodiment of the present application.
  • an augmented reality system 10 provided by an embodiment of the present application includes a first terminal 100, a second terminal 200, a first interactive device 300, and a second interactive device 400, where the first terminal 100 is connected to the first interactive device 300 and the second terminal 200, the second terminal 200 is connected to the first terminal 100 and the second interactive device 400, and the number of second terminals 200 is not limited.
  • the first terminal 100 and the second terminal 200 may each be a head-mounted display device, or a smart terminal such as a mobile phone connected to an external head-mounted display device; in the latter case the terminal acts as the processing and storage unit of the head-mounted display device, which displays the virtual content.
  • the first terminal 100 includes a processor and a memory, where the memory stores one or more application programs that may be configured to be executed by the one or more processors; the one or more programs are used to execute the method described in the embodiments of the present application.
  • the processor may include one or more processing cores.
  • the processor connects the various parts of the entire first terminal 100 through various interfaces and lines, and performs the various functions of the first terminal 100 and processes data by running or executing instructions, programs, code sets, or instruction sets stored in the memory and calling data stored in the memory.
  • the processor may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA).
  • the processor may integrate one or a combination of a central processing unit (CPU), a graphics processing unit (GPU), and a modem.
  • the CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is responsible for rendering and drawing the displayed content; and the modem is used for processing wireless communication.
  • the above-mentioned modem may also not be integrated into the processor but implemented separately by a communication chip.
  • the memory may include random access memory or read-only memory.
  • the memory may include a storage program area and a storage data area, where the storage program area may store instructions for implementing an operating system, instructions for implementing at least one function, instructions for implementing various method embodiments described below, and the like.
  • the storage data area may also store data created by the first terminal 100 in use and the like.
  • the first terminal 100 further includes a camera, which is used to collect images of real objects and scene images of the target scene.
  • the camera 130 may be an infrared camera, a visible light camera, or the like.
  • the first terminal 100 further includes one or more of the following components: a display module, an optical module, a communication module, and a power supply.
  • the display module may include a display control unit for receiving a display image of the virtual content rendered by the processor, displaying and projecting the display image onto the optical module, so that the user can view the virtual content through the optical module .
  • the display module may be a display screen or a projection device for displaying images.
  • the optical module may use an off-axis optical system or a waveguide optical system. After the display image displayed by the display module passes through the optical module, it can be projected to the user's eyes. The user can see the display image projected by the display module through the optical module.
  • the user can also observe the real environment through the optical module and feel the visual effect of the superimposed virtual content and the real environment.
  • the communication module may be a module such as Bluetooth, WiFi, or ZigBee.
  • the terminal device may communicate with the interactive device through the communication module to exchange information and instructions.
  • the power supply can supply power to the entire terminal equipment to ensure the normal operation of various components of the terminal equipment.
  • the structure of the second terminal 200 may be the same as or similar to the first terminal 100.
  • the first interactive device 300 may include a control panel provided with a first marker 310 and a first touch area 320.
  • the number of the first marker 310 may be one or more.
  • the first terminal 100 may collect an image including the first marker 310 and identify the first marker 310 in the image to display corresponding virtual content, and the user may view the virtual content through the first terminal 100 Superimposed on the first interactive device 300 in the real world.
  • the first marker 310 can be any graphic with identifiable feature markers; for example, the first marker 310 can be a pattern with a topological structure, where the topology refers to the connectivity between the sub-markers and the feature points in the marker.
  • the first marker 310 is an object that can form a visible light spot or an infrared light spot to be recognized by the terminal device.
  • the first terminal 100 can recognize the first marker 310 to obtain position and posture information of the first marker 310 relative to the first terminal 100, and the first The identity information of a marker 310.
  • the first terminal 100 may track the first interactive device 300 according to the first marker 310 to display the corresponding virtual content.
  • in some embodiments, the first marker 310 may be covered with an infrared filter while an infrared camera is used on the first terminal; the first marker 310 covered by the infrared filter is irradiated with infrared light, and the infrared camera can collect the image containing the first marker 310, which reduces the influence of visible light on the image and improves the accuracy of tracking.
  • the first interactive device 300 can be held by the user, or can be fixed on the operating table for the user to operate and view the virtual content.
  • the first interactive device 300 is also provided with a touch area so that the user can control the virtual content displayed by the first terminal 100.
  • the first interactive device 300 can detect gesture operations through the touch area and send corresponding operation data to the first terminal 100.
  • the first terminal 100 may generate a control instruction according to the operation data to control the virtual content, such as controlling the scrolling, displacement, segmentation, and rotation of the virtual content.
  • the first interactive device 300 may also directly generate a control instruction according to the detected gesture operation, and send the control instruction to the first terminal 100.
  • the first terminal 100 may send the content data of the virtual content to the second terminal 200.
  • the virtual content is related to the displayed content of the first terminal 100.
  • the second terminal 200 may display the virtual content according to the received data.
  • the second interaction device 400 is substantially the same as the first interaction device 300, and includes a control panel provided with a second marker 410 and a second touch area 420.
  • the number of the second markers 410 may be one or more.
  • an embodiment of the present application provides a virtual content interaction method, which is applied to a first terminal and includes the following steps.
  • Step S310 Render the first virtual content according to the first relative spatial position relationship between the first terminal and the first interactive device.
  • the first terminal may acquire the first relative spatial position relationship with the first interactive device, and render the first virtual content according to the first relative spatial position relationship.
  • the first relative spatial position relationship may include position and posture information of the first interactive device relative to the first terminal, and the posture information is the orientation and rotation angle of the first interactive device relative to the first terminal.
  • the first terminal may recognize the first marker on the first interactive device and obtain the relative spatial position between the first terminal and the first marker, thereby obtaining the relationship between the first terminal and the first interactive device The first relative spatial position relationship.
  • the first interactive device may include an inertial measurement unit for detecting posture data of the first interactive device, and the posture data may include the angular velocity and acceleration of the first interactive device in three-dimensional space.
  • the first terminal may receive the posture data detected and sent by the first interactive device, and determine the first relative spatial position relationship between the first terminal and the first interactive device according to the posture data; the specific way of obtaining the first relative spatial position relationship is not limited in this application. A sketch of the marker-recognition approach follows.
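  • As an illustration of the marker-based branch, the following minimal sketch estimates the pose of a marker relative to the terminal's camera with OpenCV; the marker side length, corner detection, and calibration inputs are assumptions for illustration, not specified by the application.

```python
import cv2
import numpy as np

MARKER_SIDE = 0.05  # assumed marker side length in meters

# Marker corners expressed in the marker's own coordinate frame.
OBJECT_POINTS = np.array([
    [-MARKER_SIDE / 2,  MARKER_SIDE / 2, 0.0],
    [ MARKER_SIDE / 2,  MARKER_SIDE / 2, 0.0],
    [ MARKER_SIDE / 2, -MARKER_SIDE / 2, 0.0],
    [-MARKER_SIDE / 2, -MARKER_SIDE / 2, 0.0],
])

def marker_pose(image_corners, camera_matrix, dist_coeffs):
    """Return the marker's rotation (3x3) and translation relative to the camera.

    image_corners: 4x2 float array of detected corner pixels (detector not shown).
    """
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS, image_corners,
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("marker pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)  # axis-angle vector -> rotation matrix
    return rotation, tvec
```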
  • the first terminal may obtain the display position of the first virtual content according to the first relative spatial position relationship, where the display position is the overlay position in real space at which the user views the first virtual content through the first terminal, that is, the rendering coordinates of the first virtual content in the virtual space. For example, the first terminal may obtain the space coordinates of the first interactive device in real space according to the first relative spatial position relationship, convert those real-space coordinates into the space coordinates of the first interactive device in the virtual space, and obtain the rendering coordinates of the first virtual content in the virtual space, that is, its display position, from the virtual-space coordinates and the relative position between the first virtual content to be displayed and the first interactive device, as in the sketch below.
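  • A sketch of the coordinate conversion described above, under the assumption that the virtual camera coincides with the terminal and that the content's offset from the interactive device is known; all names are illustrative.

```python
import numpy as np

def rendering_coordinates(device_rotation, device_translation, content_offset):
    """Compose the interactive device's pose (relative to the terminal) with the
    content's offset in the device's frame to get rendering coordinates in the
    virtual space, whose origin is placed at the virtual camera / terminal."""
    offset = np.asarray(content_offset, dtype=np.float64).reshape(3, 1)
    t = np.asarray(device_translation, dtype=np.float64).reshape(3, 1)
    return np.asarray(device_rotation, dtype=np.float64) @ offset + t

# e.g. display the content 10 cm above the control panel of the device:
# position = rendering_coordinates(rotation, tvec, (0.0, 0.10, 0.0))
```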
  • the first terminal may render the first virtual content according to the data and display position of the first virtual content.
  • the data of the first virtual content may include model data, which is used to construct a three-dimensional model of the first virtual content, for example, including color data, vertex coordinate data, and contour data.
  • the data of the first virtual content may be stored in the first terminal, or may be obtained from other devices such as the first interactive device and the server.
  • the first terminal can display the first virtual content, and the user can view the first virtual content superimposed on the real world through the first terminal, realizing an augmented reality display of the first virtual content and improving its display effect.
  • the user can view the first interactive device 300 in the real world through the first terminal, and can view the first virtual content 20 superimposed on the first interactive device 300, such as a 3D virtual human model.
  • Step S320 Acquire second virtual content based on the first virtual content.
  • the second virtual content is the virtual content to be displayed in the second terminal.
  • the first terminal may acquire second virtual content related to the first virtual content.
  • the type of the second virtual content may be related to the type of the first virtual content, for example, the second virtual content is the same type of virtual content as the first virtual content.
  • the second virtual content may be subordinate to the first virtual content, for example, the second virtual content is part of the first virtual content.
  • the second virtual content may also be extended content of the first virtual content; for example, the first virtual content is main-level content and the second virtual content is sub-level content.
  • Step S330 Send content data corresponding to the second virtual content to the second terminal, so that the second terminal displays the second virtual content and controls the second virtual content according to the control instruction sent by the second interactive device.
  • the first terminal may send content data corresponding to the second virtual content to the second terminal.
  • the second terminal may display the second virtual content according to the received content data corresponding to the second virtual content.
  • the second terminal may establish a communication connection with the second interactive device and control the displayed second virtual content according to the different control operations detected by the second interactive device, for example, switching, moving, and size adjustment of the second virtual content.
  • when the first terminal shares the second virtual content related to the first virtual content with the second terminal, the user of the second terminal can view the virtual content shared by the first terminal and can control the shared virtual content through the second interactive device, meeting the user's control needs.
  • the first terminal 100 may display the first virtual content 20 (a 3D virtual human body model), and the user may view the 3D virtual human body model superimposed on the first interactive device through the first terminal 100; the second terminal 200 may receive the content data of the second virtual content 30 (a 3D virtual heart organ) sent by the first terminal 100 and display the 3D virtual heart organ; the second terminal 200 may also control the displayed 3D virtual heart organ according to the control instruction sent by the second interactive device 400.
  • after the second terminal receives the content data of the second virtual content sent by the first terminal, it may also display the second virtual content according to its relative spatial position with a third marker, where the third marker may be set at a location or on an actual object in the real world.
  • multiple second terminals may simultaneously display the same second virtual content based on the third marker, and different users may observe, through different second terminals, the second virtual content superimposed and displayed at the same real-world position. Since the position and posture information of each second terminal relative to the third marker differ, each second terminal can display the second virtual content from the corresponding viewing angle according to its relative spatial position with the third marker.
  • a third marker 50 is provided on the table, and a plurality of second terminals 200 (only 2 are shown) can simultaneously display the same second virtual content 30 (the 3D virtual heart organ) according to the third marker 50; the users of the plurality of second terminals 200 can all see the 3D virtual heart organ superimposed at the position of the third marker 50.
  • an embodiment of the present application provides another method for interacting with virtual content, which is applied to a first terminal.
  • the method includes the following steps.
  • Step S710 Acquire an image containing the first marker.
  • Step S720 Identify the first marker in the image and obtain the first relative spatial position relationship between the first terminal and the first interactive device.
  • the first terminal can recognize the collected image to obtain the spatial position of the first terminal relative to the first marker, and then obtain the first relative spatial position relationship between the first terminal and the first interactive device according to the positional relationship between the first marker and the first interactive device.
  • the positional relationship between the first marker and the first interactive device may be stored in the first terminal in advance.
  • Step S730 Display the first virtual content according to the first relative spatial position relationship.
  • the first terminal may recognize the collected image, obtain the identity information of the first marker, obtain the data of the first virtual content according to the identity information, and based on the data of the first virtual content and the first relative space The position relationship renders the first virtual content.
  • Step S740 When receiving the control instruction sent by the first interactive device, control the displayed first virtual content according to the control instruction sent by the first interactive device.
  • the first interactive device may be provided with a first touch area.
  • the first interaction device may obtain operation data according to the gesture operation detected in the first touch area, and the operation data may include gesture parameters.
  • the gesture parameters include at least one of the number of fingers used to perform the gesture operation, the sliding track, the pressing pressure of the gesture operation, the duration of the gesture operation, and the operation frequency of the gesture operation.
  • when the sensor of the first touch area detects the gesture operation, it can determine the number of pressed areas, that is, the number of fingers, for example, 1 or 3.
  • the sliding track includes the sliding direction and sliding distance of the gesture operation, for example, the sliding track is sliding down by 1 cm.
  • the sensor in the first touch area can detect the amount of pressure when pressed, that is, the pressing pressure of the gesture operation, for example, a pressing pressure of 3 N (newtons).
  • the first touch area can detect the duration of being touched, that is, the duration of the gesture operation, for example, a long press lasting 1.5 s (seconds).
  • the first touch area can detect the number of operations within a preset time, that is, the frequency of the gesture operation, for example, a click frequency of 3 times per second.
  • the gesture parameters may also include other parameters, for example, the touch area of the gesture operation.
  • the first interactive device may generate a control instruction corresponding to the gesture parameter according to the correspondence between the gesture parameter and the control instruction.
  • the corresponding relationship between the gesture parameter and the control instruction may be stored in the first interactive device in advance, or may be obtained by the first interactive device from the server. This correspondence can be set by the user, or it can be set by default at the factory.
  • the first terminal may control the displayed first virtual content according to the received control instruction.
  • Different control instructions have different control effects on the first virtual content. For example, when the gesture operation is a single-finger click operation, a control instruction for selecting the first virtual content can be generated; when the gesture operation is a sliding operation, a control instruction for controlling the movement or scrolling of the first virtual content can be generated.
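  • The correspondence might be realized as a simple lookup, as in the hedged sketch below; the instruction names and the two example mappings are illustrative, not taken from the application.

```python
def to_control_instruction(finger_count, slide_track=None):
    """Map detected gesture parameters to a control instruction (illustrative)."""
    if slide_track is not None:
        direction, distance_cm = slide_track
        return ("MOVE_OR_SCROLL", direction, distance_cm)  # sliding operation
    if finger_count == 1:
        return ("SELECT",)  # single-finger click selects the virtual content
    return None             # gestures without a mapping yield no instruction

print(to_control_instruction(1))                  # ('SELECT',)
print(to_control_instruction(1, ("down", 1.0)))   # ('MOVE_OR_SCROLL', 'down', 1.0)
```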
  • Step S750 Receive identity information sent by the second terminal, where the identity information includes identity information of the second marker on the second interaction device corresponding to the second terminal.
  • the first terminal may receive the identity information of the connected second terminal.
  • the identity information can be used to identify the second terminal.
  • the identity information can be the MAC address, IP address, or device number of the second terminal, or the identity information of the second marker on the second interactive device connected to the second terminal. Different second markers may be provided on different second interactive devices.
  • the second terminal may recognize the second marker of the second interactive device to obtain the identity information of the second marker.
  • the second terminal may send the identity information of the second marker to the first terminal, so as to obtain the second virtual content corresponding to the identity information.
  • Step S760 Acquire the second virtual content corresponding to the identity information from the first virtual content.
  • the second virtual content may be a part of the first virtual content, and the identity information of the second terminal corresponds to that part of the first virtual content, so the second virtual content corresponding to the identity information may be obtained from the first virtual content.
  • the first virtual content includes content A, content B, and content C. It may be determined that content A corresponds to the received identity information of the second terminal, and content A is used as the second virtual content.
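  • Continuing the content A/B/C example, the selection could be as simple as a table lookup (a sketch; the identifiers mirror the example above and FIG. 8):

```python
# Identity information of each second terminal's marker -> its share of the
# first virtual content.
SECOND_CONTENT_BY_IDENTITY = {
    "ID001": "content A",
    "ID002": "content B",
    "ID003": "content C",
}

def second_virtual_content_for(identity):
    return SECOND_CONTENT_BY_IDENTITY.get(identity)

assert second_virtual_content_for("ID001") == "content A"
```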
  • Step S770 Send the content data of the second virtual content corresponding to the identity information to the second terminal, so that the second terminal displays the second virtual content and controls the second virtual content according to the control instruction sent by the second interactive device.
  • as shown in FIG. 8, the scene includes the first terminal 100 and the second terminals 202 and 204.
  • the first terminal 100 displays the first virtual content 20 (3D virtual part model), including the first part 21 and the second part 22.
  • the user of the first terminal 100 is the sharer, and the users of the second terminals 202 and 204 are the recipients of the sharing.
  • the second terminal 202 may acquire the identity information (eg, ID001) of the second marker 412 of the second interactive device 402.
  • the second terminal 204 may acquire the identity information (eg ID002) of the second marker 414 of the second interaction device 404.
  • the second terminal 202 and the second terminal 204 may send identity information to the first terminal 100, respectively.
  • the first terminal 100 may determine the second virtual content according to the received identity information, where the identity information ID001 corresponds to the first part 21 and the identity information ID002 corresponds to the second part 22.
  • the first terminal 100 may send the content data of the first part 21 to the second terminal 202, and send the content data of the second part 22 to the second terminal 204.
  • the second terminal 202 may receive the content data of the first part 21 and display the first part 21, and may also use the second interactive device 402 to control the displayed first part 21.
  • the second terminal 204 receives the content data of the second part 22 and displays the second part 22, and the second interactive device 404 can be used to control the displayed second part 22.
  • an embodiment of the present application provides yet another virtual content interaction method, which is applied to a second terminal.
  • the method may include the following steps.
  • Step S910: Receive content data corresponding to the second virtual content sent by the first terminal, where the second virtual content is acquired by the first terminal according to the displayed first virtual content.
  • Step S920: Determine a second relative spatial position relationship between the second terminal and the second interactive device.
  • Step S930: Display the second virtual content according to the content data and the second relative spatial position relationship.
  • Step S940: Control the displayed second virtual content according to the control instruction sent by the second interactive device.
  • an embodiment of the present application provides yet another virtual content interaction method, which is applied to a second terminal.
  • the method includes the following steps.
  • Step S1010 Send identity information to the first terminal, where the identity information includes identity information of the second marker on the second interaction device corresponding to the second terminal.
  • Step S1020 Receive content data of the second virtual content sent by the first terminal, where the second virtual content is the virtual content corresponding to the identity information acquired by the first terminal from the displayed first virtual content.
  • Step S1030 Determine a second relative spatial position relationship between the second terminal and the second interactive device.
  • Step S1040 Display the second virtual content according to the content data and the second relative spatial position relationship between the second terminal and the second interactive device.
  • Step S1050 determine whether the second terminal has control authority over the displayed second virtual content.
  • the second terminal may determine whether it has control authority over the displayed second virtual content, so as to determine whether the second virtual content can be controlled when receiving the control instruction sent by the second interactive device. If the second terminal has control authority over the second virtual content, the second virtual content can be controlled according to the control instruction; if it does not have control authority, the second virtual content cannot be controlled.
  • the second terminal may generate an authority acquisition request according to the user's operation, send the authority acquisition request to the first terminal, and thereby acquire the control authority over the second virtual content.
  • the user's operation may be detected by the second interactive device, or may be obtained by the user's gesture operation collected by the camera of the second terminal, which is not limited herein.
  • the second terminal may receive permission addition information returned by the first terminal according to the permission acquisition request, where the permission addition information carries control permissions for controlling the second virtual content.
  • the second terminal may write the permission addition information into the permission information of the second terminal, so that the permission of the second terminal includes the control permission for the second virtual content.
  • the manner in which the second terminal obtains the control authority is not limited. For example, the above control authority may also be acquired from the server.
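  • One way to sketch the authority-acquisition exchange described above (message and class names are assumptions for illustration, not from the application):

```python
from dataclasses import dataclass, field

@dataclass
class PermissionRequest:       # the authority acquisition request
    terminal_id: str
    content_id: str

@dataclass
class PermissionGrant:         # the permission addition information
    content_id: str
    can_control: bool

class FirstTerminal:
    def handle(self, request: PermissionRequest) -> PermissionGrant:
        # The grant policy is application-specific; grant unconditionally here.
        return PermissionGrant(request.content_id, can_control=True)

@dataclass
class SecondTerminal:
    terminal_id: str
    permissions: set = field(default_factory=set)

    def request_control(self, first_terminal: FirstTerminal, content_id: str):
        grant = first_terminal.handle(
            PermissionRequest(self.terminal_id, content_id))
        if grant.can_control:
            # write the permission addition information into local permissions
            self.permissions.add(grant.content_id)

    def may_control(self, content_id: str) -> bool:
        return content_id in self.permissions
```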
  • Step S1060 When having control authority, control the displayed second virtual content according to the control instruction sent by the second interactive device.
  • the second terminal has control authority over the second virtual content, and can control the second virtual content in response to the received control instruction.
  • the control of the second virtual content may be performed in a two-dimensional plane or in three-dimensional space, for example, controlling the movement of the second virtual content on the two-dimensional plane, or controlling the second virtual content to flip 360° in three-dimensional space.
  • the second terminal may select, rotate, scale, move, or page-select the second virtual content according to the control instruction sent by the second interactive device.
  • the selection of the virtual content refers to selecting the second virtual content, or a part of it, so that the content enters a selected state; for example, the content may be selected through a touch operation on the second interactive device 420.
  • the touch operation may be a single-finger click operation shown in FIG. 12a.
  • the rotation of the second virtual content refers to rotating the second virtual content in a specified direction (eg, horizontal direction, vertical direction, or free direction, etc.) in a two-dimensional plane or a three-dimensional space, that is, the second virtual content Rotating along the rotation axis in the specified direction changes the attitude (direction, etc.) of the displayed second virtual content.
  • the rotation of the second virtual content may correspond to the single-finger sliding operation or the multi-finger sliding operation detected by the second interactive device, for example, to the multi-finger sliding operation shown in FIG. 12b.
  • Adjusting the scaling ratio of the second virtual content refers to enlarging or reducing the model of the second virtual content.
  • the displayed second virtual content 30 is a 3D virtual heart, and the 3D virtual heart can be reduced in a three-dimensional space through a touch operation on the second interactive device 420.
  • the 3D virtual heart can be enlarged in three dimensions.
  • the scaling of the second virtual content may correspond to the multi-finger sliding operation detected by the second interactive device.
  • as shown in FIG. 14a, when the two fingers slide in opposite directions toward the center point between them (a pinch-in), the gesture can correspond to shrinking the virtual content; as shown in FIG. 14b, when the two fingers slide in opposite directions away from the center point between them (a pinch-out), the gesture can correspond to enlarging the virtual content.
  • the movement of the second virtual content refers to moving the second virtual content or part of the content of the second virtual content in any direction.
  • the second virtual content is a virtual chess board and virtual chess pieces, which can control the movement of the virtual chess pieces in any direction on the virtual chess board.
  • the page selection of the second virtual content means that when the second virtual content includes multiple pages, the virtual content of one of the pages can be selected, which can include turning pages up/down, selecting pages corresponding to numeric options, and so on.
  • the second virtual content includes multiple levels of menu pages, and each level of menu pages includes multiple virtual options, and a menu page at any level can be selected.
  • performing page selection on the second virtual content may correspond to a single-finger sliding operation or a multi-finger sliding operation. As shown in FIG. 15a, a one-finger slide to the left may correspond to paging up; as shown in FIG. 15b, a one-finger slide to the right may correspond to paging down.
  • control of the second virtual content may also be other controls, for example, the second virtual content is divided and copied.
  • the second terminal may receive the gesture parameter sent by the second interactive device, and generate a control instruction according to the gesture parameter to control the displayed second virtual content.
  • for example, when the gesture parameter includes the number of fingers performing the gesture operation, the second terminal may determine the operation type of the gesture operation according to the number of fingers and generate a control instruction accordingly; operation types include single-finger touch and multi-finger touch.
  • when the operation type is single-finger touch, a first control instruction is generated to control the second virtual content in a two-dimensional plane; for example, the second virtual content can be selected, scrolled, moved, or page-selected on the two-dimensional plane.
  • when the operation type is multi-finger touch, a second control instruction is generated to control the second virtual content in three-dimensional space; for example, the second virtual content is controlled in three-dimensional space by rotation, scaling adjustment, movement, page selection, segmentation, or copying, as in the sketch below.
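  • A minimal dispatch on finger count, following the single-finger/multi-finger split above; the operation names and instruction structure are illustrative.

```python
def generate_control_instruction(finger_count, operation):
    """Single-finger gestures yield a first (2D-plane) instruction, multi-finger
    gestures a second (3D-space) instruction."""
    if finger_count == 1:
        return {"space": "2D", "operation": operation}  # select/scroll/move/page
    return {"space": "3D", "operation": operation}      # rotate/scale/move/split/...

print(generate_control_instruction(1, "slide"))  # {'space': '2D', 'operation': 'slide'}
print(generate_control_instruction(2, "pinch"))  # {'space': '3D', 'operation': 'pinch'}
```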
  • the gesture parameter includes the duration of the gesture operation.
  • the second terminal may determine whether the duration is greater than the time threshold, and when greater than the time threshold, generate a control instruction according to the gesture parameter.
  • the terminal device may determine whether the duration of the gesture operation is greater than the time threshold according to the received gesture parameter to determine whether the gesture operation detected by the second interactive device is a valid control operation.
  • the time threshold can be set as appropriate, for example, 0.5 s or 1 s. When the duration of the gesture operation is greater than the time threshold, the gesture operation can be determined to be a valid control operation; when the duration is equal to or less than the time threshold, the gesture operation can be determined to be an invalid control operation, and no control instruction is generated.
  • in some embodiments, when the duration of the gesture operation is not greater than the time threshold, it may be further determined whether the duration is greater than a specified threshold, where the specified threshold is less than the time threshold. When the duration of the gesture operation is greater than the specified threshold, prompt information prompting the user to re-enter the gesture operation may be output; if the duration is not greater than the specified threshold, the second terminal may ignore the received gesture parameter, as in the sketch below.
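  • The two-threshold validity check might look like the following sketch; the time threshold uses the example value above, and the specified threshold is an assumed value.

```python
TIME_THRESHOLD = 0.5       # seconds; longer gestures are valid control operations
SPECIFIED_THRESHOLD = 0.2  # seconds; assumed value, below which input is ignored

def handle_gesture_duration(duration, make_instruction):
    """Return a control instruction for valid gestures, prompt for near-misses,
    and silently ignore very short touches."""
    if duration > TIME_THRESHOLD:
        return make_instruction()             # valid: generate the instruction
    if duration > SPECIFIED_THRESHOLD:
        print("please re-enter the gesture")  # likely intended but too short
    return None                               # no control instruction generated
```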
  • when the second terminal determines from the gesture parameters that the second interactive device detected multiple different gesture operations at the same time, it regards the gesture operation as invalid and outputs prompt information prompting the user to re-enter the gesture operation, so as to guide the user toward a valid control operation.
  • the second terminal may provide a switch for turning the above prompt function on and off, to prevent prompt information from being generated when the user accidentally touches the touch area of the second interactive device.
  • the first terminal can also control the first virtual content by referring to the above-mentioned method.
  • Step S1070 Send at least one of the manipulation instruction and the content data corresponding to the second virtual content to other terminals, the manipulation instruction is used to instruct the other terminal to control the displayed content, and the content data corresponding to the second virtual content is used to instruct the other terminal to display The second virtual content.
  • the second terminal may also be connected to at least one other terminal.
  • the second terminal may send a manipulation instruction to other connected terminals to control the virtual content displayed by the other terminals.
  • the second terminal may also send the content data of the displayed second virtual content to other connected terminals to share the displayed second virtual content.
  • for example, in a conference scenario, the first terminal is the terminal of the conference presenter and the second terminals are the terminals of the conference participants.
  • the presenter's first terminal can share different second virtual content (shared content) to the corresponding second terminal according to the identity information of the participant's second terminal.
  • the second terminal of a conference participant can control the second virtual content according to its control authority over that content, and can also share the second virtual content it views with the terminals of other participants and control the second virtual content displayed by those terminals.
  • the virtual content interaction method provided in the embodiments of the present application enables the first terminal to share with the second terminal the second virtual content corresponding to the displayed first virtual content, achieving good interaction between the terminals.
  • the second terminal can control the display of the second virtual content according to the control instruction of the second interactive device to improve the interactivity.
  • an embodiment of the present application provides a method for displaying virtual reflection, which is applied to a terminal device and includes the following steps.
  • Step S1610 Identify the target marker, and obtain the position and posture information of the target marker relative to the terminal device.
  • Step S1620 Based on the virtual content to be displayed, obtain the reflection content of the virtual content relative to the specified plane, where the specified plane is the horizontal plane where the bottom of the virtual content is located in the virtual space.
  • the virtual content to be displayed may include 3D objects that can be reflected, such as virtual characters, virtual animals, art exhibits, dolls, furniture, books, mechanical models, and the like.
  • the terminal device can first obtain the model data of the virtual content to be displayed, take the horizontal plane on which the bottommost vertex of the model is located as the specified plane according to that model data, and use the principle of specular reflection to obtain the mirrored content of the virtual content relative to the specified plane; the mirrored content is the reflection content of the virtual content. That is to say, the terminal device can obtain the model data corresponding to the reflection content from the model data corresponding to the virtual content and the data of the specified plane, where the model data of the reflection content corresponds one-to-one with the model data of the virtual content.
  • as shown in FIG. 17, the designated plane is the horizontal plane 1702 on which the bottommost vertex of the virtual animal model 1701 is located, and the model 1703 of the reflection content can be obtained in the above manner.
  • the specified plane can be regarded as an auxiliary tool for obtaining reflection content.
  • the virtual content can be downloaded from the server by the terminal device or obtained from other terminals.
  • Step S1630 Acquire the rendering position of the virtual content and the reflection content in the virtual space according to the position and posture information.
  • the terminal device may obtain the spatial position coordinates of the target marker in real space according to the position and posture of the target marker relative to the terminal device, and convert those spatial position coordinates into spatial coordinates in the virtual space. According to the positional relationship between the virtual content to be displayed and the target marker in the virtual space, and the positional relationship between the virtual content and the reflection content, the spatial positions of the virtual content and the reflection content relative to the virtual camera are obtained with the virtual camera as reference, that is, the rendering positions of the virtual content and the reflection content.
  • the virtual camera exists in the virtual space and is used to simulate the human eye of the user, and the position of the virtual camera in the virtual space can be regarded as the position of the terminal device in the virtual space.
  • Step S1640 Render virtual content and reflection content according to the rendering position.
  • the terminal device can obtain model data of the virtual content and the reflection content to be displayed, and render the virtual content and the reflection content according to the model data and the rendering position.
  • Step S1650 Display virtual content and reflection content.
  • the display position of the virtual content and the reflection content corresponds to their rendering position, and can be understood as the position at which the virtual content and the reflection content seen by the user through the head-mounted display device are superimposed on the real world.
  • the user can scan the marker 1802 in real time while wearing the head-mounted display device 1801, and can see the virtual character 1803, the virtual animal 1804, the virtual character reflection 1805, and the virtual animal reflection 1806 superimposed on the real space, which improves the augmented reality display effect.
  • the display position of the reflection content may be on any plane in the real world, that is, the display position of the reflection content may overlap a certain plane in the real world.
  • the user wears a head-mounted display device and scans the marker 1901 on the ground in real time, and can see the virtual character and virtual animal 1902 displayed on the desktop in real space together with the reflection of the virtual character and the reflection of the virtual animal 1903. This gives the user the visual sense that the virtual character and virtual animal cast reflections with the desktop as the reflecting surface, thereby improving the realism of the virtual content.
  • as shown in FIG. 20, another embodiment of the present application provides a virtual reflection display method, which is applied to a terminal device and includes the following steps.
  • Step S2010 Identify the target marker, and obtain the position and posture information of the target marker relative to the terminal device.
  • the terminal device may also identify the identity information of the target marker, and obtain at least one virtual content corresponding to the identity information. Different target markers can correspond to different virtual content.
  • Step S2020 Based on the virtual content to be displayed, obtain the reflection content of the virtual content relative to the specified plane, where the specified plane is the horizontal plane where the bottom of the virtual content is located in the virtual space.
  • the designated plane may be a specular reflection surface, and the terminal device may use a specular reflection matrix to obtain the reflection content of the virtual content relative to the designated plane.
  • the specular reflection surface is a smooth plane, and when the parallel incident light hits the specular reflection surface, it can be reflected in one direction in parallel.
  • each specular reflection surface has a specular reflection matrix; from the coordinates of any point in space, the specular reflection matrix yields the point symmetric to it about the specular reflection surface.
  • by computing with the coordinates of each vertex of the model corresponding to the virtual content and the specular reflection matrix, the points symmetric to the vertex coordinates about the specular reflection surface can be obtained; taking each symmetric point as the corresponding vertex coordinate of the model of the reflection content yields the reflection content of the virtual content relative to the specular reflection surface.
  • the specular reflection matrix may be the following matrix:
  • $M = I - 2nn^{T} = \begin{pmatrix} 1-2n_x^2 & -2n_xn_y & -2n_xn_z \\ -2n_xn_y & 1-2n_y^2 & -2n_yn_z \\ -2n_xn_z & -2n_yn_z & 1-2n_z^2 \end{pmatrix}$
  • where $n_x$, $n_y$, and $n_z$ are the components of the unit normal vector $(n_x, n_y, n_z)$ of the specular reflection surface.
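  • A sketch applying such a matrix with NumPy, under the assumption that the reflection plane passes through the origin (a plane with a vertical offset can be translated to the origin, reflected, and translated back):

```python
import numpy as np

def specular_reflection_matrix(normal):
    """Householder reflection M = I - 2 n n^T about a plane through the origin
    with unit normal `normal`."""
    n = np.asarray(normal, dtype=np.float64).reshape(3, 1)
    n /= np.linalg.norm(n)              # make sure the normal is unit length
    return np.eye(3) - 2.0 * (n @ n.T)

# Reflect model vertices about a horizontal plane (unit normal along +Y):
vertices = np.array([[0.0, 8.0, 0.0], [1.0, 6.0, 2.0]])
mirrored = vertices @ specular_reflection_matrix([0.0, 1.0, 0.0]).T
print(mirrored)  # Y coordinates are negated: [[0., -8., 0.], [1., -6., 2.]]
```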
  • in other embodiments, to obtain the reflection content of the virtual content relative to the specified plane, the terminal device may establish a world coordinate system in the virtual space and flip the coordinates of each vertex of the virtual content in the vertical direction to obtain the reflection content corresponding to the virtual content.
  • the world coordinate system of the virtual space can coincide with or parallel to the ground in the real world.
  • when the specified plane is parallel to the XOZ plane of the world coordinate system, the vertices of the virtual content can be flipped on the Y axis relative to the coordinate of the specified plane to obtain the reflection content of the virtual content relative to the specified plane. Flipping the coordinates on the Y axis means that the difference between the Y coordinate of a vertex of the virtual content and the Y coordinate of the specified plane equals the difference between the flipped Y coordinate and the Y coordinate of the specified plane; that is, the distance from a vertex of the virtual content to the specified plane is the same as the distance from the corresponding flipped vertex to the specified plane, as shown in FIG. 21.
  • the Y coordinate of each vertex of the reflection content may be obtained by subtracting the Y coordinate of the corresponding vertex of the virtual content from twice the Y coordinate of the specified plane. For example, if the Y coordinate of vertex A of the virtual content in the world coordinate system is 8 and the Y coordinate of the specified plane in the world coordinate system is 5, the difference between the Y coordinate of vertex A and the Y coordinate of the specified plane is 3, and the Y coordinate of the corresponding reflection vertex is 5 − 3 = 2.
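  • written as a formula, with $y$ the Y coordinate of a vertex of the virtual content, $y_p$ the Y coordinate of the specified plane, and $y'$ the Y coordinate of the corresponding reflection vertex, the flip and the example above are:

```latex
y' = 2y_p - y, \qquad \text{e.g. } y' = 2 \times 5 - 8 = 2.
```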
  • the above method of obtaining the vertex coordinates of the reflection content from Y coordinates applies to top-to-bottom specular reflection, in which the reflection content is exactly the same as the virtual content and is simply its inversion, for example, in application scenarios that display reflection content on a horizontally placed plane such as the ground, a desktop, or a marker board.
  • Step S2030 Acquire the rendering position of the virtual content and the reflection content in the virtual space according to the position and posture information.
  • the terminal device may determine the display direction of the virtual reflection according to the light source direction of the ambient light source. Referring to FIG. 22, obtaining the rendering position of the virtual content and the reflection content in the virtual space according to the position and posture information includes:
  • Step S2031 Determine the display direction of the reflection content according to the light path direction of the light source of the environment where the target marker is located.
  • the terminal device may determine the display direction of the reflection content in the virtual space according to the light path direction of the light source in the real space.
  • the display direction of the reflection content is the direction of the displayed reflection content relative to the virtual content, for example, whether the reflection content is displayed in front of, behind, or to the side of the virtual content.
  • As one implementation, the terminal device may obtain the spatial position coordinates of the target marker in the real space and determine the spatial position of the light source relative to the target marker from the light path direction of the light source in the real space. The positional relationship between the virtual content and the light source is then determined from the positional relationship between the virtual content and the target marker in the virtual space, so that the display direction of the reflection content can be determined from the positional relationship between the virtual content and the light source. For example, when the light source is in front of a virtual character, the reflection content is displayed in front of the character; when the light source is behind the character, the reflection content is displayed behind it.
  • In other implementations, the terminal device may also determine the display direction of the reflection content according to the light path direction of a light source in the virtual space. For example, the terminal device may acquire the virtual scene in which the virtual content is located; when the virtual scene has a virtual light source, the display direction of the reflection content may be determined from the positional relationship between the virtual content and the virtual light source.
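  • One plausible way to derive this display direction, sketched under the assumption that the light source position and the virtual content position are known in the same coordinate frame (the function and its fallback are illustrative, not prescribed by the text):

```python
import numpy as np

def reflection_display_direction(light_pos: np.ndarray, content_pos: np.ndarray) -> np.ndarray:
    """Return a unit vector in the reflection plane pointing from the virtual
    content toward the light source, so the reflection is laid out on that side
    (light in front of the content -> reflection in front, behind -> behind)."""
    direction = light_pos - content_pos
    direction[1] = 0.0                    # keep the direction within the horizontal plane
    norm = np.linalg.norm(direction)
    if norm < 1e-6:                       # light directly above: fall back to a default side
        return np.array([0.0, 0.0, 1.0])
    return direction / norm
```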
  • Step S2032: Determine the first rendering position of the virtual content according to the position and posture information, and determine the second rendering position of the reflection content according to the position and posture information and the display direction.
  • The terminal device can acquire the first spatial position of the virtual content relative to the virtual camera, that is, the first rendering position of the virtual content. Then, according to the positional relationship between the virtual content and the target marker in the virtual space, the positional relationship between the virtual content and the reflection content, and the display direction of the reflection content relative to the virtual content, it can acquire the second spatial position of the reflection content relative to the virtual camera, that is, the second rendering position of the reflection content.
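  • A sketch of how the two rendering positions might be composed from the tracked marker pose using 4×4 homogeneous transforms; the transform names and the offsets they carry are assumptions for illustration:

```python
import numpy as np

def render_positions(T_marker_in_camera: np.ndarray,
                     T_content_in_marker: np.ndarray,
                     T_reflection_in_content: np.ndarray):
    """Compose poses to obtain both rendering positions relative to the virtual camera.

    T_marker_in_camera:      the marker pose recovered from the position and posture information.
    T_content_in_marker:     where the virtual content sits relative to the marker.
    T_reflection_in_content: the reflection's offset, encoding its display direction
                             relative to the virtual content.
    """
    T_content = T_marker_in_camera @ T_content_in_marker        # first rendering position
    T_reflection = T_content @ T_reflection_in_content          # second rendering position
    return T_content, T_reflection
```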
  • In some embodiments, the terminal device may also set the display brightness of the reflection content according to the brightness of the ambient light source. As one implementation, the ambient light source is a light source in the real space: the terminal device can measure the brightness of its environment through a light sensor or the like, or capture an image of the surroundings with the camera and recognize the image to obtain the ambient brightness, and change the display brightness of the reflection content in real time accordingly. As another implementation, the ambient light source is a light source in the virtual space, and the terminal device may obtain the brightness value of the virtual light source from the construction data of the virtual scene to set the display brightness of the reflection content. For example, when the light source is bright, the displayed reflection content is brighter; when the light source is dim, the displayed reflection content is darker.
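  • The real-time brightness update could be as simple as mapping a normalized light reading onto the reflection's display brightness; the lux limits below are illustrative assumptions:

```python
def reflection_brightness(ambient_lux: float,
                          min_lux: float = 10.0,
                          max_lux: float = 1000.0) -> float:
    """Map an ambient light reading to a display brightness in [0, 1] for the
    reflection content: a dim environment gives a dim reflection, a bright one
    gives a bright reflection."""
    t = (ambient_lux - min_lux) / (max_lux - min_lux)
    return max(0.0, min(1.0, t))
```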
  • As shown in FIG. 23a, the head-mounted display device collects an image of the marker 2300 through the camera, obtains the position and posture information of the marker 2300, and displays the virtual content 2310 and the reflection content 2320a. When the light in the real space is bright, the user can see through the head-mounted display device that the reflection content 2320a superimposed on the marker 2300 is quite distinct; as shown in FIG. 23b, when the light in the real space is dim, the reflection content 2320b is relatively faint.
  • Step S2040: Render the virtual content and the reflection content according to the rendering positions.
  • In some embodiments, the terminal device may construct the virtual content and the reflection content from their data and, according to their rendering positions, render the virtual content above the virtual plane and the reflection content in the virtual plane. The virtual plane is the reflection surface corresponding to the virtual content in the virtual space, that is, the plane used to mirror the virtual content and display the reflection content.
  • In some implementations, the bottom of the virtual content need not completely fit the virtual plane: the virtual content may be at a certain distance from the virtual plane while the reflection content is still rendered in the virtual plane. As shown in FIG. 24, there is a certain distance between the virtual content 2410 and the virtual plane 2430, and the reflection content 2420 is still displayed in the virtual plane 2430.
  • In some embodiments, to prevent the rendering area of the reflection content from exceeding the plane area of the virtual plane, the terminal device may limit the rendering area of the reflection content. Referring to FIG. 25, rendering the virtual content and the reflection content according to the rendering positions includes:
  • Step S2041: According to the rendering position, determine the part of the reflection content that lies within the virtual plane.
  • In some implementations, the terminal device may use the plane area of the virtual plane as the rendering area of the reflection content, and the reflection content beyond that area may be left unrendered. The terminal device may determine the part of the reflection content within the virtual plane according to the rendering position of the reflection content. As one implementation, the reflection content inside the region boundary line of the virtual plane may be clipped out according to that boundary line; this is the part of the reflection content located in the virtual plane.
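  • A sketch of the clipping step, assuming the virtual plane's region is an axis-aligned rectangle in the XZ plane (the text only requires some region boundary line; a production renderer would more likely clip triangles or use a stencil mask):

```python
import numpy as np

def clip_reflection_to_plane(vertices: np.ndarray,
                             x_range: tuple, z_range: tuple) -> np.ndarray:
    """Keep only the reflection vertices whose footprint lies inside the
    virtual plane's rectangular region; everything beyond the boundary line
    is simply not rendered."""
    x, z = vertices[:, 0], vertices[:, 2]
    inside = (x >= x_range[0]) & (x <= x_range[1]) & \
             (z >= z_range[0]) & (z <= z_range[1])
    return vertices[inside]
```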
  • In some embodiments, the terminal device may limit the rendering area of the reflection content according to the size of the plane area of a physical plane in the real space. For example, the terminal device can collect an image of the physical plane in real time, recognize the image, calculate the size of the plane area of the physical plane, and set the size of the plane area of the virtual plane accordingly, so that only the part of the reflection content within the virtual plane is rendered. By matching the rendering area of the reflection content to the physical plane, the user sees through the head-mounted display device that the reflection content is superimposed only on the physical plane.
  • Step S2042: Render the virtual content and the partial reflection content according to the rendering positions.
  • The terminal device can render the virtual content above the virtual plane and the partial reflection content in the virtual plane according to their rendering positions, and display the rendered virtual content and partial reflection content. As shown in FIG. 26, the virtual content 310 is displayed superimposed above the plane, and the partial reflection content 340 is rendered in the virtual plane.
  • Step S2050: Display the virtual content and the reflection content.
  • In some embodiments, the virtual plane coincides with a physical plane in the real space. Through the worn head-mounted display device, the user sees the augmented reality effect of the virtual content superimposed above the physical plane and the reflection content superimposed within it, which gives the user the illusion that the physical plane produces a virtual mirror image of the virtual content and enhances the sense of realism.
  • In some implementations, when the target marker is set on a marker board, the virtual plane may coincide with the marker board. In one embodiment, the marker board may include a filter disposed over the target marker; the head-mounted display device can collect an image of the marker board through an infrared camera, superimpose the virtual content on the marker board and superimpose the corresponding reflection content in the marker board, giving the user the illusion that the marker board produces a mirror image of the virtual content. As shown in FIG. 27, a user wearing a head-mounted display device collects the marker 2702 on the marker board 2701 in real time and can see the virtual content 2703 superimposed above the marker board 2701 in the real space, with the reflection content 2704 superimposed in the marker board 2701.
  • As another implementation, the virtual plane may also coincide with another physical plane in the real space (such as a desktop or the ground). As shown in FIG. 28, the virtual plane 2801 coincides with a desktop in the real space: the user wearing a head-mounted display device scans the marker 2802 on the ground in real time and can see the virtual content 2803 superimposed on the desktop, with the reflection content 2804 superimposed in the desktop.
  • The position of the virtual plane in the real space can be set in advance or obtained by real-time scanning. For example, the terminal device may scan the physical planes near the marker, select one of them to coincide with the virtual plane, obtain the spatial position of the virtual plane in the virtual space according to the position and posture information of the selected physical plane relative to the marker, and render and display the virtual plane according to that spatial position, so that the virtual plane is superimposed on and coincides with the selected physical plane.
  • The virtual plane itself may be hidden from display, to enhance the sense of realism of the virtual content and the reflection content being superimposed on the real space.
  • In some implementations, the terminal device may perform corresponding image processing on the reflection content according to material reflection parameters, for example the material reflection parameters of the virtual plane or those of the physical plane in the environment where the target marker is located. Taking the virtual plane as an example, the material reflection parameters include the reflectance and the material texture map: the reflectance is the ratio of the rendering brightness of the reflection content to that of the virtual content, and the material texture map is the texture pattern of the plane. The reflectance takes a value between 0 and 1 and can be set reasonably according to the material of the virtual plane; for example, it can be set to 1 when the virtual plane is a mirror material and to 0.85 when it is a water-surface material. The larger the reflectance, the clearer the reflection content; the smaller the value, the more blurred the reflection content.
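  • The reflectance then acts as a simple brightness ratio between the reflection and the content it mirrors; a sketch (the mirror and water values are from the text, the wood value is an assumption):

```python
import numpy as np

MATERIAL_REFLECTANCE = {"mirror": 1.0, "water": 0.85, "wood": 0.3}  # wood value assumed

def shade_reflection(content_rgb: np.ndarray, material: str) -> np.ndarray:
    """Scale the virtual content's rendered color by the material's reflectance,
    so the reflection's rendering brightness is reflectance times the content's."""
    return content_rgb * MATERIAL_REFLECTANCE[material]
```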
  • The image processing may include at least one of adjusting the transparency of the displayed reflection content to a specified transparency and adjusting its color to a specified color, to enhance the realism of the reflection content. The specified transparency takes a value between 0 and 1 and can be set reasonably according to the material reflection parameters of the virtual plane: for example, when the virtual plane is made of wood, the specified transparency can be set to 0.5 (50% transparent) so that the displayed reflection content looks blurred; when the virtual plane is a mirror material, it can be set to 1 (completely opaque) so that the displayed reflection content is very clear. The specified color can likewise be set according to the material reflection parameters of the virtual plane: when the virtual plane is made of wood, a color similar to wood such as light brown; when it is made of metal, a color similar to metal such as silvery white, to enhance the realism of the reflection content.
  • In some embodiments, the reflection content may be rendered according to the material texture map of the virtual plane or of the physical plane in the real space, so that the reflection content and the plane have corresponding material textures.
  • In some embodiments, the terminal device may apply gradual attenuation to the reflection content: by setting the height at which the attenuation starts and the height at which the reflection content ends (completely disappears), the reflectance of the reflection content decreases gradually until it attenuates to zero. For example, the start height of the attenuation can be set to zero, that is, the attenuation starts from the top of the reflection content, where it is closest to the virtual content; the end height can be set to 150 pixels, at which point the reflectance has attenuated to zero and the reflection disappears completely, achieving a gradient effect for the reflection content.
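  • A sketch of the gradual attenuation using the heights given in the text (full strength at the top of the reflection, fully gone 150 pixels below); the linear falloff is an assumption:

```python
def attenuated_reflectance(base_reflectance: float,
                           depth_px: float,
                           start_px: float = 0.0,
                           end_px: float = 150.0) -> float:
    """Fade the reflectance with distance below the top of the reflection
    content: unattenuated at start_px, zero at end_px and beyond."""
    if depth_px <= start_px:
        return base_reflectance
    if depth_px >= end_px:
        return 0.0
    t = (depth_px - start_px) / (end_px - start_px)
    return base_reflectance * (1.0 - t)
```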
  • In some embodiments, to prevent the edge of the displayed reflection content from fitting imperfectly against the virtual plane or the physical plane, the edge of the reflection content may be blurred. For example, the color of the contour edge area of the virtual plane is adjusted to a preset color whose brightness value in each color component is lower than a first threshold, the first threshold being the maximum brightness value of each color component at which content can no longer be normally superimposed and displayed in the real space through the head-mounted display device. With the contour edge area adjusted to the preset color, the user cannot observe the reflection content in that area through the head-mounted display device, which produces the edge-blur effect. For example, black content shown on the display screen of the head-mounted display device is not reflected by the lenses into the human eye, so the user cannot see it; the first threshold can therefore be set to a brightness of 13, that is 95% black, or to a brightness of 0, that is pure black.
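  • On an additive see-through display, darkening toward black is equivalent to fading out, so the edge blur can be sketched as clamping the edge area's color components under the first threshold (13 is the 95%-black value from the text):

```python
import numpy as np

def darken_plane_edge(edge_rgb: np.ndarray, threshold: int = 13) -> np.ndarray:
    """Clamp every color component in the contour edge area of the virtual
    plane below the first threshold; at or below this brightness the optics
    of the head-mounted display no longer deliver the content to the eye, so
    the reflection appears to fade out at the plane's edge."""
    return np.minimum(edge_rgb, threshold).astype(np.uint8)
```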
  • In some embodiments, when a change in the display position and posture information of the virtual content relative to the virtual plane is detected, the displayed reflection content is updated according to the changed display position and posture information. As one approach, the user can change the display position of the virtual content through a controller connected to the terminal device. The vertical projection point of the virtual content on the virtual plane can be determined; when the relative height between the virtual content and the virtual plane is detected to increase, the rendering position of the reflection content on the virtual plane is adjusted in the direction away from the projection point, and when the relative height is detected to decrease, it is adjusted in the direction toward the projection point.
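  • A sketch of that height-driven adjustment, under the assumption that the reflection is moved radially with respect to the content's vertical projection point on the plane; the gain factor is an illustrative tuning parameter:

```python
import numpy as np

def adjust_reflection_offset(projection_xz: np.ndarray,
                             reflection_xz: np.ndarray,
                             content_height: float,
                             gain: float = 1.0) -> np.ndarray:
    """Re-place the reflection on the plane along the ray from the vertical
    projection point: a larger content height pushes it farther away from the
    projection point, a smaller height pulls it back toward the point."""
    direction = reflection_xz - projection_xz
    norm = np.linalg.norm(direction)
    if norm < 1e-6:                       # degenerate start: pick a default direction
        direction, norm = np.array([0.0, 1.0]), 1.0
    return projection_xz + direction / norm * (gain * content_height)
```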
  • A change in the position or posture of the target marker relative to the terminal device may be a change in the position or posture of at least one of the target marker and the terminal device. According to the changed relative position or posture, the display angle, display size, display position and other states of the virtual content and the reflection content can be changed to update the display. The display angle, size and position of the reflection content can be re-determined according to the relative position and posture between the viewing angle of the terminal device and the target marker; that is, when the user wears the head-mounted display device and scans the target marker from different viewing angles, the reflection content viewed by the user presents correspondingly different perspectives.
  • In one embodiment, the present application provides a computer-readable storage medium that stores program code, and the program code can be called by a processor to execute the methods described in the foregoing method embodiments. The computer-readable storage medium may be an electronic memory such as a flash memory, an EEPROM, an EPROM, a hard disk or a ROM, and optionally includes a non-volatile computer-readable medium. The computer-readable storage medium has storage space for program code that performs any of the method steps described above; such program code can be read from or written into one or more computer program products and may, for example, be compressed in an appropriate form.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)


Claims (26)

  1. An interactive system for virtual content, wherein the interactive system comprises a first terminal, a first interactive device, a second terminal and a second interactive device, the first terminal being connected to at least one second terminal, the first terminal being connected to at least one first interactive device, and the second terminal being connected to at least one second interactive device, wherein
    the first terminal is configured to display first virtual content according to a first relative spatial position relationship with the connected first interactive device, acquire second virtual content based on the first virtual content, and send content data corresponding to the second virtual content to the connected second terminal;
    the second terminal is configured to receive the content data sent by the first terminal and display the second virtual content according to the content data and a second relative spatial position relationship between the second terminal and the connected second interactive device;
    the second interactive device is configured to send control instructions to the second terminal; and
    the second terminal is further configured to control the displayed second virtual content according to the control instructions sent by the second interactive device.
  2. The system according to claim 1, wherein the first interactive device is provided with a first marker;
    the first terminal is further configured to acquire an image containing the first marker, recognize the first marker in the image, acquire the first relative spatial position relationship between the first terminal and the first interactive device, and display the first virtual content according to the first relative spatial position relationship;
    the first interactive device is further configured to send control instructions to the first terminal; and
    the first terminal is further configured to control the displayed first virtual content according to the control instructions sent by the first interactive device.
  3. The system according to claim 1, wherein the second interactive device is provided with a second marker;
    the second terminal is further configured to send identity information to the first terminal, the identity information including the identity information of the second marker of the second interactive device connected to the second terminal; and
    the first terminal is further configured to receive the identity information sent by the second terminal, acquire from the first virtual content the second virtual content corresponding to the identity information, and send the content data of the second virtual content corresponding to the identity information to the second terminal.
  4. The system according to claim 1, wherein the second terminal is further configured to determine whether it has control permission for the displayed second virtual content and, when it has the control permission, control the displayed second virtual content according to the control instructions sent by the second interactive device.
  5. The system according to claim 4, wherein the second terminal is further configured to send a permission acquisition request to the first terminal when it does not have the control permission;
    the first terminal is further configured to receive the permission acquisition request sent by the second terminal and send permission-adding information to the second terminal according to the request, the permission-adding information including the control permission for the second virtual content; and
    the second terminal is further configured to receive the permission-adding information sent by the first terminal and write the permission-adding information into its permission information.
  6. The system according to claim 1, wherein the second terminal is further configured to send at least one of a manipulation instruction and the content data corresponding to the second virtual content to other second terminals, the manipulation instruction instructing the other second terminals to control the content they display, and the content data corresponding to the second virtual content instructing the other second terminals to display the second virtual content.
  7. The system according to claim 1, wherein the first interactive device is further configured to acquire gesture parameters from a detected gesture control operation and send the gesture parameters to the first terminal; and
    the first terminal is further configured to generate control instructions from the gesture parameters sent by the first interactive device and control the displayed first virtual content according to the generated control instructions.
  8. The system according to claim 7, wherein the gesture parameters include at least the number of fingers performing the gesture control operation;
    the first terminal is further configured to determine the operation type of the gesture control operation from the number of fingers; when the operation type is a single-finger touch type, generate a first control instruction and control the first virtual content in a two-dimensional plane according to the first control instruction; and when the operation type is a multi-finger touch type, generate a second control instruction and control the first virtual content in three-dimensional space according to the second control instruction.
  9. The system according to claim 8, wherein the first terminal is further configured to control the first virtual content, according to the first control instruction, to perform at least one of selection, scrolling, movement and page selection in a two-dimensional plane, and to control the first virtual content, according to the second control instruction, to perform at least one of rotation, scaling, movement, page selection, splitting and copying in three-dimensional space.
  10. The system according to claim 7, wherein the gesture parameters include the duration of the gesture control operation; and
    the first terminal is further configured to generate the control instruction from the gesture parameters when the duration of the gesture control operation exceeds a time threshold.
  11. A virtual content interaction method, applied to a first terminal connected to a first interactive device, the first terminal being further connected to at least one second terminal and the second terminal corresponding to at least one second interactive device, the method comprising:
    displaying first virtual content according to a first relative spatial position relationship between the first terminal and the first interactive device;
    acquiring second virtual content based on the first virtual content; and
    sending content data corresponding to the second virtual content to the second terminal, the content data instructing the second terminal to display the second virtual content and to control the displayed second virtual content according to control instructions sent by the corresponding second interactive device.
  12. The method according to claim 11, wherein the first interactive device is provided with a first marker;
    displaying the first virtual content according to the first relative spatial position relationship between the first terminal and the first interactive device comprises:
    acquiring an image containing the first marker;
    recognizing the first marker in the image and acquiring the first relative spatial position relationship between the first terminal and the first interactive device; and
    displaying the first virtual content according to the first relative spatial position relationship;
    and after displaying the first virtual content, the method further comprises:
    when a control instruction sent by the first interactive device is received, controlling the displayed first virtual content according to the control instruction sent by the first interactive device.
  13. The method according to claim 11, wherein the second interactive device is provided with a second marker;
    acquiring the second virtual content based on the first virtual content comprises:
    receiving identity information sent by the second terminal, the identity information including the identity information of the second marker on the second interactive device corresponding to the second terminal; and
    acquiring from the first virtual content the second virtual content corresponding to the identity information;
    and sending the content data corresponding to the second virtual content to the second terminal comprises:
    sending the content data of the second virtual content corresponding to the identity information to the second terminal.
  14. The method according to claim 11, further comprising, after displaying the first virtual content:
    receiving gesture parameters sent by the first interactive device, the gesture parameters being obtained by the first interactive device from a detected gesture control operation; and
    generating a control instruction from the gesture parameters and controlling the displayed first virtual content according to the control instruction.
  15. The method according to claim 14, wherein the gesture parameters include at least the number of fingers performing the gesture control operation, and generating the control instruction from the gesture parameters and controlling the displayed first virtual content according to the control instruction comprises:
    determining the operation type of the gesture control operation from the number of fingers;
    when the operation type is a single-finger touch type, generating a first control instruction and controlling the first virtual content in a two-dimensional plane according to the first control instruction; and
    when the operation type is a multi-finger touch type, generating a second control instruction and controlling the first virtual content in three-dimensional space according to the second control instruction.
  16. A virtual content interaction method, applied to a second terminal connected to a second interactive device, the second terminal being further connected to at least one first terminal and the first terminal corresponding to at least one first interactive device, the method comprising:
    receiving content data corresponding to second virtual content sent by the first terminal, the second virtual content being virtual content acquired by the first terminal according to displayed first virtual content;
    determining a second relative spatial position relationship between the second terminal and the second interactive device;
    displaying the second virtual content according to the content data and the second relative spatial position relationship; and
    receiving a control instruction sent by the second interactive device and controlling the displayed second virtual content according to the control instruction.
  17. The method according to claim 16, further comprising, before controlling the displayed second virtual content according to the control instruction sent by the second interactive device:
    determining whether the second terminal has control permission for the displayed second virtual content; and
    when the control permission is available, performing the step of controlling the displayed second virtual content according to the control instruction sent by the second interactive device.
  18. The method according to claim 17, further comprising:
    when the control permission is not available, sending a permission acquisition request to the first terminal, the permission acquisition request requesting from the first terminal the control permission for the displayed virtual content.
  19. The method according to claim 16, wherein the second terminal is further connected to at least one other second terminal;
    after controlling the displayed second virtual content according to the control instruction sent by the second interactive device, the method further comprises:
    sending a manipulation instruction and/or the content data corresponding to the second virtual content to the other second terminal, the manipulation instruction instructing the other terminal to control the content it displays, and the content data corresponding to the second virtual content instructing the other terminal to display the second virtual content.
  20. The method according to claim 16, wherein the second interactive device is provided with a second marker, and before receiving the content data corresponding to the second virtual content sent by the first terminal, the method further comprises:
    sending identity information to the first terminal, the identity information including the identity information of the second marker on the second interactive device corresponding to the second terminal;
    and receiving the content data corresponding to the second virtual content sent by the first terminal comprises:
    receiving content data of the second virtual content sent by the first terminal, the second virtual content being the virtual content corresponding to the identity information that the first terminal acquired from the displayed first virtual content.
  21. A virtual content display method, applied to a terminal device connected to an interactive device, the method comprising:
    displaying virtual content according to the relative spatial position between the terminal device and the interactive device;
    receiving gesture parameters sent by the interactive device, the gesture parameters being obtained by the interactive device from a detected gesture control operation; and
    generating a control instruction from the gesture parameters and controlling the displayed virtual content according to the control instruction.
  22. A terminal device, comprising:
    one or more processors; and
    a memory storing one or more computer programs configured to be executed by the one or more processors, so that the processors perform the following steps:
    displaying virtual content according to the relative spatial position between the terminal device and the interactive device;
    receiving gesture parameters sent by the interactive device, the gesture parameters being obtained by the interactive device from a detected gesture control operation; and
    generating a control instruction from the gesture parameters and controlling the displayed virtual content according to the control instruction.
  23. A computer-readable medium storing a computer program that can be called by a processor to perform the following steps:
    displaying virtual content according to the relative spatial position between the terminal device and the interactive device;
    receiving gesture parameters sent by the interactive device, the gesture parameters being obtained by the interactive device from a detected gesture control operation; and
    generating a control instruction from the gesture parameters and controlling the displayed virtual content according to the control instruction.
  24. A virtual content display method, comprising:
    recognizing a target marker and acquiring position and posture information of the target marker relative to the terminal device;
    acquiring virtual content to be displayed and acquiring reflection content of the virtual content relative to a specified plane, the specified plane being the horizontal plane at the bottom of the virtual content in the virtual space;
    acquiring rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information; and
    rendering the virtual content and the reflection content according to the rendering positions.
  25. A terminal device, comprising:
    one or more processors; and
    a memory storing one or more computer programs configured to be executed by the one or more processors, so that the processors perform the following steps:
    recognizing a target marker and acquiring position and posture information of the target marker relative to the terminal device;
    acquiring virtual content to be displayed and acquiring reflection content of the virtual content relative to a specified plane, the specified plane being the horizontal plane at the bottom of the virtual content in the virtual space;
    acquiring rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information; and
    rendering the virtual content and the reflection content according to the rendering positions.
  26. A computer-readable medium storing a computer program that can be called by a processor to perform the following steps:
    recognizing a target marker and acquiring position and posture information of the target marker relative to the terminal device;
    acquiring virtual content to be displayed and acquiring reflection content of the virtual content relative to a specified plane, the specified plane being the horizontal plane at the bottom of the virtual content in the virtual space;
    acquiring rendering positions of the virtual content and the reflection content in the virtual space according to the position and posture information; and
    rendering the virtual content and the reflection content according to the rendering positions.
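As a hedged illustration of the gesture dispatch recited in claims 8 to 10 and 15 above (the function, the instruction names and the default threshold are assumptions; the claims do not prescribe an implementation):

```python
def to_control_instruction(finger_count: int, duration_s: float,
                           time_threshold_s: float = 0.5):
    """Turn the gesture parameters reported by the interactive device into a
    control instruction: touches at or under the time threshold are ignored as
    invalid, a single finger maps to two-dimensional control, and several
    fingers map to three-dimensional control."""
    if duration_s <= time_threshold_s:
        return None                        # not a valid control operation
    if finger_count == 1:
        return "CONTROL_2D"                # select / scroll / move / page-select
    return "CONTROL_3D"                    # rotate / scale / move / page-select / split / copy
```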
PCT/CN2019/129222 2018-12-29 2019-12-27 Virtual content interaction method and system WO2020135719A1 (zh)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN201811652926.3A CN111383345B (zh) 2018-12-29 2018-12-29 Virtual content display method and apparatus, terminal device, and storage medium
CN201811641778.5 2018-12-29
CN201811652926.3 2018-12-29
CN201811641778.5A CN111381670B (zh) 2018-12-29 2018-12-29 Virtual content interaction method, apparatus and system, terminal device, and storage medium
CN201910082681.3 2019-01-28
CN201910082681.3A CN111563966B (zh) 2019-01-28 2019-01-28 Virtual content display method and apparatus, terminal device, and storage medium

Publications (1)

Publication Number Publication Date
WO2020135719A1 true WO2020135719A1 (zh) 2020-07-02

Family

ID=71127767

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/129222 WO2020135719A1 (zh) 2018-12-29 2019-12-27 Virtual content interaction method and system

Country Status (1)

Country Link
WO (1) WO2020135719A1 (zh)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504632A (zh) * 2014-12-26 2015-04-08 重庆机电职业技术学院 Method for constructing a virtual simulation teaching and training platform
CN107024995A (zh) * 2017-06-05 2017-08-08 河北玛雅影视有限公司 Multi-user virtual reality interaction system and control method thereof
CN107340853A (zh) * 2016-11-18 2017-11-10 北京理工大学 Telepresence interaction method and system based on virtual reality and gesture recognition
CN107667331A (zh) * 2015-05-28 2018-02-06 微软技术许可有限责任公司 Shared haptic interaction and user safety in shared-space multi-user immersive virtual reality


Similar Documents

Publication Publication Date Title
US20220084279A1 (en) Methods for manipulating objects in an environment
US11714592B2 (en) Gaze-based user interactions
US11557102B2 (en) Methods for manipulating objects in an environment
US8643569B2 (en) Tools for use within a three dimensional scene
CN114721470A (zh) 用于与三维环境进行交互的设备、方法和图形用户界面
CN111766937B (zh) Virtual content interaction method and apparatus, terminal device, and storage medium
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
CN110456907A (zh) Virtual picture control method and apparatus, terminal device, and storage medium
US9979946B2 (en) I/O device, I/O program, and I/O method
US9933853B2 (en) Display control device, display control program, and display control method
US20160004320A1 (en) Tracking display system, tracking display program, tracking display method, wearable device using these, tracking display program for wearable device, and manipulation method for wearable device
US11714540B2 (en) Remote touch detection enabled by peripheral device
US9906778B2 (en) Calibration device, calibration program, and calibration method
US20220317776A1 (en) Methods for manipulating objects in an environment
CN102460373A (zh) 表面计算机用户交互
CN109992175B (zh) 用于模拟盲人感受的物体显示方法、装置及存储介质
JP2012252627A (ja) プログラム、情報記憶媒体及び画像生成システム
CN111383345B (zh) Virtual content display method and apparatus, terminal device, and storage medium
CN111083464A (zh) Display and delivery system for virtual content
US10171800B2 (en) Input/output device, input/output program, and input/output method that provide visual recognition of object to add a sense of distance
CN111766936A (zh) Virtual content control method and apparatus, terminal device, and storage medium
US20240062489A1 (en) Indicating a Position of an Occluded Physical Object
CN111563966A (zh) Virtual content display method and apparatus, terminal device, and storage medium
CN118215903A (zh) Devices, methods and graphical user interfaces for presenting virtual objects in virtual environments

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19904244

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16/11/2021)

122 Ep: pct application non-entry in european phase

Ref document number: 19904244

Country of ref document: EP

Kind code of ref document: A1