CN111083463A - Virtual content display method and device, terminal equipment and display system

Info

Publication number: CN111083463A
Application number: CN201811226280.2A
Authority: CN (China)
Other languages: Chinese (zh)
Inventors: 戴景文, 贺杰
Original Assignee: Guangdong Virtual Reality Technology Co Ltd
Current Assignee: Guangdong Virtual Reality Technology Co Ltd
Legal status: Pending
Prior art keywords: virtual content, marker, virtual, display, terminal device
Application filed by Guangdong Virtual Reality Technology Co Ltd
Priority to CN201811496843.XA (published as CN111083464A)
Priority to CN201811226280.2A (published as CN111083463A)
Priority to PCT/CN2019/111790 (published as WO2020078443A1)
Priority to US16/731,055 (published as US11244511B2)
Publication of CN111083463A


Classifications

    • GPHYSICS; G06 COMPUTING, CALCULATING OR COUNTING; G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/0482 Interaction techniques based on graphical user interfaces [GUI]; interaction with lists of selectable items, e.g. menus
    • G06F3/04883 Interaction techniques based on graphical user interfaces [GUI] using a touch-screen or digitiser; inputting data by handwriting, e.g. gesture or text

Abstract

The application discloses a virtual content display method, a virtual content display apparatus, a terminal device, and a virtual content display system. The method includes: acquiring a first image containing a first marker, where the first marker is a marker arranged on a first interaction device; determining a display area corresponding to the first interaction device according to the first marker in the first image, and displaying first virtual content, where the display position of the first virtual content corresponds to the display area; acquiring a selection instruction; and selecting second virtual content from the first virtual content according to the selection instruction, and displaying the second virtual content. By selecting part of the virtual content displayed on the interactive device and displaying the selected part in another area, the interactivity of AR/VR can be improved.

Description

Virtual content display method and device, terminal equipment and display system
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for displaying virtual content, a terminal device, and a display system.
Background
With the development of science and technology, machine intelligence and information intelligence are increasingly widespread, and terminal devices related to Virtual Reality (VR) and Augmented Reality (AR) are gradually entering people's daily lives. Augmented reality technology constructs virtual content that does not exist in the real environment by means of computer graphics and visualization technology, accurately fuses the virtual content into the real environment through image recognition and positioning technology, merges the virtual content and the real environment into a whole through a display device, and brings a realistic sensory experience to the user. In conventional technology, the display of augmented reality, mixed reality, and the like is performed by superimposing virtual content on an image of a real scene, and interactive control of the virtual content is an important research direction in augmented reality and mixed reality.
Disclosure of Invention
The application provides a method, an apparatus, a terminal device, and a display system for displaying virtual content. The interactivity of AR/VR can be improved by selecting part of the virtual content displayed on the interactive device and displaying the selected part in another area.
In a first aspect, an embodiment of the present application provides a method for displaying virtual content, where the method includes: acquiring a first image containing a first marker, where the first marker is a marker arranged on a first interaction device; determining a display area corresponding to the first interaction device according to the first marker in the first image, and displaying first virtual content, where the display position of the first virtual content corresponds to the display area; acquiring a selection instruction; and selecting second virtual content from the first virtual content according to the selection instruction, and displaying the second virtual content.
In a second aspect, an embodiment of the present application provides an apparatus for displaying virtual content, the apparatus including: an acquisition module, configured to acquire a first image containing a first marker, where the first marker is a marker arranged on a first interaction device; a first display module, configured to determine a display area corresponding to the first interaction device according to the first marker in the first image and to display first virtual content, where the display position of the first virtual content corresponds to the display area; a selection module, configured to acquire a selection instruction; and a second display module, configured to select second virtual content from the first virtual content according to the selection instruction and to display the second virtual content.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a display, a memory, and a processor, where the display and the memory are coupled to the processor, and the memory stores instructions, and when the instructions are executed by the processor, the processor performs the method of the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, in which program code is stored, and the program code can be called by a processor to execute the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a display system of virtual content, including: a first interaction device comprising a control panel, where the control panel is provided with a first marker and a display area; a second interaction device provided with a second marker; and a terminal device, configured to acquire a first image containing the first marker, display first virtual content according to the first marker in the first image, where the display position of the first virtual content corresponds to the display area, acquire a second image containing the second marker, acquire a selection instruction according to the second marker in the second image, select second virtual content from the first virtual content according to the selection instruction, and display the second virtual content.
In the method, the apparatus, the terminal device, and the system for displaying virtual content provided by the embodiments of the application, a first image containing a first marker is collected, where the first marker is a marker arranged on a first interaction device; a display area corresponding to the first interaction device is then determined according to the first marker in the first image, and first virtual content is displayed, where the display position of the first virtual content corresponds to the display area; a selection instruction is acquired; and finally, second virtual content is selected from the first virtual content according to the selection instruction and displayed. According to the embodiments of the application, the interactivity of AR/VR can be improved by selecting part of the virtual content displayed on the interactive device and displaying the selected part in another area.
Drawings
FIG. 1 is a schematic diagram illustrating an application scenario of a virtual content display system according to an embodiment of the present application;
FIG. 2 is a schematic view of a scene in which first and second virtual contents are displayed according to an embodiment of the present application;
FIG. 3 is a block diagram illustrating a terminal device according to an embodiment of the present application;
FIG. 4 is a diagram illustrating an interaction between a terminal device and a server according to an embodiment of the present application;
FIG. 5 is a flowchart illustrating a virtual content display method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a scene displaying a first virtual content in an embodiment of the present application;
FIG. 7 is a schematic diagram of a scene displaying first and second virtual contents according to an embodiment of the present application;
FIG. 8 is a schematic view of a scene displaying first and second virtual contents according to another embodiment of the present application;
FIG. 9 is a flowchart illustrating a virtual content display method according to another embodiment of the present application;
FIG. 10 is a schematic diagram of a scene in which a second virtual content is selected by a virtual pointing object according to an embodiment of the present application;
FIG. 11 is a schematic view of a scene in which a second virtual content is selected through a second interactive device in an embodiment of the present application;
FIGS. 12A-12C are schematic diagrams illustrating a sliding motion in a touch area of an interactive device according to an embodiment of the present application;
FIG. 13 illustrates a block diagram of a display device for virtual content provided by an embodiment of the present application;
FIG. 14 is a schematic structural diagram of an interactive device according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of another interactive device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments.
With the development of technologies such as VR (Virtual Reality) and AR (Augmented Reality), electronic devices related to VR/AR are gradually entering people's daily lives. When people use VR/AR equipment, markers (also called Marker or Tag) in the real environment can be collected through the camera assembly on the device, and the virtual images bound to those markers can then be displayed at corresponding positions on the display screen through corresponding image processing, giving users a science-fiction-like viewing experience. At present, in some exhibitions and museums adopting VR/AR related technology, virtual scenes and virtual exhibit images of various exhibition halls can be displayed to users through the VR/AR equipment they wear. The inventors have found through research that, in a conventional VR/AR scene, when a user controls the displayed virtual content, the user usually needs to change the orientation of a VR/AR device such as a head-mounted display, either through a controller or by turning the head, in order to change the displayed virtual content, for example to see different views of the virtual content at different viewing angles; the operation is cumbersome and requires frequent movement by the user. The inventors have therefore studied and proposed the method and apparatus for displaying virtual content, the terminal device, and the display system of virtual content of the present application.
The following describes in detail a method, an apparatus, a terminal device, and a system for displaying virtual content provided in the embodiments of the present application with specific embodiments.
Referring to fig. 1, an application scenario diagram of a display method of virtual content provided in an embodiment of the present application is shown, where the application scenario includes a display system 100 of virtual content provided in the embodiment of the present application, and the display system 100 of virtual content includes: a first interactive device 10, a second interactive device 20 and a terminal device 30.
In this embodiment, the first interaction device 10 comprises a control panel on which a first marker 11 and a display area 12 are arranged. By way of example, the number of first markers 11 provided on the first interactive device 10 may be one or more.
In some embodiments, the terminal device 30 may capture a first image including the first marker 11 and display first virtual content according to the first marker 11 in the first image, where the display position of the first virtual content corresponds to the display area 12 on the first interactive device 10; through the terminal device 30, the user can see the first virtual content superimposed on the display area 12 of the first interactive device 10 in the real world. By one approach, the terminal device 30 may locate the current relative position of the first interactive device 10 and the terminal device 30 through the acquired first image including the first marker 11, and display the first virtual content at the correct position (the display position corresponding to the display area 12 of the first interactive device 10).
Further, the image data corresponding to the first virtual content may be pre-stored in the terminal device 30 (or obtained from a server or another terminal) and selected by the user for display. In some application scenarios, the user may first select the virtual content to be displayed through the terminal device 30 or the first interaction device 10, then perform positioning by scanning the first marker 11 on the first interaction device 10, and finally display the selected first virtual content at the display position corresponding to the display area 12 of the first interaction device 10. The display position of the first virtual content refers to the rendering coordinates of the first virtual content in the virtual space; the corresponding three-dimensional first virtual content can be rendered in the virtual space according to these coordinates, which can represent the spatial position relationship between the first virtual content and the terminal device 30 in the virtual space. The display position of the first virtual content may also be the screen display position, corresponding to the display area 12 of the first interactive device 10, on the display screen of the terminal device 30 (the display screen may also belong to another device externally connected to the terminal device 30).
In one way, the first interactive device 10 may be held by the user or fixed on a console for the user to operate and view the virtual content. The first interactive device 10 may further be provided with a touch area for the user to perform touch operations, so as to control the first virtual content displayed at the position corresponding to the display area 12; the touch area may be arranged at the display area 12. The first interaction device 10 may detect a touch operation through the touch area, generate a control instruction corresponding to the touch operation, and send the control instruction to the terminal device 30. When the terminal device 30 receives the control instruction sent by the first interaction device 10, it may control the display of the first virtual content according to the instruction (for example, control the rotation, displacement, switching, and the like of the first virtual content), which is beneficial to improving the interactivity between the user and the virtual content.
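For illustration only, the following Python sketch shows one way the terminal device might apply such a control instruction to the displayed first virtual content. The instruction format, field names, and gesture-to-transform mapping are assumptions of this sketch, not defined by the application.

```python
import numpy as np

# Illustrative only: the instruction format and the gesture-to-transform
# mapping below are assumptions, not defined by the application.
def apply_control_instruction(model: np.ndarray, instruction: dict) -> np.ndarray:
    """Apply a touch-generated control instruction to the 4x4 model matrix
    of the first virtual content (rotation, displacement, etc.)."""
    if instruction.get("type") == "rotate":      # e.g. horizontal swipe
        a = np.radians(instruction["degrees"])
        c, s = np.cos(a), np.sin(a)
        rot_y = np.array([[c, 0, s, 0],
                          [0, 1, 0, 0],
                          [-s, 0, c, 0],
                          [0, 0, 0, 1]])
        return model @ rot_y                     # rotate about the vertical axis
    if instruction.get("type") == "translate":   # e.g. two-finger drag
        t = np.eye(4)
        t[:3, 3] = instruction["offset"]
        return t @ model                         # displace the content
    return model                                 # unknown instruction: no change

# Example: a swipe reported by the first interaction device's touch area.
model = apply_control_instruction(np.eye(4), {"type": "rotate", "degrees": 15.0})
```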
In one embodiment, a second marker 21 is provided on the second interactive device 20. By way of example, the second interactive device 20 may be a polyhedron (a sphere can be regarded as a polyhedron with an infinite number of surfaces), and a plurality of second markers 21 may be disposed on a plurality of surfaces of the second interactive device 20, so that at least one second marker 21 can be collected by the terminal device 30 when the second interactive device 20 is rotated to any angle. In some embodiments, the terminal device 30 may capture a second image including the second marker 21, acquire a selection instruction according to the second marker 21 in the second image, and select the second virtual content from the displayed first virtual content according to the selection instruction and display the second virtual content.
In one possible application scenario, fig. 2 is a schematic view of a scenario in which the terminal device 30 shown in fig. 1 displays a first virtual content and a second virtual content.
In fig. 2, after the user wears the head-mounted terminal device, the first virtual content displayed at the position corresponding to the first interactive device 10 can be seen through the terminal device; in this embodiment, the first virtual content displayed at the position corresponding to the first interactive device 10 in the virtual space may be a virtual 3D medical human body model 50. By controlling the second interactive device 20, the user can select a part of the virtual 3D medical human body model 50 and display it; for example, the second virtual content selected by the second interactive device 20 may be the heart portion of the 3D medical human body model 50. By one approach, the user may select part of the first virtual content as second virtual content via the second interactive device 20, and the selection may follow the second interactive device 20 for display. The selected second virtual content may be displayed at the position corresponding to the second interactive device 20, so that the user sees the virtual 3D heart model 60 through the terminal device, moving along with the second interactive device 20. It is understood that the first virtual content can be selected by the user for display and switching; for example, besides a medical human body, the first virtual content can be a mechanical model, an art exhibit, a book, a game character, and the like; correspondingly, the second virtual content can be mechanical parts, exhibit parts, pages, game equipment, and the like.
The terminal device 30 may be a head-mounted display device, a mobile phone, a tablet, or the like, wherein the head-mounted display device may be an integrated head-mounted display device. The terminal device 30 may be an intelligent terminal such as a mobile phone connected to an external head-mounted display device. Referring to fig. 3, the terminal device 30 may include: a processor 31, a memory 32, a display device 33, and a camera 34. The memory 32, the display device 33 and the camera 34 are all connected to the processor 31.
The camera 34 is used for capturing an image of an object to be photographed and sending the image to the processor 31. The camera 34 may be an infrared camera, a color camera, etc., and the specific type of the camera 34 is not limited in the embodiment of the present application.
The processor 31 may comprise any suitable type of general or special purpose microprocessor, digital signal processor, or microcontroller. The processor 31 may be configured to receive data and/or signals from various components of the system via, for example, a network. The processor 31 may also process the data and/or signals to determine one or more operating conditions in the system. For example, the processor 31 generates image data of a virtual world from image data stored in advance and transmits it to the display device 33 for display; it may also receive image data sent by a smart terminal or a computer through a wired or wireless network and generate and display an image of the virtual world according to the received image data; it can also perform recognition and positioning according to the image acquired by the camera 34, determine the corresponding display content in the virtual world according to the positioning information, and send that display content to the display device 33.
The memory 32 may be used to store software programs and modules, and the processor 31 executes various functional applications and data processing by operating the software programs and modules stored in the memory 32. The memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
In some embodiments, the display device and the camera of the terminal device 30 may be connected to a terminal that provides the storage function of the memory and the processing function of the processor described above. It is to be understood that, in this case, the processing executed by the processor in the above embodiments is executed by the processor of that terminal, and the data stored by the memory in the above embodiments is stored by the memory of that terminal. The terminal device 30 may further comprise a communication module connected to the processor, used for communication between the terminal device 30 and other terminals. When a marker (the first marker 11 and/or the second marker 21) is located within the field of view of the camera 34 of the terminal device 30, the camera 34 may acquire an image of the marker. The image of the marker is stored in the terminal device 30 and used for locating the position of the terminal device 30 relative to the marker.
When the user uses the terminal device 30, after the terminal device 30 acquires a marker image including a marker through the camera 34, the processor of the terminal device 30 obtains the marker image and related information, calculates and identifies the marker, and acquires the position and rotation relationship between the marker and the camera 34, thereby acquiring the position and rotation relationship of the marker relative to the terminal device 30.
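For illustration, this position-and-rotation calculation can be performed with a standard perspective-n-point solve. The following sketch uses OpenCV's cv2.solvePnP, assuming a square marker of known size with a known feature-point layout and a calibrated camera; the 80 mm size and all names are illustrative assumptions.

```python
import cv2
import numpy as np

MARKER_SIZE = 0.08  # edge length in metres (assumed)

# Known 3D layout of the marker's feature points in the marker's own frame;
# here simply the four corners of a square marker.
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
], dtype=np.float32)

def marker_pose(image_points: np.ndarray, camera_matrix: np.ndarray,
                dist_coeffs: np.ndarray):
    """Return the marker's rotation (3x3 matrix) and translation relative
    to the camera, given the 2D pixel coordinates of its feature points."""
    ok, rvec, tvec = cv2.solvePnP(OBJECT_POINTS,
                                  image_points.astype(np.float32),
                                  camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("pose estimation failed")
    rotation, _ = cv2.Rodrigues(rvec)  # axis-angle vector -> rotation matrix
    return rotation, tvec
```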
Referring to fig. 4, in the embodiment of the present application, the terminal device 30 may further be communicatively connected to the server 40 through a network. A client of the AR/VR application runs on the terminal device 30, and the corresponding server side of the AR/VR application runs on the server 40. By one approach, the server 40 may store the identity information corresponding to each marker, the virtual image data bound to the marker corresponding to that identity information, and the location information of the marker in the real environment or in a virtual map.
In some embodiments, different terminal devices 30 may also perform data sharing and real-time updating through the server 40, so as to improve interactivity among multiple users in an AR/VR scene.
For the above virtual content display system, an embodiment of the present application provides a method for displaying virtual content through the above system, and specifically, please refer to the following embodiments.
Referring to fig. 5, fig. 5 is a flowchart illustrating a method for displaying virtual content according to an embodiment of the present application. In the virtual content display method, a first image containing a first marker is first collected, where the first marker is the marker arranged on the first interactive device; a display area corresponding to the first interactive device is then determined according to the first marker in the first image, and first virtual content is displayed, where the display position of the first virtual content corresponds to the display area; a selection instruction is obtained; and finally second virtual content is selected from the first virtual content according to the selection instruction and displayed. By selecting part of the virtual content displayed on the interactive device and displaying the selected part in another area, AR/VR interactivity can be improved. In a specific embodiment, the display method of the virtual content may be applied to the display apparatus 300 of the virtual content as shown in fig. 13 and to the terminal device 30 (fig. 1) configured with the display apparatus 300. The flow shown in fig. 5 will be described in detail below, taking an HMD (Head-Mounted Display) as an example. The display method of virtual content may specifically include the following steps:
Step S101: a first image is acquired that includes a first marker.
In this embodiment, the first marker is a marker disposed on the first interaction device.
The marker may be any graphic or object having identifiable characteristic markings. The marker may be placed within the field of view of the camera (or other image capture device) of the terminal device, that is, the camera may capture an image containing the marker. The image containing the marker is collected by the camera and stored in the terminal device, and is used for determining the position or posture of the terminal device relative to the marker. The marker may include at least one sub-marker, and a sub-marker may be a pattern having a certain shape. In one embodiment, each sub-marker may have one or more feature points; the shape of a feature point is not limited and may be a circle, a triangle, or another shape. The distribution rules of the sub-markers differ between markers, so each marker can carry different identity information. The terminal device can acquire the identity information corresponding to a marker by identifying the sub-markers included in it, so as to distinguish the relative position and posture information of different markers; the identity information may be information such as a code that can uniquely identify the marker, but is not limited thereto.
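The application does not specify the marker encoding. As a rough analogy only, the following sketch identifies ArUco fiducials, used here as a stand-in for the sub-marker scheme described above, with OpenCV's contrib aruco module (an OpenCV >= 4.7 contrib build is assumed):

```python
import cv2

# ArUco fiducials stand in for the sub-marker scheme described above
# (an assumption; requires an OpenCV >= 4.7 contrib build).
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
DETECTOR = cv2.aruco.ArucoDetector(DICTIONARY, cv2.aruco.DetectorParameters())

def identify_markers(first_image):
    """Return {identity: 4x2 corner pixel coordinates} for every marker
    detected in the captured image."""
    corners, ids, _rejected = DETECTOR.detectMarkers(first_image)
    if ids is None:
        return {}
    return {int(i): c.reshape(4, 2) for i, c in zip(ids.flatten(), corners)}
```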
In this embodiment, the first marker may be disposed on a control panel of the first interaction device, so that the terminal device may identify the first marker and determine a relative position between the terminal device and the first interaction device.
In some embodiments, the first marker may be attached to the control panel surface of the first interactive device or may be integrated into the first interactive device. For example, the first marker may be a pattern affixed to the control panel of the first interactive device; for another example, when the first interactive device has an image display function, the first marker may be a pattern that can be selectively presented in an image display area of the first interactive device (here the image display area is an actual optical display area, such as a display screen, rather than the display area used in this embodiment for displaying the AR/VR virtual content).
In some embodiments, a plurality of markers may be disposed on the first interactive device to achieve different functions or to improve the accuracy of positioning. For example, part of the markers are used for locating the relative position and posture relationship (relative position relationship and relative posture relationship) between the terminal device and the interactive device, and part of the markers are used for binding the virtual content for the terminal device to recognize and display.
Step S102: determining a display area corresponding to the first interaction device according to the first marker in the first image, and displaying the first virtual content.
In this embodiment, the display position of the first virtual content in the virtual space corresponds to the display area of the first interactive device in the real space, and through the terminal device the user can see the first virtual content displayed superimposed on the first interactive device.
As one mode, after confirming the current relative position and posture of the first marker and the terminal device according to the first image including the first marker, the terminal device may further determine, according to the position of the first marker on the first interaction device, the relative position and posture relationship between the terminal device and the first interaction device as well as other areas on the first interaction device.
In one embodiment, the display area corresponding to the first interactive device may be an area on the control panel of the first interactive device designated for displaying the AR/VR virtual content. After determining the position of this display area on the first interactive device, the terminal device can determine the relative spatial position information of the display area, and superimpose and display the first virtual content on the area corresponding to the display area according to that information.
In one embodiment, the display area corresponding to the first interaction device need not be a panel area on the first interaction device; it may instead refer to a real display space corresponding to the first interaction device, for example a specific spatial area above the first interaction device, or a specific spatial area in front of the first interaction device relative to the user, and the like.
In one mode, the image data corresponding to the first virtual content may be pre-stored in the terminal device (or acquired from a server or other terminals) and selected by the user for display. In some embodiments, the user may first select the first virtual content to be displayed through the terminal device or the first interaction device, then scan the first marker on the first interaction device to perform positioning, and finally display the selected first virtual content at the display position corresponding to the display area of the first interaction device. In some embodiments, the terminal device may directly construct the virtual content or acquire constructed virtual content. As one mode, the terminal device may construct virtual content according to the identity information of the first marker on the first interaction device: after identifying the first marker in the first image, it may obtain the virtual image data corresponding to the identity information of the first marker and construct the first virtual content from that data, where the virtual image data may include vertex data, color data, texture data, and the like for modeling. First markers with different identity information can thus display different first virtual contents; for example, the first marker with identity information "number 1" displays a three-dimensional virtual automobile as the first virtual content, while the first marker with identity information "number 2" displays a three-dimensional virtual building. As another mode, the virtual image data of the first virtual content may be pre-stored in the terminal device, and when a first marker with any identity information is identified, the corresponding first virtual content is displayed directly according to the pre-stored virtual image data, unaffected by the marker's identity information. Optionally, the virtual image data of the first virtual content may also be stored in the caches of different application programs, and when the terminal device switches between application programs, different first virtual contents may be displayed; for example, for the same marker identity, application program A may display a three-dimensional virtual automobile while application program B displays a three-dimensional virtual building. It is understood that the specific virtual content displayed can be set according to actual requirements and is not limited to the above ways.
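A minimal sketch of such an identity-to-content binding follows; the marker identities and content file names are purely hypothetical examples, not data from the application:

```python
# Hypothetical registry: marker identities and content file names are
# examples only, not data from the application.
CONTENT_REGISTRY = {
    1: "models/virtual_car.glb",       # identity "number 1" -> virtual automobile
    2: "models/virtual_building.glb",  # identity "number 2" -> virtual building
}

def load_first_virtual_content(marker_id: int,
                               default: str = "models/prestored.glb") -> str:
    """Resolve the virtual image data bound to a marker identity, falling
    back to pre-stored content when the identity carries no binding."""
    return CONTENT_REGISTRY.get(marker_id, default)
```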
As shown in fig. 6, when the terminal device 30 is an integrated head-mounted display, the user can see the first interactive device 10 through the head-mounted display, and can see the first virtual content 50 displayed in the virtual space, superimposed at the position corresponding to the display area of the first interactive device 10 (in fig. 6, the first virtual content 50 is a 3D medical human body model).
Step S103: acquiring a selection instruction. In this embodiment, the selection instruction may be a virtual content selection instruction issued by the user through the terminal device, through the first interaction device, or in another manner, and may be used to interact with the first virtual content displayed on the first interaction device. Optionally, the selection instruction may be obtained through a second interaction device different from the first interaction device, or by recognizing a gesture of the user, the instruction being generated when the gesture is recognized as a preset gesture. The terminal device can also acquire the gazing direction of the user and obtain a selection instruction from it, selecting the second virtual content when the gazing direction points at the second virtual content within the displayed first virtual content. It is understood that the selection instruction may be obtained in various manners, which are not limited herein.
As a mode, a communication connection may be established between the terminal device and the first interaction device, and after the user issues the selection instruction through the first interaction device, the terminal device may obtain the selection instruction of the user through the first interaction device, and perform subsequent operations according to the selection instruction.
Step S104: selecting the second virtual content from the first virtual content according to the selection instruction, and displaying the second virtual content. In this embodiment, after the terminal device obtains the selection instruction, as one manner, the second virtual content may be selected from the first virtual content according to the selection instruction and displayed at a preset display position. The second virtual content may be partial virtual content of the first virtual content (a sub-content of the first virtual content); for example, when the first virtual content is a medical human body, the second virtual content may be an organ, a bone, or the like of the medical human body. The second virtual content may also be the same virtual content as the first virtual content; for example, when the first virtual content is a medical human body, the second virtual content is also a medical human body.
In some implementations, the display location of the second virtual content in the virtual space can be associated with a location in the real-world environment. The terminal device may display corresponding virtual content in the virtual space by identifying and tracking another marker disposed in the real environment (for example, the marker may be disposed on the ground or on the surface of a display stand). For example, as shown in fig. 7, the terminal device identifies and tracks a third marker 61 (provided on the surface of a table) in the real environment, determines the relative position and orientation information between the terminal device and the third marker 61, and displays, at the position corresponding to the third marker 61 in the virtual space, a second virtual content 60 corresponding to the first virtual content 50 (displayed at the position corresponding to the display area of the first interactive device 10 in the virtual space). In some embodiments, the second virtual content may be a second virtual content 60 selected from the first virtual content (in fig. 7, the first virtual content 50 is a 3D medical human body model and the second virtual content 60 is a 3D heart model). In some embodiments, the user may cause the second virtual content to be displayed at the position of the third marker 61 by manipulation, for example by operating the touch area of the first interactive device 10 or by operating the second interactive device 20. As one way, the user can change the display state of the first virtual content by manipulation and cause the second virtual content to change its display state correspondingly, following the first virtual content. The user can see the second virtual content superimposed and displayed in the real world through the terminal device.
In some embodiments, the display location of the second virtual content may also be unrelated to any location in the real-world environment. As shown in fig. 8, the terminal device may define a virtual display area 62 in the virtual space for independently displaying the second virtual content 60; the virtual display area 62 may be associated with the rendering coordinates of the first virtual content 50 (displayed at the position corresponding to the display area of the first interactive device 10 in the virtual space), and the second virtual content 60 selected from the first virtual content 50 is rendered and displayed in the virtual display area 62.
In some embodiments, a display area for displaying the second virtual content may also be predefined, for example located 2 meters in front of the display area corresponding to the first virtual content (the orientation and the distance may be arbitrary). This predefined display area may be associated with rendering coordinates of the second virtual content in the virtual space, and the second virtual content is rendered and displayed at the display position of this area in the virtual space.
In some embodiments, the displayed second virtual content may be the same size as the second virtual content in the displayed first virtual content, or the second virtual content in the first virtual content may be displayed after being enlarged or reduced.
In some embodiments, in the process of displaying the virtual content by the terminal device, the user may further interact with the displayed virtual content by other means, such as gestures or operating an interaction device, and a plurality of different terminal devices connected to the same server may perform synchronized data updates through that server, so as to implement multi-user interaction in the same virtual scene.
In some application scenarios, for example, when the first virtual content displayed at the position corresponding to the display area of the first interaction device is a vehicle, the user may select parts of the vehicle, such as doors, tires, and the engine, as the second virtual content to display, and may also zoom in or out on the displayed virtual content to view part details more flexibly. This helps the user understand the structure inside the vehicle; and if the image data of the displayed virtual content is three-dimensional modeling data obtained by physically scanning a faulty vehicle, the display method may also help the user locate the fault. For another example, when the first virtual content displayed at the position corresponding to the display area of the first interaction device is a game character, the user may select a weapon, a piece of armour, or other equipment of the game character as the second virtual content to display, and may further appreciate the modeling and special-effect details of the game equipment by enlarging or reducing the displayed virtual content, bringing a more realistic game experience to the user.
The above examples are only some of the practical applications of the display method of virtual content provided by this embodiment; with the further development and popularization of AR/VR technology, the method can play a role in ever more practical application scenarios.
Referring to fig. 9, fig. 9 is a schematic flowchart illustrating another virtual content display method according to an embodiment of the present application. The flow shown in fig. 9 will be described in detail below. The display method of virtual content may specifically include the following steps:
step S201: a first image is acquired that includes a first marker.
In this embodiment, the terminal device may acquire an image including the first marker through the image acquisition module to identify the first marker. In other possible embodiments, the terminal device may also identify the first marker by other sensor modules.
Step S202: identifying the first marker in the first image, and acquiring first relative position and posture information of the first interaction device and the terminal device. In this embodiment, the first relative position and posture information includes the relative position information and relative posture information of the first interaction device and the terminal device.
In some embodiments, the terminal device may calculate the position information, orientation information, and the like of the first marker relative to the terminal device from the coordinate data of the feature points of the first marker in the first image. As one mode, the terminal device may use the position and posture information of the first marker relative to the terminal device as the first relative position and posture information of the terminal device and the first interaction device. Further, the terminal device may also determine the first relative position and posture information between the terminal device and the first interactive device from the position of the first marker relative to the first interactive device as a whole, together with the position and posture information of the first marker relative to the terminal device, so as to obtain the relative position and posture relationship between the terminal device and the first interactive device more accurately.
Step S203: determining a display area corresponding to the first interactive device according to the first relative position and posture information, and displaying the first virtual content.
In this embodiment, after the terminal device obtains the first relative position and posture information of the first interactive device, it may calculate the relative position and posture relationship between the display area on the first interactive device and the terminal device according to the position and posture of that display area relative to the first interactive device as a whole, determine the rendering coordinates, corresponding to the display area of the first interactive device, used for displaying virtual content, and render and display the first virtual content according to those rendering coordinates. The rendering coordinates may be used to represent the relative spatial position relationship between the virtual content displayed in the virtual space and the terminal device.
As one mode, the terminal device may convert the first relative position and posture relationship between the terminal device and the first interaction device in the real space into first relative coordinate information in the virtual space; further, it may convert the relative position and posture relationship between the terminal device and the display area in the real space into such relative coordinate information, and calculate the rendering coordinates of the first virtual content in the virtual space accordingly, so that the first virtual content can be accurately displayed on the display area of the first interaction device.
In some embodiments, the terminal device may first acquire the position information of the display area corresponding to the first interaction device, where the position information may include the relative position relationship between the display area and the first marker on the first interaction device. The position of the display area within the first interaction device as a whole may be pre-stored in the terminal device; alternatively, the terminal device may collect an image including the first interaction device, identify the first interaction device in the image, and divide out the display area according to a preset division rule. For example, the right-hand area occupying 50% of the first interaction device, relative to the user, may be set as the display area; or the area on the first interaction device other than the marker area and the touch area may be set as the display area; or the entire first interaction device may be set as the display area; or the area where the first marker is located may be set as the display area; or another area associated with the first interaction device as a whole may be set as the display area; but the division is not limited thereto.
After the terminal device obtains the position of the display area within the first interactive device as a whole, it can obtain the relative position and posture relationship between the terminal device and the display area of the first interactive device according to the first relative position and posture information between the terminal device and the first interactive device, so as to determine the display area corresponding to the first interactive device.
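For illustration, the chain of relative poses described in steps S202 and S203 can be composed with 4x4 homogeneous transforms. The following sketch assumes the marker's mounting pose on the interaction device and the display area's pose on the device are pre-stored; all names are illustrative:

```python
import numpy as np

def pose_to_matrix(rotation: np.ndarray, translation) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 transform."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = np.asarray(translation).ravel()
    return m

def display_area_in_camera(T_cam_marker: np.ndarray,
                           T_device_marker: np.ndarray,
                           T_device_area: np.ndarray) -> np.ndarray:
    """Chain the relative poses: camera<-marker (from pose estimation),
    the marker's placement on the interaction device, and the display
    area's placement on the device, yielding the rendering pose of the
    display area relative to the camera."""
    T_cam_device = T_cam_marker @ np.linalg.inv(T_device_marker)
    return T_cam_device @ T_device_area
```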
Step S204: a second image comprising a second marker is acquired.
In this embodiment, the second marker is a marker disposed on the second interactive device. The second interactive device may be a handle, a polyhedral controller, or the like, and is not limited herein.
Step S205: identifying the second marker in the second image, and acquiring second relative position and posture information of the second interactive device and the terminal device. In this embodiment, the second relative position and posture information includes the relative position information and relative posture information of the second interaction device and the terminal device.
In some embodiments, the terminal device may obtain the second relative position and posture information between itself and the second interactive device by calculating the relative position and posture relationship between the second marker and the terminal device from the second image containing the second marker. As one way, after the second relative position and posture information is obtained, the second interaction device may be located in the virtual space coordinate system to obtain the virtual space coordinates corresponding to the second interaction device.
In this embodiment, after the second relative position and posture information is obtained, the selection instruction acting on the second virtual content in the first virtual content may be obtained according to it. After step S205, either steps S206 and S207 or steps S208 and S209 may be performed.
Step S206: displaying the virtual indication object according to the second relative position and posture information. In this embodiment, after the second relative position and posture information is obtained, a virtual indication object may be rendered and displayed in the virtual space; the virtual indication object is used to represent the direction indicated by the second interactive device, and may be a virtual arrow, a virtual ray consistent with the direction indicated by the second interactive device, or the like.
Step S207: when the virtual indication object points to the second virtual content in the first virtual content, generating a selection instruction. In this embodiment, when the terminal device detects that the virtual indication object in the virtual space points to a second virtual content within the first virtual content displayed on the display area of the first interaction device, a selection instruction corresponding to that second virtual content may be generated.
As one way, each local virtual content within the first virtual content has its collision volume in the virtual space, and the virtual indication object associated with the second interactive device also has one. When the terminal device detects that the virtual indication object intersects a certain local virtual content of the first virtual content (or that the end of the virtual indication object collides with it), the virtual indication object can be regarded as pointing to that local virtual content; the pointed local virtual content can then be taken as the second virtual content, and a selection instruction corresponding to it can be generated.
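A minimal sketch of such a pointing test follows, assuming each local virtual content's collision volume is approximated by an axis-aligned bounding box (the slab method); the data layout is an assumption of the sketch:

```python
import numpy as np

def ray_hits_aabb(origin, direction, box_min, box_max) -> bool:
    """Slab test: does the virtual pointing ray intersect an axis-aligned
    collision volume?"""
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    safe = np.where(np.abs(direction) < 1e-12, 1e-12, direction)  # avoid /0
    t1 = (np.asarray(box_min) - origin) / safe
    t2 = (np.asarray(box_max) - origin) / safe
    t_near = np.max(np.minimum(t1, t2))
    t_far = np.min(np.maximum(t1, t2))
    return bool(t_near <= t_far and t_far >= 0.0)  # hit in front of the origin

def pick_second_virtual_content(origin, direction, collision_volumes):
    """Return the id of the first local virtual content whose collision
    volume the ray intersects, or None; the dict layout is an assumption."""
    for content_id, (lo, hi) in collision_volumes.items():
        if ray_hits_aabb(origin, direction, lo, hi):
            return content_id  # a selection instruction targets this content
    return None
```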
As a possible application scenario, as shown in fig. 10, a ray 22 may be displayed at the position corresponding to the second interactive device 20 in the virtual space, and the ray 22 may serve as the virtual pointing object indicating the direction pointed by the second interactive device 20. In fig. 10, the ray 22 emitted by the second interactive device 20 points to the heart region of the 3D medical human body model serving as the first virtual content 50 (displayed at the position corresponding to the display area of the first interactive device 10 in the virtual space); the 3D heart model within the 3D medical human body model can then be taken as the second virtual content 60, and a selection instruction corresponding to the second virtual content 60 can be generated. In fig. 10, the second interactive device 20 is a handle; the emitting end of the ray 22 displayed in the virtual space can be matched to a shape feature of the handle (for example, the ray 22 can be displayed as being emitted from the end or the opening of the handle), and the direction of the ray 22 can be the same as the orientation of that shape feature (for example, the length-extension direction of the handle or the orientation of its opening). It will be appreciated that in other possible embodiments, the second interaction device may also be another type of handheld interaction device, such as a polyhedral controller.
Further, as one mode, the user may move and rotate the second interaction device to change the relative position and posture relationship between the second marker on it and the terminal device, that is, to change the second relative position and posture information between the second interaction device and the terminal device, and thereby change the coordinates and pointing direction of the virtual indication object in the space, so as to point at and select the desired virtual content.
Further, in some embodiments, the second interaction device may have a built-in sensor module, such as an inertial measurement unit, and a communication module. The sensor module may detect the current position and posture information of the second interaction device itself and send it to the terminal device through the communication module. After acquiring the position and posture information detected by the second interaction device's own sensors, the terminal device may combine it with the second relative position and posture information obtained by collecting the second marker, and correct the display of the virtual indication object in the virtual space, so as to improve the accuracy with which the virtual indication object selects the second virtual content.
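As a sketch of one possible correction step (the application does not prescribe a fusion algorithm), the following fragment blends the marker-derived pose with the pose reported by the device's inertial sensors, using SciPy for the rotation interpolation; the blend weight and scheme are illustrative choices:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def fuse_pose(marker_pos, marker_quat, imu_pos, imu_quat, alpha: float = 0.3):
    """Blend the marker-derived pose with the pose reported by the second
    interaction device's own sensors; alpha weights the sensor estimate
    (the value and the blending scheme are illustrative choices)."""
    position = (1.0 - alpha) * np.asarray(marker_pos) + alpha * np.asarray(imu_pos)
    rotations = Rotation.from_quat([marker_quat, imu_quat])  # x, y, z, w order
    orientation = Slerp([0.0, 1.0], rotations)([alpha])[0]   # spherical blend
    return position, orientation.as_quat()
```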
In this embodiment, in addition to selecting the second virtual content in the first virtual content through the virtual indication object as provided in steps S206 and S207, the second virtual content may also be selected in the manner provided in steps S208 and S209.
Step S208: judging, according to the second relative position and posture information, whether the current spatial position of the second interactive device coincides with the display spatial position of the second virtual content in the first virtual content.
Step S209: when the current spatial position of the second interactive device coincides with the display spatial position of the second virtual content in the first virtual content, generating a selection instruction.
In this embodiment, as another mode, the second virtual content in the first virtual content may be selected directly through the projection of the second interactive device itself in the virtual space, this projection having a collision volume. In one embodiment, the terminal device may obtain the second relative position and posture information relative to the second interactive device through the second marker on it, convert the current spatial position of the second interactive device into second coordinate information in the virtual space in real time according to that information, and determine whether the second coordinate information falls within the rendering coordinates of the first virtual content. If it does, the second interaction device overlaps part of the first virtual content. When the second coordinate information coincides with or falls within the rendering coordinates of a second virtual content in the first virtual content, the current spatial position of the second interaction device coincides with the display spatial position of that second virtual content; a selection instruction may then be generated and the second virtual content selected.
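A minimal sketch of the coincidence test in steps S208 and S209, again approximating the rendering volume of the second virtual content by an axis-aligned box (an assumption of the sketch):

```python
import numpy as np

def device_coincides_with_content(device_pos, box_min, box_max) -> bool:
    """True when the second interaction device's virtual-space coordinate
    falls inside the rendering volume of a second virtual content."""
    p = np.asarray(device_pos)
    return bool(np.all(p >= np.asarray(box_min)) and
                np.all(p <= np.asarray(box_max)))
```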
As a possible application scenario, as shown in fig. 11, the limited area of the display region on the first interaction device 10 may make the first virtual content 50 displayed at the corresponding position in the virtual space small, so that precisely selecting the second virtual content 60 within the first virtual content 50 is difficult. In one approach, the first virtual content 50 displayed in the virtual space corresponding to the display region of the first interaction device 10 may be projected to a position in the environment and enlarged; the second virtual content 60 in the enlarged first virtual content 50 is then selected with the second interaction device 20, enabling accurate selection of the second virtual content 60 (in fig. 11 the first virtual content 50 is a 3D medical human body model and the second virtual content 60 is a 3D heart model).
In some embodiments, when the terminal device detects that the virtual indication object or the second interactive device overlaps with local virtual content in the first virtual content in the virtual space, a confirmation option may be presented to the user; after the user confirms the selection, the local virtual content is taken as the second virtual content and a corresponding selection instruction is generated. In one approach, the confirmation option may be displayed directly on the display screen of the terminal device for the user to confirm. In another approach, a confirmation key (physical or virtual) may be provided on a second interactive device equipped with a communication module; when the user is detected to press the confirmation key, confirmation information is sent to the terminal device, which generates the corresponding selection instruction. Other approaches are possible and are not limited herein.
In this embodiment, after the selection instruction for the second virtual content in the first virtual content is generated in step S207 or step S209, step S210 may be performed.
Step S210: and acquiring rendering data of the second virtual content according to the second relative position and posture information.
In this embodiment, after the selection instruction is generated, the terminal device may first obtain rendering data of the second virtual content, where the rendering data may be virtual image data of the second virtual content in a certain posture.
Step S211: rendering and displaying the second virtual content based on the rendering data.
In this embodiment, the image of the second virtual content currently displayed by the terminal device is rendered and displayed in the virtual space from rendering data obtained based on the current second relative position and posture information.
Step S212: and when the second relative position and posture information is changed, re-acquiring the rendering data of the second virtual content according to the changed second relative position and posture information.
Step S213: rendering and displaying the second virtual content based on the re-acquired rendering data.
In this embodiment, a change in the second relative position and posture information indicates that the relative position and posture between the second interactive device and the terminal device has changed. The viewing angle from which the user observes the second virtual content, which is associated with the position and posture of the second interactive device, changes accordingly, and so does the image of the second virtual content that the terminal device needs to display. The terminal device may therefore re-acquire rendering data of the second virtual content according to the changed second relative position and posture information.
As one approach, the terminal device may acquire the second relative position and posture information of the second interactive device in real time and update the rendering data of the second virtual content accordingly, so that at different positions and viewing angles relative to the second interactive device it displays the corresponding pictures of the second virtual content, increasing the interactivity between the user wearing the terminal device and the virtual content.
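The real-time update described above amounts to a per-frame loop that re-fetches rendering data only when the pose changes. The sketch below is illustrative; `get_pose`, `fetch_rendering_data`, and `draw` are stand-ins for the terminal device's tracking, asset, and display paths, not names from the embodiment.

```python
def display_loop(n_frames, get_pose, fetch_rendering_data, draw):
    """Per-frame update: re-fetch rendering data only when the pose changes."""
    rendering_data, last_pose = None, None
    for _ in range(n_frames):
        pose = get_pose()                      # second relative position/posture
        if pose != last_pose:                  # viewpoint changed (steps S212/S213)
            rendering_data = fetch_rendering_data(pose)
            last_pose = pose
        draw(rendering_data)

# Stub usage: the pose flips once, so rendering data is fetched twice over 4 frames.
poses = iter([(0, 0, 1), (0, 0, 1), (1, 0, 1), (1, 0, 1)])
display_loop(4, lambda: next(poses), lambda p: f"render-data@{p}", print)
```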
As one approach, the user may change the second relative position and posture information between the terminal device and the second interactive device by holding and moving or rotating the second interactive device, or, with the second interactive device fixed in place, by walking around it and changing the pitch of the terminal device, so as to view the second virtual content associated with the second interactive device from different positions and directions.
In some implementations, the displayed second virtual content follows the second interactive device in position and orientation as the user moves or rotates the device. In particular, when the second interactive device selects and displays the second virtual content by casting the virtual indication object, the position and viewing angle of the second virtual content may be controlled through the virtual indication object.
In this embodiment, during the process of displaying the first virtual content or the second virtual content, the displayed virtual content may be switched at any time. After the second virtual content is displayed, if it is desired to switch the displayed second virtual content to another local virtual content in the first virtual content, steps S214 and S215 may be performed.
Step S214: and acquiring a content switching instruction.
Step S215: and switching the displayed second virtual content to a third virtual content in the first virtual content according to the content switching instruction. In this embodiment, the third virtual content and the second virtual content are both local virtual content in the first virtual content (or a sub-object of a virtual object corresponding to the first virtual content).
As one approach, the content switching instruction corresponding to local virtual content in the first virtual content may be acquired in various ways. For example, the second virtual content displayed at the position corresponding to the current second interactive device may be switched through a physical or virtual key arranged on the terminal device, the first interactive device, or the second interactive device. With preset gesture-switching logic, when a change in the second relative position and posture information is detected, for example when the relative position or angle between the terminal device and the second interactive device changes beyond a certain value, that gesture may trigger switching the second virtual content to a third virtual content. It may also be detected that the second interactive device, or the virtual indication object it casts, coincides with a third virtual content in the first virtual content, whereupon the displayed second virtual content is switched to that third virtual content. If the third virtual content is not selected directly, the sub-objects in the first virtual content may be numbered and sorted in advance, and when a user gesture is detected, the currently displayed second virtual content is switched to the third virtual content whose number follows it. Other approaches are possible and are not limited hereto.
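For the pre-numbered sub-object approach, a small dispatch sketch may help; the sub-object list, trigger names, and wrap-around behavior are all assumptions made for illustration.

```python
SUB_OBJECTS = ["heart", "liver", "lungs"]      # hypothetical pre-sorted numbering

def switch_content(current, trigger, pointed_at=None):
    """Resolve the third virtual content to display after a switching instruction."""
    if trigger == "coincide":                  # device/indication object on a sub-object
        return pointed_at
    if trigger == "gesture":                   # advance to the next number in order
        return SUB_OBJECTS[(SUB_OBJECTS.index(current) + 1) % len(SUB_OBJECTS)]
    return current                             # unrecognized trigger: keep display

assert switch_content("heart", "gesture") == "liver"
assert switch_content("heart", "coincide", "lungs") == "lungs"
```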
In some possible embodiments, when the display position of the second virtual content in the virtual space is associated with a position in the real environment, as shown in fig. 7, the terminal device recognizes and tracks a third marker 61 set in the real environment to acquire the relative position and posture information between itself and the third marker 61, determines the display area corresponding to the third marker 61, and displays the second virtual content corresponding to the first virtual content. A touch operation detected by the first interaction device is acquired and a control instruction corresponding to it is generated; the terminal device receives the control instruction sent by the interactive device and changes the display modes of the first virtual content and the second virtual content accordingly.
As one way, the display manner of the second virtual content may be changed in synchronization with the first virtual content. As another mode, different touch areas corresponding to the first virtual content and the second virtual content respectively may be further disposed on the control panel of the interactive device, where the different touch areas are used to control display modes of the first virtual content and the second virtual content respectively.
In some embodiments, the touch operation may include, but is not limited to, a single-finger swipe, a click, a press, a coordinated multi-finger swipe, and the like, acting on the touch area. The control instruction controls the display of the virtual content: it may rotate, enlarge, or reduce the virtual content or trigger a specific action effect, switch the virtual content to new virtual content, or add new virtual content to the current augmented reality scene.
Referring to fig. 12A, in some embodiments, while the terminal device displays virtual content, when the touch action detected in the touch area is a single finger sliding left or right relative to the user, a control instruction for switching to new virtual content is generated, for example an instruction controlling the terminal device to switch a currently displayed virtual table lamp to new virtual content such as a virtual building model or a virtual automobile. When the detected touch action is a single finger sliding up or down relative to the user, a control instruction for adjusting the current virtual content is generated, for example an instruction controlling the terminal device to adjust the brightness, illumination color, and the like of the currently displayed virtual table lamp.
Referring to figs. 12B-12C, in some embodiments, while the terminal device displays virtual content, when the touch action detected in the touch area is two fingers pinching together, a control instruction for reducing the currently displayed virtual content is generated, for example an instruction controlling the terminal device to reduce the displayed size of the current virtual table lamp as seen by the user. When the detected touch action is two fingers spreading apart, a control instruction for enlarging the currently displayed virtual content is generated, for example an instruction controlling the terminal device to enlarge the displayed size of the current virtual table lamp as seen by the user.
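The pinch mapping reduces to a ratio of finger spacings; the sketch below assumes 2D touch coordinates and treats the ratio as the zoom factor, which is one common convention rather than a formula stated in the embodiment.

```python
import math

def pinch_scale(initial, current):
    """Zoom factor from finger spacing: <1 shrinks the content, >1 enlarges it."""
    d0 = math.dist(*initial)                   # initial two-finger distance
    d1 = math.dist(*current)                   # current two-finger distance
    return d1 / d0 if d0 else 1.0

scale = pinch_scale(((0, 0), (100, 0)), ((0, 0), (150, 0)))   # 1.5: enlarge
```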
In some embodiments, the same touch operation may correspond to different control instructions for different virtual content; after a control gesture is detected through the touch area, a control instruction corresponding to the touch operation is generated according to the type of the currently displayed virtual content. For example, when the virtual content is a vehicle: a leftward slide detected in the touch area generates a control instruction to open the vehicle door; a rightward slide generates a control instruction to close the vehicle door; and a double-click generates a control instruction to turn on the vehicle lights. As another example, when the virtual content is a 3D medical human body model: a leftward slide generates a control instruction to switch to a 3D muscle anatomical model; a rightward slide generates a control instruction to switch to a 3D skeleton anatomical model; and a double-click generates a control instruction to switch to a 3D nerve anatomical model.
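One plausible realization of "same gesture, different instruction" is a lookup keyed by content type and touch operation. The table below mirrors the examples in this paragraph; the key and instruction names are illustrative only.

```python
CONTROL_MAP = {
    ("vehicle", "slide_left"): "open_door",
    ("vehicle", "slide_right"): "close_door",
    ("vehicle", "double_click"): "turn_on_lights",
    ("human_model", "slide_left"): "show_3d_muscle_anatomy",
    ("human_model", "slide_right"): "show_3d_skeleton_anatomy",
    ("human_model", "double_click"): "show_3d_nerve_anatomy",
}

def control_instruction(content_type, touch_operation):
    """Same touch operation, different instruction, keyed by displayed content."""
    return CONTROL_MAP.get((content_type, touch_operation), "no_op")

assert control_instruction("vehicle", "double_click") == "turn_on_lights"
```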
In some embodiments, the touch area further includes a control area, and the virtual content may include a virtual interactive interface; that is, the terminal device presents virtual content carrying a virtual interactive interface to the user, with the control area corresponding to that interface. The form of the virtual interactive interface may include, but is not limited to, buttons, pop-up windows, lists, and the like. The terminal device acquires the control action received by the control area and generates an interaction instruction from it, the interaction instruction instructing the terminal device to operate the virtual interactive interface. For example, in some specific examples, when the terminal device displays the virtual content, the virtual interactive interface may present an interaction menu, with the control area of the interaction apparatus corresponding to that menu; selection or/and input on the menu is achieved by acquiring the control action in the control area. When the virtual content is presented as a table lamp and the user double-clicks the touch area, the virtual interactive interface presents a "turn off the lamp?" menu with "yes" and "no" buttons, while the touch area comprises a first control area corresponding to the "yes" button and a second control area corresponding to the "no" button. If the first control area receives a control action, a control instruction to turn off the lamp is generated; if the second control area receives a control action, an instruction not to turn off the lamp is generated and the current state of the lamp is maintained. In some embodiments there are multiple control areas; when the virtual content presents multiple virtual interactive interfaces, the control areas are defined to correspond one to one with the interfaces. The terminal device then acquires one or more control actions received by one or more of the control areas and generates corresponding interaction instructions, each instructing the terminal device to operate the virtual interactive interface matched with the control area from which it originated.
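The one-to-one correspondence between control areas and interface elements can likewise be sketched as a lookup, shown below for the table-lamp example; the area and button identifiers are hypothetical.

```python
AREA_TO_INTERFACE = {"control_area_1": "button_yes", "control_area_2": "button_no"}

def interaction_instruction(area_id):
    """Map a control action in a control area to its matched interface element."""
    button = AREA_TO_INTERFACE.get(area_id)
    if button == "button_yes":
        return "turn_lamp_off"                 # confirm: lamp switched off
    if button == "button_no":
        return "keep_lamp_state"               # decline: current state unchanged
    return "ignore"

assert interaction_instruction("control_area_1") == "turn_lamp_off"
```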
In some possible embodiments, the terminal device may not only display corresponding virtual content to the users currently operating the first and second interactive devices, but also share data with other terminal devices in the surrounding environment through a Wi-Fi network, Bluetooth, near field communication, and the like, so that other users who wear terminal devices but do not operate the interactive devices can also see the corresponding virtual content. The virtual image data sent by the terminal device to another terminal device may include virtual model data of the second virtual content and its rendering coordinates in the virtual space, where the virtual model data is the data the other terminal device uses to render and display the second virtual content and may include the colors used to build the model corresponding to the second virtual content, the vertex coordinates of the 3D model, and the like.
In some embodiments, the virtual image data may further include a sharing instruction, where the sharing instruction is configured to instruct the other terminal device to acquire an image including a third marker, identify the third marker, acquire relative spatial position information of a display area corresponding to the third marker, and display the second virtual content according to the relative spatial position information.
In some embodiments, depending on the location of the other terminal devices, the modes of sharing and displaying the virtual content include at least a near-field sharing mode and a far-field sharing mode.
As an implementation scenario of the near-field sharing mode, when the other terminal devices and the main terminal device connected with the interactive device (i.e., the terminal device displaying the first virtual content and the second virtual content) are in the same real environment, both can recognize the same third marker set in the scene. When the main terminal device determines the corresponding display area by recognizing the third marker and displays the second virtual content at the display position in the virtual space corresponding to that area, it may transmit the virtual image data corresponding to the second virtual content to the other terminal devices in the same scene through a Wi-Fi network, Bluetooth, near field communication, or the like. After the other terminal devices acquire the virtual image data of the second virtual content sent by the main terminal device, they may enter the near-field sharing display mode according to the sharing instruction in the virtual image data. Each such terminal device may then acquire an image containing the third marker, determine the display area corresponding to it, acquire its relative spatial position information with respect to that area, determine the display position of the second virtual content in the virtual space from this information, and render and display the second virtual content at that position using the acquired virtual image data.
As an implementation scenario of the far-field sharing mode, when another terminal device is in a different scene from the main terminal device connected with the interaction device (e.g., a geographically distant room), the main terminal device recognizes the third marker set in the main scene (the scene in which it is located), while the other terminal device recognizes a fourth marker set in the sub-scene in which it is located. The third marker in the main scene and the fourth marker in the sub-scene are not the same marker, but their identity information may be associated with each other. When the main terminal device determines a display area by recognizing the third marker and displays the second virtual content at the corresponding display position in the virtual space, it may transmit the virtual image data of the second virtual content to the other terminal devices in the sub-scene through a wireless communication network connected to the same server. Upon receiving this data, the other terminal devices may enter the far-field sharing display mode according to the sharing instruction it contains. Each such device determines the display area corresponding to the fourth marker by acquiring an image containing it, determines the relative position relationship between that area and itself, derives the corresponding display position in the virtual space from this relationship, and renders and displays the second virtual content at that position using the acquired virtual image data.
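A hedged sketch of the shared "virtual image data" and its receiver-side handling follows. The field names, and the idea of carrying the target marker ID inside the sharing instruction, are assumptions consistent with, but not dictated by, the description above.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VirtualImageData:
    model_vertices: List[Tuple[float, float, float]]   # 3D model data
    model_colors: List[str]                            # colors for the model
    render_coords: Tuple[float, float, float]          # position in virtual space
    sharing_mode: str                                  # "near_field" or "far_field"
    marker_id: int                                     # third (or associated fourth) marker

def on_shared_content(payload, detect_marker_pose, render):
    """Receiver side: locate the instructed marker, then render the shared content."""
    pose = detect_marker_pose(payload.marker_id)       # relative spatial position info
    if pose is not None:
        render(pose, payload.render_coords,
               payload.model_vertices, payload.model_colors)
```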
In the far-field sharing mode, as one approach, the third marker and the fourth marker each have an orientation in the real environment (the front, rear, left, and right sides of a marker may be assigned according to the positional distribution of the sub-markers and feature points on it), and these orientations may be associated with each other (i.e., the front, rear, left, and right of the third marker correspond to those of the fourth marker). For example, when the main terminal device is located in front of the third marker in the main scene, the displayed second virtual content faces the main terminal device with its front; if the other terminal device that establishes far-field sharing with the main terminal device is located behind the fourth marker in the sub-scene, the second virtual content it displays at the corresponding display position in the virtual space may face it with its rear.
In the far-field sharing mode, as another approach, the orientation of the second virtual content displayed by the other terminal devices may be kept consistent with that displayed by the main terminal device. For example, when the main terminal device is in front of the third marker in the main scene and another terminal device is behind the fourth marker in the sub-scene, the second virtual content displayed by each device still faces that device the same way.
In the near-field sharing mode, the second virtual content displayed by the other terminal devices in the same scene may be displayed in a direction different from that seen by the main terminal device, according to each device's own relative spatial position information with respect to the third marker; alternatively, the relative spatial position between the third marker and each device may be ignored, and a virtual image may be displayed whose orientation is identical to that of the second virtual content displayed by the main terminal device.
It can be understood that other terminal devices can display virtual contents at different positions and visual angles on their respective display screens according to their respective relative spatial position information with respect to the third marker or the fourth marker, or can display virtual contents at the same positions and visual angles.
As one approach, after the user finishes a session, the terminal device may upload the user's operation records (for example, which virtual contents were selected for display and what interactions were performed) to a server in the form of a log, to support later uses such as compiling statistics on user preferences and optimizing the virtual display experience.
Referring to fig. 13, fig. 13 is a block diagram illustrating a display apparatus 300 for virtual content according to an embodiment of the present disclosure. As will be explained below with respect to the block diagram of fig. 13, the display device 300 of virtual content includes: an acquisition module 310, a first display module 320, a selection module 330, and a second display module 340, wherein: the acquisition module 310 is configured to acquire a first image including a first marker, where the first marker is a marker disposed on a first interactive device.
A first display module 320, configured to determine, according to the first marker in the first image, a display area corresponding to the first interaction device, and display first virtual content, where a display position of the first virtual content corresponds to the display area. Further, the first display module 320 includes: the first identification unit is used for identifying the first marker in the first image and acquiring first relative position and posture information of the first interaction device and a terminal device, wherein the first relative position and posture information comprises relative position information and relative posture information of the first interaction device and the terminal device; and the first display unit is used for determining a display area corresponding to the first interactive device according to the first relative position and posture information and displaying first virtual content.
The selection module 330 is configured to obtain a selection instruction. Further, the selection module 330 includes: an acquisition unit, configured to acquire a second image containing a second marker, the second marker being a marker arranged on a second interaction device; a second recognition unit, configured to recognize the second marker in the second image and acquire second relative position and posture information of the second interaction device and the terminal device, where the second relative position and posture information includes the relative position information and relative posture information of the second interaction device and the terminal device; and an instruction unit, configured to obtain a selection instruction according to the second relative position and posture information. Further, the instruction unit includes: a first instruction subunit, configured to display a virtual indication object according to the second relative position and posture information, the virtual indication object representing the direction indicated by the second interaction device; a first generation subunit, configured to generate a selection instruction when the virtual indication object points to a second virtual content in the first virtual content; a second instruction subunit, configured to determine, according to the second relative position and posture information, whether the current spatial position of the second interaction device coincides with the display spatial position of a second virtual content in the first virtual content; and a second generation subunit, configured to generate a selection instruction when the current spatial position of the second interaction device coincides with the display spatial position of the second virtual content.
The second display module 340 is configured to select a second virtual content from the first virtual content according to the selection instruction, and display the second virtual content. Further, the second display module 340 includes: an obtaining unit, configured to obtain rendering data of the second virtual content according to the second relative position and posture information; a rendering unit configured to render and display the second virtual content based on the rendering data; a reacquiring unit, configured to reacquire rendering data of the second virtual content according to the changed second relative position and posture information when the second relative position and posture information changes; and a re-rendering unit for rendering and displaying the second virtual content based on the re-acquired rendering data.
The display apparatus 300 of virtual content may further include: the acquisition module is used for acquiring a content switching instruction; and the switching module is used for switching the displayed second virtual content into a third virtual content in the first virtual content according to the content switching instruction.
Referring to fig. 14, an interactive device 400 (which may be a first interactive device in the above embodiments) is provided in the present embodiment, where the interactive device 400 includes a substrate 410, a control panel 430, and a first marker 450, the control panel 430 is disposed on the substrate 410, and the first marker 450 is integrated in the control panel 430. In this embodiment, the interactive apparatus 400 is a substantially flat panel structure. The substrate 410 may be a plate structure or a housing structure, and is used for carrying the control panel 430.
The control panel 430 is stacked on one side of the substrate 410 and is configured to receive a user's manipulation instruction so that the interactive device 400 can generate an image control instruction. The control panel 430 is divided into a touch area 431 and a display area 433.
The touch area 431 receives the user's manipulation instructions. In some embodiments, the touch area 431 may include a touch screen; by detecting the touch state of the touch screen, the touch area 431 is considered to have received a manipulation instruction when the touch screen generates a touch signal. In some embodiments, the touch area 431 includes a key; by detecting the pressed state of the key, the touch area 431 is considered to have received a manipulation instruction when the key generates a pressure signal. In some embodiments, there may be multiple touch areas 431, including at least one of a touch screen and a key.
The first marker 450 is a planar marker integrated into the display region 433 of the control panel 430 and may be a predetermined symbol or pattern. The first marker 450 is intended to be recognized by the terminal device, after which corresponding content is displayed on the terminal device in the form of a virtual model.
The interaction device 400 further comprises a filter layer 470 stacked on the side of the first marker 450 facing away from the substrate 410. The filter layer 470 filters out light other than the light emitted toward the first marker 450 by the illumination device of the terminal device, preventing ambient light from interfering with the light reflected by the first marker 450 and thereby making the marker easier to recognize. In some embodiments, the filtering characteristics of the filter layer 470 may be chosen according to actual needs. For example, when the first marker 450 enters the field of view of the camera, the camera usually captures images with the help of an auxiliary light source to improve recognition efficiency. When an infrared light source is used, the filter layer 470 filters out light other than infrared light (such as visible and ultraviolet light), so that only infrared light can pass through the filter layer 470 and reach the first marker 450. When the auxiliary light source projects infrared light onto the first marker 450, the filter layer 470 blocks the remaining ambient light, so that only the infrared light reaches the first marker 450 and is reflected to the near-infrared image capturing device, reducing the influence of ambient light on the recognition process.
In some embodiments, the control panel 430 may further include a pressure area (not shown) provided with a pressure sensor for sensing external pressure, so that the interactive device 400 generates a corresponding control instruction according to that pressure. The pressure area may occupy part of the control panel 430 or completely cover its surface, and may overlap with, lie parallel to, or be spaced apart from the touch area 431 and/or the display area 433. The terminal device acquires the pressure data detected in the pressure area and generates a control instruction from it; the control instruction may direct the virtual content to be displayed in a deformed state, as if squeezed by an external force. Further, the deformation of the virtual content may be computed from the pressure data with a preset functional relation; for example, the amount of deformation may be proportional to the pressure value, so that the larger the pressure, the larger the deformation, and when the pressure exceeds a set threshold a preset image display operation may even be performed (for example, the virtual content explodes or disappears, or the virtual content is switched). Alternatively, the control instruction may control the display state of the virtual content, so that properties such as color gradient, brightness, or transparency change with the pressure. Specifically, the actual pressure acquired in the pressure area is compared against a preset pressure threshold, and the variation of the display state is controlled according to the ratio of the actual pressure to that threshold.
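The proportional deformation rule can be written down directly; in the sketch below the proportionality constant and threshold are placeholder values, not values from the embodiment.

```python
def deformation_state(pressure, k=0.05, threshold=8.0):
    """Deformation proportional to pressure; past the threshold a preset
    image display operation fires instead (explode / disappear / switch)."""
    if pressure > threshold:
        return None, "preset_display_operation"
    return k * pressure, None

amount, event = deformation_state(4.0)     # amount == 0.2, no event
amount, event = deformation_state(9.0)     # threshold exceeded: event fires
```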
In the embodiment shown in fig. 14, the touch area 431 and the display area 433 are arranged in parallel, and the first marker 450 is overlapped on the display area 433. In other embodiments, the touch area 431 and the display area 433 may be disposed in an overlapping manner, for example, the touch area 431 is a transparent touch screen which is overlapped on the display area 433, and the first marker 450 is disposed between the touch area 431 and the display area 433, so that the volume of the interactive device 400 can be further reduced, and the portability thereof can be improved.
Referring to fig. 15, fig. 15 is a schematic structural diagram of another interactive device 500 (which may be the second interactive device in the foregoing embodiments) provided in an embodiment of the present application. The interactive device 500 is provided with second markers 510, and the terminal device can obtain the second relative position and posture information between the interactive device 500 and itself by acquiring images containing the second markers 510. The specific form of the interactive device 500 is not limited. As one approach, the interactive device 500 may be a twenty-six-faced polyhedron with eighteen square faces and eight triangular faces. Further, the interaction device 500 includes multiple non-coplanar surfaces, which may respectively carry different second markers 510. By arranging different second markers 510 on different non-coplanar surfaces, the terminal device can recognize images containing the second markers 510 from multiple angles, and thus obtain the second relative position and posture information between itself and the interactive device 500 in different postures.
In use, the user rotates or/and displaces the interactive device 500 within the field of view of the terminal device; the terminal device acquires the different second markers 510 on the interactive device 500 in real time, obtains the second relative position and posture information between itself and the interactive device 500, and displays the corresponding virtual content in the virtual space accordingly. In some embodiments, the interactive device 500 may take other shapes, provided it includes at least several non-coplanar surfaces and second markers 510 respectively disposed on them, so that the terminal device can recognize the device and acquire and track its posture via the second markers 510.
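One way to realize recognition from any face is to store, per marker ID, that face's fixed transform on the device body and compose it with the detected marker pose; whichever face is visible then yields the same device pose. The sketch below assumes 4x4 homogeneous matrices and invented face transforms.

```python
import numpy as np

# Marker ID -> fixed face-to-body transform on the polyhedron (values illustrative).
FACE_TRANSFORMS = {
    11: np.eye(4),
    12: np.diag([-1.0, 1.0, -1.0, 1.0]),   # the opposite face
}

def device_pose_in_camera(marker_id, marker_to_camera):
    """Compose the detected marker pose with that face's body transform."""
    body_to_face = np.linalg.inv(FACE_TRANSFORMS[marker_id])
    return marker_to_camera @ body_to_face
```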
The interactive device 500 may further include a manipulation area 520 for receiving a manipulation action and transmitting the manipulation action to the terminal device, so that the terminal device can generate a control command for controlling the display of the virtual content. The manipulation region 520 may be one or more of a touch screen, a key, and the like. The number of the manipulation areas 520 may be multiple, and the multiple manipulation areas 520 may correspond to the same or different control commands, and by inputting a manipulation action to any one or more of the multiple manipulation areas 520, the terminal device may generate a control command according to the manipulation action, and further control the virtual content to be displayed in a corresponding state.
In some embodiments, the interactive device 500 may further include an inertial measurement sensor for sensing and acquiring pose information of the interactive device 500.
In some embodiments, the interactive device 500 may further include a pressure area (not shown), where the pressure area is provided with a pressure sensor, and the pressure sensor is configured to sense the external pressure received by the interactive device, so that the terminal device generates a corresponding control instruction according to the external pressure received by the interactive device 500. The pressure zone may be located in a partial region of the interaction device 500, may completely cover the outer surface of the interaction device 500, and may overlap the manipulation zone 520 or the surface on which the second marker 510 is located.
The specific pattern of a marker in the present application is not limited; it may be any pattern that can be captured by the camera of the terminal device. For example, the pattern may be a combination of one or more geometric figures (such as circles, triangles, rectangles, ellipses, wavy lines, straight lines, or curves), a predetermined pattern (such as an animal head or a common schematic symbol like a traffic sign), or any other pattern that the camera can resolve to form the marker, and is not limited to the description in this specification. In other embodiments, the marker may also be a bar code, a two-dimensional code, or the like.
An embodiment of the present application provides a terminal device, including a display, a memory, and a processor, where the display and the memory are coupled to the processor, and the memory stores instructions that are executed by the processor: acquiring an image containing a first marker, wherein the first marker is a marker arranged on first interaction equipment; determining a display area corresponding to the first interactive device according to the first marker in the image, and displaying first virtual content, wherein the display position of the first virtual content corresponds to the display area; acquiring a selection instruction; and selecting second virtual content from the first virtual content according to the selection instruction, and displaying the second virtual content.
An embodiment of the present application provides a computer-readable storage medium, in which program codes are stored, and the program codes can be called by a processor to execute: acquiring an image containing a first marker, wherein the first marker is a marker arranged on first interaction equipment; determining a display area corresponding to the first interactive device according to the first marker in the image, and displaying first virtual content, wherein the display position of the first virtual content corresponds to the display area; acquiring a selection instruction; and selecting the second virtual content from the first virtual content according to the selection instruction, and displaying the second virtual content.
To sum up, with the virtual content display method and apparatus, terminal device, and display system provided by the embodiments of the present application, an image containing a first marker is first acquired, the first marker being a marker arranged on a first interactive device; a display area corresponding to the first interactive device is then determined according to the first marker in the first image, and first virtual content is displayed, with the display position of the first virtual content corresponding to the display area; a selection instruction is acquired; and finally second virtual content is selected from the first virtual content according to the selection instruction and displayed. By selecting part of the virtual content displayed on the interactive device and displaying it in another area, the embodiments of the present application improve the interactivity of AR/VR.
More specific examples (a non-exhaustive list) of the computer-readable medium include the following: an electrical connection (terminal device) having one or more wires, a portable computer diskette (magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be captured electronically, for instance by optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions. The embodiments, and the features of the respective embodiments, may be combined with each other where no conflict arises, and the present application is not limited to the embodiments described above.

Claims (11)

1. A method for displaying virtual content, the method comprising:
acquiring a first image containing a first marker, wherein the first marker is a marker arranged on first interaction equipment;
determining a display area corresponding to the first interaction device according to the first marker in the first image, and displaying first virtual content, wherein the display position of the first virtual content corresponds to the display area;
acquiring a selection instruction;
and selecting second virtual content from the first virtual content according to the selection instruction, and displaying the second virtual content.
2. The method of claim 1, wherein determining a display area corresponding to the first interactive device from the first marker in the first image and displaying a first virtual content comprises:
identifying the first marker in the first image, and acquiring first relative position and posture information of the first interaction device and a terminal device, wherein the first relative position and posture information comprises relative position information and relative posture information of the first interaction device and the terminal device;
and determining a display area corresponding to the first interactive device according to the first relative position and posture information, and displaying first virtual content.
3. The method of claim 2, wherein obtaining a selection instruction comprises:
acquiring a second image comprising a second marker, the second marker being a marker disposed on a second interaction device;
identifying the second marker in the second image, and acquiring second relative position and posture information of the second interactive device and the terminal device, wherein the second relative position and posture information comprises the relative position information and the relative posture information of the second interactive device and the terminal device;
and acquiring a selection instruction according to the second relative position and posture information.
4. The method according to claim 3, wherein obtaining a selection instruction according to the second relative position and posture information comprises:
displaying a virtual indicating object according to the second relative position and posture information, wherein the virtual indicating object is used for representing the direction indicated by the second interactive equipment;
and when the virtual indication object points to second virtual content in the first virtual content, generating a selection instruction.
5. The method according to claim 3, wherein obtaining a selection instruction according to the second relative position and posture information comprises:
judging, according to the second relative position and posture information, whether the current spatial position of the second interactive device coincides with the display spatial position of second virtual content in the first virtual content;
and generating a selection instruction when the current spatial position of the second interactive device coincides with the display spatial position of the second virtual content in the first virtual content.
6. The method according to any one of claims 3 to 5, wherein selecting a second virtual content from the first virtual content according to the selection instruction and displaying the second virtual content comprises:
acquiring rendering data of the second virtual content according to the second relative position and posture information;
rendering and displaying the second virtual content based on the rendering data;
when the second relative position and posture information changes, re-acquiring rendering data of the second virtual content according to the changed second relative position and posture information;
rendering and displaying the second virtual content based on the re-acquired rendering data.
7. The method of claim 1, further comprising:
acquiring a content switching instruction;
and switching the displayed second virtual content to a third virtual content in the first virtual content according to the content switching instruction.
8. An apparatus for displaying virtual content, the apparatus comprising:
the system comprises an acquisition module, a display module and a display module, wherein the acquisition module is used for acquiring a first image containing a first marker, and the first marker is a marker arranged on first interaction equipment;
the first display module is used for determining a display area corresponding to the first interaction device according to the first marker in the first image and displaying first virtual content, and the display position of the first virtual content corresponds to the display area;
the selection module is used for acquiring a selection instruction;
and the second display module is used for selecting second virtual content from the first virtual content according to the selection instruction and displaying the second virtual content.
9. A terminal device comprising a display, a memory, and a processor, the display and the memory being coupled to the processor, the memory storing instructions which, when executed by the processor, cause the processor to perform the method of any of claims 1 to 7.
10. A computer-readable storage medium, having stored thereon program code that can be invoked by a processor to perform the method according to any one of claims 1 to 7.
11. A system for displaying virtual content, comprising:
the first interaction device comprises a control panel, wherein the control panel is provided with a first marker and a display area;
a second interaction device provided with a second marker;
the terminal equipment is used for acquiring a first image containing the first marker, displaying first virtual content according to the first marker in the first image, wherein the display position of the first virtual content corresponds to the display area,
the terminal device is further configured to acquire a second image including the second marker, acquire a selection instruction according to the second marker in the second image, select second virtual content from the first virtual content according to the selection instruction, and display the second virtual content.
CN201811226280.2A 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and display system Pending CN111083463A (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201811496843.XA CN111083464A (en) 2018-10-18 2018-10-18 Virtual content display delivery system
CN201811226280.2A CN111083463A (en) 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and display system
PCT/CN2019/111790 WO2020078443A1 (en) 2018-10-18 2019-10-18 Method and system for displaying virtual content based on augmented reality and terminal device
US16/731,055 US11244511B2 (en) 2018-10-18 2019-12-31 Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811226280.2A CN111083463A (en) 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and display system

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN201811496843.XA Division CN111083464A (en) 2018-10-18 2018-10-18 Virtual content display delivery system

Publications (1)

Publication Number Publication Date
CN111083463A true CN111083463A (en) 2020-04-28

Family

ID=70308341

Family Applications (2)

Application Number Title Priority Date Filing Date
CN201811226280.2A Pending CN111083463A (en) 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and display system
CN201811496843.XA Pending CN111083464A (en) 2018-10-18 2018-10-18 Virtual content display delivery system

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN201811496843.XA Pending CN111083464A (en) 2018-10-18 2018-10-18 Virtual content display delivery system

Country Status (1)

Country Link
CN (2) CN111083463A (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113126132B (en) * 2021-04-09 2022-11-25 内蒙古科电数据服务有限公司 Method and system for calibrating and analyzing track in mobile inspection

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100032267A (en) * 2008-09-17 2010-03-25 (주)지아트 Device for authoring augmented reality, method and system for authoring augmented reality using the same
CN102722338A (en) * 2012-06-15 2012-10-10 杭州电子科技大学 Touch screen based three-dimensional human model displaying and interacting method
CN105814626A (en) * 2013-09-30 2016-07-27 Pcms控股公司 Methods, apparatus, systems, devices, and computer program products for providing an augmented reality display and/or user interface
CN107250891A (en) * 2015-02-13 2017-10-13 Otoy公司 Being in communication with each other between head mounted display and real-world objects
CN107847289A (en) * 2015-03-01 2018-03-27 阿里斯医疗诊断公司 The morphology operation of reality enhancing
CN108292146A (en) * 2016-02-08 2018-07-17 谷歌有限责任公司 Laser designator interaction in virtual reality and scaling

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6194711B2 (en) * 2013-09-11 2017-09-13 株式会社リコー Image forming apparatus, printing method, and program
CN106131538B (en) * 2016-06-03 2018-06-26 深圳市乐得瑞科技有限公司 A kind of method and device of audio video synchronization

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111627097A (en) * 2020-06-01 2020-09-04 上海商汤智能科技有限公司 Virtual scene display method and device
CN111627097B (en) * 2020-06-01 2023-12-01 上海商汤智能科技有限公司 Virtual scene display method and device
CN114554276A (en) * 2020-11-26 2022-05-27 中移物联网有限公司 Method, device and system for sharing content between devices
CN114554276B (en) * 2020-11-26 2023-12-12 中移物联网有限公司 Method, device and system for sharing content between devices
CN113421343A (en) * 2021-05-27 2021-09-21 深圳市晨北科技有限公司 Method for observing internal structure of equipment based on augmented reality
CN115268655A (en) * 2022-08-22 2022-11-01 江苏泽景汽车电子股份有限公司 Interaction method and system based on augmented reality, vehicle and storage medium

Also Published As

Publication number Publication date
CN111083464A (en) 2020-04-28


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200428