CN111077983A - Virtual content display method and device, terminal equipment and interactive equipment

Info

Publication number: CN111077983A
Authority: CN (China)
Prior art keywords: virtual content, marker, display, virtual, terminal device
Legal status: Pending (status assumed by Google Patents; not a legal conclusion)
Application number: CN201811217303.3A
Original language: Chinese (zh)
Inventors: 贺杰 (He Jie), 戴景文 (Dai Jingwen)
Assignee (original and current): Guangdong Virtual Reality Technology Co Ltd
Related applications:
  • CN201811217303.3A (priority application; published as CN111077983A)
  • PCT/CN2019/111790 (published as WO2020078443A1)
  • US 16/731,055 (published as US11244511B2)

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: GUI techniques for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/0487: GUI techniques using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
    • G06F 3/0488: GUI techniques using a touch-screen or digitiser, e.g. input of commands through traced gestures

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The application discloses a virtual content display method and apparatus, a terminal device, and an interactive device. The method includes: acquiring a first image containing a first marker, the first marker being disposed on the interactive device; determining, according to the first marker in the first image, a first display area corresponding to the interactive device, and displaying first virtual content whose first display position corresponds to the first display area; and displaying second virtual content corresponding to the first virtual content, whose second display position corresponds to a preset second display area. After virtual content is displayed on the interactive device, the corresponding virtual content can additionally be displayed in another area, improving AR/VR interactivity.

Description

Virtual content display method and device, terminal equipment and interactive equipment
Technical Field
The present application relates to the field of computer technologies, and in particular to a virtual content display method and apparatus, a terminal device, and an interactive device.
Background
With the development of science and technology, intelligent machines and information systems are increasingly widespread, and terminal devices for Virtual Reality (VR) and Augmented Reality (AR) are gradually entering daily life. Augmented reality constructs virtual content that does not exist in the real environment by means of computer graphics and visualization, accurately fuses that content into the real environment through image recognition and positioning, merges the virtual and the real into a single view on a display device, and gives the user a realistic sensory experience. Conventionally, augmented or mixed reality display is performed by superimposing virtual content on an image of the real scene, and interactive control of that virtual content is an important research direction for augmented and mixed reality.
Disclosure of Invention
The present application provides a virtual content display method and apparatus, a terminal device, and an interactive device. By selecting part of the virtual content displayed on the interactive device and displaying the selected part in another area, AR/VR interactivity can be improved.
In a first aspect, an embodiment of the present application provides a virtual content display method, the method including: acquiring a first image containing a first marker, the first marker being disposed on the interactive device; determining, according to the first marker in the first image, a first display area corresponding to the interactive device, and displaying first virtual content, a first display position of the first virtual content corresponding to the first display area; and displaying second virtual content corresponding to the first virtual content, a second display position of the second virtual content corresponding to a preset second display area.
In a second aspect, an embodiment of the present application provides a virtual content display apparatus, the apparatus including: an acquisition module configured to acquire a first image containing a first marker, the first marker being disposed on the interactive device; a first display module configured to determine, according to the first marker in the first image, a first display area corresponding to the interactive device and to display first virtual content, a first display position of the first virtual content corresponding to the first display area; and a second display module configured to display second virtual content corresponding to the first virtual content, a second display position of the second virtual content corresponding to a preset second display area.
In a third aspect, an embodiment of the present application provides a terminal device including a display, a memory, and a processor, the display and the memory being coupled to the processor. The memory stores instructions that, when executed by the processor, cause the processor to perform the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing program code that can be invoked by a processor to perform the method of the first aspect.
In a fifth aspect, an embodiment of the present application provides an interactive device including a control panel provided with a first marker and a touch area. The first marker is used by a terminal device to identify the interactive device and determine first relative pose information between the interactive device and the terminal device; the touch area detects a user's touch action and generates a manipulation instruction corresponding to that action, the manipulation instruction being used to control virtual content displayed by the terminal device.
In the virtual content display method and apparatus, terminal device, and interactive device provided by the embodiments of the present application, a first image containing a first marker is first acquired, the first marker being disposed on the interactive device. A first display area corresponding to the interactive device is then determined according to the first marker in the first image, and first virtual content is displayed, its first display position corresponding to the first display area. Finally, second virtual content corresponding to the first virtual content is displayed, its second display position corresponding to a preset second display area. After virtual content is displayed on the interactive device, the corresponding virtual content can additionally be displayed in another area, improving AR/VR interactivity.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings show only some embodiments of the present application; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic view of an application scenario of a virtual content display system provided in an embodiment of the present application;
Fig. 2 is a schematic view of a scene in which the first virtual content and the second virtual content are displayed in an embodiment of the present application;
Fig. 3 shows a block diagram of a terminal device according to an embodiment of the present application;
Fig. 4 shows an interaction diagram of a terminal device and a server according to an embodiment of the present application;
Fig. 5 is a flowchart illustrating a virtual content display method according to an embodiment of the present application;
Fig. 6 is a schematic diagram of a scene displaying a first virtual content in an embodiment of the present application;
Fig. 7 is a schematic view of a scene displaying a first virtual content and a second virtual content in an embodiment of the present application;
Fig. 8A, 8B, and 8C are schematic diagrams illustrating sliding actions in the touch area of the interactive device in an embodiment of the present application;
Fig. 9 is a block diagram of a virtual content display apparatus provided by an embodiment of the present application;
Fig. 10 is an exploded schematic view of an interactive device provided in an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the present application.
With the development of technologies such as VR (Virtual Reality) and AR (Augmented Reality), related electronic devices are gradually entering daily life. When using a VR/AR device, a camera assembly on the device can capture markers (also called Markers or Tags) in the real environment, and with corresponding image processing, virtual images bound to those markers can be displayed at matching positions on the display screen, giving users a science-fiction-like viewing experience. At present, in some exhibitions and museums that adopt VR/AR technology, virtual scenes and virtual exhibit images of the exhibition halls can be shown to users through worn VR/AR devices. However, the inventors found through research that in a conventional VR/AR scene, controlling the displayed virtual content usually requires operating a controller or turning the head to change the orientation of a device such as a head-mounted display, for example in order to see the virtual content from different viewing angles. This is cumbersome and requires the user to move frequently. Moreover, the real environment limits how the virtual content can be viewed: obstacles or steps around the display position may make it impossible for the user to comfortably view the content from different angles. On this basis, the inventors propose the virtual content display method and apparatus, terminal device, and interactive device of the embodiments of the present application.
The following describes in detail a method, an apparatus, a terminal device, and an interactive device for displaying virtual content according to embodiments of the present application.
Referring to fig. 1, an application scenario of the virtual content display method provided in an embodiment of the present application is shown. The scenario contains a virtual content display system 100, which includes an interactive device 10, a second marker 20, and a terminal device 30.
In this embodiment, the interactive device 10 includes a control panel provided with a first marker 11 and a touch area 12. The first marker 11 is used by the terminal device 30 to identify the interactive device 10 and determine first relative pose information between the interactive device 10 and the terminal device 30; the touch area 12 detects a user's touch action and generates a manipulation instruction corresponding to that action, the manipulation instruction being used to control virtual content displayed by the terminal device 30. One or more first markers 11 may be disposed on the interactive device 10. Both the area where the first marker 11 is disposed and the touch area 12 can serve as the first display area to which the first display position of the first virtual content corresponds.
In some embodiments, the terminal device 30 may capture a first image including the first marker 11 and display first virtual content according to the first marker 11 in the first image, the first display position of the first virtual content corresponding to a first display area on the interactive device 10. Through the terminal device 30, the user then sees the first virtual content superimposed on the first display area of the real-world interactive device 10.

From the captured first image containing the first marker 11, the terminal device 30 can determine the current relative position of the interactive device 10 and the terminal device 30, and display the first virtual content at the corresponding position (the first display position corresponding to the first display area of the interactive device 10).

Further, the image data corresponding to the first virtual content may be pre-stored in the terminal device 30 (or obtained from a server or another terminal) and selected by the user for display. In some application scenarios, a user may first select the virtual content to be displayed through the terminal device 30 or the interactive device 10, then scan the first marker 11 on the interactive device 10 for positioning, and finally have the selected first virtual content displayed at the first display position corresponding to the first display area of the interactive device 10. The first display position of the first virtual content refers to the rendering coordinates of the first virtual content in the virtual space; the corresponding three-dimensional first virtual content can be rendered in the virtual space according to those coordinates, which represent the spatial position relationship between the first virtual content and the terminal device 30 in the virtual space. The first display position may also be the screen display position, on the display screen of the terminal device 30 (or of another device externally connected to it), that corresponds to the first display area of the interactive device 10.

As one approach, the interactive device 10 may be held by the user or fixed on a console for the user to operate while viewing the virtual content. The touch area 12 on the interactive device 10 allows the user to perform touch operations so as to control the first virtual content displayed at the first display position corresponding to the first display area. The interactive device 10 may detect a touch action through the touch area 12, generate a manipulation instruction corresponding to that action, and send it to the terminal device 30. On receiving the manipulation instruction, the terminal device 30 may control the display of the first virtual content accordingly (for example, rotating, displacing, or switching the first virtual content), which helps improve the interactivity between the user and the virtual content.
In this embodiment, the second marker 20 may be disposed in the scene (on the ground, or on the surface of another object such as a display stand). The number of second markers 20 may be one or more, and multiple second markers 20 may be placed in different scenes so that terminal devices in those scenes can identify them and display virtual content.

In some embodiments, the terminal device 30 may capture a second image including the second marker 20, determine a second display area corresponding to the second marker 20 in the second image, and display the second virtual content corresponding to the first virtual content at the display position corresponding to that second display area.
In one possible application scenario, as shown in fig. 2, fig. 2 is a schematic view of a scene in which the terminal device 30 of fig. 1 displays a first virtual content 50 and a second virtual content 60.

In fig. 2, a user wearing the head-mounted terminal device 30 sees, through the terminal device 30, the first virtual content 50 superimposed on the first display area of the real-world interactive device 10 and the second virtual content 60 superimposed on the second display area corresponding to the second marker 20 in the real world. In this embodiment, the first virtual content 50 displayed at the position corresponding to the interactive device 10 may be a virtual 3D medical human body model; by operating the touch area on the interactive device 10, the user simultaneously has the model displayed at the second display position corresponding to the second display area of the second marker 20. The second virtual content 60 may be the whole 3D medical human body model or a part of it. As one approach, the user can change the display manner of the first virtual content 50 through the touch area 12, and the second virtual content 60 changes its display manner to follow. The second virtual content 60 is displayed at the second display position corresponding to the second display area of the second marker 20; through the terminal device, the user sees the virtual 3D medical model displayed in that second display area of the real world (in fig. 2, for example, the area where the second marker 20 is located). It is understood that the first virtual content 50 can be selected and switched by the user: besides the 3D medical human body model, it may be a mechanical model, an art exhibit, a book, a game character, and so on. Correspondingly, the second virtual content 60 may be the same as the first virtual content 50, may be a sub-object of it (such as a mechanical part, a portion of an exhibit, a page of a book, or a piece of game equipment), or may be other virtual content associated with the first virtual content 50.
In some embodiments, the terminal device 30 may be a head-mounted display device, a mobile phone, a tablet, or the like, where the head-mounted display device may be an integrated (standalone) head-mounted display. The terminal device 30 may also be an intelligent terminal, such as a mobile phone, connected to an external head-mounted display device. Referring to fig. 3, as an embodiment, the terminal device 30 may include a processor 31, a memory 32, a display device 33, and a camera 34, the memory 32, the display device 33, and the camera 34 all being connected to the processor 31.
The camera 34 is used for capturing an image of an object to be photographed and sending the image to the processor 31. The camera 34 may be an infrared camera, a color camera, etc., and the specific type of the camera 34 is not limited in the embodiment of the present application.
The processor 31 may comprise any suitable type of general-purpose or special-purpose microprocessor, digital signal processor, or microcontroller. The processor 31 may be configured to receive data and/or signals from various components of the system, for example via a network, and may process them to determine one or more operating conditions in the system. For example, the processor 31 may generate image data of the virtual world from pre-stored image data and send it to the display device 33 for display; it may receive image data sent by an intelligent terminal or a computer over a wired or wireless network and generate and display an image of the virtual world from the received data; and it may perform recognition and positioning based on the image captured by the camera 34, determine the corresponding display content in the virtual world according to the positioning information, and send that content to the display device 33.
The memory 32 may be used to store software programs and modules, and the processor 31 executes various functional applications and data processing by operating the software programs and modules stored in the memory 32. The memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory.
When the terminal device 30 is an external head-mounted display, its display device and camera are connected to an intelligent terminal that provides the memory and the processing functions of the processor. In that case, the processing executed by the processor in the above embodiments is executed by the intelligent terminal's processor, and the data stored by the memory in the above embodiments is stored in the intelligent terminal's memory.
In some embodiments, the terminal device 30 may further include a communication module, which is connected with the processor. The communication module is used for communication between the terminal device 30 and other terminals.
In some embodiments, when a marker (the first marker 11 and/or the second marker 20) is within the field of view of the camera 34 of the terminal device 30, the camera 34 can capture an image of the marker. That image is stored in the terminal device 30 and is used to locate the position of the terminal device 30 relative to the marker.

When the user uses the terminal device 30 and the terminal device 30 has captured a marker image through the camera 34, the processor of the terminal device 30 acquires the image and related information, computes over it to identify the marker, and obtains the position and rotation relationship between the marker and the camera, thereby obtaining the position and rotation of the marker relative to the terminal device 30.
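The patent does not prescribe a particular detection or pose algorithm for this step. As a minimal illustrative sketch only, assuming an ArUco-style square marker and the OpenCV API, the detect-then-solve flow could look like the following; the dictionary choice, marker size, and all names are assumptions, not part of the disclosure:

```python
# Hedged sketch: recover a marker's position/rotation relative to the camera.
import cv2
import numpy as np

MARKER_SIZE = 0.05  # assumed marker edge length in meters
# Marker corner coordinates in the marker's own frame (TL, TR, BR, BL).
OBJECT_POINTS = np.array([
    [-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0],
    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0],
], dtype=np.float32)

def marker_pose(image, camera_matrix, dist_coeffs):
    """Return (marker_id, rvec, tvec) for the first detected marker, else None."""
    detector = cv2.aruco.ArucoDetector(
        cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50))
    corners, ids, _rejected = detector.detectMarkers(image)
    if ids is None:
        return None
    # Solve the perspective-n-point problem: marker pose in the camera frame.
    ok, rvec, tvec = cv2.solvePnP(
        OBJECT_POINTS, corners[0].reshape(4, 2).astype(np.float32),
        camera_matrix, dist_coeffs)
    return (int(ids[0][0]), rvec, tvec) if ok else None
```

Here rvec and tvec give the rotation and translation of the marker relative to the camera, which is the "position and rotation relationship" described above.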
Referring to fig. 4, in some embodiments the terminal device 30 may also be communicatively connected to a server 40 via a network. A client of the AR/VR application runs on the terminal device 30, and the corresponding server side of the application runs on the server 40. As one approach, the server 40 may store the identity information of each marker, the virtual image data bound to the marker of each identity, and the marker's location in the real environment or in a virtual map.

In some embodiments, different terminal devices 30 may also share data and update it in real time through the server 40, improving interactivity among multiple users in an AR/VR scene.
For the above virtual content display system, an embodiment of the present application provides a method for displaying virtual content through the above system, and specifically, please refer to the following embodiments.
Referring to fig. 5, fig. 5 is a flowchart of a virtual content display method provided by an embodiment of the present application. In this method, a first image containing a first marker is acquired, the first marker being disposed on the interactive device; a first display area corresponding to the interactive device is determined according to the first marker in the first image, and first virtual content is displayed, its first display position corresponding to the first display area; finally, second virtual content corresponding to the first virtual content is displayed, its second display position corresponding to a preset second display area. After virtual content is displayed on the interactive device, the corresponding virtual content can additionally be displayed in another area, improving AR/VR interactivity. In a specific embodiment, the method may be applied to the virtual content display apparatus 300 shown in fig. 9 and to the terminal device 30 (fig. 1) configured with the apparatus 300. The flow shown in fig. 5 is described in detail below, taking an HMD (Head-Mounted Display) as an example. The method may specifically include the following steps:
step S101: a first image is acquired that includes a first marker.
In this embodiment, the first marker is a marker disposed on the interactive device.
A marker may be any graphic or object with identifiable feature markings. It may be placed within the field of view of the camera (or other image capture apparatus) of the terminal device, so that the camera can capture an image containing it. After being captured, the image containing the marker can be stored in the terminal device and used to determine information such as the position and posture of the terminal device relative to the marker. A marker may contain at least one sub-marker, a sub-marker being a pattern with a certain shape. In one embodiment, each sub-marker may have one or more feature points; the shape of a feature point is not limited and may be a circle, a triangle, or another shape. Different markers have different distribution rules for their sub-markers, so each marker can carry different identity information. By identifying the sub-markers contained in a marker, the terminal device can obtain the identity information corresponding to that marker and thereby distinguish the relative position information of different markers. The identity information may be a code or other information that uniquely identifies the marker, but is not limited thereto.
In this embodiment, the first marker may be disposed on a control panel of the interactive device, so that the terminal device may identify the first marker and determine a relative position between the terminal device and the interactive device.
In some embodiments, the first marker may be affixed to the surface of the control panel of the interactive device, or integrated into the interactive device. For example, the first marker may be a pattern fixedly presented on the control panel. As another example, when the interactive device has an image display function, the first marker may be a pattern selectively presented in the device's physical image display area (an actual optical display area such as a display screen, as opposed to the first display area used in this embodiment for displaying AR/VR virtual content).
In some embodiments, multiple markers may be disposed on the interactive device to perform different functions or to improve positioning accuracy. For example, some markers are used to locate the relative pose relationships (relative position and posture) between the terminal device and the interactive device, while others bind virtual content for the terminal device to recognize and display.
Step S102: determining a first display area corresponding to the interactive device according to the first marker in the first image, and displaying the first virtual content.
In this embodiment, the first display position of the first virtual content in the virtual space corresponds to the first display area of the interactive device in the real space; through the terminal device, the user sees the first virtual content superimposed on the interactive device.

As one approach, after confirming the current relative position and posture of the first marker and the terminal device from the first image, the terminal device may further determine, based on the known position of the first marker on the interactive device, the relative position and posture relationships between the terminal device and the interactive device and between the terminal device and other areas on the interactive device.

In this embodiment, the first display area corresponding to the interactive device may be an area on the control panel of the interactive device designated for displaying AR/VR virtual content. After determining the position of that first display area, the terminal device can determine its relative spatial position information and superimpose the first virtual content on the corresponding area according to that information.
In one embodiment, the first display area corresponding to the interactive device need not be a panel area on the interactive device; it may instead be a real display space associated with the device, for example a specific spatial region above the interactive device, or a specific region in front of it relative to the user.
In one mode, the image data corresponding to the first virtual content may be pre-stored in the terminal device (or may be acquired from a server or other terminals), and is selected by the user for display. In some embodiments, a user may first select a first virtual content to be displayed through a terminal device or an interactive device, then scan a first marker on the interactive device for positioning, and finally display the selected first virtual content at a first display position corresponding to a first display area of the interactive device.
In some embodiments, the terminal device may construct the virtual content directly or acquire already-constructed virtual content. As one approach, the terminal device constructs the virtual content according to the identity information of the first marker on the interactive device: after identifying the first marker in the image, it obtains the virtual image data corresponding to that identity and builds the virtual content from it, the virtual image data possibly including vertex data, color data, texture data, and the like for modeling. First markers with different identity information can thus display different kinds of virtual content; for example, a first marker with identity "number 1" displays a three-dimensional virtual automobile, while one with identity "number 2" displays a three-dimensional virtual building. As another approach, the virtual image data may be pre-stored in the terminal device, and whenever a first marker is identified, regardless of its identity information, the corresponding virtual content is displayed directly from the pre-stored data. Optionally, the virtual image data may also be stored in different application caches, so that switching between applications displays different kinds of virtual content: for the same marker identity, for example, application A shows a three-dimensional virtual automobile while application B shows a three-dimensional virtual building. It is understood that the specific virtual content displayed can be set according to actual requirements and is not limited to the above approaches.
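As an illustrative sketch of the identity-to-content binding just described (all registry names, IDs, and file paths below are hypothetical):

```python
# Hypothetical binding of marker identity information to virtual image data.
DEFAULT_CONTENT_BY_MARKER_ID = {
    1: "models/virtual_car.glb",       # identity "number 1" -> 3D virtual automobile
    2: "models/virtual_building.glb",  # identity "number 2" -> 3D virtual building
}

# Per-application caches can override the default binding, so the same
# marker identity yields different content in different applications.
CONTENT_BY_APP = {
    "app_a": {1: "models/virtual_car.glb"},
    "app_b": {1: "models/virtual_building.glb"},
}

def resolve_content(app_id: str, marker_id: int) -> str | None:
    """Look up the virtual image data bound to a marker identity."""
    per_app = CONTENT_BY_APP.get(app_id, {})
    return per_app.get(marker_id, DEFAULT_CONTENT_BY_MARKER_ID.get(marker_id))
```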
Specifically, as shown in fig. 6, when the terminal device 30 is an integrated head-mounted display, a user wearing it can see the interactive device 10 through the AR/VR glasses (optical lenses or display screens) and can see the first virtual content 50 superimposed at the position corresponding to the first display area of the interactive device 10 (in fig. 6, the first virtual content 50 is a 3D medical human body model).
Step S103: displaying second virtual content corresponding to the first virtual content.
In this embodiment, the second display position of the second virtual content in the virtual space corresponds to a preset second display area in the real space; through the terminal device, the user sees the second virtual content superimposed on that preset second display area.

After displaying the first virtual content at the first display position corresponding to the first display area, the terminal device may display the second virtual content at the second display position corresponding to the preset second display area. In one embodiment, the second virtual content may be the same as the first virtual content, or may be a part (sub-content) of it; for example, when the first virtual content is a 3D medical human body model, the second virtual content may be that whole model, or an organ, a bone, or the like within it.

In some embodiments, the second display position of the second virtual content may be fixedly associated with a location in the real environment: for example, the terminal device may locate another marker disposed in the environment and display the second virtual content, selected from the first virtual content, at the position corresponding to that marker. In other embodiments, the second display position need not be fixed to a real-world location: a second display area for the second virtual content may be defined in advance at, say, a position 2 meters ahead of the first display area (the position and distance can be arbitrary), that area may be associated with rendering coordinates of the second virtual content in the virtual space, and the second virtual content is then rendered and displayed at the corresponding second display position.
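As a minimal sketch of the relative-placement variant (the 2 m offset follows the example above; the vector math and names are otherwise assumptions):

```python
# Hedged sketch: derive the second display position from the first one.
import numpy as np

def second_display_position(first_pos: np.ndarray,
                            forward: np.ndarray,
                            distance: float = 2.0) -> np.ndarray:
    """Place the second display area `distance` meters ahead of the first.

    Both vectors are in virtual-space (rendering) coordinates; `forward`
    is the direction in which "ahead" is measured.
    """
    forward = forward / np.linalg.norm(forward)
    return first_pos + distance * forward
```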
As one approach, the second display area of the second virtual content may also be juxtaposed to the first display area on the interactive device. For example, the user can see through the terminal device that the first and second virtual contents in different postures are displayed in parallel at the position corresponding to the interactive device in the virtual space.
In one embodiment, the displayed second virtual content may be the same size as the displayed first virtual content, or may be the first virtual content enlarged or reduced. In some embodiments, while the terminal device displays the virtual content, the user may further interact with it in other ways, for example by touching the touch area on the interactive device; data may also be synchronized through a common server across multiple terminal devices, enabling multi-user interaction in the same virtual scene.
In some specific application scenarios, for example when the first virtual content displayed at the position corresponding to the first display area of the interactive device is a vehicle, the user can display an enlarged image of the vehicle at the second display position corresponding to the preset second display area and thus examine details of the vehicle's overall structure more flexibly. This helps the user understand the vehicle's internal decomposition; if the displayed image data is three-dimensional modeling data obtained by physically scanning a faulty vehicle, the method also helps the user locate the fault.
For another example, when the first virtual content displayed at the position corresponding to the first display area of the interactive device is a piece of game equipment, the user may select parts of it, such as weapons or defensive gear, as the second virtual content to be cast out and displayed, and may further appreciate the modeling and special-effect details of the equipment by enlarging or reducing the displayed content, bringing a more realistic game experience.
The above examples are only some practical applications of the virtual content display method provided in this embodiment. It can be understood that, as AR/VR technology develops and spreads further, the method can play a role in many more application scenarios.
According to the virtual content display method provided by the embodiment of the application, after the virtual content is displayed on the interactive device, the corresponding virtual content can be released and displayed on another area, so that the interactivity of AR/VR is improved.
Referring to fig. 7, a flow of another virtual content display method according to an embodiment of the present application is described in detail below. The method may specifically include the following steps:
step S201: a first image is acquired that includes a first marker.
In this embodiment, the terminal device may collect an image including the first marker through the camera module to identify the first marker. It will be appreciated that in other possible embodiments, the terminal device may also identify the first marker by means of other sensor modules.
Step S202: identifying the first marker in the first image, and acquiring first relative pose information of the interactive device and the terminal device.
In this embodiment, the first relative pose information includes relative position information and relative posture information of the interaction device and the terminal device.
In some embodiments, the terminal device may calculate the position information, orientation information, and the like of the first marker relative to the terminal device from the coordinate data of each feature pattern of the first marker in the first image, where the coordinate data may include pixel coordinates or physical coordinates in the first image. As one approach, the terminal device may take the position and posture information of the first marker relative to itself directly as the first relative pose information of the terminal device and the interactive device. Further, the terminal device may combine the position of the first marker on the interactive device as a whole with the marker's position and posture relative to the terminal device, so as to obtain the relative pose relationship between the terminal device and the interactive device more accurately.
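One way to express that combination is as a composition of rigid transforms: the camera-from-marker pose (from the image) composed with the known device-from-marker mounting transform gives the camera-from-device pose. A hedged sketch, assuming 4x4 homogeneous matrices (cv2.Rodrigues can turn the rvec from the earlier PnP sketch into the rotation block):

```python
# Hedged sketch: compose the marker pose with its known mounting position
# on the interactive device to get the device pose in the camera frame.
import numpy as np

def to_homogeneous(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Pack a 3x3 rotation and a 3-vector translation into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation.ravel()
    return T

def device_pose_in_camera(T_cam_marker: np.ndarray,
                          T_device_marker: np.ndarray) -> np.ndarray:
    """First relative pose: the interactive device expressed in the camera frame.

    T_cam_marker: marker pose in the camera (terminal device) frame.
    T_device_marker: where the first marker sits on the interactive device,
    known in advance from the device's design.
    """
    return T_cam_marker @ np.linalg.inv(T_device_marker)
```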
Step S203: determining a first display area corresponding to the interactive device according to the first relative pose information, and displaying the first virtual content.
In this embodiment, after obtaining the first relative pose information, the terminal device may calculate the relative pose relationship between the first display area and the terminal device from the known relationship between that display area and the interactive device as a whole. It then determines the rendering coordinates of the first virtual content from that relationship and renders and displays the first virtual content accordingly; the rendering coordinates represent the relative spatial position, in the virtual space, between the displayed first virtual content and the terminal device.

As one approach, the terminal device may convert the first relative pose relationship between itself and the interactive device in real space into first relative coordinate information in the virtual space. Likewise, it may convert the relative pose relationship between itself and the first display area into virtual-space coordinates and compute the rendering coordinates of the first virtual content from them, so that the first virtual content is displayed accurately on the first display area corresponding to the interactive device.
In some embodiments, the terminal device may first acquire position information of the first display area corresponding to the interactive device, which may include the relative position relationship between the first display area and the first marker on the device. This position information may be pre-stored in the terminal device, or the terminal device may capture an image containing the interactive device, identify the device in the image, and divide out the first display area according to a preset division rule. For example, the first display area may be set as the right half of the interactive device relative to the user, as the area of the device other than the marker area and the touch area, as the entire device, as the area where the first marker is located, or as another area associated with the device as a whole, but is not limited thereto.

After obtaining the relative position relationship between the first display area and the first marker, the terminal device can derive, from the first relative pose information between itself and the interactive device, the relative position relationship between itself and the first display area, thereby determining the first display area corresponding to the interactive device.
Step S204: a second image comprising a second marker is acquired.
In this embodiment, the second marker is a marker disposed in the scene. As one way, the scene may be an AR/VR scene in which the user is currently located in the real environment, and a plurality of different markers may be disposed at a plurality of positions in the scene to implement different functions, where the second marker is one of the different markers and may be used to determine a second display area in which the second virtual content is displayed in superposition with the real environment.
In this embodiment, after the second image including the second marker is acquired, the second display area corresponding to the second marker may be determined according to the second marker in the second image, and the second virtual content corresponding to the first virtual content may be displayed. As one way, after step S204, step S205 may be performed.
Step S205: identifying the second marker in the second image, and acquiring second relative pose information of the terminal device and the second marker.

In this embodiment, the second relative pose information includes relative position information and relative posture information of the second marker and the terminal device.

In some embodiments, the terminal device may calculate the relative position and posture relationship between the second marker and itself from the second image, in a manner similar to the acquisition of the first relative pose information, to obtain the second relative pose information. As one approach, once the second relative pose information is acquired, the second display area corresponding to the second marker can be located.
Step S206: determining a second display area corresponding to the second marker according to the second relative pose information, and displaying second virtual content corresponding to the first virtual content.

In this embodiment, after obtaining the second relative pose information, the terminal device may determine the second display area corresponding to the second marker in a manner similar to the determination of the first display area, determine the rendering coordinates used for displaying the second virtual content in that area, and render and display the second virtual content accordingly; the rendering coordinates represent the relative spatial position, in the virtual space, between the displayed second virtual content and the terminal device.
As one approach, the second virtual content may be enlarged first virtual content to facilitate enlarged viewing of details in the virtual content by a user. The terminal device can amplify the first virtual content according to a preset amplification factor to obtain a second virtual content, and display the second virtual content. In this embodiment, after the first virtual content and the second virtual content are displayed, the user may further interact with the first virtual content and the second virtual content through the interaction device.
Step S207: acquiring a control instruction generated by the interactive device based on a detected touch action and corresponding to that touch action.

Step S208: changing the display manner of the first virtual content and the second virtual content according to the control instruction.
In this embodiment, the touch area on the interactive device detects the user's touch action and generates a control instruction corresponding to it; the terminal device receives the control instruction sent by the interactive device and changes the display manner of the first virtual content and the second virtual content accordingly.
As one way, the display manner of the second virtual content may be changed in synchronization with the first virtual content. As another mode, different touch areas corresponding to the first virtual content and the second virtual content respectively may be further disposed on the control panel of the interactive device, where the different touch areas are used to control display modes of the first virtual content and the second virtual content respectively.
In some embodiments, the touch action may include, but is not limited to, a single-finger swipe, a click, a press, a multi-finger-fit swipe, etc. acting on the touch area. The control instruction is used for controlling the display of the virtual content, which may be to control the virtual content to rotate, enlarge, reduce, and implement a specific action effect, or to switch the virtual content to a new virtual content, or to add a new virtual content to the current augmented reality scene.
In some embodiments, the same touch action may correspond to different control instructions depending on the virtual content: after the touch action detected through the touch area is obtained, the control instruction is generated according to the type of the currently displayed virtual content together with the touch action. For example, when the virtual content is a vehicle: a leftward slide relative to the user generates a control instruction to open the vehicle door; a rightward slide generates an instruction to close the door; and a double tap generates an instruction to turn on the lights. As another example, when the virtual content is a 3D medical human body model: a leftward slide generates an instruction to switch to a 3D muscle anatomical model; a rightward slide generates an instruction to switch to a 3D skeletal anatomical model; and a double tap generates an instruction to switch to a 3D nerve anatomical model.
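A sketch of this content-dependent dispatch (the instruction names and gesture labels are illustrative assumptions):

```python
# Hedged sketch: the same gesture maps to different control instructions
# depending on which virtual content is currently displayed.
GESTURE_COMMANDS = {
    "vehicle": {
        "swipe_left":  "OPEN_DOOR",
        "swipe_right": "CLOSE_DOOR",
        "double_tap":  "LIGHTS_ON",
    },
    "medical_model": {
        "swipe_left":  "SHOW_MUSCLE_ANATOMY",
        "swipe_right": "SHOW_SKELETON_ANATOMY",
        "double_tap":  "SHOW_NERVE_ANATOMY",
    },
}

def control_instruction(content_type: str, gesture: str) -> str | None:
    """Generate the control instruction for a gesture on the current content."""
    return GESTURE_COMMANDS.get(content_type, {}).get(gesture)
```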
Referring to fig. 8A, in some embodiments, while the terminal device displays virtual content, a single-finger slide to the left or right relative to the user in the touch area of the interactive device generates a control instruction for switching to new virtual content, for example switching the currently displayed 3D medical human body model to a virtual building model or a virtual automobile. A single-finger slide up or down relative to the user generates a control instruction for manipulating the current virtual content, for example rotating or translating the currently displayed 3D medical human body model.
Referring to fig. 8B and 8C, in some embodiments, when the detected touch action is two fingers drawing together, a control instruction for reducing the currently displayed virtual content is generated, for example shrinking the currently displayed 3D medical human body model; when the detected touch action is two fingers moving apart, a control instruction for enlarging the currently displayed virtual content is generated, for example magnifying that model.
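A common way to realize such pinch gestures is to scale by the ratio of the two-finger distances; a hedged sketch (the clamping limits are assumptions):

```python
# Hedged sketch: pinch-to-zoom scaling for the displayed virtual content.
def pinch_scale(d_start: float, d_now: float, current_scale: float,
                min_scale: float = 0.1, max_scale: float = 10.0) -> float:
    """Fingers moving apart (d_now > d_start) enlarge the content;
    fingers drawing together shrink it."""
    new_scale = current_scale * (d_now / d_start)
    return max(min_scale, min(max_scale, new_scale))
```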
In some embodiments, the touch area of the interactive device further includes a pressure area provided with a pressure sensor, and the control instruction can be generated from the pressure data detected there. As one approach, the control instruction obtained from the pressure area may make the virtual content display a deformed state, for example as if pressed and deformed by an external force. Further, a preset functional relationship can be used to calculate the deformation of the virtual content from the detected pressure data. For example, the deformation amount may be proportional to the pressure value: the larger the pressure, the larger the deformation, and when the pressure exceeds a set threshold, a preset image display operation may even be performed (for example, the virtual content explodes, disappears, or is switched). Alternatively, the control instruction obtained from the pressure area may control the display state of the virtual content, so that the display state (such as color gradient, brightness, or transparency) changes with the pressure. Specifically, the actual pressure acquired in the pressure area is compared with a preset pressure threshold, and the variation of the display state is controlled according to the ratio of the actual pressure to the threshold.
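A minimal sketch of both pressure mappings (the proportionality constant and threshold are assumptions, not values from the disclosure):

```python
# Hedged sketch: map pressure-area readings onto deformation and display state.
def deformation_from_pressure(pressure: float, k: float = 0.02,
                              threshold: float = 50.0) -> tuple[float, bool]:
    """Return (deformation, triggered). Deformation grows in proportion to
    pressure; `triggered` means the preset threshold was exceeded and a
    special display operation (explode, disappear, switch) should run."""
    if pressure > threshold:
        return 0.0, True
    return k * pressure, False

def display_state_ratio(pressure: float, threshold: float = 50.0) -> float:
    """Ratio of actual pressure to the preset threshold, used to drive a
    display-state change such as color gradient, brightness, or transparency."""
    return min(pressure / threshold, 1.0)
```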
In some embodiments, the touch area of the interaction device may further include a control area, and the displayed virtual content may include a virtual interactive interface; that is, the terminal device presents virtual content with a virtual interactive interface to the user, and the control area corresponds to that interface. The form of the virtual interactive interface may include, but is not limited to: buttons, pop-up windows, lists, etc. In some specific examples, when the terminal device displays the virtual content, the virtual interactive interface may present an interaction menu, with the control area of the interaction device corresponding to that menu, so that selection or/and input on the menu can be implemented by obtaining touch actions in the control area. For example, when the displayed virtual content is a table lamp and the user double-taps the touch area, the virtual interactive interface presents a "turn off the lamp?" selection menu with "yes" and "no" buttons; meanwhile, the touch area includes a first control area corresponding to the "yes" button and a second control area corresponding to the "no" button.
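One possible (assumed) way to bind control areas to the menu buttons in the table-lamp example is a simple hit test over normalized regions; the half-and-half layout below is an illustrative assumption:

```python
# Assumed normalized layout: the left half of the control area maps to the
# "yes" button and the right half to the "no" button of the virtual menu.
MENU_REGIONS = {
    "yes": (0.0, 0.0, 0.5, 1.0),  # (x0, y0, x1, y1)
    "no":  (0.5, 0.0, 1.0, 1.0),
}

def hit_test(x: float, y: float):
    """Return the menu button a normalized touch coordinate falls inside, if any."""
    for button, (x0, y0, x1, y1) in MENU_REGIONS.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return button
    return None
```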
Further, in this embodiment, the terminal device may not only display corresponding virtual content for the user currently operating the interaction device, but also share data with other terminal devices in a short-distance or long-distance environment through a Wi-Fi network, Bluetooth, near field communication, and the like, so that other users who wear terminal devices but do not operate the interaction device can also see the corresponding virtual content.
Step S209: establishing a communication connection with another terminal device.
Step S210: sending virtual image data corresponding to the second virtual content to the other terminal device.
In this embodiment, the virtual image data is used to display the second virtual content in the other terminal device.
In this embodiment, the virtual image data sent to the other terminal device may include virtual model data of the second virtual content and its rendering coordinates in the virtual space. The virtual model data is the data the other terminal device uses to render and display the second virtual content in the virtual space, and may include the colors used to build the model corresponding to the second virtual content, the coordinates of each vertex of the 3D model, and the like.
In some embodiments, the virtual image data may further include a sharing instruction, where the sharing instruction instructs the other terminal device to acquire a third image containing the second marker, identify the second marker, obtain relative spatial position information of the second display area corresponding to the second marker, and display the second virtual content according to the relative spatial position information.
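For illustration only, the shared virtual image data could be organized along the following lines; the field names and types are assumptions rather than the patent's actual data format:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class VirtualImageData:
    # Vertex coordinates and colors of the 3D model of the second virtual content
    vertices: List[Tuple[float, float, float]]
    colors: List[Tuple[float, float, float]]
    # Rendering coordinates of the content in the virtual space
    render_coords: Tuple[float, float, float]
    # Optional sharing instruction, e.g. which marker to re-localize against
    sharing_instruction: Dict[str, str] = field(default_factory=dict)
```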
In some embodiments, depending on the position of the other terminal device, the modes of sharing and displaying the virtual content include at least a near-field sharing mode and a far-field sharing mode.
As an implementation scenario of the near-field sharing mode, when the other terminal device and the main terminal device connected with the interaction device (i.e., the terminal device displaying the first virtual content and the second virtual content in this embodiment) are in the same scene (or the same real environment), both devices can recognize the same second marker set in the scene. In this case, when the main terminal device determines the second display area by recognizing the second marker and displays the second virtual content at the corresponding second display position in the virtual space, it may send the virtual image data corresponding to the second virtual content to the other terminal devices in the same scene through a Wi-Fi network, Bluetooth, near field communication, or the like. After an other terminal device in the same scene acquires the virtual image data of the second virtual content sent by the main terminal device, it can enter the near-field sharing display mode according to the sharing instruction in the virtual image data. The other terminal device may then determine the second display area corresponding to the second marker by acquiring a third image containing the second marker, obtain the relative spatial position information of the second display area, determine a third display position of the second virtual content in the virtual space according to that information, and render and display the second virtual content at the third display position using the acquired virtual image data.
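A minimal sketch of the receive-side re-localization in the near-field mode might look as follows, assuming the device has already recovered a 4x4 camera-from-marker transform for the second marker (the function and variable names are assumptions):

```python
import numpy as np

def third_display_position(marker_pose: np.ndarray, render_coords: np.ndarray) -> np.ndarray:
    """marker_pose: assumed 4x4 camera-from-marker transform recovered from
    the third image; render_coords: shared rendering coordinates in marker space."""
    p = np.append(render_coords, 1.0)  # homogeneous coordinates
    return (marker_pose @ p)[:3]       # display position in the device's view space

# Usage (assumed): pose = result of identifying the second marker in the third
# image; position = third_display_position(pose, shared_data.render_coords).
```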
As an implementation scenario of the far-field sharing mode, when the other terminal device and the main terminal device connected with the interaction device are in different scenes (for example, geographically distant rooms), the main terminal device recognizes the second marker set in the main scene (the scene in which it is located), while the other terminal device recognizes a third marker set in the sub-scene in which it is located. In this case, when the main terminal device determines the second display area by recognizing the second marker and displays the second virtual content at the corresponding second display position in the virtual space, it may send the virtual image data corresponding to the second virtual content to the other terminal devices in the sub-scene through a wireless communication network connected to the same server. When an other terminal device in the sub-scene acquires the virtual image data of the second virtual content sent by the main terminal device, it can enter the far-field sharing display mode according to the sharing instruction in the virtual image data. The other terminal device may then acquire an image containing the third marker, determine the third display area corresponding to the third marker, determine the relative position relationship between the third display area and itself, obtain the third display position in the virtual space corresponding to the third display area according to that relationship, and render and display the second virtual content at the third display position using the acquired virtual image data.
In the far-field sharing mode, as one way, the second marker and the third marker each have an orientation in the real environment (the orientations of a marker, such as front, rear, left and right, may be divided according to the position distribution of the sub-markers and feature points on the marker), and the orientations of the second marker and the third marker may be associated (that is, the front, rear, left and right of the second marker correspond to those of the third marker). For example, when the main terminal device is located in front of the second marker in the main scene, the displayed second virtual content faces the main terminal device with its front; if the other terminal device that has established far-field sharing with the main terminal device is located behind the third marker in the sub-scene, the second virtual content it displays at the third display position in the virtual space corresponding to the third display area may face it with its rear.
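The orientation association could be realized, for example, by expressing the content's orientation in marker space, so that the face a viewer sees depends on where the viewer stands relative to its own marker; the one-line rule below is an assumption illustrating the behavior, not the patent's formula:

```python
# Assumed rule: the content's orientation is fixed in marker space, so the
# face a viewer sees depends on the viewer's bearing relative to its marker.
def content_yaw_in_view(viewer_bearing_deg: float, content_yaw_deg: float = 0.0) -> float:
    """viewer_bearing_deg: 0 means the viewer stands in front of its marker,
    180 means behind it; the result is the face of the content the viewer sees."""
    return (viewer_bearing_deg + content_yaw_deg) % 360.0

# In front of the second marker the main device sees the content's front (0);
# behind the associated third marker another device sees its rear (180).
assert content_yaw_in_view(0.0) == 0.0
assert content_yaw_in_view(180.0) == 180.0
```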
In the far-field sharing mode, as another way, the orientation of the second virtual content displayed by the other terminal devices may be kept consistent with that displayed by the main terminal device. For example, when the main terminal device is in front of the second marker in the main scene and an other terminal device is behind the third marker in the sub-scene, the second virtual content displayed by the main terminal device and by the other terminal device may both face their respective devices with the same side.
In the near-field sharing mode, the second virtual content displayed by the other terminal devices in the same scene may be displayed in a direction different from that seen by the main terminal device, according to each device's own relative spatial position information with respect to the second marker; alternatively, a device may ignore its relative spatial position with respect to the second marker and display a virtual image whose orientation coincides with that of the second virtual content displayed by the main terminal device.
It can be understood that the other terminal devices can display the virtual content at different positions and viewing angles on their respective displays according to their respective relative spatial position information with respect to the second marker or the third marker, or display it at the same position and viewing angle.
As one way, in this embodiment, after the user finishes using the terminal device, the terminal device may further upload the user's operation records during the session (for example, which virtual contents were selected for display and what interactions were performed) to the server in the form of a log, for subsequent uses such as compiling statistics on user preferences and optimizing the virtual display experience.
According to the virtual content display method provided by the embodiment of the present application, the second display position of the second virtual content in the virtual space can be determined through the second marker, and near-field or far-field multi-user interaction with the displayed virtual content can be achieved through shared data, further improving the interactivity between multiple users and the virtual content; the method is applicable to scenarios such as AR/VR teleconferencing and teaching.
Referring to fig. 9, fig. 9 is a block diagram illustrating a display apparatus 300 for virtual content according to an embodiment of the present application. As explained below with reference to the block diagram of fig. 9, the display apparatus 300 includes an acquisition module 310, a first display module 320, and a second display module 330. The acquisition module 310 is configured to acquire a first image containing a first marker, where the first marker is a marker disposed on the interaction device.
The first display module 320 is configured to determine a first display area corresponding to the interaction device according to the first marker in the first image and display first virtual content, where a first display position of the first virtual content corresponds to the first display area. Further, the first display module 320 includes: an identification unit, configured to identify the first marker in the first image and acquire first relative pose information of the interaction device and the terminal device, where the first relative pose information includes relative position information and relative posture information of the interaction device and the terminal device; and a first display unit, configured to determine the first display area corresponding to the interaction device according to the first relative pose information and display the first virtual content. The second display module 330 is configured to display second virtual content corresponding to the first virtual content, where a second display position of the second virtual content corresponds to a preset second display area.
The second display module 330 includes: an acquisition unit, configured to acquire a second image containing a second marker; and a second display unit, configured to determine a second display area corresponding to the second marker according to the second marker in the second image and display second virtual content corresponding to the first virtual content. Further, the second display unit includes: an identification subunit, configured to identify the second marker in the second image and acquire second relative pose information of the terminal device and the second marker, where the second relative pose information includes relative position information and relative posture information of the terminal device and the second marker; and a display subunit, configured to determine the second display area corresponding to the second marker according to the second relative pose information and display the second virtual content corresponding to the first virtual content.
The display apparatus 300 may further include: an acquisition module, configured to obtain a control instruction generated by the interaction device based on a detected touch action and corresponding to the touch action; a changing module, configured to change the display modes of the first virtual content and the second virtual content according to the control instruction; a communication module, configured to establish a communication connection with another terminal device; and a sharing module, configured to send virtual image data corresponding to the second virtual content to the other terminal device, where the virtual image data is used by the other terminal device to display the second virtual content. Furthermore, the virtual image data also instructs the other terminal device to acquire a third image containing the second marker, identify the second marker, obtain relative spatial position information with respect to the second display area, and display the second virtual content according to that information.
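Purely as an illustrative software sketch (class, method, and attribute names are assumptions, not the apparatus's actual implementation), the modules of the apparatus 300 could be wired together as follows:

```python
# A hedged sketch of how the modules of apparatus 300 could be composed in
# software; all class, method, and attribute names are assumptions.
class VirtualContentDisplayApparatus:
    def __init__(self, camera, tracker, renderer):
        self.camera = camera      # supplies images of the real scene
        self.tracker = tracker    # recognizes markers and solves relative pose
        self.renderer = renderer  # draws virtual content in the virtual space

    def acquire_first_image(self):             # acquisition module 310
        return self.camera.capture()

    def display_first_content(self, image):    # first display module 320
        pose = self.tracker.locate(image, "first_marker")   # identification unit
        self.renderer.draw("first_virtual_content", pose)   # first display unit

    def display_second_content(self, image):   # second display module 330
        pose = self.tracker.locate(image, "second_marker")  # identification subunit
        self.renderer.draw("second_virtual_content", pose)  # display subunit
```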
Referring to fig. 10, an interaction device 400 is provided in this embodiment. The interaction device 400 includes a substrate 410, a control panel 430, and a first marker 450; the control panel 430 is disposed on the substrate 410, and the first marker 450 is integrated into the control panel 430. In this embodiment, the interaction device 400 has a substantially flat panel structure. The substrate 410 may be a plate structure or a housing structure and carries the control panel 430.
The control panel 430 is stacked on one side of the substrate 410 and is used to receive the user's manipulation instructions so that the interaction device 400 can generate image control instructions. The control panel 430 is divided into a touch area 431 and a display area 433.
The touch area 431 is used to receive the user's manipulation instructions. In some embodiments, the touch area 431 may include a touch screen; by detecting the touch state of the touch screen, if it is determined that the touch screen generates a touch signal, the touch area 431 is considered to have received a manipulation instruction. In some embodiments, the touch area 431 includes a key; by detecting the pressed state of the key, if it is determined that the key generates a pressure signal, the touch area 431 is considered to have received a manipulation instruction. In some embodiments, there may be a plurality of touch areas 431, which include at least one of a touch screen and a key.
The first marker 450 is a planar marker integrated into the display area 433 of the control panel 430, and may be a predetermined symbol or pattern. The first marker 450 is configured to be recognized by the terminal device and then presented on the terminal device in the form of a virtual model.
The interaction device 400 further includes a filter layer 470 stacked on the side of the first marker 450 facing away from the substrate 410. The filter layer 470 filters out light other than the light emitted by the illumination device of the terminal equipment toward the first marker 450, preventing the reflection from the first marker 450 from being affected by ambient light and thereby making the first marker 450 easier to recognize. In some embodiments, the filtering performance of the filter layer 470 may be set according to actual needs. For example, when the first marker 450 enters the field of view of a camera to be identified, the camera usually uses an auxiliary light source to assist image capture in order to improve recognition efficiency. When an infrared light source is used for assistance, the filter layer 470 filters out light other than infrared light (such as visible light and ultraviolet light), so that only infrared light can pass through the filter layer 470 and reach the first marker 450. When the auxiliary light source projects infrared light onto the first marker 450, the filter layer 470 blocks the ambient light other than infrared light, so that only infrared light reaches the first marker 450 and is reflected by it to the near-infrared image capturing device, reducing the influence of ambient light on the recognition process.
In some embodiments, the control panel 430 may further include a pressure area (not shown) provided with a pressure sensor for sensing external pressure, so that the interaction device 400 generates a corresponding control instruction according to the external pressure. The pressure area may be disposed in a partial area of the control panel 430, may completely cover the surface of the control panel 430, and may overlap with, or be parallel to and spaced apart from, the touch area 431 or/and the display area 433.
In the embodiment shown in fig. 10, the touch area 431 and the display area 433 are arranged side by side, and the first marker 450 overlaps the display area 433. In other embodiments, the touch area 431 and the display area 433 may be disposed in an overlapping manner; for example, the touch area 431 is a transparent touch screen overlaid on the display area 433, and the first marker 450 is disposed between the touch area 431 and the display area 433, which can further reduce the volume of the interaction device 400 and improve its portability.
An embodiment of the present application provides a terminal device, which includes a display, a memory, and a processor, where the display and the memory are coupled to the processor, and the memory stores instructions that, when executed by the processor, cause the processor to perform: acquiring a first image containing a first marker, wherein the first marker is a marker disposed on the interaction device; determining a first display area corresponding to the interaction device according to the first marker in the first image, and displaying first virtual content, wherein a first display position of the first virtual content corresponds to the first display area; and displaying second virtual content corresponding to the first virtual content, wherein a second display position of the second virtual content corresponds to a preset second display area.
An embodiment of the present application provides a computer-readable storage medium storing program code, where the program code can be invoked by a processor to perform: acquiring a first image containing a first marker, wherein the first marker is a marker disposed on the interaction device; determining a first display area corresponding to the interaction device according to the first marker in the first image, and displaying first virtual content, wherein a first display position of the first virtual content corresponds to the first display area; and displaying second virtual content corresponding to the first virtual content, wherein a second display position of the second virtual content corresponds to a preset second display area.
The specific pattern displayed by the marker in the present application is not limited, and may be any pattern that can be captured by the camera of the terminal device. For example, the specific pattern of the marker may be a combination of one or more of the following: a geometric figure (such as a circle, triangle, rectangle, ellipse, wavy line, straight line or curve), a predetermined pattern (such as an animal head or a common schematic symbol like a traffic sign), or any other pattern that the camera can resolve to form the marker, without being limited to the description in this specification. In other embodiments, the marker may also be a bar code, a two-dimensional code, or the like.
To sum up, according to the virtual content display method and apparatus, terminal device, and interaction device provided in the embodiments of the present application, a first image containing a first marker disposed on the interaction device is first acquired; a first display area corresponding to the interaction device is then determined according to the first marker in the first image, and first virtual content is displayed, with its first display position corresponding to the first display area; a selection instruction is acquired; and finally, second virtual content is selected from the first virtual content according to the selection instruction and displayed. After the virtual content is displayed on the interaction device, the corresponding virtual content can be released and displayed in another area, improving the interactivity of AR/VR.
It should be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments. In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced, and such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application. The above-described embodiments and the features of the respective embodiments may be combined with each other without conflict; the present application is not limited to the specific embodiments described above.

Claims (10)

1. A method for displaying virtual content, the method comprising:
acquiring a first image containing a first marker, wherein the first marker is a marker disposed on an interaction device;
determining a first display area corresponding to the interaction device according to a first marker in the first image, and displaying first virtual content, wherein a first display position of the first virtual content corresponds to the first display area;
and displaying second virtual content corresponding to the first virtual content, wherein a second display position of the second virtual content corresponds to a preset second display area.
2. The method of claim 1, wherein determining a first display area corresponding to the interaction device according to the first marker in the first image and displaying first virtual content comprises:
identifying the first marker in the first image, and acquiring first relative pose information of the interaction device and a terminal device, wherein the first relative pose information comprises relative position information and relative posture information of the interaction device and the terminal device;
and determining a first display area corresponding to the interaction device according to the first relative pose information, and displaying first virtual content.
3. The method of claim 2, wherein displaying the second virtual content corresponding to the first virtual content comprises:
acquiring a second image comprising a second marker;
and determining a second display area corresponding to the second marker according to the second marker in the second image, and displaying second virtual content corresponding to the first virtual content.
4. The method of claim 3, wherein determining a second display region corresponding to the second marker from the second marker in the second image and displaying a second virtual content corresponding to the first virtual content comprises:
identifying the second marker in the second image, and acquiring second relative pose information of the terminal device and the second marker, wherein the second relative pose information comprises relative position information and relative posture information of the terminal device and the second marker;
and determining a second display area corresponding to the second marker according to the second relative pose information, and displaying second virtual content corresponding to the first virtual content.
5. The method of claim 1, further comprising:
acquiring a control instruction which is generated by the interaction equipment based on the detected touch action and corresponds to the touch action;
and changing the display modes of the first virtual content and the second virtual content according to the control instruction.
6. The method of claim 1, further comprising:
establishing communication connection with other terminal equipment;
and sending virtual image data corresponding to the second virtual content to the other terminal equipment, wherein the virtual image data is used for displaying the second virtual content in the other terminal equipment.
7. The method according to claim 6, wherein the virtual image data is further used for instructing the other terminal device to acquire a third image containing a second marker, identify the second marker, obtain relative spatial position information with respect to the second display area, and display the second virtual content according to the relative spatial position information.
8. An apparatus for displaying virtual content, the apparatus comprising:
an acquisition module, configured to acquire a first image containing a first marker, wherein the first marker is a marker disposed on an interaction device;
the first display module is used for determining a first display area corresponding to the interaction device according to the first marker in the first image and displaying first virtual content, wherein a first display position of the first virtual content corresponds to the first display area;
and the second display module is used for displaying second virtual content corresponding to the first virtual content, and a second display position of the second virtual content corresponds to a preset second display area.
9. A terminal device comprising a display, a memory, and a processor, the display and the memory coupled to the processor, the memory storing instructions that, when executed by the processor, the processor performs the method of any of claims 1-7.
10. An interaction device, characterized in that the interaction device comprises:
a control panel, wherein the control panel is provided with a first marker and a touch area; the first marker is used for a terminal device to identify and determine first relative pose information of the interaction device and the terminal device; the touch area is used for detecting a touch action of a user to generate a control instruction corresponding to the touch action; and the control instruction is used for controlling virtual content displayed by the terminal device.
CN201811217303.3A 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and interactive equipment Pending CN111077983A (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN201811217303.3A CN111077983A (en) 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and interactive equipment
PCT/CN2019/111790 WO2020078443A1 (en) 2018-10-18 2019-10-18 Method and system for displaying virtual content based on augmented reality and terminal device
US16/731,055 US11244511B2 (en) 2018-10-18 2019-12-31 Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811217303.3A CN111077983A (en) 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and interactive equipment

Publications (1)

Publication Number Publication Date
CN111077983A true CN111077983A (en) 2020-04-28

Family

ID=70308833

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811217303.3A Pending CN111077983A (en) 2018-10-18 2018-10-18 Virtual content display method and device, terminal equipment and interactive equipment

Country Status (1)

Country Link
CN (1) CN111077983A (en)

Citations (4)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20100032267A (en) * 2008-09-17 2010-03-25 (주)지아트 Device for authoring augmented reality, method and system for authoring augmented reality using the same
CN102722338A (en) * 2012-06-15 2012-10-10 杭州电子科技大学 Touch screen based three-dimensional human model displaying and interacting method
CN107250891A (en) * 2015-02-13 2017-10-13 Otoy公司 Being in communication with each other between head mounted display and real-world objects
CN107847289A (en) * 2015-03-01 2018-03-27 阿里斯医疗诊断公司 The morphology operation of reality enhancing

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NI Dejing et al.: "Research on key technologies of robot teleoperation based on virtual reality", Chinese Journal of Scientific Instrument *

Similar Documents

Publication Publication Date Title
US10175492B2 (en) Systems and methods for transition between augmented reality and virtual reality
CN106062862B (en) System and method for immersive and interactive multimedia generation
CN111083463A (en) Virtual content display method and device, terminal equipment and display system
US11244511B2 (en) Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device
US9857589B2 (en) Gesture registration device, gesture registration program, and gesture registration method
US11049324B2 (en) Method of displaying virtual content based on markers
CN111766937B (en) Virtual content interaction method and device, terminal equipment and storage medium
US20160314624A1 (en) Systems and methods for transition between augmented reality and virtual reality
JP3926837B2 (en) Display control method and apparatus, program, and portable device
CN110456907A (en) Control method, device, terminal device and the storage medium of virtual screen
US10096166B2 (en) Apparatus and method for selectively displaying an operational environment
US10296359B2 (en) Interactive system control apparatus and method
WO2015082015A1 (en) Occluding augmented reality objects
JP2013141207A (en) Multi-user interaction with handheld projectors
CN110442245A (en) Display methods, device, terminal device and storage medium based on physical keyboard
WO2022022029A1 (en) Virtual display method, apparatus and device, and computer readable storage medium
KR100971667B1 (en) Apparatus and method for providing realistic contents through augmented book
CN111383345B (en) Virtual content display method and device, terminal equipment and storage medium
CN111566596A (en) Real world portal for virtual reality display
KR101983233B1 (en) Augmented reality image display system and method using depth map
JP7499819B2 (en) Head-mounted display
CN113168228A (en) Systems and/or methods for parallax correction in large area transparent touch interfaces
CN113470190A (en) Scene display method and device, equipment, vehicle and computer readable storage medium
KR101665363B1 (en) Interactive contents system having virtual Reality, augmented reality and hologram
CN111077985A (en) Interaction method, system and interaction device for virtual content

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200428