WO2019019248A1 - Virtual reality interaction method, device and system - Google Patents

Virtual reality interaction method, device and system

Info

Publication number
WO2019019248A1
WO2019019248A1 (PCT/CN2017/099508)
Authority
WO
WIPO (PCT)
Prior art keywords
information
users
posture
virtual scene
coordinate
Prior art date
Application number
PCT/CN2017/099508
Other languages
French (fr)
Chinese (zh)
Inventor
谢冰
许秋子
Original Assignee
深圳市瑞立视多媒体科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市瑞立视多媒体科技有限公司 filed Critical 深圳市瑞立视多媒体科技有限公司
Priority to CN201780000956.3A priority Critical patent/CN107820593B/en
Publication of WO2019019248A1 publication Critical patent/WO2019019248A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics

Definitions

  • the present invention belongs to the field of virtual reality technologies, and in particular, to a virtual reality interaction method, apparatus, and system.
  • an existing virtual reality interaction technology captures multiple pieces of image information of a user and then acquires the user's spatial position and posture information from that image information. Finally, the virtual scene is obtained according to the user's spatial position and posture, and the virtual scene is displayed.
  • because multi-person interaction causes mutual occlusion, when users occlude each other the user's spatial position and posture can no longer be tracked, the user cannot be located, and rendering of the virtual scene fails.
  • a first aspect of the embodiments of the present invention provides a virtual reality interaction method, including:
  • a second aspect of the embodiments of the present invention provides a virtual reality interaction apparatus, including:
  • a receiving module, configured to receive image information from the motion capture cameras, and to receive sensing information collected by at least one collector and transmitted through the corresponding virtual scene client;
  • an obtaining module, configured to acquire spatial location information and posture information of all users according to the image information and the sensing information;
  • a transmission module, configured to transmit the spatial location information and the posture information to all virtual scene clients, so that each of the clients renders the virtual scene according to the spatial location information, the posture information, and the local user's perspective information and displays it to the local user; the local user is one of all users.
  • a third aspect of the embodiments of the present invention provides a virtual reality interaction system, the system comprising: at least two motion capture cameras, at least one collector, at least one virtual scene client, at least one helmet display, and a camera server; wherein,
  • the motion capture camera is configured to capture image information of a user and transmit the image information to the camera server;
  • at least one of the collectors is configured to collect sensing information of a user and transmit it to the virtual scene client corresponding to that user;
  • at least one of the virtual scene clients is configured to receive the sensing information from the corresponding collector and transmit it to the camera server;
  • the camera server is configured to acquire spatial location information and posture information of all users according to the image information and the sensing information, and to transmit the spatial location information and the posture information to all virtual scene clients, so that each of the clients renders the virtual scene according to the spatial location information, the posture information, and the local user's perspective information and displays it to the local user; the local user is one of all users.
  • a fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the virtual reality interaction method described above.
  • [0019] first, image information from the motion capture cameras is received, together with sensing information collected by at least one collector and transmitted through the corresponding virtual scene client; the spatial location information and posture information of all users are then acquired according to the image information and the sensing information; the spatial location information and posture information are then transmitted to all virtual scene clients, so that each virtual scene client renders the virtual scene according to the spatial location information, the posture information, and the local user's perspective information and displays it to the local user; the local user is one of all users;
  • because positioning fuses two sets of data, optical and inertial, users can be accurately located even when they occlude each other.
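  • to make this data flow concrete, the following is a minimal sketch of the receive-fuse-broadcast loop summarized above. It is an illustration only, not the patent's implementation: the names (UserState, fuse, server_tick), the data layout, and the simple occlusion fallback are all assumptions.

```python
# Minimal, illustrative sketch (not the patent's implementation) of the
# camera server loop: fuse optical and inertial data per user, then
# broadcast every user's state to all virtual scene clients.
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class UserState:
    position: Vec3      # spatial position
    face_yaw: float     # posture info, e.g. face orientation in degrees

def fuse(optical_pos: Optional[Vec3], velocity: Vec3, dt: float,
         last: UserState) -> UserState:
    if optical_pos is not None:
        # Markers visible: trust the optical position directly.
        return UserState(optical_pos, last.face_yaw)
    # Markers occluded: dead-reckon from the last state and the IMU data.
    predicted = tuple(p + v * dt for p, v in zip(last.position, velocity))
    return UserState(predicted, last.face_yaw)

def server_tick(camera_frames: Dict[str, Optional[Vec3]],
                imu: Dict[str, Vec3], dt: float,
                last_states: Dict[str, UserState],
                clients) -> Dict[str, UserState]:
    # Step 102: compute every user's state from both data sources.
    states = {uid: fuse(camera_frames.get(uid), imu[uid], dt, last_states[uid])
              for uid in imu}
    # Step 103: each client receives ALL states and renders locally.
    for client in clients:
        client.send(states)
    return states
```

  • the essential property claimed above is visible in fuse: an optical fix is used whenever one is available, and the inertial data keeps the estimate alive when the markers are occluded.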
  • FIG. 1 is a schematic diagram of an implementation flow of a virtual reality interaction method according to an embodiment of the present invention
  • FIG. 2 is another schematic diagram of an implementation process of a virtual reality interaction method according to an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a virtual reality interaction device according to an embodiment of the present invention.
  • FIG. 4 is a schematic diagram of a virtual reality interaction device acquisition module according to an embodiment of the present invention.
  • FIG. 5 is a schematic diagram of a second computing module in a virtual reality interaction device acquiring module according to an embodiment of the present invention
  • FIG. 6 is a schematic diagram of a first computing module in a virtual reality interaction device acquiring module according to an embodiment of the present invention
  • FIG. 7 is a schematic structural diagram of a virtual reality interaction system according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of a server terminal of a virtual reality interactive system according to an embodiment of the present invention.
  • FIG. 1 is a schematic flowchart of a first embodiment of a virtual reality interaction method according to an embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, detailed as follows:
  • Step 101: Receive image information from the motion capture cameras, and receive sensing information collected by at least one collector and transmitted through the corresponding virtual scene client.
  • in a specific implementation, the execution body of this embodiment may be the server of the motion capture cameras (also referred to as the camera server).
  • the number of virtual scene clients is the same as the number of users.
  • in existing virtual reality interaction, the virtual scene is mainly a game. It can be understood that the virtual scene in the embodiments of the present invention is not limited to games and may also be a virtual scene in other application fields, such as live-streaming rooms, education and training, and military exercises.
  • in addition, in virtual reality interaction based on optical motion capture, an optical imaging system (multiple motion capture cameras) can identify active (or passive) optical marker points attached to one or more observed objects (people or weapons); the cameras' image acquisition systems compute the image information of the marker points, which is then transmitted over a network (wired, wireless, USB, etc.) to the server of the motion capture cameras (the camera server).
  • the camera server receives the image information from the motion capture cameras, where the received image information may include the position coordinate information of all users in the virtual scene.
  • the server identifies the observed objects from the position coordinate information, acquires the users' location information, and thereby locates the users. It can be understood that, for the server to locate a user, the received image information of the same user must come from at least two different motion capture cameras.
  • on the other hand, the collector may specifically be an inertial navigation unit such as a gyroscope attached to the user; after the user's velocity and acceleration information is acquired through the gyroscope, it can be sent by wire or wirelessly (for example, over Bluetooth) to the client corresponding to that user.
  • one user corresponds to one virtual scene client.
  • the client then forwards the sensing information to the server of the motion capture cameras.
  • the sensing information may include the velocity and acceleration information of all users, and the acceleration information may specifically be six-axis acceleration.
  • the client can be a backpack host carried on the user's back, so that during virtual interaction the user is freed from the constraint of traditional cables and gains a larger activity space.
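  • as a rough illustration of this collector-to-server path (gyroscope samples to the backpack client, then onward to the camera server), consider the sketch below; the transport (UDP), address, packet layout, and 50 Hz rate are assumptions for illustration, not details given by the patent.

```python
# Illustrative sketch of the backpack client forwarding six-axis IMU
# samples from the user's gyroscope to the camera server. The transport
# details (UDP, port, JSON layout, sample rate) are assumptions.
import json
import socket
import time

SERVER_ADDR = ("192.168.1.10", 9000)   # assumed camera-server endpoint

def read_gyroscope():
    """Placeholder for the Bluetooth read of one six-axis sample
    (3-axis angular velocity + 3-axis linear acceleration)."""
    return {"gyro": (0.0, 0.0, 0.0), "accel": (0.0, 0.0, -9.81)}

def forward_sensing(user_id: str, rate_hz: float = 50.0):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / rate_hz
    while True:
        sample = read_gyroscope()
        packet = {"user": user_id, "t": time.time(), **sample}
        sock.sendto(json.dumps(packet).encode(), SERVER_ADDR)
        time.sleep(period)
```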
  • Step 102: Acquire spatial location information and posture information of all users according to the image information and the sensing information.
  • the gesture information includes the face orientation of all users.
  • after the server receives the image information from the motion capture cameras and the sensing information from the collectors, it can calculate the users' spatial location information and posture information from these two pieces of information.
  • Step 103: Transmit the spatial location information and the posture information to all virtual scene clients, so that each virtual scene client renders the virtual scene according to the spatial location information, the posture information, and the local user's perspective information, and displays it to the local user.
  • the local user is one of all users.
  • as described above, one user corresponds to one client (one user carries one backpack host), and the user corresponding to a client is its local user.
  • the server can transmit the spatial location information and posture information of all users to each client through the network.
  • after receiving the spatial location information and posture information of all users, each client can combine them with the local user's perspective information to render a virtual scene suited to the local user's perspective and display it to the local user through the helmet worn by that user.
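  • a toy sketch of this per-client step follows: every client holds the same shared state for all users but derives its view from its own local user's pose. The field-of-view test and pose layout are assumptions standing in for a real rendering engine.

```python
# Illustrative sketch: every client receives the SAME global state but
# renders it from its OWN local user's viewpoint. Here rendering is
# reduced to a toy visibility test in the horizontal plane.
import math

def visible_avatars(states, local_id, fov_deg=90.0):
    """states: {user_id: ((x, y, z), face_yaw_deg)} for ALL users."""
    (ex, ey, _), yaw_deg = states[local_id]
    yaw = math.radians(yaw_deg)
    fx, fy = math.cos(yaw), math.sin(yaw)          # local forward vector
    cos_half_fov = math.cos(math.radians(fov_deg / 2))
    out = []
    for uid, ((x, y, _), _heading) in states.items():
        if uid == local_id:
            continue
        dx, dy = x - ex, y - ey
        dist = math.hypot(dx, dy) or 1.0
        # Other users inside the local user's field of view get drawn.
        if (dx * fx + dy * fy) / dist >= cos_half_fov:
            out.append(uid)
    return out

# Two users facing each other: each sees the other from its own side.
poses = {"u1": ((0.0, 0.0, 1.7), 0.0), "u2": ((2.0, 0.0, 1.7), 180.0)}
print(visible_avatars(poses, "u1"))   # ['u2']
```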
  • in the virtual reality interaction method of the embodiment of the present invention, the server achieves tracking and positioning of users by fusing the two data sets, the image information from the motion capture cameras and the sensing information collected by the collectors, and then maps each user's physical position information and posture information into the virtual space created by the graphics engines of the virtual scene clients and the server, thereby completing real-time interaction.
  • because positioning uses both the image information collected by the cameras and the sensing information collected by the collectors, users can be accurately located even when multi-person interaction produces occlusion, avoiding the problem that mutual occlusion during multi-person interaction causes optical marker points to be lost and positioning to fail.
  • it should be noted that, because the user moves continuously during virtual interaction, the system also needs to collect the image information of the next frame and the sensing information of the next moment, acquire the user's spatial position information and posture information at that moment, and update the virtual scene promptly according to the user's real-time motion state to achieve an immersive interaction.
  • therefore, after step 103 is executed, the flow can return to step 101 and continue.
  • as described above, the image information from the motion capture cameras includes the position coordinate information of all users, and the sensing information from the collectors may include the velocity and acceleration information of all users.
  • then, when performing step 102, that is, calculating the users' spatial position information and posture information, the operation may specifically be: filtering the position coordinate information and the velocity and acceleration information to obtain the spatial position information and posture information of all users. This is described in detail below through the second embodiment, shown in FIG. 2.
  • Step 201: Receive image information from the motion capture cameras, and receive sensing information collected by at least one collector and transmitted through the corresponding virtual scene client.
  • Step 202: Filter the position coordinate information and the velocity and acceleration information to obtain the spatial position information and posture information of all users.
  • the position coordinate information from the motion capture cameras may include current coordinate position information and historical coordinate position information, where the historical coordinate position information is coordinate position information captured by the cameras in the past. There are then two ways to calculate the user's spatial position information and posture information:
  • the first way is: whether or not occlusion occurs, both data sets (the position coordinate information, and the velocity and acceleration information) are used when calculating the users' spatial position information and posture information; the position coordinate information used may be the current coordinate position information (when no occlusion occurs) or the historical coordinate position information (when occlusion occurs). In this case, the velocity and acceleration information together with the current or historical position coordinate information are filtered to obtain the spatial position information and posture information of all users.
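  • one common way to realize such filtering is a constant-velocity, Kalman-style update that propagates each user's state with the inertial data and corrects it with whichever optical position (current or historical) is available; the sketch below, including the fixed gain, is an assumed illustration, as the patent does not prescribe a particular filter.

```python
# Assumed illustration of the filtering step: propagate each user's state
# with the IMU's velocity/acceleration, then correct it with whichever
# optical position measurement (current or historical) is available.
def filter_step(pos, vel, accel, optical_pos, dt, gain=0.6):
    # Inertial prediction: x' = x + v*dt + 0.5*a*dt^2
    pred = [p + v * dt + 0.5 * a * dt * dt
            for p, v, a in zip(pos, vel, accel)]
    if optical_pos is None:
        return pred                      # no optical fix: keep prediction
    # Correction: pull the prediction toward the optical measurement.
    # `gain` plays the role of a (fixed) Kalman gain here.
    return [p + gain * (z - p) for p, z in zip(pred, optical_pos)]

# Example: one 20 ms step with both data sources available.
new_pos = filter_step(pos=[1.0, 0.0, 1.7], vel=[0.5, 0.0, 0.0],
                      accel=[0.0, 0.0, 0.0],
                      optical_pos=[1.02, 0.0, 1.7], dt=0.02)
```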
  • the second way is: when no occlusion occurs, one data set (the current coordinate position information) is used for positioning; when occlusion occurs, two data sets (the historical coordinate position information, and the velocity and acceleration information) are used.
  • therefore, before positioning begins, it must be determined whether the spatial position information and posture information of all users can be calculated from the current position coordinate information. It should be noted here that when occlusion occurs there may be no current position coordinate information at all, or only partial position coordinate information that is insufficient for positioning.
  • if it is determined that users can be located from the current position coordinate information, the spatial position information and posture information of all users are calculated directly from the current position coordinate information. If it is determined that users cannot be located from the current position coordinate information, the spatial position information and posture information of all users must be calculated from the historical position coordinate information and the users' velocity and acceleration information. A sketch of this branch is shown below.
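  • this branch reduces to a locatability check followed by one of two computations. The sketch below assumes a minimum count of visible marker points as the check, and a centroid/dead-reckoning pair as the two computations; all of these specifics are illustrative assumptions.

```python
# Assumed sketch of the second mode: use purely optical positioning when
# enough marker points are visible, otherwise fall back to prediction
# from historical coordinates plus velocity/acceleration.
MIN_MARKERS = 3   # assumed threshold for "user can be located optically"

def locate_user(marker_positions, last_pos, vel, accel, dt):
    """marker_positions: 3D marker coords visible this frame (may be empty)."""
    if len(marker_positions) >= MIN_MARKERS:
        # Determination "yes": locate from current optical data alone,
        # here as the centroid of the rigid body's marker points.
        n = len(marker_positions)
        return [sum(axis) / n for axis in zip(*marker_positions)]
    # Determination "no": predict from history + inertial data (C1-1/C1-2).
    return [p + v * dt + 0.5 * a * dt * dt
            for p, v, a in zip(last_pos, vel, accel)]
```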
  • specifically, when calculating the spatial position information and posture information of all users from the current position coordinate information, the operation may include, for example, steps B1-1 to B1-3, as follows:
  • Step B1-1: Extract the two-dimensional coordinate information of the multiple marker points from the current position coordinate information.
  • the multiple marker points may be the optical spheres of a rigid body in optical motion tracking; an optical tracking system requires the tracked object to be fitted with optical spheres laid out in a geometric distribution. A combination of these spheres is configured as a rigid body recognized by the system; for example, a person in the virtual scene can be represented by multiple spheres, and different spatial arrangements of the spheres represent different people to the tracking system.
  • Step B1-2: Calculate the three-dimensional coordinate information of the multiple marker points according to the two-dimensional coordinate information.
  • step B1-2 may be specifically implemented by matching the key points of the images acquired by the multiple motion capture cameras at the same moment, and then converting the two-dimensional coordinate information of the matched marker points into three-dimensional coordinate information in the same space through the principle of triangulation (a triangulation sketch follows step B1-3 below).
  • Step B1-3: Acquire the spatial position information and posture information of all users according to the three-dimensional coordinate information of the multiple marker points and a preset algorithm.
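  • the triangulation named in step B1-2 can be sketched with the standard direct linear transform (DLT): each calibrated camera view of a matched marker contributes two linear constraints on the marker's 3D position. The calibrated 3x4 projection matrices are assumed inputs; the patent itself only invokes the triangulation principle.

```python
# Sketch of triangulation (step B1-2) via the direct linear transform:
# recover one marker's 3D position from its matched 2D observations in
# two or more calibrated motion capture cameras.
import numpy as np

def triangulate(projections, points_2d):
    """projections: list of 3x4 camera projection matrices (calibrated).
    points_2d: matching list of (u, v) pixel coordinates of one marker."""
    rows = []
    for P, (u, v) in zip(projections, points_2d):
        # Each view contributes two linear constraints on X = (x, y, z, 1).
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Least-squares solution: the right singular vector with the smallest
    # singular value, then dehomogenize.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```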
  • correspondingly, when calculating the spatial position information and posture information of all users from the historical position coordinate information and the users' velocity and acceleration information, the operation may include, for example, steps C1-1 and C1-2:
  • Step C1-1: Predict the current position coordinate information based on the historical position coordinate information.
  • Step C1-2: Calculate the spatial position information and posture information of all users based on the predicted current position coordinate information and the velocity and acceleration information.
  • Step C1-1 may be specifically: predicting current position coordinate information according to historical position coordinate information and historical speed and acceleration information.
  • Step C1-2 may be specifically: calculating spatial position information and posture information of all users according to the predicted current position coordinate information and the current speed and acceleration information.
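  • read together, steps C1-1 and C1-2 amount to dead reckoning: estimate motion from the stored coordinate history, then refine with the live inertial data. The sketch below is one assumed realization; the averaging used to combine the two predictions is an illustrative choice, not the patent's formula.

```python
# Assumed sketch of steps C1-1/C1-2: predict the occluded user's current
# position from historical coordinates, then refine with current IMU data.
def predict_position(history, current_vel, current_accel, dt):
    """history: list of past positions, oldest first, sampled every dt."""
    last = history[-1]
    # C1-1: velocity implied by the last two historical samples.
    hist_vel = [(b - a) / dt for a, b in zip(history[-2], last)]
    coarse = [p + v * dt for p, v in zip(last, hist_vel)]
    # C1-2: blend in the current inertial data (x += v*dt + 0.5*a*dt^2).
    fine = [p + v * dt + 0.5 * a * dt * dt
            for p, v, a in zip(last, current_vel, current_accel)]
    # Average the two predictions as a simple fusion (an assumption).
    return [(c + f) / 2 for c, f in zip(coarse, fine)]

# Example: 20 ms step for a user walking along x.
print(predict_position([[0.0, 0.0, 1.7], [0.01, 0.0, 1.7]],
                       current_vel=[0.5, 0.0, 0.0],
                       current_accel=[0.0, 0.0, 0.0], dt=0.02))
```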
  • Step 203: Transmit the spatial location information and the posture information to all virtual scene clients, so that each virtual scene client renders the virtual scene according to the spatial location information, the posture information, and the local user's perspective information, and displays it to the local user.
  • in this embodiment, because the server uses both the image information collected by the cameras and the sensing information collected by the collectors, users can be accurately located even when occlusion occurs during multi-person interaction, avoiding the problem that mutual occlusion during multi-person interaction causes optical marker points to be lost.
  • correspondingly, an embodiment of the present invention further provides a virtual reality interaction device, which may be, for example, the server of the motion capture cameras.
  • the virtual reality interaction device 30 includes a receiving module 310, an obtaining module 320, and a transmitting module 330.
  • the receiving module 310 is configured to receive image information from the motion capture cameras, and to receive sensing information collected by at least one collector and transmitted through the corresponding virtual scene client.
  • the obtaining module 320 is configured to acquire spatial location information and posture information of all users according to the image information and the sensing information.
  • the transmission module 330 is configured to transmit the spatial location information and the posture information to all virtual scene clients, so that each client renders the virtual scene according to the spatial location information, the posture information, and the local user's perspective information, and displays it to the local user;
  • the local user is one of all users.
  • it should be noted that, because the user moves continuously during virtual interaction, the system further needs the receiving module to collect the image information of the next frame and the sensing information of the next moment, so as to obtain the user's spatial position information and posture information at that moment and update the virtual scene promptly according to the user's real-time motion state, achieving an immersive virtual interaction. Therefore, after the transmission module 330 completes its corresponding function, the receiving module 310 needs to be triggered to continue working.
  • the image information includes location coordinate information of all users
  • the sensing information includes speed and acceleration information of all users
  • the obtaining module 320 is specifically configured to: filter the position coordinate information and the velocity and acceleration information to obtain the spatial location information and posture information of all users.
  • optionally, the position coordinate information includes current coordinate position information and historical coordinate position information; the obtaining module 320 is further specifically configured to: filter the velocity and acceleration information together with the current position coordinate information or the historical position coordinate information to obtain the spatial location information and posture information of all users.
  • the position coordinate information includes current coordinate position information and historical coordinate position information;
  • the obtaining module 320 includes a determining module 321, a first calculating module 322, and a second calculating module 323.
  • the determining module 321 is configured to determine whether the spatial position information and the posture information of all the users can be calculated according to the current position coordinate information.
  • the first calculating module 322 is configured to calculate the spatial position information and posture information of all users according to the current position coordinate information when the determination result of the determining module is yes.
  • the second calculating module 323 is configured to calculate the spatial position information and posture information of all users according to the historical position coordinate information and the users' velocity and acceleration information when the determination result of the determining module is no.
  • the second calculation module 323 includes a prediction module 3231 and a first posture information calculation module 3232.
  • a prediction module 3231 configured to predict current location coordinate information according to historical location coordinate information
  • the first posture information calculation module 3232 is configured to calculate spatial position information and posture information of all users according to the predicted current position coordinate information and the speed and acceleration information.
  • the first calculation module 322 includes an extraction module 3221, a three-dimensional coordinate information calculation module 3222, and a second posture information calculation module 3223.
  • the extracting module 3221 is configured to extract two-dimensional coordinate information of the plurality of marked points from the current position coordinate information.
  • the three-dimensional coordinate information calculation module 3222 is configured to calculate three-dimensional coordinate information of the plurality of marked points according to the two-dimensional coordinate information.
  • the second posture information calculation module 3223 is configured to acquire spatial position information and posture information of all users according to the three-dimensional coordinate information of the plurality of marked points and a preset algorithm.
  • the posture information includes face orientations of all users.
  • the virtual reality interaction device of the embodiment of the present invention achieves tracking and positioning of users by fusing the image information from the motion capture cameras with the sensing information collected by the collectors, and then maps each user's physical position information and posture information into the virtual space created by the clients and the interaction device's graphics engine to complete real-time interaction. Because positioning uses both the image information collected by the cameras and the sensing information collected by the collectors, the interaction device can accurately locate users even when occlusion occurs during multi-person interaction, avoiding the problem that mutual occlusion during multi-person interaction causes optical marker points to be lost and positioning to fail.
  • correspondingly, the present invention further provides a virtual reality interaction system, the system comprising: a plurality of motion capture cameras, at least one collector, at least one virtual scene client, at least one helmet display, and the interaction device described in the above embodiment.
  • the collector can be implemented by a gyroscope
  • the interaction device can be, for example, the server of the motion capture cameras.
  • One user corresponds to one virtual scene client, one helmet display, and at least one collector.
  • the interaction system will be described in detail below with reference to FIG. 7.
  • FIG. 7 is a schematic diagram of an embodiment of a virtual reality interactive system of the present invention.
  • the virtual reality interaction system includes: a motion capture camera 11, a motion capture camera 12, and a motion capture camera 13; an Ethernet router 2; a server 3; a WIFI router 4; a virtual scene client 51 and a virtual scene client 52; a helmet display 61 and a helmet display 62; and a gyroscope 71 and a gyroscope 72.
  • the motion capture cameras 11, 12, and 13 are used to capture image information of the users and transmit it to the server 3 through the Ethernet router 2.
  • the gyroscope 71 and the gyroscope 72 respectively collect the sensing information of their corresponding users and transmit it to the server 3 through the WIFI router 4.
  • the server 3 acquires spatial location information and posture information of all users according to the received image information and sensing information; and transmits the spatial location information and the posture information to the client 51 and the client 52.
  • after receiving the spatial location information and posture information of all users, the client 51 renders the virtual scene in combination with the local user's perspective information and displays it to the local user through the helmet display 61.
  • after receiving the spatial location information and posture information of all users, the client 52 renders the virtual scene in combination with the local user's perspective information and displays it to the local user through the helmet display 62.
  • the helmet display and the client computer are connected through an HDMI (High Definition Multimedia Interface) interface, and the gyroscope and the client are connected via Bluetooth.
  • in summary, the embodiment of the present invention first receives the image information from the motion capture cameras and the sensing information collected by at least one collector and transmitted through the corresponding virtual scene client; then acquires the spatial location information and posture information of all users according to the image information and the sensing information; and finally transmits the spatial location information and the posture information to the virtual scene clients, so that each client renders the virtual scene according to the spatial location information, the posture information, and the local user's perspective information and displays it to the local user.
  • because the two data sets, optical and inertial, are fused, users can be accurately positioned even when they occlude each other.
  • the camera server includes a processor 80, a memory 81, and a computer program 82 stored in the memory 81 and operable on the processor 80, such as a virtual reality interactive program.
  • the processor 80 executes the computer program 82 to implement the steps in the various embodiments of the virtual reality interaction method described above, such as steps 101 through 103 shown in FIG.
  • processor 80 executes computer program 82 to implement the functions of the various modules/units in the various apparatus embodiments described above, such as the functions of modules 310 through 330 shown in FIG.
  • the computer program 82 can be partitioned into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete the present invention.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, the instruction segments being used to describe the execution process of the computer program 82 in the camera server.
  • the computer program 82 can be divided into a receiving module 310, an obtaining module 320, and a transmitting module 330 (modules in a virtual device), and the specific functions of each module are as follows:
  • the receiving module 310 is configured to receive image information from the motion capture cameras, and to receive sensing information collected by at least one collector and transmitted through the corresponding virtual scene client;
  • the obtaining module 320 is configured to acquire spatial location information and posture information of all users according to the image information and the sensing information.
  • the transmission module 330 is configured to transmit the spatial location information and the posture information to all the virtual scene clients, so that each client renders the virtual scene according to the spatial location information, the posture information, and the perspective information of the local user, and displays the virtual scene to the local user.
  • the local user is one of all users.
  • the camera server may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server.
  • the camera server may include, but is not limited to, the processor 80 and the memory 81. It will be understood by those skilled in the art that FIG. 8 is merely an example of a camera server and does not constitute a limitation on the camera server; it may include more or fewer components than those illustrated, combine some components, or use different components. For example,
  • the camera server may also include input and output devices, network access devices, a bus, and the like.
  • the processor 80 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like.
  • the general purpose processor can be a microprocessor or the processor can be any conventional processor or the like.
  • the memory 81 may be an internal storage unit of the camera server, such as a hard disk or a memory of a camera server.
  • the memory 81 may also be an external storage device of the camera server, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the camera server.
  • further, the memory 81 may include both an internal storage unit of the camera server and an external storage device.
  • the memory 81 is used to store the computer program and other programs and data required by the camera server.
  • the memory 81 can also be used to temporarily store data that has been output or is about to be output.
  • the division of the functional units and modules described above is merely illustrative; in practical applications, the above functions may be allocated to different functional units and modules as needed, that is, the internal structure of the device may be divided into different functional units or modules to complete all or part of the functions described above.
  • each functional unit and module in the embodiment may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • the disclosed device/terminal device and method may be implemented in other manners.
  • the device/terminal device embodiment described above is merely illustrative.
  • the division into modules or units is only a logical function division; in actual implementation there may be other division manners, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not implemented.
  • the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • the integrated module/unit, if implemented in the form of a software functional unit and sold or used as a standalone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the foregoing embodiments of the present invention may also be completed by a computer program instructing related hardware.
  • the computer program may be stored in a computer readable storage medium. After the program is executed by the processor, the steps of the various method embodiments described above can be implemented.
  • the computer program comprises computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form.
  • the computer readable medium may include: a magnetic disk, an optical disc, computer memory, read-only memory (ROM), random access memory (RAM), electrical carrier signals, telecommunication signals, software distribution media, and the like.
  • it should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunication signals.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are a virtual reality interaction method, a device and a system, applicable to the field of virtual reality technology. The method comprises: first, receiving image information from motion capture cameras, and receiving sensing information collected by at least one collector and transmitted through a corresponding virtual scene client; then acquiring spatial location information and posture information of all users according to the image information and the sensing information; and transmitting the spatial location information and the posture information to all the virtual scene clients, so that each client renders the virtual scene according to the spatial location information, the posture information, and the perspective information of a local user, and displays the virtual scene to the local user. The local user is one of all users. Since two sets of data, optical and inertial, are combined, the user can be positioned even when users occlude each other.

Description

Virtual reality interaction method, device and system

Technical Field

[0001] The present invention belongs to the field of virtual reality technologies, and in particular, to a virtual reality interaction method, apparatus, and system.

Background Art

[0002] Due to the development of virtual reality and augmented reality personal entertainment devices and the rapid improvement of computer image processing performance, the demand for multi-person virtual reality interaction and entertainment is becoming increasingly urgent.

[0003] At present, an existing virtual reality interaction technology captures multiple pieces of image information of a user and then acquires the user's spatial position and posture information from that image information. Finally, the virtual scene is obtained according to the user's spatial position and posture, and displayed. In the implementation, because multi-person interaction produces mutual occlusion, when users occlude each other the user's spatial position and posture can no longer be tracked and the user cannot be located, causing rendering of the virtual scene to fail.

Technical Problem
[0004] In existing virtual reality interaction methods, users cannot be located when they occlude each other.

Solution to Problem

Technical Solution

[0005] A first aspect of the embodiments of the present invention provides a virtual reality interaction method, including:

[0006] receiving image information from motion capture cameras, and receiving sensing information collected by at least one collector and transmitted through a corresponding virtual scene client;

[0007] acquiring spatial position information and posture information of all users according to the image information and the sensing information;

[0008] transmitting the spatial position information and the posture information to all virtual scene clients, so that each of the clients renders a virtual scene according to the spatial position information, the posture information, and a local user's perspective information and displays it to the local user; the local user is one of all users.
[0009] A second aspect of the embodiments of the present invention provides a virtual reality interaction apparatus, including:

[0010] a receiving module, configured to receive image information from motion capture cameras, and to receive sensing information collected by at least one collector and transmitted through a corresponding virtual scene client;

[0011] an obtaining module, configured to acquire spatial position information and posture information of all users according to the image information and the sensing information;

[0012] a transmission module, configured to transmit the spatial position information and the posture information to all virtual scene clients, so that each of the clients renders a virtual scene according to the spatial position information, the posture information, and a local user's perspective information and displays it to the local user; the local user is one of all users.
[0013] A third aspect of the embodiments of the present invention provides a virtual reality interaction system, the system including: at least two motion capture cameras, at least one collector, at least one virtual scene client, at least one helmet display, and a camera server; wherein,

[0014] the motion capture cameras are configured to capture image information of users and transmit it to the camera server;

[0015] the at least one collector is configured to collect sensing information of a user and transmit it to the virtual scene client corresponding to that user;

[0016] the at least one virtual scene client is configured to receive the sensing information from the corresponding collector and transmit it to the camera server;

[0017] the camera server is configured to acquire spatial position information and posture information of all users according to the image information and the sensing information, and to transmit the spatial position information and the posture information to all virtual scene clients, so that each of the clients renders a virtual scene according to the spatial position information, the posture information, and a local user's perspective information and displays it to the local user; the local user is one of all users.

[0018] A fourth aspect of the embodiments of the present invention provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the virtual reality interaction method described above.
Advantageous Effects of Invention

Beneficial Effects

[0019] First, image information from the motion capture cameras is received, together with sensing information collected by at least one collector and transmitted through the corresponding virtual scene client; the spatial position information and posture information of all users are then acquired according to the image information and the sensing information; the spatial position information and posture information are then transmitted to all virtual scene clients, so that each virtual scene client renders the virtual scene according to the spatial position information, the posture information, and the local user's perspective information and displays it to the local user; the local user is one of all users. Because positioning fuses two sets of data, optical and inertial, users can be accurately located even when they occlude each other.
Brief Description of Drawings

[0020] In order to describe the technical solutions in the embodiments of the present invention more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are merely some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
[0021] FIG. 1 is a schematic flowchart of a virtual reality interaction method according to an embodiment of the present invention;

[0022] FIG. 2 is another schematic flowchart of a virtual reality interaction method according to an embodiment of the present invention;

[0023] FIG. 3 is a schematic diagram of a virtual reality interaction device according to an embodiment of the present invention;

[0024] FIG. 4 is a schematic diagram of the obtaining module of a virtual reality interaction device according to an embodiment of the present invention;

[0025] FIG. 5 is a schematic diagram of the second calculating module in the obtaining module of a virtual reality interaction device according to an embodiment of the present invention;

[0026] FIG. 6 is a schematic diagram of the first calculating module in the obtaining module of a virtual reality interaction device according to an embodiment of the present invention;

[0027] FIG. 7 is a schematic structural diagram of a virtual reality interaction system according to an embodiment of the present invention;

[0028] FIG. 8 is a schematic diagram of the camera server of a virtual reality interaction system according to an embodiment of the present invention.
Embodiments of the Invention

[0029] In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures and techniques are set forth in order to provide a thorough understanding of the embodiments of the present invention. However, it will be apparent to those skilled in the art that the present invention may be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so that unnecessary detail does not obscure the description of the present invention.

[0030] In order to explain the technical solutions of the present invention, specific embodiments are described below.

[0031] FIG. 1 is a schematic flowchart of a first embodiment of a virtual reality interaction method according to an embodiment of the present invention. For ease of description, only the parts related to the embodiment of the present invention are shown, detailed as follows:
[0032] Step 101: Receive image information from the motion capture cameras, and receive sensing information collected by at least one collector and transmitted through the corresponding virtual scene client.

[0033] In a specific implementation, the execution body of this embodiment may be the server of the motion capture cameras (also called the camera server). In virtual reality interaction, the number of virtual scene clients is the same as the number of users. In existing virtual reality interaction, the virtual scene is mainly a game. It can be understood that the virtual scene in the embodiments of the present invention is not limited to games and may also be a virtual scene in other application fields, such as live-streaming rooms, education and training, and military exercises.

[0034] In addition, in virtual reality interaction based on optical motion capture, an optical imaging system (multiple motion capture cameras) can identify active (or passive) optical marker points attached to one or more observed objects (people or weapons); the cameras' image acquisition systems compute the image information of the marker points, which is then transmitted over a network (wired, wireless, USB, etc.) to the server of the motion capture cameras (the camera server). The camera server receives the image information from the motion capture cameras, where the received image information may include the position coordinate information of all users in the virtual scene. The server identifies the observed objects from the position coordinate information, acquires the users' location information, and thereby locates the users. It can be understood that, for the server to locate a user, the received image information of the same user must come from at least two different motion capture cameras.

[0035] On the other hand, the collector may specifically be an inertial navigation unit such as a gyroscope attached to the user. After the user's velocity and acceleration information is acquired through the gyroscope, it can be sent by wire or wirelessly (for example, over Bluetooth) to the client corresponding to that user; one user corresponds to one virtual scene client. The client then forwards the sensing information to the server of the motion capture cameras. The sensing information may include the velocity and acceleration information of all users, and the acceleration information may specifically be six-axis acceleration. The client can be a backpack host carried on the user's back, so that during virtual interaction the user is freed from the constraint of traditional cables and gains a larger activity space.
[0036] Step 102: Acquire the spatial position information and posture information of all users according to the image information and the sensing information. The posture information includes the face orientation of all users.

[0037] After the server receives the image information from the motion capture cameras and the sensing information from the collectors, it can calculate the users' spatial position information and posture information from these two pieces of information.

[0038] Step 103: Transmit the spatial position information and the posture information to all virtual scene clients, so that each virtual scene client renders the virtual scene according to the spatial position information, the posture information, and the local user's perspective information, and displays it to the local user.

[0039] The local user is one of all users. As described above, one user corresponds to one client (one user carries one backpack host), and the user corresponding to a client is its local user. The server can transmit the spatial position information and posture information of all users to every client through the network. After receiving the spatial position information and posture information of all users, each client combines them with the local user's perspective information to render a virtual scene suited to the local user's perspective and display it to the local user through the helmet worn by that user.
[0040] In the virtual reality interaction method of this embodiment, the server achieves tracking and positioning of users by fusing the two data sets, the image information from the motion capture cameras and the sensing information collected by the collectors, and then maps each user's physical position information and posture information into the virtual space created by the graphics engines of the virtual scene clients and the server, completing real-time interaction. Because positioning uses both the image information collected by the cameras and the sensing information collected by the collectors, users can be accurately located even when multi-person interaction produces occlusion, avoiding the problem that mutual occlusion during multi-person interaction causes optical marker points to be lost and positioning to fail.

[0041] It should be noted that, because the user moves continuously during virtual interaction, the system also needs to collect the image information of the next frame and the sensing information of the next moment, acquire the user's spatial position information and posture information at that moment, and update the virtual scene promptly according to the user's real-time motion state to achieve an immersive interaction. Therefore, after step 103 is executed, the flow can return to step 101 and continue.

[0042] As described above, the image information from the motion capture cameras includes the position coordinate information of all users, and the sensing information from the collectors may include the velocity and acceleration information of all users. Then, when performing step 102, that is, calculating the users' spatial position information and posture information, the operation may specifically be: filtering the position coordinate information and the velocity and acceleration information to obtain the spatial position information and posture information of all users. This is described in detail below through the embodiment of FIG. 2.
[0043] 图 2示出了本发明实施例提供的虚拟现实交互方法的第二实施例的流程示意图 , 为了便于说明, 仅示出了与本发明实施例相关的部分, 详述如下: [0044] 步骤 201, 接收来自动捕相机的图像信息, 以及接收至少一个采集器采集并通 过对应虚拟场景客户端传来的传感信息。 2 is a schematic flowchart of a second embodiment of a virtual reality interaction method according to an embodiment of the present invention. For ease of description, only parts related to the embodiment of the present invention are shown, which are as follows: [0044] Step 201: Receive image information of an automatic camera, and receive sensing information collected by at least one collector and transmitted through a corresponding virtual scene client.
[0045] 步骤 202, 将位置坐标信息、 速度和加速度信息进行滤波处理以得到所有用户 的空间位置信息和姿态信息。  [0045] Step 202: Perform filter processing on the position coordinate information, the speed and the acceleration information to obtain spatial position information and posture information of all users.
[0046] 其中, 来自动捕相机的位置坐标信息可以包括: 当前坐标位置信息和历史坐标 位置信息; 其中, 历史坐标位置信息是动捕相机历史采集的坐标位置信息。 那 么在计算用户的空间位置信息和姿态信息, 可以有两种操作方式:  [0046] wherein the position coordinate information of the automatic camera capture may include: current coordinate position information and historical coordinate position information; wherein the historical coordinate position information is coordinate position information acquired by the camera capture history. Then, in calculating the user's spatial position information and posture information, there are two ways of operation:
[0047] In the first mode, both sets of data (position coordinate information plus velocity and acceleration information) are always used when locating the user and computing the spatial position information and posture information, whether or not occlusion occurs. The position coordinate information used may be the current coordinate position information (no occlusion) or the historical coordinate position information (occlusion present). In this mode, computing the user's spatial position information and posture information specifically means filtering the velocity and acceleration information together with the current position coordinate information or the historical position coordinate information, so as to obtain the spatial position information and posture information of all users.
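By way of illustration only (the patent does not prescribe a particular filter), the first mode's filtering could be realized as a simple Kalman filter in which the triangulated optical position is the measurement and the collector's acceleration drives a constant-acceleration process model. The class name, state layout, noise values and update rate below are assumptions for this sketch:

```python
# Minimal sketch of the step-202 filtering: fuse optical marker positions
# with the collector's velocity/acceleration readings. Assumed, not the
# patent's prescribed algorithm.
import numpy as np

class PoseFusionFilter:
    def __init__(self, dt=1 / 120.0):
        self.dt = dt
        self.x = np.zeros(6)           # state: position (3) + velocity (3)
        self.P = np.eye(6)             # state covariance
        self.Q = np.eye(6) * 1e-3      # process noise (assumed)
        self.R = np.eye(3) * 1e-4      # optical measurement noise (assumed)
        self.F = np.eye(6)             # constant-acceleration transition
        self.F[:3, 3:] = np.eye(3) * dt

    def predict(self, accel):
        """Propagate the state with the IMU acceleration as control input."""
        B = np.vstack([np.eye(3) * 0.5 * self.dt**2, np.eye(3) * self.dt])
        self.x = self.F @ self.x + B @ accel
        self.P = self.F @ self.P @ self.F.T + self.Q

    def update(self, optical_pos):
        """Correct with a triangulated marker position, when one is available."""
        H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        y = optical_pos - H @ self.x                  # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(6) - K @ H) @ self.P
        return self.x[:3]                             # fused spatial position
```

Under occlusion, `update` is simply skipped for the affected frames and the prediction step carries the state forward from the historical coordinates, which is one way to read the "current or historical" wording above.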
[0048] In the second mode, when there is no occlusion, only one set of data (the current coordinate position information) is used for positioning. When occlusion occurs, two sets of data (the historical coordinate position information plus the velocity and acceleration information) are used instead. Before positioning begins, it is therefore necessary to judge whether the spatial position information and posture information of all users can be computed from the current position coordinate information. It should be noted that when occlusion occurs there may be no current position coordinate information at all, or there may be a partial set of current coordinate position information that is insufficient for positioning. If it is judged that the user can be located from the current position coordinate information, the spatial position information and posture information of all users is computed directly from it. If it is judged that the user cannot be located from the current position coordinate information, the spatial position information and posture information of all users must be computed from the historical position coordinate information together with the user's velocity and acceleration information.
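The per-frame judgment in the second mode could look like the sketch below; the marker-count threshold, array shapes and function name are assumptions made for this example, not part of the claims:

```python
# Illustrative sketch of the second mode: use current optical data when it
# suffices, fall back to history plus inertial data under occlusion.
import numpy as np

MIN_MARKERS = 3  # assumed minimum for an unambiguous rigid-body solve

def select_marker_coordinates(current_markers, history, velocity, acceleration, dt):
    """Return the 3D marker coordinates to feed this frame's pose solver.

    current_markers: (N, 3) array of triangulated positions, or None if occluded
    history:         list of past (N, 3) marker arrays for this user
    velocity, acceleration: (3,) arrays reported by the user's collector
    """
    if current_markers is not None and len(current_markers) >= MIN_MARKERS:
        return current_markers  # judgment result "yes": use current coordinates
    # judgment result "no": predict from history and the inertial readings
    last_good = history[-1]
    return last_good + velocity * dt + 0.5 * acceleration * dt ** 2
```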
[0049] Specifically, when computing the spatial position information and posture information of all users from the current position coordinate information, the operation may include, for example, steps B1-1 to B1-3, as follows:
[0050] B1-1. Extract the two-dimensional coordinate information of a plurality of marker points from the current position coordinate information.
[0051] The marker points may be the optical spheres of a rigid body in optical motion tracking. An optical tracking system fits each tracked object with optical spheres arranged in a geometric layout, and a given combination of spheres constitutes a rigid body that the system recognizes. A person in the virtual scene, for example, can be represented by a set of optical spheres; changing the spatial arrangement of the spheres presents a different person to the tracking system.
[0052] B1-2. Compute the three-dimensional coordinate information of the marker points from their two-dimensional coordinate information. In a specific implementation of step B1-2, the image keypoints captured at the same moment by the several motion capture cameras are matched, and the two-dimensional coordinates of the matched marker points are then converted into three-dimensional coordinates in space by the principle of triangulation.
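The triangulation principle referred to in step B1-2 can be illustrated with a standard linear (DLT) two-view solver. The projection matrices are assumed to come from an offline camera calibration not described here:

```python
# Sketch of step B1-2: linear (DLT) triangulation of one matched marker
# from two calibrated motion capture cameras.
import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    """Recover a 3D marker position from its 2D coordinates in two views.

    P1, P2: 3x4 camera projection matrices (from prior calibration)
    uv1, uv2: (u, v) pixel coordinates of the same marker in each view
    """
    A = np.array([
        uv1[0] * P1[2] - P1[0],
        uv1[1] * P1[2] - P1[1],
        uv2[0] * P2[2] - P2[0],
        uv2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution of A X = 0
    X = Vt[-1]
    return X[:3] / X[3]           # dehomogenize to (x, y, z)
```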
[0053] B1-3. Obtain the spatial position information and posture information of all users from the three-dimensional coordinate information of the marker points and a preset algorithm.
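The description leaves the "preset algorithm" open. One common choice for recovering a rigid body's position and orientation from its triangulated marker coordinates is a Kabsch rigid-body fit; the sketch below presents that choice as an assumption, not as the patent's prescribed method:

```python
# One possible "preset algorithm" for step B1-3: Kabsch fit of the known
# marker layout (the rigid-body template) to the triangulated markers.
import numpy as np

def fit_rigid_pose(template, measured):
    """Find rotation R and translation t with measured ~ template @ R.T + t.

    template: (N, 3) marker layout in the rigid body's own frame
    measured: (N, 3) triangulated marker positions in tracking space
    """
    ct, cm = template.mean(axis=0), measured.mean(axis=0)
    H = (template - ct).T @ (measured - cm)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ ct                             # user's spatial position
    return R, t                                 # orientation (posture) + position
```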
[0054] Specifically, computing the spatial position information and posture information of all users from the historical position coordinate information and the user's velocity and acceleration information may include, for example, steps C1-1 and C1-2:
[0055] C1-1. Predict the current position coordinate information from the historical position coordinate information.
[0056] C1-2. Compute the spatial position information and posture information of all users from the predicted current position coordinate information and the velocity and acceleration information.
[0057] Step C1-1 may specifically be: predicting the current position coordinate information from the historical position coordinate information and the historical velocity and acceleration information. Step C1-2 may specifically be: computing the spatial position information and posture information of all users from the predicted current position coordinate information and the current velocity and acceleration information.
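Under an assumed constant-acceleration model, steps C1-1 and C1-2 could be sketched as follows; the numbers in the usage example are made up for illustration:

```python
# C1-1: predict the current coordinates from history; C1-2 then combines
# the prediction with the current inertial readings.
import numpy as np

def predict_current_coordinates(hist_pos, hist_vel, hist_acc, dt):
    """C1-1: kinematic extrapolation p = p0 + v*dt + 0.5*a*dt^2 (assumed model)."""
    return hist_pos + hist_vel * dt + 0.5 * hist_acc * dt ** 2

# usage with made-up values (metres, m/s, m/s^2; 120 Hz frame interval assumed)
p0 = np.array([1.20, 0.00, 1.65])
v = np.array([0.40, 0.00, 0.00])
a = np.array([0.05, 0.00, 0.00])
predicted = predict_current_coordinates(p0, v, a, dt=1 / 120.0)
# C1-2 would now refine `predicted` with the current velocity/acceleration,
# e.g. by feeding it to the fusion filter sketched after paragraph [0047].
```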
[0058] Step 203: transmit the spatial position information and posture information to all virtual scene clients, so that each virtual scene client renders the virtual scene according to the spatial position information, the posture information and the viewpoint information of its local user, and displays it to that local user.
[0059] In the virtual reality interaction method of this embodiment, when locating a user the server can draw on both the image information captured by the motion capture cameras and the sensing information captured by the collectors. Even when occlusion arises during multi-user interaction, the user can therefore be located accurately, avoiding the problem that mutual occlusion between users causes optical marker points to be lost and positioning to fail.
[0060] The two embodiments above describe the virtual reality interaction method in detail. An apparatus employing this method is described in detail below with reference to the accompanying drawings. Where terms have already been described and defined for the method, the descriptions are not repeated in the apparatus embodiments. [0061] To implement the virtual reality interaction method above, an embodiment of the present invention further provides a virtual reality interaction apparatus, which may be, for example, the server of the motion capture cameras. As shown in Fig. 3, the virtual reality interaction apparatus 30 includes a receiving module 310, an obtaining module 320 and a transmission module 330.
[0062] The receiving module 310 receives image information from the motion capture cameras, and receives sensing information collected by at least one collector and forwarded by the corresponding virtual scene client.

[0063] The obtaining module 320 is configured to obtain the spatial position information and posture information of all users from the image information and the sensing information.

[0064] The transmission module 330 transmits the spatial position information and posture information to all virtual scene clients, so that each client renders the virtual scene according to the spatial position information, the posture information and the viewpoint information of its local user and displays it to that local user; the local user is one of all the users.
[0065] In a specific implementation, because the user moves continuously during the virtual interaction, the system must run the receiving module again to collect the image information of the next frame and the sensing information of the next moment, so as to obtain the user's spatial position information and posture information at that moment and update the virtual scene in real time according to the user's motion state, realizing the immersion of the virtual interaction. Therefore, when the transmission module 330 has completed its function, it triggers the receiving module 310 to continue working.
[0066] In a specific implementation, the image information includes the position coordinate information of all users, and the sensing information includes the velocity and acceleration information of all users. The obtaining module 320 is specifically configured to filter the position coordinate information together with the velocity and acceleration information to obtain the spatial position information and posture information of all users.
[0067] In a specific implementation, the position coordinate information includes current coordinate position information and historical coordinate position information, and the obtaining module 320 is further specifically configured to filter the velocity and acceleration information together with the current position coordinate information or the historical position coordinate information to obtain the spatial position information and posture information of all users.
[0068] As shown in Fig. 4, the position coordinate information includes current coordinate position information and historical coordinate position information, and the obtaining module 320 includes a judging module 321, a first calculation module 322 and a second calculation module 323.

[0069] The judging module 321 is configured to judge whether the spatial position information and posture information of all users can be computed from the current position coordinate information.

[0070] The first calculation module 322 is configured to compute the spatial position information and posture information of all users from the current position coordinate information when the judgment result of the judging module is yes.

[0071] The second calculation module 323 is configured to compute the spatial position information and posture information of all users from the historical position coordinate information and the users' velocity and acceleration information when the judgment result of the judging module is no.
[0072] As shown in Fig. 5, the second calculation module 323 includes a prediction module 3231 and a first posture information calculation module 3232.

[0073] The prediction module 3231 is configured to predict the current position coordinate information from the historical position coordinate information.

[0074] The first posture information calculation module 3232 is configured to compute the spatial position information and posture information of all users from the predicted current position coordinate information and the velocity and acceleration information.
[0075] As shown in Fig. 6, the first calculation module 322 includes an extraction module 3221, a three-dimensional coordinate information calculation module 3222 and a second posture information calculation module 3223.

[0076] The extraction module 3221 is configured to extract the two-dimensional coordinate information of a plurality of marker points from the current position coordinate information.

[0077] The three-dimensional coordinate information calculation module 3222 is configured to compute the three-dimensional coordinate information of the marker points from the two-dimensional coordinate information.

[0078] The second posture information calculation module 3223 is configured to obtain the spatial position information and posture information of all users from the three-dimensional coordinate information of the marker points and a preset algorithm.
[0079] The posture information includes the face orientation of every user.
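For illustration only: if a user's posture is held as a rotation matrix such as the R produced by a rigid-body fit, the face orientation named in [0079] can be read off by rotating the rigid body's local "forward" axis into tracking space. Treating +Z as forward is a convention assumed here, not fixed by the patent:

```python
# Derive the face orientation from a posture rotation matrix (assumed form).
import numpy as np

def face_direction(R, local_forward=(0.0, 0.0, 1.0)):
    """Unit vector along which the user's face points, in tracking space."""
    v = R @ np.asarray(local_forward)
    return v / np.linalg.norm(v)
```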
[0080] The virtual reality interaction apparatus of this embodiment of the present invention fuses two sets of data, the image information from the motion capture cameras and the sensing information collected by the collectors, to track and locate the users, and then maps each user's physical position information and posture information into the virtual space created by the graphics engines of the clients and the interaction apparatus, thereby completing real-time interaction. Because positioning draws on both the image information captured by the motion capture cameras and the sensing information captured by the collectors, the interaction apparatus can locate users accurately even when occlusion arises during multi-user interaction, avoiding the problem that mutual occlusion between users causes optical marker points to be lost and positioning to fail.
[0081] Correspondingly, the present invention further provides a virtual reality interaction system, which includes: a plurality of motion capture cameras, at least one collector, at least one virtual scene client, at least one helmet display, and the interaction apparatus described in the embodiment above. The collector may be implemented by a gyroscope, and the interaction apparatus may be, for example, the server of the motion capture cameras. Each user is served by one virtual scene client, one helmet display and at least one collector. The interaction system is described in detail below with reference to Fig. 7.
[0082] Fig. 7 is a schematic diagram of an embodiment of the virtual reality interaction system of the present invention. The system includes: motion capture cameras 11, 12 and 13, an Ethernet router 2, a server 3, a WIFI router 4, virtual scene clients 51 and 52, helmet displays 61 and 62, and gyroscopes 71 and 72.
[0083] The motion capture cameras 11, 12 and 13 capture the users' image information and transmit it through the Ethernet router 2 to the camera server 3. At the same time, the gyroscopes 71 and 72 each collect the sensing information of their corresponding user and transmit it to the server 3 through the WIFI router 4.
[0084] The server 3 obtains the spatial position information and posture information of all users from the received image information and sensing information, and transmits both to the clients 51 and 52. After receiving the spatial position information and posture information of all users, the client 51 renders the virtual scene in combination with the viewpoint information of its local user and displays it to that user through the helmet display 61. Likewise, after receiving the spatial position information and posture information of all users, the client 52 renders the virtual scene in combination with the viewpoint information of its local user and displays it to that user through the helmet display 62. Each helmet display is connected to its client computer through an HDMI (High Definition Multimedia Interface) interface, and each gyroscope is connected to its client via Bluetooth.
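A highly simplified, single-threaded sketch of this data flow follows. The queue types, message shapes and the `fuse` callable are assumptions made for illustration; the actual server implementation is not disclosed at this level of detail:

```python
# Sketch of the Fig. 7 server loop: camera frames arrive via the Ethernet
# router, gyroscope packets via the WIFI router; the server fuses them and
# broadcasts every user's pose to every client.
import queue

camera_frames = queue.Queue()    # filled by an Ethernet receiver, one item per frame
sensor_packets = queue.Queue()   # filled by a WIFI receiver, one item per reading

def server_tick(clients, fuse):
    """One server iteration: receive, fuse, broadcast."""
    frame = camera_frames.get()           # image information from all cameras
    latest = {}
    while not sensor_packets.empty():     # keep only the newest packet per user
        pkt = sensor_packets.get()
        latest[pkt["user_id"]] = pkt
    poses = fuse(frame, latest)           # -> {user_id: (position, posture)}
    for client in clients:                # every client receives every pose and
        client.send(poses)                # renders from its own local viewpoint
```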
[0085] In summary, the embodiments of the present invention first receive image information from the motion capture cameras and collect sensing information forwarded through the virtual scene clients; then obtain the spatial position information and posture information of all users from the image information and the sensing information; and finally transmit the spatial position information and posture information to the virtual scene clients, so that each client renders the virtual scene according to the spatial position information, the posture information and the viewpoint information of its local user and displays it to that user. Because the optical and inertial data sets are fused, users can still be located with high accuracy when they occlude one another.
[0086] It should be understood that the step numbers in the embodiments above do not imply an order of execution; the execution order of the processes is determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
[0087] As shown in Fig. 8, the camera server includes a processor 80, a memory 81, and a computer program 82, such as a virtual reality interaction program, stored in the memory 81 and runnable on the processor 80. When the processor 80 executes the computer program 82, it implements the steps of the virtual reality interaction method embodiments above, for example steps 101 to 103 shown in Fig. 1. Alternatively, when the processor 80 executes the computer program 82, it implements the functions of the modules/units in the apparatus embodiments above, for example the functions of modules 310 to 330 shown in Fig. 3.
[0088] Illustratively, the computer program 82 may be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, the segments describing the execution of the computer program 82 in the camera server. For example, the computer program 82 may be divided into a receiving module 310, an obtaining module 320 and a transmission module 330 (modules in a virtual apparatus), whose specific functions are as follows:

[0089] The receiving module 310 is configured to receive image information from the motion capture cameras, and to receive sensing information collected by at least one collector and forwarded by the corresponding virtual scene client;

[0090] The obtaining module 320 is configured to obtain the spatial position information and posture information of all users from the image information and the sensing information;

[0091] The transmission module 330 is configured to transmit the spatial position information and posture information to all virtual scene clients, so that each client renders the virtual scene according to the spatial position information, the posture information and the viewpoint information of its local user and displays it to that local user; the local user is one of all the users.
[0092] The camera server may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The camera server may include, but is not limited to, the processor 80 and the memory 81. Those skilled in the art will understand that Fig. 8 is merely an example of the camera server and does not limit it; the camera server may include more or fewer components than illustrated, may combine certain components, or may use different components, and may for example further include input/output devices, network access devices, a bus and so on.
[0093] The processor 80 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
[0094] The memory 81 may be an internal storage unit of the camera server, such as its hard disk or internal memory. The memory 81 may also be an external storage device of the camera server, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a flash card fitted to the camera server. Further, the memory 81 may include both an internal storage unit and an external storage device of the camera server. The memory 81 stores the computer program and the other programs and data required by the camera server, and may also be used to temporarily store data that has been output or is about to be output.
[0095] Those skilled in the art will clearly understand that, for convenience and brevity of description, only the division into the functional units and modules above is given as an example. In practice, the functions above may be assigned to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above. The functional units and modules in the embodiments may be integrated into one processing unit, may exist physically separately, or two or more of them may be integrated into one unit; the integrated unit may be implemented in hardware or as a software functional unit. In addition, the specific names of the functional units and modules serve only to distinguish them from one another and do not limit the scope of protection of this application. For the specific working processes of the units and modules in the system above, reference may be made to the corresponding processes in the method embodiments, which are not repeated here.

[0096] In the embodiments above, each embodiment is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related descriptions of the other embodiments.

[0097] A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.

[0098] In the embodiments provided by the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative: the division into modules or units is only a logical functional division, and other divisions are possible in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. Moreover, the mutual couplings, direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through interfaces, apparatuses or units, and may be electrical, mechanical or in other forms.

[0099] The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.

[0100] In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in hardware or as a software functional unit.
[0101] If the integrated module/unit is implemented as a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the present invention may implement all or part of the processes of the method embodiments above by instructing the relevant hardware through a computer program; the computer program may be stored in a computer-readable storage medium, and when executed by a processor it implements the steps of the method embodiments above. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium and so on. It should be noted that the content contained in the computer-readable medium may be increased or decreased as appropriate under the legislation and patent practice of a jurisdiction; for example, in some jurisdictions, under legislation and patent practice, computer-readable media exclude electrical carrier signals and telecommunication signals.
[0102] The embodiments above are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, and that such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present invention; they shall all fall within the scope of protection of the present invention.

Claims

[Claim 1] A virtual reality interaction method for interaction between real users and a virtual scene, comprising:
receiving image information from motion capture cameras, and receiving sensing information collected by at least one collector and forwarded by the corresponding virtual scene client;
obtaining spatial position information and posture information of all users according to the image information and the sensing information; and
transmitting the spatial position information and the posture information to all virtual scene clients, so that each virtual scene client renders a virtual scene according to the spatial position information, the posture information and the viewpoint information of a local user and displays it to the local user, the local user being one of all the users.
[Claim 2] The virtual reality interaction method according to claim 1, wherein the image information comprises position coordinate information of all users and the sensing information comprises velocity and acceleration information of all users, and the step of obtaining the spatial position information and posture information of all users according to the image information and the sensing information is specifically:
filtering the position coordinate information and the velocity and acceleration information to obtain the spatial position information and posture information of all users.
[Claim 3] The virtual reality interaction method according to claim 2, wherein the position coordinate information comprises current coordinate position information and historical coordinate position information, and the step of filtering the position coordinate information and the velocity and acceleration information to obtain the spatial position information and posture information of all users is further specifically:
filtering the velocity and acceleration information together with the current position coordinate information or the historical position coordinate information to obtain the spatial position information and posture information of all users.
[Claim 4] The virtual reality interaction method according to claim 2, wherein the position coordinate information comprises current coordinate position information and historical coordinate position information, and the step of filtering the position coordinate information and the velocity and acceleration information to obtain the spatial position information and posture information of all users comprises:
judging whether the spatial position information and posture information of all users can be computed from the current position coordinate information;
if the judgment result is yes, computing the spatial position information and posture information of all users from the current position coordinate information; and
if the judgment result is no, computing the spatial position information and posture information of all users from the historical position coordinate information and the users' velocity and acceleration information.
[Claim 5] The virtual reality interaction method according to claim 4, wherein the step of computing the spatial position information and posture information of all users from the historical position coordinate information and the velocity and acceleration information comprises:
predicting the current position coordinate information from the historical position coordinate information; and
computing the spatial position information and posture information of all users from the predicted current position coordinate information and the velocity and acceleration information.
[Claim 6] The virtual reality interaction method according to claim 4, wherein the step of computing the spatial position information and posture information of all users from the current position coordinate information comprises:
extracting two-dimensional coordinate information of a plurality of marker points from the current position coordinate information;
computing three-dimensional coordinate information of the plurality of marker points from the two-dimensional coordinate information; and
obtaining the spatial position information and posture information of all users from the three-dimensional coordinate information of the plurality of marker points and a preset algorithm.
[Claim 7] The virtual reality interaction method according to claim 1, wherein the posture information comprises the face orientation of all users.
[Claim 8] A virtual reality interaction apparatus for interaction between real users and a virtual scene, comprising:
a receiving module, configured to receive image information from motion capture cameras and to receive sensing information collected by at least one collector and forwarded by the corresponding virtual scene client;
an obtaining module, configured to obtain spatial position information and posture information of all users according to the image information and the sensing information; and
a transmission module, configured to transmit the spatial position information and the posture information to the clients, so that each client renders a virtual scene according to the spatial position information, the posture information and the viewpoint information of a local user and displays it to the local user, the local user being one of all the users.
[Claim 9] The virtual reality interaction apparatus according to claim 8, wherein the image information comprises position coordinate information of all users and the sensing information comprises velocity and acceleration information of all users, and the obtaining module is specifically configured to:
filter the position coordinate information and the velocity and acceleration information to obtain the spatial position information and posture information of all users.
[Claim 10] The virtual reality interaction apparatus according to claim 9, wherein the position coordinate information comprises current coordinate position information and historical coordinate position information, and the obtaining module is further specifically configured to:
filter the velocity and acceleration information together with the current position coordinate information or the historical position coordinate information to obtain the spatial position information and posture information of all users.
[Claim 11] The virtual reality interaction apparatus according to claim 9, wherein the position coordinate information comprises current coordinate position information and historical coordinate position information, and the obtaining module comprises:
a judging module, configured to judge whether the spatial position information and posture information of all users can be computed from the current position coordinate information;
a first calculation module, configured to compute the spatial position information and posture information of all users from the current position coordinate information when the judgment result of the judging module is yes; and
a second calculation module, configured to compute the spatial position information and posture information of all users from the historical position coordinate information and the users' velocity and acceleration information when the judgment result of the judging module is no.
[Claim 12] The virtual reality interaction apparatus according to claim 11, wherein the second calculation module comprises:
a prediction module, configured to predict the current position coordinate information from the historical position coordinate information; and
a first posture information calculation module, configured to compute the spatial position information and posture information of all users from the predicted current position coordinate information and the velocity and acceleration information.
[Claim 13] The virtual reality interaction apparatus according to claim 11, wherein the first calculation module comprises:
an extraction module, configured to extract two-dimensional coordinate information of a plurality of marker points from the current position coordinate information;
a three-dimensional coordinate information calculation module, configured to compute three-dimensional coordinate information of the plurality of marker points from the two-dimensional coordinate information; and
a second posture information calculation module, configured to obtain the spatial position information and posture information of all users from the three-dimensional coordinate information of the plurality of marker points and a preset algorithm.
[Claim 14] A virtual reality interaction system, comprising: a plurality of motion capture cameras, at least one collector, at least one virtual scene client, at least one helmet display, and a camera server; wherein
the motion capture cameras are configured to capture users' image information and transmit it to the camera server;
the at least one collector is configured to collect a user's sensing information and transmit it to the virtual scene client corresponding to that user;
the at least one virtual scene client is configured to receive the sensing information from the corresponding collector and transmit it to the camera server; and
the camera server is configured to obtain spatial position information and posture information of all users according to the image information and the sensing information, and to transmit the spatial position information and the posture information to all the virtual scene clients, so that each virtual scene client renders a virtual scene according to the spatial position information, the posture information and the viewpoint information of a local user and displays it to the local user through the helmet display, the local user being one of all the users.
[Claim 15] A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the steps of the virtual reality interaction method according to any one of claims 1 to 7 are implemented.
PCT/CN2017/099508 2017-07-28 2017-08-29 Virtual reality interaction method, device and system WO2019019248A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201780000956.3A CN107820593B (en) 2017-07-28 2017-08-29 Virtual reality interaction method, device and system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CNPCT/CN2017/094961 2017-07-28
CN2017094961 2017-07-28

Publications (1)

Publication Number Publication Date
WO2019019248A1 true WO2019019248A1 (en) 2019-01-31

Family

ID=65039366

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/099508 WO2019019248A1 (en) 2017-07-28 2017-08-29 Virtual reality interaction method, device and system

Country Status (1)

Country Link
WO (1) WO2019019248A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106125903A (en) * 2016-04-24 2016-11-16 林云帆 Many people interactive system and method
CN206209206U (en) * 2016-11-14 2017-05-31 上海域圆信息科技有限公司 3D glasses with fixed sample point and the virtual reality system of Portable multi-person interaction
CN106445176A (en) * 2016-12-06 2017-02-22 腾讯科技(深圳)有限公司 Man-machine interaction system and interaction method based on virtual reality technique
CN106843460A (en) * 2016-12-13 2017-06-13 西北大学 The capture of multiple target position alignment system and method based on multi-cam

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111027431A (en) * 2019-11-29 2020-04-17 广州幻境科技有限公司 Upper limb posture fuzzy positioning method and system based on inertial sensor
CN110969706B (en) * 2019-12-02 2023-10-10 Oppo广东移动通信有限公司 Augmented reality device, image processing method, system and storage medium thereof
CN110969706A (en) * 2019-12-02 2020-04-07 Oppo广东移动通信有限公司 Augmented reality device, image processing method and system thereof, and storage medium
CN111047710A (en) * 2019-12-03 2020-04-21 深圳市未来感知科技有限公司 Virtual reality system, interactive device display method, and computer-readable storage medium
CN111047710B (en) * 2019-12-03 2023-12-26 深圳市未来感知科技有限公司 Virtual reality system, interactive device display method, and computer-readable storage medium
CN111857341A (en) * 2020-06-10 2020-10-30 浙江商汤科技开发有限公司 Display control method and device
CN111651057A (en) * 2020-06-11 2020-09-11 浙江商汤科技开发有限公司 Data display method and device, electronic equipment and storage medium
CN111744180A (en) * 2020-06-29 2020-10-09 完美世界(重庆)互动科技有限公司 Method and device for loading virtual game, storage medium and electronic device
CN111984114A (en) * 2020-07-20 2020-11-24 深圳盈天下视觉科技有限公司 Multi-person interaction system based on virtual space and multi-person interaction method thereof
CN113038262A (en) * 2021-01-08 2021-06-25 深圳市智胜科技信息有限公司 Panoramic live broadcast method and device
CN114327076A (en) * 2022-01-04 2022-04-12 上海三一重机股份有限公司 Virtual interaction method, device and system for working machine and working environment
TWI837861B (en) 2022-01-06 2024-04-01 宏達國際電子股份有限公司 Data processing system, method for determining coordinates, and computer readable storage medium
CN116030228A (en) * 2023-02-22 2023-04-28 杭州原数科技有限公司 Method and device for displaying mr virtual picture based on web
CN116152383A (en) * 2023-03-06 2023-05-23 深圳优立全息科技有限公司 Voxel model, image generation method, device and storage medium
CN116152383B (en) * 2023-03-06 2023-08-11 深圳优立全息科技有限公司 Voxel model, image generation method, device and storage medium

Similar Documents

Publication Publication Date Title
CN107820593B (en) Virtual reality interaction method, device and system
WO2019019248A1 (en) Virtual reality interaction method, device and system
US11127210B2 (en) Touch and social cues as inputs into a computer
WO2018119889A1 (en) Three-dimensional scene positioning method and device
CN113874870A (en) Image-based localization
US11715224B2 (en) Three-dimensional object reconstruction method and apparatus
US9256986B2 (en) Automated guidance when taking a photograph, using virtual objects overlaid on an image
CN106125903B (en) Multi-person interaction system and method
US20140118557A1 (en) Method and apparatus for providing camera calibration
US20130174213A1 (en) Implicit sharing and privacy control through physical behaviors using sensor-rich devices
CN112148197A (en) Augmented reality AR interaction method and device, electronic equipment and storage medium
TW201835723A (en) Graphic processing method and device, virtual reality system, computer storage medium
CN112198959A (en) Virtual reality interaction method, device and system
US10825197B2 (en) Three dimensional position estimation mechanism
US20200097732A1 (en) Markerless Human Movement Tracking in Virtual Simulation
CN112148189A (en) Interaction method and device in AR scene, electronic equipment and storage medium
CN104536579A (en) Interactive three-dimensional scenery and digital image high-speed fusing processing system and method
WO2017084319A1 (en) Gesture recognition method and virtual reality display output device
CN104520905A (en) Three-dimensional environment sharing system, and three-dimensional environment sharing method
CN110955329B (en) Transmission method, electronic device, and computer storage medium
WO2022174594A1 (en) Multi-camera-based bare hand tracking and display method and system, and apparatus
CN111698646B (en) Positioning method and device
CN112882576B (en) AR interaction method and device, electronic equipment and storage medium
US20170140215A1 (en) Gesture recognition method and virtual reality display output device
WO2022088819A1 (en) Video processing method, video processing apparatus and storage medium

Legal Events

Date Code Title Description
121  EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 17919481; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122  EP: PCT application non-entry in European phase (Ref document number: 17919481; Country of ref document: EP; Kind code of ref document: A1)