WO2017092432A1 - Method, device, and system for virtual reality interaction - Google Patents

Method, device, and system for virtual reality interaction

Info

Publication number
WO2017092432A1
Authority
WO
WIPO (PCT)
Prior art keywords
infrared
calibration
feature information
information
virtual reality
Prior art date
Application number
PCT/CN2016/096983
Other languages
French (fr)
Chinese (zh)
Inventor
Zhang Chao (张超)
Original Assignee
Leshi Holdings (Beijing) Co., Ltd. (乐视控股(北京)有限公司)
Leshi Zhixin Electronic Technology (Tianjin) Co., Ltd. (乐视致新电子科技(天津)有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Leshi Holdings (Beijing) Co., Ltd. and Leshi Zhixin Electronic Technology (Tianjin) Co., Ltd.
Publication of WO2017092432A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/014 Hand-worn input/output arrangements, e.g. data gloves
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048
    • G06F2203/01 Indexing scheme relating to G06F3/01
    • G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment

Definitions

  • the present application relates to the field of virtual reality technologies, and in particular, to a virtual reality interaction method, apparatus, and system.
  • Virtual reality (VR) technology generates a simulated environment through a computer and combines acquired image information to realize an interactive three-dimensional view and behavior, so that users are immersed in the simulated environment and interaction between people and the virtual reality environment is realized.
  • An important aspect affecting the interaction effect is the technique used to collect image information.
  • The prior art generally collects image information through a three-dimensional depth camera and calculates the distance to a target object, such as a photographed object or person, using the basic principle of stereo vision ranging to realize an interactive three-dimensional view.
  • The basic principle of stereo vision ranging is to observe the same object from different viewpoints, obtain perceptual images at different viewing angles, and calculate the distance information of the target object from the pixel deviation between corresponding image pixels by the principle of triangulation.
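The triangulation principle described above can be sketched in a few lines of Python. This is an illustrative sketch only; the focal length, baseline, and pixel coordinates below are hypothetical values, not parameters from the application:

```python
def depth_from_disparity(focal_px, baseline_m, x_left, x_right):
    """Estimate distance to a point seen by two parallel cameras.

    focal_px   -- focal length expressed in pixels (assumed equal for both cameras)
    baseline_m -- distance between the two lens centers, in meters
    x_left/x_right -- horizontal pixel coordinate of the same point in each image
    """
    disparity = x_left - x_right  # pixel deviation between the two views
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    # Similar triangles: Z / baseline = focal / disparity
    return focal_px * baseline_m / disparity

# Hypothetical example: 700 px focal length, 6 cm baseline, 35 px disparity -> about 1.2 m
print(depth_from_disparity(700, 0.06, 400, 365))
```

The farther the object, the smaller the pixel deviation, which is why a wider baseline improves depth resolution at long range.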
  • the prior art can simulate a three-dimensional view and behavior through image information collected by a three-dimensional depth camera, thereby realizing virtual reality interaction.
  • However, due to its price, technical maturity, ease of use, and the like, the three-dimensional depth camera adopted by the prior art makes camera-based virtual reality technology difficult to realize in some scenarios, so there is an urgent need for other virtual reality technologies.
  • The embodiments of the present application provide a virtual reality interaction method, device, and system, which are used to solve the prior art problem that, when images are acquired through a three-dimensional depth camera, camera-based virtual reality technology is difficult to realize in some scenarios, so that other virtual reality technologies are urgently needed.
  • An embodiment of the present application provides a virtual reality interaction method, where the method includes:
  • determining three-dimensional motion trajectory information of each of the calibration points by using corresponding feature information of each of the calibration points in the first infrared images and corresponding feature information in the second infrared images, and performing virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.
  • the embodiment of the present application further provides a virtual reality interaction device, where the device includes: a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein:
  • a first infrared imaging unit configured to acquire at least two first infrared images of the calibration object by the first infrared camera, the calibration object comprising at least one calibration point, wherein the calibration point is used to provide infrared light;
  • a second infrared imaging unit configured to acquire at least two second infrared images of the calibration object by using a second infrared camera
  • an extracting unit configured to extract corresponding feature information of each of the calibration points in each of the first infrared images and corresponding feature information in each of the second infrared images, where the feature information is used to indicate the position of each of the calibration points in the first infrared images or the second infrared images;
  • a determining unit configured to determine three-dimensional motion trajectory information of each of the calibration points by using corresponding feature information of each of the calibration points in the first infrared image and corresponding feature information in the second infrared image;
  • An interaction unit configured to perform virtual reality interaction by using three-dimensional motion trajectory information of each of the calibration points.
  • the embodiment of the present application further provides a virtual reality interaction system, where the system includes: a virtual reality interaction device and a calibration object, wherein:
  • The virtual reality interaction device includes a first infrared camera unit, a second infrared camera unit, an extraction unit, a determining unit, and an interaction unit, wherein: the first infrared camera unit is configured to collect at least two first infrared images of the calibration object through the first infrared camera; the second infrared camera unit is configured to collect at least two second infrared images of the calibration object through the second infrared camera; the extraction unit is configured to extract corresponding feature information of each of the calibration points in each of the first infrared images and corresponding feature information in each of the second infrared images, where the feature information is used to indicate the position of each of the calibration points in the first infrared images or the second infrared images; the determining unit is configured to determine three-dimensional motion trajectory information of each of the calibration points by using the corresponding feature information of each of the calibration points in the first infrared images and the corresponding feature information in the second infrared images; and the interaction unit is configured to perform virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.
  • the calibration object includes at least one calibration point for reflecting infrared light.
  • the embodiment of the present application provides an electronic device, including the virtual reality interaction device described in any of the foregoing embodiments.
  • The embodiments of the present application provide a non-transitory computer readable storage medium, where the non-transitory computer readable storage medium can store computer instructions, and when the computer instructions are executed, some or all of the steps of each implementation of the virtual reality interaction method provided by the embodiments of the present application can be performed.
  • An embodiment of the present application provides an electronic device, including: one or more processors; and a memory; where the memory stores instructions executable by the one or more processors, the instructions being configured to perform any of the above virtual reality interaction methods of the present application.
  • An embodiment of the present application provides a computer program product, the computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to execute any of the above virtual reality interaction methods in the embodiments of the present application.
  • The virtual reality interaction method, device, and system provided by the embodiments of the present application collect infrared images of the calibration object through the first infrared camera and the second infrared camera, extract feature information from the collected infrared images, and analyze the feature information to determine the three-dimensional motion trajectory information of each calibration point of the calibration object, thereby performing virtual reality interaction. This solves the prior art problem that, when virtual reality interaction is performed by acquiring images through a three-dimensional depth camera, the price, technical maturity, ease of use, and the like of the three-dimensional depth camera make camera-based virtual reality technology difficult to implement in some scenarios, and provides a new virtual reality technology.
  • FIG. 1 is a flowchart of a virtual reality interaction method according to Embodiment 1 of the present application;
  • FIG. 2 is a schematic diagram of a calibration glove in a practical application in Embodiment 1 of the present application;
  • FIG. 3 is a flowchart of a virtual reality interaction method according to Embodiment 2 of the present application;
  • FIG. 4 is a schematic diagram of a virtual reality interaction device in a practical application in Embodiment 2 of the present application;
  • FIG. 5 is a schematic structural diagram of a virtual reality interaction apparatus according to Embodiment 3 of the present application;
  • FIG. 6 is a schematic structural diagram of a virtual reality interaction system in Embodiment 4 of the present application;
  • FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • Embodiment 1 provides a virtual reality interaction method for solving the prior art problem that, when images are acquired through a three-dimensional depth camera, camera-based virtual reality technology is difficult to implement in certain scenarios due to the price of the three-dimensional depth camera.
  • the specific flow chart of the method is shown in FIG. 1 and includes the following steps:
  • Step S11: acquire at least two first infrared images of the calibration object through the first infrared camera, and acquire the same number of second infrared images of the calibration object through the second infrared camera.
  • The first infrared camera and the second infrared camera refer to cameras capable of imaging infrared light.
  • The interaction method based on infrared cameras has a relatively low cost.
  • Infrared light has a long wavelength and a low frequency, so its energy loss during transmission in air is small, and infrared imaging is therefore not easily distorted.
  • The infrared camera may be an ordinary visible-light camera with an infrared filter added between the camera's photosensitive device and the lens, which further reduces the implementation cost of the virtual reality interaction method.
  • the infrared filter used can be an 850 nm infrared band pass filter.
  • the first infrared camera and the second infrared camera are usually installed on the same device, and the device may be a server, a mobile terminal such as a mobile phone, an iPad or a smart helmet, or a terminal such as a smart TV or a computer.
  • The virtual reality interaction method may transmit the collected infrared images to a server, which then performs the calculation and simulation of the environment, or the calculation and simulation may be performed by terminals such as mobile phones, iPads, smart helmets, smart televisions, or computers.
  • The embodiments of the present application do not limit this.
  • the calibration object includes at least one calibration point for providing infrared light.
  • the calibration object refers to an object that is simultaneously photographed by the first infrared camera and the second infrared camera.
  • The object may be a person or a thing whose outer surface has at least one partial area that provides infrared light; each such infrared-providing area is called a calibration point, and the calibration object must include at least one calibration point.
  • there are many ways to provide infrared light at the calibration point including reflecting infrared light and the calibration point itself emitting infrared light.
  • One way to provide infrared light is to attach a reflective material to the outer surface of each calibration point of the calibration object, so that the calibration point reflects infrared light emitted toward the calibration object by other devices.
  • the first infrared camera captures at least two first infrared images of the calibration object
  • The second infrared camera captures at least two second infrared images of the calibration object; that is, the first infrared camera and the second infrared camera respectively acquire N and M infrared images of the same calibration object, where N and M are both greater than or equal to two.
  • the first infrared camera and the second infrared camera collect the same number of infrared images, that is, N and M are equal.
  • Step S12: extract corresponding feature information of each of the calibration points in each of the first infrared images, and extract corresponding feature information of each of the calibration points in each of the second infrared images.
  • the feature information is used to display the position of each of the calibration points in each of the first infrared images or each of the second infrared images.
  • The corresponding feature information of each calibration point in each first infrared image may be extracted in either of two ways. For each calibration point: first determine the corresponding region of the calibration point in each first infrared image, and then use a clustering algorithm to extract the feature information corresponding to the calibration point within that region. Alternatively, for each first infrared image: first calculate the feature information of every calibration point in the image, and then determine which feature information corresponds to each calibration point across the first infrared images.
  • Specifically, a trajectory prediction algorithm such as Kalman prediction may be used to determine the corresponding region of the same calibration point in each of the first infrared images, and then a clustering algorithm such as k-means or k-medoids may be applied within all the regions corresponding to that calibration point to extract its feature information. Alternatively, a clustering algorithm such as k-means or k-medoids may first be used to calculate the feature information of each calibration point in each of the first infrared images, and a trajectory prediction algorithm such as Kalman prediction may then be used to determine the corresponding feature information of each calibration point across the first infrared images.
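As a rough illustration of the clustering step, the following sketch uses a minimal hand-rolled k-means: bright thresholded pixels are grouped, and each cluster centroid serves as one calibration point's feature information (its position in the image). The pixel coordinates and seed choices are hypothetical; the application does not specify these parameters:

```python
import math

def kmeans_centroids(points, seeds, iters=10):
    """Tiny k-means over 2-D pixel coordinates of bright infrared pixels.
    Each resulting centroid is taken as one calibration point's feature
    information (its sub-pixel position in the image)."""
    centroids = list(seeds)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # Assign each pixel to the nearest current centroid
            nearest = min(range(len(centroids)),
                          key=lambda k: math.dist(p, centroids[k]))
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster
        centroids = [
            (sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
            if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids

# Hypothetical thresholded pixels from two reflective markers
pixels = [(10, 10), (11, 10), (10, 11), (50, 52), (51, 50), (50, 51)]
print(kmeans_centroids(pixels, seeds=[(0, 0), (60, 60)]))
# centroids near (10.3, 10.3) and (50.3, 51.0)
```

A Kalman predictor would supply the seeds for the next frame by predicting where each centroid should move, which is how the two algorithms complement each other in the description above.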
  • Step S13: determine the three-dimensional motion trajectory information of each of the calibration points by using the corresponding feature information of each of the calibration points in each of the first infrared images and the corresponding feature information in each of the second infrared images, and perform virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.
  • the three-dimensional motion trajectory information of each of the calibration points refers to information of a motion trajectory of each of the calibration points on the calibration object in a three-dimensional space when the calibration object moves in a three-dimensional space.
  • For example, the calibration points on a calibration glove are on its five fingers; when the calibration glove moves in three-dimensional space, the three-dimensional motion trajectory information of each calibration point refers to the trajectory traced in three-dimensional space by the calibration point on each of the glove's five fingers.
  • For each calibration point, the feature information corresponding to the calibration point in each first infrared image and the corresponding feature information in each second infrared image determine the three-dimensional motion trajectory information of that calibration point.
  • The virtual reality interaction using the three-dimensional motion trajectory information of each calibration point may proceed in either of two ways. The three-dimensional motion trajectory information of the calibration object may be determined from the three-dimensional motion trajectory information of each calibration point, the trajectory information of the calibration object compared with information in a database, the interaction instruction corresponding to that trajectory information retrieved from the database, and the virtual reality interaction performed through the interaction instruction. Alternatively, the three-dimensional motion trajectory information of each calibration point may be compared with information in the database, the interaction instruction corresponding to each calibration point's trajectory information retrieved, and the virtual reality interaction performed through the interaction instruction.
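The database-matching variant just described might look like the following sketch. The nearest-trajectory rule, the gesture names, and the database layout are assumptions for illustration; the application does not specify how trajectories are compared:

```python
def match_instruction(trajectory, database):
    """Compare a 3-D trajectory (list of (x, y, z) samples) against stored
    templates and return the interaction instruction of the closest match."""
    def distance(a, b):
        # Sum of squared distances between corresponding trajectory samples
        return sum((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2
                   for (xa, ya, za), (xb, yb, zb) in zip(a, b))
    best = min(database, key=lambda entry: distance(trajectory, entry["trajectory"]))
    return best["instruction"]

# Hypothetical two-entry gesture database
db = [
    {"trajectory": [(0, 0, 1), (0, 0, 2)], "instruction": "push"},
    {"trajectory": [(0, 0, 1), (1, 0, 1)], "instruction": "swipe_right"},
]
print(match_instruction([(0, 0, 1), (0.1, 0, 1)], db))  # prints "swipe_right"
```

A production system would normalize trajectory length and scale before comparing (e.g. with dynamic time warping), but the retrieve-instruction-then-interact flow is the same.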
  • In Embodiment 1, the infrared images of the calibration object are collected by the first infrared camera and the second infrared camera, feature information is extracted from the collected infrared images and analyzed, and the three-dimensional motion trajectory information of each calibration point of the calibration object is determined for virtual reality interaction. This solves the prior art problem that, when virtual reality interaction is performed by acquiring images through a three-dimensional depth camera, the price, technical maturity, ease of use, and the like of the three-dimensional depth camera make camera-based virtual reality technology difficult to implement in some scenarios, and addresses the urgent need for other virtual reality technologies.
  • In step S13 of Embodiment 1, the three-dimensional motion trajectory information of each calibration point is determined by using the corresponding feature information of each calibration point in each first infrared image and the corresponding feature information in each second infrared image.
  • Specifically, this may be done as follows: first determine, for the moment each first infrared image is collected, the distance information from each calibration point to the line connecting the lens centers of the first infrared camera and the second infrared camera, and then determine the three-dimensional motion trajectory information of each calibration point from at least two such pieces of distance information. This forms Embodiment 2 of the present application, described below with reference to FIG. 3.
  • Step S21: collect at least two first infrared images of the calibration object through the first infrared camera, and collect the same number of second infrared images of the calibration object through the second infrared camera, where the lenses of the first infrared camera and the second infrared camera are in the same plane.
  • The first infrared camera may collect at least two first infrared images of the calibration object while the second infrared camera simultaneously collects second infrared images of the same calibration object; that is, the two cameras each simultaneously acquire R infrared images of the same calibration object, where R is greater than or equal to 2.
  • The first infrared camera and the second infrared camera are usually fixed on the same device with their lenses in the same plane, and infrared images of the calibration object are acquired by adjusting the orientation of the device. An infrared light emitting device is usually also mounted on the device; it emits infrared light toward the calibration object, and the infrared light is reflected by the calibration points on the calibration object.
  • Step S22: extract corresponding feature information of each of the calibration points in each of the first infrared images, and extract corresponding feature information of each of the calibration points in each of the second infrared images.
  • Step S22 is the same as step S12 in Embodiment 1, and will not be described here.
  • Step S23: perform the following operations for each calibration point:
  • Step S231: determine the second infrared image acquired simultaneously with each first infrared image;
  • Step S232: determine, by using the feature information corresponding to the calibration point in the first infrared image and the corresponding feature information of the calibration point in the second infrared image, the vertical distance information from the calibration point to the line connecting the lens centers of the first infrared camera and the second infrared camera at the moment the first infrared image is acquired;
  • The vertical distance information from the calibration point to the line connecting the lens centers refers to the length of the perpendicular segment from the calibration point to the line connecting the lens center of the first infrared camera and the lens center of the second infrared camera.
  • The distance between the first infrared camera and the second infrared camera is known, and when the focal lengths of the two cameras are equal and known, the vertical distance information from the calibration point to the line connecting the two lens centers can be obtained from the calibration point's corresponding feature information in the first infrared image and in the second infrared image by means of similar triangles. Specifically, the parallax of the two cameras with respect to the calibration point is determined from the corresponding feature information in the first infrared image, the corresponding feature information in the second infrared image, and the focal lengths of the two cameras; the vertical distance information from the calibration point to the line connecting the lens centers of the first infrared camera and the second infrared camera is then determined from this parallax and the distance between the two cameras.
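Under the stated assumptions (equal known focal lengths, known camera spacing, lenses in the same plane), the similar-triangles computation can be sketched as follows; all numeric values and the coordinate convention are illustrative, not taken from the application:

```python
def triangulate(focal_px, baseline_m, left_px, right_px):
    """Recover a calibration point's 3-D position from its pixel coordinates
    in simultaneously captured left/right infrared images, assuming parallel
    cameras with lenses in the same plane and equal focal lengths."""
    (xl, yl), (xr, _) = left_px, right_px
    disparity = xl - xr                     # parallax between the two views
    z = focal_px * baseline_m / disparity   # perpendicular distance to the line joining the lens centers
    x = xl * z / focal_px                   # lateral offset (left camera as origin)
    y = yl * z / focal_px
    return (x, y, z)

# Hypothetical: the same marker in two successive simultaneous frame pairs
p1 = triangulate(700, 0.06, (400, 100), (365, 100))  # z near 1.2 m
p2 = triangulate(700, 0.06, (380, 100), (350, 100))  # z near 1.4 m
# p1 -> p2 is one segment of the calibration point's 3-D motion trajectory
```

Two or more such positions per calibration point, as in step S233 below, string together into the three-dimensional motion trajectory.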
  • Step S233: determine the three-dimensional motion trajectory information of the calibration point by using at least two pieces of the vertical distance information corresponding to the calibration point.
  • Step S234: perform virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.
  • In Embodiment 2, the first infrared camera and the second infrared camera simultaneously acquire the same number of infrared images of the calibration object, and the lenses of the two cameras are set in the same plane, so that the vertical distance information from each calibration point to the line connecting the two lens centers can be used to determine the three-dimensional motion trajectory information of each calibration point of the calibration object, making the virtual reality method easier to implement.
  • the non-transitory computer readable storage medium may be a magnetic disk, an optical disk, a read-only memory (ROM), or a random access memory (RAM).
  • Embodiment 3 provides a virtual reality interaction device for solving the prior art problem that, when images are captured through a three-dimensional depth camera, camera-based virtual reality technology is difficult to implement in certain scenarios due to the price of the three-dimensional depth camera.
  • a schematic diagram of a specific structure of the apparatus 500 is shown in FIG. 5, and includes the following units: a first infrared camera unit 501, a second infrared camera unit 502, an extraction unit 503, a determination unit 504, and an interaction unit 505, wherein:
  • the first infrared camera unit 501 is configured to collect at least two first infrared images of the calibration object by using the first infrared camera, the calibration object includes at least one calibration point, and the calibration point is used to provide infrared light;
  • a second infrared imaging unit 502 configured to acquire at least two second infrared images of the calibration object by using a second infrared camera;
  • The extracting unit 503 is configured to extract the corresponding feature information of each of the calibration points in each of the first infrared images and the corresponding feature information in each of the second infrared images, where the feature information is used to indicate the position of each calibration point in the first infrared images or the second infrared images;
  • the determining unit 504 is configured to determine the three-dimensional motion trajectory information of each of the calibration points by using the corresponding feature information of each of the calibration points in each of the first infrared images and the corresponding feature information in each of the second infrared images;
  • the interaction unit 505 is configured to perform virtual reality interaction by using the three-dimensional motion trajectory information of each of the calibration points.
  • the extracting unit 503 may further include a first extracting subunit 5031 and a second extracting subunit 5032, wherein:
  • the first extraction subunit 5031 is configured to determine, for each calibration point, a corresponding area of the calibration point in each of the first infrared images or each of the second infrared images;
  • the second extraction sub-unit 5032 is configured to use the clustering algorithm to extract feature information corresponding to the calibration point in the corresponding area.
  • the interaction unit 505 may further include a first interaction unit 5051, a second interaction unit 5052, and a third interaction unit 5053, where:
  • a first interaction unit 5051 configured to determine three-dimensional motion trajectory information of the calibration object by using three-dimensional motion trajectory information of each of the calibration points;
  • the second interaction unit 5052 is configured to compare the three-dimensional motion trajectory information of the calibration object with the information in the database, and acquire an interaction instruction corresponding to the three-dimensional motion trajectory information of the calibration object in the database;
  • the third interaction unit 5053 is configured to perform virtual reality interaction by using the interaction instruction.
  • In the device of Embodiment 3, the first infrared camera unit and the second infrared camera unit collect at least two infrared images of the same calibration object through the infrared cameras; the extraction unit extracts the feature information corresponding to each calibration point in each infrared image; the determining unit determines the three-dimensional motion trajectory information of each calibration point by using the feature information corresponding to each calibration point; and the interaction unit performs virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point. This solves the prior art problem that, when virtual reality interaction is performed by acquiring images through the three-dimensional depth camera, the price of the three-dimensional depth camera and the like make camera-based virtual reality technology difficult to realize in certain scenarios.
  • Embodiment 4 provides a virtual reality interaction system for solving the prior art problem that, when images are acquired through a three-dimensional depth camera, camera-based virtual reality technology is difficult to implement in certain scenarios due to the price of the three-dimensional depth camera itself.
  • FIG. 6 is a schematic diagram of a specific structure of the virtual reality interaction system 600, including: a virtual reality interaction device 601 and a calibration object 602, wherein:
• the virtual reality interaction device 601 includes: a first infrared imaging unit, a second infrared imaging unit, an extraction unit, a determining unit, and an interaction unit, wherein: the first infrared imaging unit is configured to collect at least two first infrared images of the calibration object through the first infrared camera;
• the calibration object includes at least one calibration point;
• the calibration point is used to provide infrared light;
• the second infrared imaging unit is configured to collect at least two second infrared images of the calibration object through the second infrared camera;
• the extraction unit is configured to extract the feature information corresponding to each of the calibration points in each of the first infrared images and the feature information corresponding to each of the calibration points in each of the second infrared images, where the feature information indicates the position of each calibration point in the first infrared image or the second infrared image;
• the determining unit is configured to determine the three-dimensional motion trajectory information of each of the calibration points from the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to it in the second infrared images;
• the interaction unit is configured to perform virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.
• the calibration object 602 includes at least one calibration point, and the calibration point is used to reflect infrared light.
• in practical applications, a virtual reality interaction system may include a virtual reality interactive helmet and a calibration glove.
• the virtual reality interactive helmet carries a dual infrared camera for collecting infrared images of the calibration glove, and an infrared light emitting device can usually be installed on the helmet as well.
• the five fingers of the calibration glove carry a material capable of reflecting infrared light.
• after collecting the infrared images, the dual infrared camera on the virtual reality interactive helmet can transmit them to a remote server for processing;
• alternatively, a processing device can be installed on the virtual reality interactive helmet to perform the processing.
• in the virtual reality interaction system provided in Embodiment 4, the virtual reality interaction device collects infrared images of the calibration object through the infrared cameras in the first infrared imaging unit and the second infrared imaging unit, and performs a series of processing on the collected infrared images for virtual reality interaction. This solves the problem that the prior art acquires images through a three-dimensional depth camera and that, owing to the price of such a camera and other factors, camera-based virtual reality technology is difficult to realize in certain scenarios.
• an embodiment further provides an electronic device that includes the virtual reality interaction device of any one of the foregoing embodiments.
• a non-transitory computer readable storage medium is also provided, storing computer executable instructions that can execute the virtual reality interaction method of any one of the above method embodiments.
  • FIG. 7 is a schematic diagram of a hardware structure of an electronic device for performing a virtual reality interaction method according to an embodiment of the present disclosure. As shown in FIG. 7, the device includes:
• a processor 710 and a memory 720; one processor 710 is taken as an example in FIG. 7.
  • the apparatus that performs the virtual reality interaction method may further include: an input device 730 and an output device 740.
• the processor 710, the memory 720, the input device 730, and the output device 740 may be connected by a bus or by other means; connection by a bus is taken as an example in FIG. 7.
• the memory 720, as a non-transitory computer readable storage medium, can be used to store non-volatile software programs, non-volatile computer executable programs, and modules, such as the program instructions/modules corresponding to the virtual reality interaction method in the embodiments of the present application (for example, the first infrared camera unit 501, the second infrared camera unit 502, the extraction unit 503, the determination unit 504, and the interaction unit 505 shown in FIG. 5).
  • the processor 710 executes various functional applications and data processing of the electronic device by executing non-volatile software programs, instructions, and modules stored in the memory 720, that is, implementing the virtual reality interaction method of the above method embodiment.
• the memory 720 may include a storage program area and a storage data area, wherein the storage program area may store an operating system and the applications required for at least one function, and the storage data area may store data created through use of the virtual reality interaction device, and the like.
  • memory 720 can include high speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device.
  • memory 720 can optionally include memory remotely located relative to processor 710, which can be connected to the virtual reality interactive device over a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
• the input device 730 can receive input of numeric or character information and generate key signal inputs related to user settings and function control of the virtual reality interaction device.
  • the output device 740 can include a display device such as a display screen.
  • the one or more modules are stored in the memory 720, and when executed by the one or more processors 710, perform a virtual reality interaction method in any of the above method embodiments.
• the above products can execute the method provided by the embodiments of the present application, and have the corresponding functional modules and beneficial effects for executing the method. For technical details not described in detail in this embodiment, refer to the method provided by the embodiments of the present application.
  • the electronic device of the embodiment of the present application exists in various forms, including but not limited to:
• Mobile communication devices: these devices are characterized by mobile communication functions and are mainly aimed at providing voice and data communication.
• Such terminals include smart phones (such as the iPhone), multimedia phones, feature phones, and low-end phones.
• Ultra-mobile personal computer devices: this type of device belongs to the category of personal computers, has computing and processing functions, and generally also has mobile Internet access.
• Such terminals include PDAs, MIDs, and UMPC devices, such as the iPad.
• Portable entertainment devices: these devices can display and play multimedia content. They include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable car navigation devices.
• Servers: a server consists of a processor, a hard disk, memory, a system bus, and so on.
• A server is similar to a general-purpose computer in architecture, but because it must provide highly reliable services, it has higher requirements in terms of processing power, stability, reliability, security, scalability, and manageability.
• the device embodiments described above are merely illustrative; the units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units, that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the embodiment. Those of ordinary skill in the art can understand and implement the embodiments without creative effort.

Abstract

A method, device, and system for virtual reality interaction. The method comprises: capturing at least two first infrared images of a calibration object via a first infrared camera and capturing at least two second infrared images of the calibration object via a second infrared camera (S11); extracting the feature information corresponding to the calibration points in the first infrared images and the feature information corresponding to them in the second infrared images (S12); and utilizing the feature information corresponding to the calibration points in the first infrared images and in the second infrared images to determine three-dimensional motion trajectory information of the calibration points and perform virtual reality interaction via that trajectory information (S13). By capturing infrared images of the calibration object simultaneously with two infrared cameras, the method solves the technical problem that an alternative virtual reality technique is urgently required because virtual reality interaction based on images captured by a three-dimensional depth camera is difficult to implement in some scenarios.

Description

Method, device, and system for virtual reality interaction

This application claims priority to Chinese Patent Application No. 201510870209.8, entitled "Method, device, and system for virtual reality interaction", filed with the Chinese Patent Office on December 1, 2015, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of virtual reality technology, and in particular to a method, device, and system for virtual reality interaction.
Background

With the development of society, progress in every industry has contributed greatly to the improvement of quality of life. Among these advances, the emergence of virtual reality (VR) technology has greatly enriched people's lives. Virtual reality technology uses a computer to generate a simulated environment and combines it with captured image information to realize interactive three-dimensional vision and behavior, so that the user is immersed in the simulated environment and can interact with the virtual reality environment. In virtual reality technology, an important factor affecting the quality of interaction is the technology used to capture image information.
The prior art generally captures image information through a three-dimensional depth camera and computes the distance to the target object (the captured object or person) using the basic principle of stereo vision ranging, in order to realize an interactive three-dimensional view. The basic principle of stereo vision ranging is to observe the same object from different viewpoints, obtaining perceived images at different viewing angles, and to compute the distance to the target object from the pixel disparity between the images using the principle of triangulation.
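As a rough numeric illustration of the triangulation principle just described (the focal length and baseline figures below are invented for the example, not taken from the patent), the distance of a point seen by two horizontally separated, rectified cameras follows directly from its pixel disparity:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Stereo vision ranging: Z = f * B / d for rectified cameras.

    focal_px     -- focal length in pixels (assumed known from calibration)
    baseline_m   -- distance between the two camera centers, in meters
    disparity_px -- pixel offset of the same point between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a point in front of both cameras")
    return focal_px * baseline_m / disparity_px

# With an assumed 1000 px focal length and a 6.25 cm baseline,
# a 25 px disparity puts the target object 2.5 m away:
print(depth_from_disparity(1000.0, 0.0625, 25.0))  # 2.5
```

Note that depth resolution degrades with distance: the same one-pixel disparity change corresponds to a much larger depth change for far objects than for near ones.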
Although the prior art can simulate three-dimensional vision and behavior from the image information captured by a three-dimensional depth camera and thereby realize virtual reality interaction, the three-dimensional depth camera it relies on makes camera-based virtual reality technology difficult to realize in certain scenarios, owing to its price, technical maturity, ease of use, and other factors, so an alternative virtual reality technique is urgently needed.
Summary

The embodiments of the present application provide a method, device, and system for virtual reality interaction, to solve the problem that the prior art captures images through a three-dimensional depth camera, which makes camera-based virtual reality technology difficult to realize in certain scenarios, so that an alternative virtual reality technique is urgently needed.

An embodiment of the present application provides a virtual reality interaction method, the method comprising:

capturing at least two first infrared images of a calibration object through a first infrared camera, and capturing at least two second infrared images of the calibration object through a second infrared camera, the calibration object comprising at least one calibration point, and the calibration point being used to provide infrared light;

extracting the feature information corresponding to each of the calibration points in each of the first infrared images and the feature information corresponding to each of the calibration points in each of the second infrared images, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;

determining three-dimensional motion trajectory information of each of the calibration points from the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to it in the second infrared images, and performing virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.
An embodiment of the present application further provides a virtual reality interaction device, the device comprising a first infrared camera unit, a second infrared camera unit, an extraction unit, a determining unit, and an interaction unit, wherein:

the first infrared camera unit is configured to capture at least two first infrared images of a calibration object through a first infrared camera, the calibration object comprising at least one calibration point, and the calibration point being used to provide infrared light;

the second infrared camera unit is configured to capture at least two second infrared images of the calibration object through a second infrared camera;

the extraction unit is configured to extract the feature information corresponding to each of the calibration points in each of the first infrared images and the feature information corresponding to each of the calibration points in each of the second infrared images, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;

the determining unit is configured to determine three-dimensional motion trajectory information of each of the calibration points from the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to it in the second infrared images;

the interaction unit is configured to perform virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.
An embodiment of the present application further provides a virtual reality interaction system, the system comprising a virtual reality interaction device and a calibration object, wherein:

the virtual reality interaction device includes a first infrared camera unit, a second infrared camera unit, an extraction unit, a determining unit, and an interaction unit, wherein: the first infrared camera unit is configured to capture at least two first infrared images of the calibration object through a first infrared camera; the second infrared camera unit is configured to capture at least two second infrared images of the calibration object through a second infrared camera; the extraction unit is configured to extract the feature information corresponding to each of the calibration points in each of the first infrared images and the feature information corresponding to each of the calibration points in each of the second infrared images, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image; the determining unit is configured to determine three-dimensional motion trajectory information of each of the calibration points from the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to it in the second infrared images; and the interaction unit is configured to perform virtual reality interaction through the three-dimensional motion trajectory information of each of the calibration points.

The calibration object includes at least one calibration point, and the calibration point is used to reflect infrared light.
An embodiment of the present application provides an electronic device comprising the virtual reality interaction device of any one of the foregoing embodiments.

An embodiment of the present application provides a non-transitory computer readable storage medium, which can store computer instructions that, when executed, can implement some or all of the steps of the implementations of the virtual reality interaction method provided by the embodiments of the present application.

An embodiment of the present application provides an electronic device comprising one or more processors and a memory, wherein the memory stores instructions executable by the one or more processors, the instructions being configured to execute any one of the above virtual reality interaction methods of the present application.

An embodiment of the present application provides a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions that, when executed by a computer, cause the computer to execute any one of the above virtual reality interaction methods of the embodiments of the present application.
The method, device, and system for virtual reality interaction provided by the embodiments of the present application capture infrared images of a calibration object through a first infrared camera and a second infrared camera, extract feature information from the captured infrared images, analyze that feature information to determine the three-dimensional motion trajectory information of each calibration point of the calibration object, and thereby perform virtual reality interaction. This solves the problem that the prior art performs virtual reality interaction on images captured by a three-dimensional depth camera, which is difficult to realize in certain scenarios owing to the camera's price, technical maturity, ease of use, and other factors, and thus provides a new virtual reality technique.
Brief Description of the Drawings

To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application, and those of ordinary skill in the art can derive other drawings from them without creative effort.

FIG. 1 is a flowchart of a virtual reality interaction method according to Embodiment 1 of the present application;

FIG. 2 is a schematic diagram of a calibration glove in a practical application of Embodiment 1 of the present application;

FIG. 3 is a flowchart of a virtual reality interaction method according to Embodiment 2 of the present application;

FIG. 4 is a schematic diagram of a virtual reality interaction device in a practical application of Embodiment 2 of the present application;

FIG. 5 is a schematic structural diagram of a virtual reality interaction device in Embodiment 3 of the present application;

FIG. 6 is a schematic structural diagram of a virtual reality interaction system in Embodiment 4 of the present application;

FIG. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description

To make the objectives, technical solutions, and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings in the embodiments of the present application. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.

The technical solutions provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Embodiment 1

Embodiment 1 provides a virtual reality interaction method to solve the problem that the prior art captures images through a three-dimensional depth camera, which, owing to the camera's price and other factors, makes camera-based virtual reality technology difficult to realize in certain scenarios. A flowchart of the method is shown in FIG. 1, and the method includes the following steps:

Step S11: capture at least two first infrared images of a calibration object through a first infrared camera, and capture the same number of second infrared images of the calibration object through a second infrared camera.
The first infrared camera and the second infrared camera are cameras capable of imaging infrared light. Because infrared cameras on the market are much cheaper than three-dimensional depth cameras, an interaction method based on infrared cameras is correspondingly cheap to implement. In addition, infrared light has a longer wavelength and a lower frequency, so it loses little energy when traveling through air, and images formed from infrared light are not easily distorted. It should be noted that, in practical applications, the infrared camera may be an ordinary visible-light camera with an infrared filter added between the camera's photosensitive device and its lens, which further reduces the implementation cost of the virtual reality interaction method; in particular, to improve the capture quality of the infrared camera, the infrared filter used may be an 850 nm infrared band-pass filter.
In practical applications, the first infrared camera and the second infrared camera are usually installed on the same device. This device may be a server; a mobile terminal such as a mobile phone, an iPad, or a smart helmet; or a terminal such as a smart TV or a computer. The virtual reality interaction may transmit the captured infrared images to a server, which performs the computation and simulates the real environment, or the computation and simulation may be performed by a terminal such as a mobile phone, iPad, smart helmet, smart TV, or computer; the embodiments of the present application do not limit this.
The calibration object includes at least one calibration point, and the calibration point is used to provide infrared light. The calibration object is the object photographed simultaneously by the first infrared camera and the second infrared camera; in practice it may be a person or a thing, and at least part of its outer surface must provide infrared light. Each such infrared-providing area is called a calibration point, and the calibration object must carry at least one of them. In practical applications, a calibration point may provide infrared light in several ways, including reflecting infrared light or emitting infrared light itself; one common approach is to attach a reflective material to the outer surface of each calibration point, which reflects the infrared light emitted toward the calibration object by other devices.
"The first infrared camera captures at least two first infrared images of the calibration object, and the second infrared camera captures at least two second infrared images of the calibration object" means that the first and second infrared cameras respectively capture N and M infrared images of the same calibration object, with both N and M greater than or equal to 2. In practical applications, the first and second infrared cameras usually capture the same number of infrared images, that is, N equals M. Computing the three-dimensional motion trajectory of the calibration object requires at least two infrared images, and the more infrared images captured over a period of time, the more accurately the trajectory of the calibration object during that time can be described; however, more images also means much more computation, so capturing three infrared images of the calibration object over a period of time is usually enough to describe its trajectory well.
Step S12: extract the feature information corresponding to each of the calibration points in each of the first infrared images, and extract the feature information corresponding to each of the calibration points in each of the second infrared images.

The feature information indicates the position of each of the calibration points in each of the first infrared images or each of the second infrared images.
Because the first infrared camera captures at least two infrared images of the calibration object, and the calibration object may carry multiple calibration points, the feature information corresponding to each calibration point in each first infrared image may be extracted in either of two ways. One way is to perform, for each calibration point, the following operations: first determine the region corresponding to that calibration point in each first infrared image, then extract the feature information corresponding to that calibration point within that region using a clustering algorithm. The other way is, for each first infrared image, to first compute the feature information of every calibration point in that image, and then determine which feature information in each first infrared image corresponds to each calibration point.
In practical applications, a trajectory prediction algorithm such as Kalman prediction may first be used to determine the region corresponding to the same calibration point in each first infrared image, and a clustering algorithm such as k-means or k-medoids may then be applied within all of the regions corresponding to that calibration point to extract its feature information. Alternatively, a clustering algorithm such as k-means or k-medoids may first be applied to each first infrared image to compute the feature information of every calibration point in that image, and a trajectory prediction algorithm such as Kalman prediction may then be used to determine the feature information corresponding to each calibration point in each first infrared image.

The feature information corresponding to each calibration point in each second infrared image may be extracted by the same method used for the first infrared images.
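A minimal sketch of the clustering step described above, assuming the infrared image has already been thresholded so that only the bright pixels of the calibration points remain. The tiny k-means below (with a deterministic initialization invented for reproducibility) recovers one centroid per calibration point; each centroid is the "feature information", i.e. the point's position in that image:

```python
def kmeans_centroids(points, k, iters=20):
    """Cluster 2-D pixel coordinates into k groups and return the k centroids.

    A stand-in for the k-means/k-medoids step in the text: the bright pixels
    of one infrared image are clustered, and each cluster centroid is taken
    as the image position of one calibration point.
    """
    # Deterministic initialization: spread the seeds across the point list.
    step = max(1, (len(points) - 1) // max(1, k - 1))
    centroids = [points[min(i * step, len(points) - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centroids[c][0]) ** 2
                                        + (p[1] - centroids[c][1]) ** 2)
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep the old centroid if a cluster went empty
                centroids[c] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return sorted(centroids)

# Two well-separated 3x3 blobs of bright pixels, e.g. two reflective fingertips:
blob_a = [(10 + dx, 10 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
blob_b = [(50 + dx, 50 + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]
print(kmeans_centroids(blob_a + blob_b, k=2))  # [(10.0, 10.0), (50.0, 50.0)]
```

In a full pipeline, the Kalman-prediction step the text mentions would supply a predicted search region per calibration point per frame, so that clustering runs only within that region and centroids stay associated with the same point across frames.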
Step S13: determine the three-dimensional motion trajectory information of each calibration point from the feature information corresponding to the point in each first infrared image and the feature information corresponding to it in each second infrared image, and perform virtual reality interaction based on the three-dimensional motion trajectory information of the calibration points.
The three-dimensional motion trajectory information of a calibration point is the information describing the trajectory traced in three-dimensional space by that point on the calibration object as the object moves. Take the calibration glove of FIG. 2, used in practice, as an example: its calibration points are the five fingers, and when the glove moves in three-dimensional space, the three-dimensional motion trajectory information of the calibration points is the information describing the trajectories traced by the points on the five fingers.
The three-dimensional motion trajectory information of each calibration point is determined from the feature information corresponding to the point in each first infrared image together with the feature information corresponding to it in each second infrared image. Taking the calibration glove as an example, the trajectory of each calibration point is determined from that point's feature information in the first infrared images and its feature information in the second infrared images.
Virtual reality interaction based on the three-dimensional motion trajectory information of the calibration points can be carried out in several ways. In one, the three-dimensional motion trajectory information of the calibration object is determined from that of its calibration points, this trajectory information is compared against the information in a database, the interaction instruction corresponding to it is retrieved from the database, and virtual reality interaction is performed according to that instruction. In another, the three-dimensional motion trajectory information of each calibration point is compared against the database individually, the interaction instructions corresponding to the trajectories of the individual points are retrieved, and virtual reality interaction is performed according to those instructions.
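The database comparison described above could be realized, for example, by storing gesture templates as resampled three-dimensional point sequences and matching by mean squared distance. The sketch below is a hypothetical illustration of that idea, not the patent's implementation; the database contents, threshold, and all names are invented.

```python
import numpy as np

def resample(traj, n=16):
    """Resample a (T, 3) trajectory to n points by linear interpolation,
    so trajectories of different lengths become comparable."""
    traj = np.asarray(traj, float)
    t_old = np.linspace(0.0, 1.0, len(traj))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, traj[:, i]) for i in range(3)])

def lookup_instruction(traj, database, max_error=1.0):
    """Compare a trajectory against every template in the database and
    return the interaction instruction of the closest match, if any."""
    q = resample(traj)
    best, best_err = None, max_error
    for template, instruction in database:
        err = np.mean(np.sum((q - resample(template)) ** 2, axis=1))
        if err < best_err:
            best, best_err = instruction, err
    return best

# hypothetical database: (template trajectory, interaction instruction) pairs
db = [
    (np.column_stack([np.linspace(0, 1, 20), np.zeros(20), np.zeros(20)]), "swipe_right"),
    (np.column_stack([np.zeros(20), np.linspace(0, 1, 20), np.zeros(20)]), "swipe_up"),
]
```

Returning `None` when no template is within `max_error` corresponds to a trajectory that matches no interaction instruction in the database.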
With the virtual reality interaction method of Embodiment 1, infrared images of a calibration object are captured by a first infrared camera and a second infrared camera, feature information is extracted from the captured images and analyzed, and the three-dimensional motion trajectory information of each calibration point of the object is determined, on the basis of which virtual reality interaction is performed. This solves the problem in the prior art that, because of the price, technical maturity, and usability of three-dimensional depth cameras, virtual reality interaction based on images captured by such a camera is difficult to realize in some scenarios, so that an alternative virtual reality technique is urgently needed.
Embodiment 2
As mentioned in step S13 of Embodiment 1, the three-dimensional motion trajectory information of each calibration point is determined from the feature information corresponding to the point in each first infrared image and in each second infrared image; in fact there are several ways to do so. When the lenses of the first infrared camera and the second infrared camera lie in the same plane, the trajectory information may be determined as follows: for each first infrared image, first determine, at the moment of capture, the distance from each calibration point in the image to the line connecting the lens centers of the first and second infrared cameras; then determine the three-dimensional motion trajectory information of each calibration point from at least two such distance measurements. This constitutes Embodiment 2 of the present application, shown in FIG. 3.
Step S21: capture at least two first infrared images of the calibration object with the first infrared camera while capturing the same number of second infrared images of the calibration object with the second infrared camera, the lenses of the first and second infrared cameras lying in the same plane.
When describing the motion trajectory of the calibration object, in order to make full use of the captured infrared images, the first infrared camera may capture at least two first infrared images while the second infrared camera captures the same number of second infrared images; that is, the two cameras each simultaneously capture R infrared images of the same calibration object, with R greater than or equal to 2.
As shown in FIG. 4, in practice the first and second infrared cameras are usually fixed on the same device with their lenses in the same plane, and infrared images of the calibration object are captured by adjusting the orientation of the device. An infrared light emitter is usually also mounted on the device; it emits infrared light toward the calibration object, and the calibration points on the object reflect the light.
Step S22: extract the feature information corresponding to each calibration point in each first infrared image, and extract the feature information corresponding to each calibration point in each second infrared image.
Step S22 is identical to step S12 of Embodiment 1 and is not described again here.
Step S23: perform the following operations for each calibration point:
Step S231: determine the second infrared image captured simultaneously with each first infrared image;
Step S232: using the feature information corresponding to the calibration point in the first infrared image and the feature information corresponding to it in the second infrared image, determine the perpendicular-distance information from the calibration point to the line connecting the lens centers of the first and second infrared cameras at the moment the first infrared image was captured;
the perpendicular-distance information from the calibration point to the line connecting the lens centers of the first and second infrared cameras refers to the length of the perpendicular dropped from the calibration point onto the line segment joining the lens center of the first infrared camera and the lens center of the second infrared camera.
In practice, the distance between the first and second infrared cameras is usually known, and when the two cameras have the same known focal length, the perpendicular distance from the calibration point to the line connecting the two lens centers can be computed, by similar triangles, from the point's feature information in the first infrared image and its feature information in the second infrared image. Alternatively, the disparity of the calibration point between the two cameras can be determined from its feature information in the two images together with the focal-length information, and the perpendicular distance then determined from the disparity and the distance between the first and second infrared cameras.
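The disparity variant admits a direct formula: with baseline B between the two lens centers, a shared focal length f expressed in pixels, and horizontal image coordinates x_l and x_r of the same calibration point in the two images, similar triangles give the perpendicular distance Z = f·B/(x_l − x_r). A minimal sketch of this standard stereo relation (function and parameter names are hypothetical):

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Perpendicular distance from a calibration point to the line joining
    the two lens centers, via Z = f * B / d (similar triangles)."""
    disparity = x_left - x_right          # in pixels; positive for points in front
    if disparity <= 0:
        raise ValueError("non-positive disparity: point cannot be triangulated")
    return focal_px * baseline_m / disparity
```

For example, with a 0.1 m baseline, a 600 px focal length, and a 20 px disparity, the point lies 3.0 m from the baseline.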
Step S233: determine the three-dimensional motion trajectory information of the calibration point from at least two of the perpendicular-distance measurements corresponding to it.
Step S234: perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
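Steps S231 through S233 can be strung together as follows: for each synchronized image pair, the perpendicular distance of step S232 is computed and combined with the point's pixel position to give a three-dimensional coordinate, and the per-frame coordinates form the trajectory of step S233. The sketch below assumes a pinhole model with principal point (cx, cy) and the same known focal length for both cameras; it is an illustration only, and all names are hypothetical.

```python
def backproject(x_px, y_px, z, focal_px, cx, cy):
    """Pixel position plus depth -> 3-D point in the first camera's frame."""
    return ((x_px - cx) * z / focal_px, (y_px - cy) * z / focal_px, z)

def trajectory(observations, focal_px, baseline_m, cx, cy):
    """observations: list of (x_left, y_left, x_right) tuples for one
    calibration point, one per synchronized image pair (at least two,
    per step S233). Returns the list of per-frame 3-D coordinates."""
    traj = []
    for x_l, y_l, x_r in observations:
        z = focal_px * baseline_m / (x_l - x_r)    # S232: perpendicular distance
        traj.append(backproject(x_l, y_l, z, focal_px, cx, cy))
    return traj
```

The resulting point sequence is the three-dimensional motion trajectory information that step S234 hands to the interaction stage.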
With the virtual reality interaction method of Embodiment 2, the first and second infrared cameras simultaneously capture the same number of infrared images of the calibration object, and because the lenses of the two cameras are arranged in the same plane, the three-dimensional motion trajectory information of each calibration point can be determined from the perpendicular distances from the points to the line connecting the two lens centers, which makes the method easier to implement.
Finally, it should be noted that a person of ordinary skill in the art will understand that all or part of the flows of the method embodiments above can be implemented by a computer program instructing the relevant hardware. The program may be stored in a non-transitory computer-readable storage medium and, when executed, may include the flows of the method embodiments above. The non-transitory computer-readable storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), a random access memory (RAM), or the like.
Embodiment 3
Embodiment 3 provides a virtual reality interaction apparatus for solving the problem that, when images are captured with a three-dimensional depth camera as in the prior art, virtual reality technology based on such a camera is difficult to realize in certain scenarios because of factors such as the camera's price. The structure of the apparatus 500 is shown schematically in FIG. 5 and comprises a first infrared imaging unit 501, a second infrared imaging unit 502, an extraction unit 503, a determination unit 504, and an interaction unit 505, wherein:
the first infrared imaging unit 501 is configured to capture at least two first infrared images of a calibration object with a first infrared camera, the calibration object containing at least one calibration point, the calibration point being configured to provide infrared light;
the second infrared imaging unit 502 is configured to capture at least two second infrared images of the calibration object with a second infrared camera;
the extraction unit 503 is configured to extract the feature information corresponding to each calibration point in each first infrared image and the feature information corresponding to each calibration point in each second infrared image, the feature information indicating the position of each calibration point in the first or second infrared image;
the determination unit 504 is configured to determine the three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to each calibration point in the second infrared images;
the interaction unit 505 is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
In practice, the extraction unit 503 may further comprise a first extraction subunit 5031 and a second extraction subunit 5032, wherein:
the first extraction subunit 5031 is configured to determine, for each calibration point, the region corresponding to the calibration point in each first infrared image or each second infrared image;
the second extraction subunit 5032 is configured to extract the feature information corresponding to the calibration point by applying a clustering algorithm within the corresponding region.
In particular, the interaction unit 505 may further comprise a first interaction unit 5051, a second interaction unit 5052, and a third interaction unit 5053, wherein:
the first interaction unit 5051 is configured to determine the three-dimensional motion trajectory information of the calibration object from the three-dimensional motion trajectory information of the calibration points;
the second interaction unit 5052 is configured to compare the three-dimensional motion trajectory information of the calibration object with the information in a database and retrieve from the database the interaction instruction corresponding to that trajectory information;
the third interaction unit 5053 is configured to perform virtual reality interaction according to the interaction instruction.
With the virtual reality interaction apparatus of Embodiment 3, the first and second infrared imaging units capture at least two infrared images of the same calibration object with their infrared cameras; the extraction unit extracts the feature information corresponding to each calibration point in each infrared image; the determination unit determines the three-dimensional motion trajectory information of each calibration point from that feature information; and the interaction unit performs virtual reality interaction based on the trajectory information. This solves the problem in the prior art that, because of factors such as the price of the three-dimensional depth camera itself, virtual reality interaction based on images captured by such a camera is difficult to realize in certain scenarios.
It should also be noted that, in the embodiments of the present application, the functional modules above may be implemented by a hardware processor.
Embodiment 4
Embodiment 4 provides a virtual reality interaction system for solving the problem that, when images are captured with a three-dimensional depth camera as in the prior art, virtual reality technology based on such a camera is difficult to realize in certain scenarios because of factors such as the camera's price. The structure of the virtual reality interaction system 600 is shown schematically in FIG. 6 and comprises a virtual reality interaction apparatus 601 and a calibration object 602, wherein:
the virtual reality interaction apparatus 601 comprises a first infrared imaging unit, a second infrared imaging unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared imaging unit is configured to capture at least two first infrared images of the calibration object with a first infrared camera, the calibration object containing at least one calibration point, the calibration point being configured to provide infrared light; the second infrared imaging unit is configured to capture at least two second infrared images of the calibration object with a second infrared camera; the extraction unit is configured to extract the feature information corresponding to each calibration point in each first infrared image and in each second infrared image, the feature information indicating the position of each calibration point in the first or second infrared image; the determination unit is configured to determine the three-dimensional motion trajectory information of each calibration point from the feature information corresponding to each calibration point in the first and second infrared images; and the interaction unit is configured to perform virtual reality interaction based on the three-dimensional motion trajectory information of each calibration point.
The calibration object 602 contains at least one calibration point, the calibration point being configured to reflect infrared light.
One virtual reality interaction system used in practice comprises a virtual reality interaction helmet and a calibration glove. The helmet carries two infrared cameras for capturing infrared images of the glove, and an infrared light emitter can usually also be mounted on the helmet; the five fingers of the glove carry a material capable of reflecting infrared light. After the two infrared cameras on the helmet capture the infrared images, the images can be transmitted to a remote server for processing, or a processing device can be installed on the helmet itself.
With the virtual reality interaction system of Embodiment 4, the virtual reality interaction apparatus captures infrared images of the calibration object through the infrared cameras of the first and second infrared imaging units and subjects the captured images to a series of processing steps to perform virtual reality interaction. This solves the problem in the prior art that, because of factors such as the price of the three-dimensional depth camera itself, virtual reality interaction based on images captured by such a camera is difficult to realize in certain scenarios.
It should also be noted that, in the embodiments of the present application, the functional modules above may be implemented by a hardware processor.
In another embodiment of the present application, an electronic device is further provided, comprising the virtual reality interaction apparatus of any of the foregoing embodiments.
In another embodiment of the present application, a non-transitory computer-readable storage medium is further provided, the non-transitory computer-readable storage medium storing computer-executable instructions capable of executing the virtual reality interaction method of any of the method embodiments above.
FIG. 7 is a schematic diagram of the hardware structure of an electronic device for performing the virtual reality interaction method according to an embodiment of the present application. As shown in FIG. 7, the device comprises:
one or more processors 710 and a memory 720; one processor 710 is taken as the example in FIG. 7.
The device performing the virtual reality interaction method may further comprise an input apparatus 730 and an output apparatus 740.
The processor 710, the memory 720, the input apparatus 730, and the output apparatus 740 may be connected by a bus or in another manner; connection by a bus is taken as the example in FIG. 7.
The memory 720, as a non-transitory computer-readable storage medium, can be used to store non-volatile software programs, non-volatile computer-executable programs, and modules, such as the program instructions/modules corresponding to the virtual reality interaction method in the embodiments of the present application (for example, the first infrared imaging unit 501, the second infrared imaging unit 502, the extraction unit 503, the determination unit 504, and the interaction unit 505 shown in FIG. 5). By running the non-volatile software programs, instructions, and modules stored in the memory 720, the processor 710 executes the various functional applications and data processing of the electronic device, that is, implements the virtual reality interaction method of the method embodiments above.
The memory 720 may comprise a program storage area and a data storage area, wherein the program storage area can store an operating system and the application programs required by at least one function, and the data storage area can store data created through use of the virtual reality interaction apparatus, and the like. In addition, the memory 720 may comprise a high-speed random access memory and may further comprise a non-volatile memory, for example at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some embodiments, the memory 720 may optionally comprise memories remotely located relative to the processor 710, which may be connected to the virtual reality interaction apparatus over a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input apparatus 730 can receive input numeric or character information and generate key-signal inputs related to the user settings and function control of the virtual reality interaction apparatus. The output apparatus 740 may comprise a display device such as a display screen.
The one or more modules are stored in the memory 720 and, when executed by the one or more processors 710, perform the virtual reality interaction method of any of the method embodiments above.
The product above can execute the method provided by the embodiments of the present application and possesses the functional modules and beneficial effects corresponding to executing the method. For technical details not described exhaustively in this embodiment, refer to the method provided by the embodiments of the present application.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) Mobile communication devices: characterized by mobile communication capability, with voice and data communication as their primary goal. Such terminals include smartphones (for example the iPhone), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing capability, and generally also have mobile Internet access. Such terminals include PDA, MID, and UMPC devices, for example the iPad.
(3) Portable entertainment devices: these can display and play multimedia content. They include audio and video players (for example the iPod), handheld game consoles, e-book readers, smart toys, and portable in-vehicle navigation devices.
(4) Servers: devices providing computing services, composed of a processor, a hard disk, memory, a system bus, and so on. A server is similar in architecture to a general-purpose computer, but because highly reliable services must be provided, the requirements on processing capability, stability, reliability, security, scalability, and manageability are high.
(5) Other electronic apparatuses having data interaction capability.
The apparatus embodiments described above are merely illustrative. The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which a person of ordinary skill in the art can understand and implement without inventive effort.
Through the description of the embodiments above, those skilled in the art can clearly understand that the embodiments can be implemented by software plus a necessary general-purpose hardware platform, or of course by hardware. Based on this understanding, the technical solutions above, in essence or in the part contributing to the prior art, can be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk, or an optical disc and includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute the methods described in the embodiments or in parts of the embodiments.
Finally, it should be noted that the embodiments above are intended only to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that the technical solutions recorded in the foregoing embodiments can still be modified, or some of their technical features replaced by equivalents, and that such modifications and replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.

Claims (13)

  1. A virtual reality interaction method, applied to an electronic device, comprising:
    acquiring at least two first infrared images of a calibration object through a first infrared camera, and acquiring at least two second infrared images of the calibration object through a second infrared camera, the calibration object comprising at least one calibration point, the calibration point being configured to provide infrared light;
    extracting feature information corresponding to each calibration point in each first infrared image and feature information corresponding to each calibration point in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
    determining three-dimensional motion trajectory information of each calibration point by using the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to each calibration point in the second infrared images, and performing virtual reality interaction through the three-dimensional motion trajectory information of each calibration point.
  2. The method according to claim 1, wherein extracting the feature information corresponding to each calibration point in each first infrared image and the feature information corresponding to each calibration point in each second infrared image specifically comprises:
    performing the following operations for each calibration point:
    determining a corresponding region of the calibration point in each first infrared image or each second infrared image;
    extracting the feature information corresponding to the calibration point within the corresponding region by using a clustering algorithm.
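The region-based extraction step of claim 2 could be sketched as follows. This is a minimal illustration only, not the patented implementation: the grayscale image format, the brightness threshold, and the assumption that the region contains a single bright blob (so the "cluster center" reduces to a mean position) are all assumptions of this sketch.

```python
def extract_feature(image, region, threshold=200):
    """Return the (row, col) centroid of the bright infrared pixels
    inside `region` -- a stand-in for the clustering step.

    image:  2D list of grayscale intensities (0-255)
    region: (row0, row1, col0, col1) bounding box of the calibration point
    """
    r0, r1, c0, c1 = region
    # Collect pixels bright enough to count as infrared-light responses.
    bright = [(r, c)
              for r in range(r0, r1)
              for c in range(c0, c1)
              if image[r][c] >= threshold]
    if not bright:
        return None  # no calibration point visible in this region
    # For a single blob, the cluster center is simply the mean position.
    row = sum(p[0] for p in bright) / len(bright)
    col = sum(p[1] for p in bright) / len(bright)
    return (row, col)

# Tiny 5x5 frame with one bright 2x2 blob.
frame = [[0] * 5 for _ in range(5)]
frame[1][1] = frame[1][2] = frame[2][1] = frame[2][2] = 255
print(extract_feature(frame, (0, 5, 0, 5)))  # -> (1.5, 1.5)
```

A full implementation would use a proper clustering algorithm (e.g. connected components or k-means) so that multiple calibration points in one region can be separated.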
  3. The method according to claim 1, wherein acquiring at least two first infrared images of the calibration object through the first infrared camera and acquiring at least two second infrared images of the calibration object through the second infrared camera specifically comprises: acquiring at least two first infrared images of the calibration object through the first infrared camera while acquiring the same number of second infrared images of the calibration object through the second infrared camera, wherein the lenses of the first infrared camera and the second infrared camera are located in the same plane;
    then, determining the three-dimensional motion trajectory information of the calibration object by using the feature information corresponding to each calibration point in each first infrared image and the feature information corresponding to each calibration point in each second infrared image specifically comprises:
    performing the following operations for each calibration point:
    determining the second infrared image acquired simultaneously with each first infrared image;
    determining, by using the feature information corresponding to the calibration point in the first infrared image and the feature information corresponding to the calibration point in the second infrared image, vertical distance information from the calibration point to the line connecting the lens centers of the first infrared camera and the second infrared camera at the time the first infrared image was acquired;
    determining the three-dimensional motion trajectory information of the calibration point through at least two pieces of the vertical distance information corresponding to the calibration point.
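For two cameras whose lenses lie in a common plane, the distance of claim 3 can be recovered by standard stereo triangulation from the disparity between the two matched feature positions. The sketch below assumes an idealized parallel, rectified camera pair; the focal length, baseline, and pixel coordinates are made-up numbers, not values from the application.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    """Perpendicular distance from a calibration point to the line joining
    the two lens centers, for an idealized parallel stereo pair.

    x_left, x_right: horizontal pixel coordinate of the same calibration
    point's feature in the simultaneously captured first/second image.
    focal_px:        focal length in pixels; baseline_m: lens spacing in m.
    """
    disparity = x_left - x_right  # pixel shift between the two views
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    # Similar triangles: depth / baseline = focal / disparity
    return focal_px * baseline_m / disparity

# Assumed numbers: 800 px focal length, 6 cm baseline, 40 px disparity.
print(depth_from_disparity(420.0, 380.0, focal_px=800.0, baseline_m=0.06))
# -> 1.2 (metres)
```

Repeating this for each synchronized image pair yields the sequence of distances from which the three-dimensional trajectory of the calibration point is assembled.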
  4. The method according to claim 1, wherein performing virtual reality interaction through the three-dimensional motion trajectory information of each calibration point specifically comprises:
    determining the three-dimensional motion trajectory information of the calibration object through the three-dimensional motion trajectory information of each calibration point;
    comparing the three-dimensional motion trajectory information of the calibration object with information in a database, and acquiring, from the database, an interaction instruction corresponding to the three-dimensional motion trajectory information of the calibration object;
    performing virtual reality interaction through the interaction instruction.
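The database comparison of claim 4 could be realized as a nearest-neighbor lookup over stored reference trajectories. The distance metric, the equal-length assumption, and the gesture names below are all illustrative choices, not details from the application.

```python
import math

def match_trajectory(trajectory, database, max_distance=1.0):
    """Return the interaction instruction whose stored reference trajectory
    is closest (sum of point-wise Euclidean distances) to the observed one,
    or None if nothing is close enough.

    trajectory: list of (x, y, z) points for the calibration object
    database:   {instruction_name: reference trajectory of equal length}
    """
    def dist(a, b):
        return sum(math.dist(p, q) for p, q in zip(a, b))

    best = min(database, key=lambda name: dist(trajectory, database[name]))
    return best if dist(trajectory, database[best]) <= max_distance else None

gestures = {
    "swipe_right": [(0, 0, 1), (0.1, 0, 1), (0.2, 0, 1)],
    "push":        [(0, 0, 1), (0, 0, 0.9), (0, 0, 0.8)],
}
print(match_trajectory([(0, 0, 1), (0.11, 0, 1), (0.19, 0, 1)], gestures))
# -> swipe_right
```

A production system would more likely use a time-warping distance (e.g. DTW) so that trajectories of different lengths and speeds can be compared.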
  5. The method according to claim 1, wherein the first infrared camera and the second infrared camera are specifically cameras having an infrared filter between the photosensitive device and the lens.
  6. The method according to claim 1, further comprising: emitting infrared light toward the calibration object, the calibration object reflecting the infrared light through the calibration points on the calibration object.
  7. A virtual reality interaction apparatus, comprising:
    a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein:
    the first infrared camera unit is configured to acquire at least two first infrared images of a calibration object through a first infrared camera, the calibration object comprising at least one calibration point, the calibration point being configured to provide infrared light;
    the second infrared camera unit is configured to acquire at least two second infrared images of the calibration object through a second infrared camera;
    the extraction unit is configured to extract feature information corresponding to each calibration point in each first infrared image and feature information corresponding to each calibration point in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
    the determination unit is configured to determine three-dimensional motion trajectory information of each calibration point by using the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to each calibration point in the second infrared images;
    the interaction unit is configured to perform virtual reality interaction through the three-dimensional motion trajectory information of each calibration point.
  8. The apparatus according to claim 7, wherein the extraction unit comprises a first extraction subunit and a second extraction subunit, wherein:
    the first extraction subunit is configured to determine, for each calibration point, a corresponding region of the calibration point in each first infrared image or each second infrared image;
    the second extraction subunit is configured to extract the feature information corresponding to the calibration point within the corresponding region by using a clustering algorithm.
  9. The apparatus according to claim 7, wherein the interaction unit comprises a first interaction unit, a second interaction unit, and a third interaction unit, wherein:
    the first interaction unit is configured to determine the three-dimensional motion trajectory information of the calibration object through the three-dimensional motion trajectory information of each calibration point;
    the second interaction unit is configured to compare the three-dimensional motion trajectory information of the calibration object with information in a database, and acquire, from the database, an interaction instruction corresponding to the three-dimensional motion trajectory information of the calibration object;
    the third interaction unit is configured to perform virtual reality interaction through the interaction instruction.
  10. A virtual reality interaction system, comprising a virtual reality interaction apparatus and a calibration object, wherein:
    the virtual reality interaction apparatus comprises a first infrared camera unit, a second infrared camera unit, an extraction unit, a determination unit, and an interaction unit, wherein: the first infrared camera unit is configured to acquire at least two first infrared images of the calibration object through a first infrared camera; the second infrared camera unit is configured to acquire at least two second infrared images of the calibration object through a second infrared camera; the extraction unit is configured to extract feature information corresponding to each calibration point in each first infrared image and feature information corresponding to each calibration point in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image; the determination unit is configured to determine three-dimensional motion trajectory information of each calibration point by using the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to each calibration point in the second infrared images; the interaction unit is configured to perform virtual reality interaction through the three-dimensional motion trajectory information of each calibration point;
    the calibration object comprises at least one calibration point, the calibration point being configured to reflect infrared light.
  11. A non-transitory computer-readable storage medium, wherein the non-transitory computer-readable storage medium stores computer instructions for causing a computer to perform the method according to any one of claims 1-6.
  12. An electronic device, comprising:
    one or more processors; and
    a memory communicatively connected to the one or more processors; wherein
    the memory stores instructions executable by the one or more processors, the instructions being executed by the one or more processors to enable the one or more processors to:
    acquire at least two first infrared images of a calibration object through a first infrared camera, and acquire at least two second infrared images of the calibration object through a second infrared camera, the calibration object comprising at least one calibration point, the calibration point being configured to provide infrared light;
    extract feature information corresponding to each calibration point in each first infrared image and feature information corresponding to each calibration point in each second infrared image, the feature information indicating the position of each calibration point in the first infrared image or the second infrared image;
    determine three-dimensional motion trajectory information of each calibration point by using the feature information corresponding to each calibration point in the first infrared images and the feature information corresponding to each calibration point in the second infrared images, and perform virtual reality interaction through the three-dimensional motion trajectory information of each calibration point.
  13. A computer program product, comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to perform the method according to any one of claims 1-6.
PCT/CN2016/096983 2015-12-01 2016-08-26 Method, device, and system for virtual reality interaction WO2017092432A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510870209.8A CN105892638A (en) 2015-12-01 2015-12-01 Virtual reality interaction method, device and system
CN201510870209.8 2015-12-01

Publications (1)

Publication Number Publication Date
WO2017092432A1 true WO2017092432A1 (en) 2017-06-08

Family

ID=57002403

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/096983 WO2017092432A1 (en) 2015-12-01 2016-08-26 Method, device, and system for virtual reality interaction

Country Status (2)

Country Link
CN (1) CN105892638A (en)
WO (1) WO2017092432A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509173A (en) * 2018-06-07 2018-09-07 北京德火科技有限责任公司 Image shows system and method, storage medium, processor

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105892638A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, device and system
CN109313483A (en) * 2017-01-22 2019-02-05 广东虚拟现实科技有限公司 A kind of device interacted with reality environment
US11445094B2 (en) * 2017-08-07 2022-09-13 Apple Inc. Electronic device having a vision system assembly held by a self-aligning bracket assembly
CN110442235B (en) * 2019-07-16 2023-05-23 广东虚拟现实科技有限公司 Positioning tracking method, device, terminal equipment and computer readable storage medium
CN111736708B (en) * 2020-08-25 2020-11-20 歌尔光学科技有限公司 Head-mounted device, picture display system and method thereof, detection system and method thereof

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140247280A1 (en) * 2013-03-01 2014-09-04 Apple Inc. Federated mobile device positioning
CN105068649A (en) * 2015-08-12 2015-11-18 深圳市埃微信息技术有限公司 Binocular gesture recognition device and method based on virtual reality helmet
CN105892633A (en) * 2015-11-18 2016-08-24 乐视致新电子科技(天津)有限公司 Gesture identification method and virtual reality display output device
CN105892638A (en) * 2015-12-01 2016-08-24 乐视致新电子科技(天津)有限公司 Virtual reality interaction method, device and system

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130063560A1 (en) * 2011-09-12 2013-03-14 Palo Alto Research Center Incorporated Combined stereo camera and stereo display interaction
CN103135755B (en) * 2011-12-02 2016-04-06 深圳泰山在线科技有限公司 Interactive system and method
US10234941B2 (en) * 2012-10-04 2019-03-19 Microsoft Technology Licensing, Llc Wearable sensor for tracking articulated body-parts
KR101465894B1 (en) * 2013-09-13 2014-11-26 성균관대학교산학협력단 Mobile terminal for generating control command using marker put on finger and method for generating control command using marker put on finger in terminal
CN104298345B (en) * 2014-07-28 2017-05-17 浙江工业大学 Control method for man-machine interaction system



Also Published As

Publication number Publication date
CN105892638A (en) 2016-08-24


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 16869743; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 EP: PCT application non-entry in European phase (Ref document number: 16869743; Country of ref document: EP; Kind code of ref document: A1)