WO2015180497A1 - A stereo vision-based motion acquisition and feedback method and system - Google Patents

A stereo vision-based motion acquisition and feedback method and system

Info

Publication number
WO2015180497A1
Authority
WO
WIPO (PCT)
Prior art keywords
accessory device
computer
light source
infrared light
bluetooth module
Prior art date
Application number
PCT/CN2015/070605
Other languages
English (en)
French (fr)
Inventor
贺杰
洪健钧
Original Assignee
贺杰
洪健钧
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 贺杰 and 洪健钧
Publication of WO2015180497A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00: Information retrieval; Database structures therefor; File system structures therefor

Definitions

  • The invention relates to a stereo vision-based motion acquisition and feedback method and system, belonging to the field of virtual reality and augmented reality technology.
  • Hand motion is captured and used as an input signal for computer control, so that the motion can be reproduced in a virtual environment for human-computer interaction.
  • This type of human-computer interaction is usually implemented in one of the following ways:
  • Laser scanning: a laser scans the area in front of the device, and a sensor collects the reflected light in each direction and computes distances, yielding depth information for every point ahead and forming a depth map. If a hand is placed in front of the sensor, its outline appears on the depth map; and if the laser continuously scans the forward area, the hand's spatial position, motion trajectory, and posture can all be continuously captured by the computer. The captured data is fed into the computer to form control data for human-computer interaction. The advantage of this method is that the spatial environment information obtained is accurate; the disadvantages are that fast, low-latency motion capture requires relatively expensive equipment, and that the overall device is large;
  • Infrared speckle scanning: an infrared light source projects a randomly shaped speckle pattern forward. When the pattern falls on objects of different shapes, the speckle deforms accordingly; the camera compares the captured speckle shape with the stored reference and solves for the depth of every point in front of the device through a dedicated algorithm, forming a depth map. Placing a hand in front of the device likewise enables continuous capture of hand motion. This method is used in mature entertainment devices such as KINECT: the technology is mature and reliable, the price is moderate, and the overall device size is moderate, but the algorithm is complex, consumes considerable system resources, and has high overall latency, which reduces comfort of use;
  • Electromagnetic induction: the device emits electromagnetic waves forward; when an object such as a hand enters the electromagnetic field, it creates a disturbance, which the device converts back into hand motion through a dedicated algorithm, achieving motion capture. This method has low latency and low cost, but its reconstruction accuracy and stability are poor, and it can only capture motion, not the images needed for augmented-reality use;
  • Visual recognition: images are captured by one or two cameras and hand motion is extracted by graphics algorithms for human-computer interaction. However, because the hand outline is complex and its features are indistinct, this method is computationally heavy and introduces a certain delay, degrading the user experience.
  • In summary, none of the existing solutions simultaneously solves the problems of cost, latency, and accurate stability. Most importantly, they lack a feedback mechanism: when the hand interacts with objects in the virtual world, the interaction is perceived only visually, with no tactile or other force feedback, so realism suffers greatly.
  • The object of the present invention is to provide a stereo vision-based motion acquisition and feedback method and system that effectively solves the problems of the prior art, in particular its inability to simultaneously solve the problems of cost, latency, and accurate stability, and its inability to produce force feedback.
  • A stereo vision-based motion acquisition and feedback method comprises the following steps: collecting the spatial position of an infrared light source on an accessory device in the space in front, and mapping that position into a virtual space to control the position and orientation of a virtual object; when the virtual object interacts with the virtual environment, wirelessly driving the accessory device to vibrate, simulating the sensation of force feedback.
  • Collecting the spatial position of the infrared light source on the accessory device in the space in front specifically includes: a dual-camera device fixed on a head-mounted display captures separate left and right images of the scene in front and transmits the two image streams to a computer; the computer processes the two image streams to obtain the spatial position of the infrared light source on the accessory device.
  • More specifically, the dual-camera device captures the forward image data through CMOS or CCD sensors on its left and right sides. The image data passes through an A/D converter to become a digital signal and enters a DSP processor for exposure, gain, and white-balance processing; the processed digital signal enters an encoding chip for signal encoding; after encoding, the two video signals are merged on a data bus into a single stream and output to the computer over a USB data cable, yielding a stable, clear digital image.
  • The USB data cable may also be the cable supplied with the dual-camera device.
  • The computer's processing of the two image streams to obtain the spatial position of the infrared light source on the accessory device specifically includes: the computer decodes the two video signals and extracts the X and Y image coordinates of the infrared light source in each; the coordinates captured by each lens are converted into the azimuth angle of the light source relative to that lens; with the azimuth angles relative to the two lenses and the distance between the lenses known, the X, Y, and Z coordinates of the light source are computed; after the X, Y, and Z coordinates are corrected by an empirical lens-distortion model, the real-space coordinates of the light source relative to the camera are obtained. This method yields the spatial position of the infrared source point with high precision and efficiency, enabling fast, stable capture of the target object. A sketch of the triangulation follows.
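  • As an illustration of the triangulation step only, the following minimal Python sketch recovers X, Y, Z from the pixel positions of the IR spot in the two images. The focal length, optical centre, and lens spacing are assumed calibration values, not figures from the patent, and the lens-distortion correction is omitted:

```python
import math

# Assumed calibration of two parallel lenses (not values from the patent).
FOCAL_PX = 600.0       # focal length in pixels
CX, CY = 320.0, 240.0  # optical centre of a 640x480 sensor
BASELINE_M = 0.06      # distance between the two lenses, in metres

def pixel_to_azimuth(u, v):
    """Convert a pixel position into horizontal and vertical angles
    (radians) relative to the lens axis: the azimuth-conversion step."""
    return math.atan2(u - CX, FOCAL_PX), math.atan2(v - CY, FOCAL_PX)

def triangulate(left_px, right_px):
    """Recover (X, Y, Z) in metres, in the camera frame, from the two
    azimuth angles and the known lens separation."""
    ax_l, ay_l = pixel_to_azimuth(*left_px)
    ax_r, _ = pixel_to_azimuth(*right_px)
    disparity = math.tan(ax_l) - math.tan(ax_r)
    if abs(disparity) < 1e-9:
        raise ValueError("zero disparity: spot unmatched or too distant")
    z = BASELINE_M / disparity
    return z * math.tan(ax_l), z * math.tan(ay_l), z

# The IR spot seen at slightly different columns by the two lenses:
print(triangulate((352.0, 256.0), (304.0, 256.0)))  # ~(0.04, 0.02, 0.75)
```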
  • The method of the present invention further includes: comparing the relative positions of the multiple infrared light sources on the accessory device (i.e., their real-space coordinates relative to the camera) against models stored in the computer, to determine the type of accessory device the light sources correspond to and the specific position of each light source on that device, and thereby the spatial position and posture of the accessory device; continuous coordinate acquisition also allows the movement trajectory and velocity of the device to be computed, and these serve as control means for human-computer interaction. A sketch of the model matching follows.
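  • The patent does not specify how the stored models are matched; one simple possibility, sketched below under that assumption, is to compare sorted pairwise LED distances, which are invariant to rotation and translation. The device names, distances, and tolerance are invented placeholders:

```python
import itertools

# Hypothetical stored models: each accessory type as the sorted pairwise
# distances (metres) between its IR LEDs. Invented illustrative values.
DEVICE_MODELS = {
    "glove": [0.030, 0.041, 0.051],
    "gun":   [0.100, 0.180, 0.250],
}

def signature(points):
    """Sorted pairwise distances: invariant to rotation and translation."""
    return sorted(
        sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
        for p, q in itertools.combinations(points, 2)
    )

def identify(points, tol=0.01):
    """Return the best-matching device type, or None if nothing fits."""
    sig = signature(points)
    best, best_err = None, float("inf")
    for name, model in DEVICE_MODELS.items():
        if len(model) != len(sig):
            continue  # different LED count: cannot be this device
        err = max(abs(s - m) for s, m in zip(sig, model))
        if err < tol and err < best_err:
            best, best_err = name, err
    return best

# Three triangulated LED positions (camera frame, metres):
leds = [(0.00, 0.00, 0.75), (0.03, 0.00, 0.75), (0.00, 0.04, 0.76)]
print(identify(leds))  # -> glove
```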
  • The method of the present invention further comprises: after decoding the two video signals, the computer displays them separately to the left and right eyes, forming a stereoscopic view of the surrounding real space; in this stereoscopic view, virtual objects or scenes are superimposed for the left and right eyes; at the same time, the computer extracts the X and Y coordinates of the infrared source in the two video signals to obtain control commands (i.e., position and motion commands) for the accessory device in the field of view, achieving an interactive experience in an augmented-reality environment. For example, wearing a glove-shaped accessory device, the user can directly click on and control miniature virtual soldiers in a small battlefield on the desktop, and these click and touch actions can trigger the aforementioned force-feedback events, greatly improving the user's sense of control.
  • A stereo vision-based motion acquisition and feedback system implementing the foregoing method comprises: a dual-camera device, a base, an accessory device, and a computer. The base is provided with an A Bluetooth module and a USB data cable, the USB data cable being connected to the computer; the accessory device is provided with a B Bluetooth module, wirelessly connected to the A Bluetooth module; the accessory device is further provided with an infrared LED, or with an infrared LED, a vibration module, and a driving box, the driving box being connected to the A Bluetooth module and to the vibration module respectively.
  • The lenses of the dual-camera device are wide-angle lenses of 120 degrees or more, enabling motion capture over a large area.
  • The system further comprises a head-mounted display, with the dual-camera device fixed separately, left and right, on the head-mounted display.
  • The accessory device is a glove-shaped accessory device, a ring-shaped accessory device, a gun-shaped accessory device, or a handle accessory device, each of which is provided with an infrared LED; the glove-shaped accessory device is further provided with a vibration module and a driving box; the gun-shaped accessory device and the handle accessory device are further provided with a trigger and buttons, the trigger and buttons being connected to the B Bluetooth module.
  • The dual-camera device includes: a CMOS or CCD sensor, an A/D converter, a DSP processor, and an encoding chip, connected in sequence; the encoding chip is connected to the computer through a data bus and the USB data cable.
  • The base further includes a charging cable, which connects to the accessory device.
  • Positioning is fast, accurate, and stable: stereo vision and the binocular-parallax principle described above are used to locate the infrared source point in space. Because an infrared source point stands out very clearly in the spatial environment, the positioning computation is simple and highly precise; moreover, the dual-camera device supports frame rates up to 120 Hz and maintains stable tracking even when the target object moves at high speed, so the overall latency is low and the user experience is good;
  • Hardware cost is extremely low, the structure is simple, and the size is very small: the invention captures the forward image with a dual-camera device and then computes the spatial position of the infrared light source on the accessory device in the space in front. The dual-camera device is built from miniature modular CMOS or CCD image sensors, so it is very inexpensive and highly integrated, and its size and weight are so small that it is no burden even when mounted on a head-mounted display;
  • Force feedback can be realized: force feedback greatly enhances the realism of interaction and improves the user experience;
  • The motion-capture range is wide: because the lenses are wide-angle lenses of 120 degrees or more, motion can be captured over a large area;
  • Augmented-reality functionality can be realized simultaneously: the invention uses the dual-camera device as the sensor; besides spatial positioning, the stereo vision it captures can be output to the head-mounted display to reproduce real vision, onto which information or virtual objects can also be superimposed; in addition, the invention lets the user interact with that information or those objects through hand motion, realizing augmented-reality functionality.
  • Figure 1 is a schematic structural view of the ring-shaped accessory device;
  • Figure 2 is a schematic structural view of the gun-shaped accessory device;
  • Figure 3 is a schematic structural view of the handle accessory device;
  • Figure 4 is a schematic structural view of the base;
  • Figure 5 is a schematic structural view of the glove-shaped accessory device;
  • Figure 6 is a schematic diagram of how the dual-camera device connects to the head-mounted display;
  • Figure 7 is a flow chart of a method of an embodiment of the present invention.
  • Figure 8 is a schematic structural view of Embodiment 2.
  • Figure 9 is a schematic structural view of Embodiment 3.
  • Figure 10 is a schematic structural view of Embodiment 4.
  • Figure 11 is a schematic structural view of Embodiment 5.
  • Figure 12 is a schematic structural view of Embodiment 6.
  • Embodiment 1 of the present invention, as shown in Figure 7, includes the following steps. Acquiring the spatial position of the infrared light source on the accessory device 14 in the space in front: the dual-camera device 1 fixed to the front of the head-mounted display 2 captures forward image data through its left and right CMOS or CCD sensors 18; the image data passes through the A/D converter 19, becomes a digital signal, and enters the DSP processor 20 for exposure, gain, and white-balance processing; the processed digital signal enters the encoding chip 21 for signal encoding; after encoding, the two video signals enter the data bus 22, are merged into one stream, and are output to the computer 15 through the USB data cable supplied with the dual-camera device 1. The computer 15 decodes the two video signals and extracts the X and Y image coordinates of the infrared light source in each; the coordinates captured by each lens are converted into the azimuth angle of the light source relative to that lens; with the azimuth angles relative to the two lenses and the lens separation known, the X, Y, and Z coordinates of the light source are computed; after correction by the empirical lens-distortion model, the real-space coordinates of the light source relative to the camera are obtained. In the same way, the coordinates of the multiple infrared light sources on the accessory device 14 are obtained and their relative positions are matched against the models stored in the computer 15, determining the type of accessory device 14 (glove-shaped, ring-shaped, and so on) and the specific position of each light source on it, and hence the spatial position and posture of the accessory device 14; continuous coordinate acquisition yields its movement trajectory and velocity. The spatial position, trajectory, and velocity of the accessory device are mapped into the virtual space to control the position and orientation of a virtual object; when the virtual object interacts with the virtual environment, the accessory device 14 is wirelessly driven to vibrate, simulating the sensation of force feedback. After decoding the two video signals, the computer 15 also displays them separately to the left and right eyes, forming a stereoscopic view of the surrounding real space; in this view, virtual objects or scenes are superimposed for the left and right eyes; at the same time, the computer 15 extracts the X and Y coordinates of the infrared source in the two video signals to obtain position and motion commands for the accessory device 14 in the field of view, achieving interaction in an augmented-reality environment.
  • Embodiment 2: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 8, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a glove-shaped accessory device 3 (shown in Figure 5), further provided with an infrared LED 8, a vibration module 4, and a driving box 5, the driving box 5 being connected to the A Bluetooth module 16 and to the vibration module 4 respectively. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
  • Embodiment 3: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 9, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a ring-shaped accessory device 6 (shown in Figure 1), further provided with an infrared LED 8. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
  • Embodiment 4: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 10, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a gun-shaped accessory device 7 (shown in Figure 2), provided with an infrared LED 8 and a trigger and buttons 9, the trigger and buttons 9 being connected to the B Bluetooth module 17. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
  • Embodiment 5: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 11, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a handle accessory device 10 (shown in Figure 3), provided with an infrared LED 8 and a trigger and buttons 9, the trigger and buttons 9 being connected to the B Bluetooth module 17. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
  • Embodiment 6: a stereo vision-based motion acquisition and feedback system, as shown in Figure 12, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the accessory device 14 is further provided with an infrared LED 8.
  • Glove-shaped accessory device 3: infrared LEDs 8 and vibration modules 4 are placed at key points such as the fingertips of the glove (each infrared LED 8 is co-located with a vibration module 4); vibration modules 4 such as miniature eccentric motors or ultrasonic vibration generators are embedded at the finger-pad positions; a driving box 5 sits on the back of the glove and contains a battery, the B Bluetooth module 17, a vibration-module driving circuit, and a micro charging port. After being charged through the micro charging port via the charging cable 12, the battery powers the device; the B Bluetooth module 17 is wirelessly connected to the A Bluetooth module 16 on the base 11; the computer 15 sends a signal through the A Bluetooth module 16 on the base 11 to the B Bluetooth module 17, which passes the control signal to the vibration-module driving circuit, which finally drives the vibration module 4 to vibrate (a hypothetical sketch of this command path follows);
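  • The patent does not define the byte-level protocol between the computer, the base, and the driving box, so the following is a hypothetical sketch only: it assumes the base enumerates as a virtual COM port (driven here with pyserial) and invents a one-byte command layout for choosing a fingertip module and a vibration strength:

```python
import serial  # pyserial; assumes the base shows up as a virtual COM port

# Invented framing: one header byte, then high nibble = finger index and
# low nibble = strength. The patent only states that a control signal
# reaches the vibration-module driving circuit; this layout is illustrative.
HEADER = 0xA5

def send_vibration(port, finger, strength):
    """Ask the base to relay a pulse to fingertip module `finger` (0-4)
    at `strength` (0-15) over its Bluetooth link to the driving box."""
    if not (0 <= finger <= 4 and 0 <= strength <= 15):
        raise ValueError("finger must be 0-4, strength 0-15")
    port.write(bytes([HEADER, (finger << 4) | strength]))

# Example: buzz the index fingertip when a touch event fires.
with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as base:
    send_vibration(base, finger=1, strength=12)
```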
  • Ring-shaped accessory device 6: contains only a battery, a micro charging port, a micro switch, and an infrared LED 8; after being charged through the micro charging port via the charging cable 12, the battery powers the infrared LED 8;
  • Gun-shaped accessory device 7: two or more infrared LEDs 8 are arranged along the spine of the gun, and the body carries a trigger and buttons 9; the buttons can drive operations such as walking in software, and the trigger can simulate firing. The body contains a battery, the B Bluetooth module 17, a vibration-module driving circuit, and a micro charging port; after being charged through the micro charging port via the charging cable 12, the battery powers the device, and the B Bluetooth module 17 is wirelessly connected to the A Bluetooth module 16 on the base 11. Communication runs both ways: first, the computer 15 sends a signal through the A Bluetooth module 16 on the base 11 to the B Bluetooth module 17, which passes the control signal to the vibration-module driving circuit, which finally drives the vibration module 4 to vibrate; second, when a button or the trigger on the body is pressed, the B Bluetooth module 17 transmits the key signal back to the computer 15 through the base 11;
  • Handle accessory device 10: when the application software environment requires both spatial positioning and complex button operation, the handle accessory device 10 or another specially shaped device can be used. The device carries an infrared light source for spatial-position calibration as well as various buttons for operation functions, and has a built-in force-feedback vibration module 4; it connects to the computer 15 through the B Bluetooth module 17 and the base 11, on the same principle as the other device types;
  • Base 11: the base 11 connects to the computer 15 via the USB data cable 13, charges the accessory devices by connecting to the charging cable 12 through the micro charging port, and communicates wirelessly with the accessory devices through the A Bluetooth module 16 to exchange signals or exert control;
  • Spatial positioning of the infrared source point: the dual-camera device 1 captures forward image data through its left and right CMOS or CCD sensors 18; the image data passes through the A/D converter 19, becomes a digital signal, and enters the DSP processor 20 for exposure, gain, and white-balance processing; the processed digital signal enters the encoding chip 21 for signal encoding; after encoding, the two video signals are merged on the data bus 22 and output to the computer 15 through the USB data cable 13. On receiving the two video signals from the dual-camera device 1, the computer 15 decodes them and extracts the X and Y image coordinates of the infrared source points; the coordinates captured by each lens can be converted into the azimuth angle of the source relative to that lens, and with the azimuth angles relative to the two lenses and the lens separation known, the X, Y, and Z coordinates of the source can be computed; after correction by the empirical lens-distortion model, the real-space coordinates of the source point relative to the camera are obtained. The coordinates of multiple source points can be obtained in this way, and their relative positions matched against the models stored in the computer to determine whether the points belong to the glove or to another accessory device 14, and hence the spatial position and posture of the device; continuous coordinate acquisition yields the movement trajectory and velocity of the device (a minimal sketch follows), and these serve as control means for human-computer interaction;
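  • The trajectory and velocity computation mentioned above can be as simple as finite differences over the per-frame coordinates. A minimal sketch, assuming the stated 120 Hz frame rate and no smoothing:

```python
from collections import deque

FRAME_HZ = 120.0  # frame rate stated for the dual-camera device

class TrajectoryTracker:
    """Keeps recent device positions and derives velocity by finite
    differences; one tracker per identified accessory device."""

    def __init__(self, history=120):
        self.points = deque(maxlen=history)  # recent (x, y, z) samples

    def update(self, xyz):
        self.points.append(xyz)

    def velocity(self):
        """Velocity in m/s from the last two samples (zero if too few)."""
        if len(self.points) < 2:
            return (0.0, 0.0, 0.0)
        (x0, y0, z0), (x1, y1, z1) = self.points[-2], self.points[-1]
        return ((x1 - x0) * FRAME_HZ,
                (y1 - y0) * FRAME_HZ,
                (z1 - z0) * FRAME_HZ)

tracker = TrajectoryTracker()
tracker.update((0.04, 0.02, 0.75))
tracker.update((0.05, 0.02, 0.74))
print(tracker.velocity())  # -> (1.2, 0.0, -1.2) approximately
```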
  • Force-feedback principle: when the user controls a virtual character or virtual object in the virtual environment through the dual-camera device 1 and the accessory device 14, the character or object interacts with the virtual environment; when certain preset interactions occur, the computer 15 sends a signal to the base 11 to make a particular vibration module 4 deliver force feedback. For example, when the virtual character's finger touches a stone in the virtual environment, the software engine signals the driver software of the base 11, and the base 11, through the A Bluetooth module 16, wirelessly commands the vibration module 4 on the corresponding finger of the glove-shaped accessory device 3 to vibrate, simulating the virtual sensation of touch;
  • Augmented-reality principle: when the computer 15 receives the two video signals from the dual-camera device 1, it decodes them and displays them separately to the left and right eyes on the head-mounted display 2, forming a stereoscopic view of the surrounding real space. In this stereoscopic view, virtual objects or scenes can be superimposed, such as a small battlefield displayed on the desktop; at the same time, the computer 15 extracts the X and Y coordinates of the infrared light source in the two video streams and, following the spatial-position calculation principle and the accessory-device operating principle described above, obtains the position and motion commands of the accessory device 14 in the field of view, realizing an interactive experience in an augmented-reality environment. For example, wearing the glove-shaped accessory device 3, the user can directly click on and control the miniature virtual soldiers in the small desktop battlefield, and these click and touch actions can trigger the aforementioned force-feedback events, greatly improving the user's sense of control.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Position Input By Displaying (AREA)

Abstract

The invention discloses a stereo vision-based motion acquisition and feedback method and system. The method comprises the following steps: collecting the spatial position of an infrared light source on an accessory device in the space in front, mapping that position into a virtual space, and controlling the position and orientation of a virtual object; when the virtual object interacts with the virtual environment, wirelessly driving the accessory device to vibrate, simulating the sensation of force feedback. Advantages of the invention: 1. positioning is fast and accurate and operation is stable; 2. hardware cost is extremely low, the structure is simple, and the size is very small; 3. force feedback can be realized: force feedback greatly enhances the realism of interaction and improves the user experience; 4. augmented-reality functionality can be realized, adding new modes of human-computer interaction.

Description

A stereo vision-based motion acquisition and feedback method and system
Technical Field
The present invention relates to a stereo vision-based motion acquisition and feedback method and system, belonging to the technical field of virtual reality and augmented reality.
Background Art
Hand motion is captured and used as an input signal for computer control, so that the motion can be reproduced in a virtual environment for human-computer interaction. This type of human-computer interaction is usually implemented in one of the following ways:
1. Laser scanning: a laser scans the area in front of the device, and a sensor collects the reflected light in each direction and computes distances, yielding depth information for every point ahead and forming a depth map. If a hand is placed in front of the sensor, its outline appears on the depth map; and if the laser continuously scans the forward area, the hand's spatial position, motion trajectory, and posture can all be continuously captured by the computer. The captured data is fed into the computer to form control data for human-computer interaction. The advantage of this method is that the spatial environment information obtained is accurate; the disadvantages are that fast, low-latency motion capture requires relatively expensive equipment, and that the overall device is large;
2. Infrared speckle scanning: an infrared light source projects a randomly shaped speckle pattern forward; when the pattern falls on objects of different shapes ahead, the speckle deforms accordingly. The camera compares the captured speckle shape with the stored reference and solves for the depth of every point in front of the device through a dedicated algorithm, forming a depth map. Placing a hand in front of the device likewise enables continuous capture of hand motion. This method is used in mature entertainment devices such as KINECT: the technology is mature and reliable, the price is moderate, and the overall device size is moderate, but the algorithm is complex, consumes considerable system resources, and has high overall latency, which reduces comfort of use;
3. Electromagnetic induction: the device emits electromagnetic waves forward; when an object such as a hand enters the electromagnetic field, it creates a disturbance, which the device converts back into hand motion through a dedicated algorithm, achieving motion capture. This method has low latency and low cost, but its reconstruction accuracy and stability are poor, and it can only capture motion, not the images needed for augmented-reality use;
4. Visual recognition: images are captured by one or two cameras and hand motion is extracted by graphics algorithms for human-computer interaction. However, because the hand outline is complex and its features are indistinct, this method is computationally heavy and introduces a certain delay, degrading the user experience.
In summary, none of the existing solutions simultaneously solves the problems of cost, latency, and accurate stability. Most importantly, they lack a feedback mechanism: when the hand interacts with objects in the virtual world, the interaction is perceived only visually, with no tactile or other force feedback, so realism suffers greatly.
Summary of the Invention
The object of the present invention is to provide a stereo vision-based motion acquisition and feedback method and system that effectively solves the problems of the prior art, in particular its inability to simultaneously solve the problems of cost, latency, and accurate stability, and its inability to produce force feedback.
To solve the above technical problems, the present invention adopts the following technical solution. A stereo vision-based motion acquisition and feedback method comprises the following steps: collecting the spatial position of an infrared light source on an accessory device in the space in front, and mapping that position into a virtual space to control the position and orientation of a virtual object; when the virtual object interacts with the virtual environment, wirelessly driving the accessory device to vibrate, simulating the sensation of force feedback.
Preferably, collecting the spatial position of the infrared light source on the accessory device in the space in front specifically includes: a dual-camera device fixed on a head-mounted display captures separate left and right images of the scene in front and transmits the two image streams to a computer; the computer processes the two image streams to obtain the spatial position of the infrared light source on the accessory device.
More preferably, the dual-camera device capturing separate left and right images and transmitting the two image streams to the computer specifically includes: the dual-camera device captures the forward image data through CMOS or CCD sensors on its left and right sides; the image data passes through an A/D converter to become a digital signal and enters a DSP processor for exposure, gain, and white-balance processing; the processed digital signal enters an encoding chip for signal encoding; after encoding, the two video signals are merged on a data bus into a single stream and output to the computer over a USB data cable, yielding a stable, clear digital image.
The USB data cable may also be the cable supplied with the dual-camera device.
More preferably, the computer processing the two image streams to obtain the spatial position of the infrared light source on the accessory device specifically includes: the computer decodes the two video signals and extracts the X and Y image coordinates of the infrared light source in each; the coordinate values captured by each lens are converted into the azimuth angle of the light source relative to that lens; with the azimuth angles relative to the two lenses and the distance between the lenses known, the X, Y, and Z coordinates of the light source are computed; after the X, Y, and Z coordinates of the light source are corrected by an empirical lens-distortion model, the real-space coordinates of the light source relative to the camera are obtained. This method yields the spatial position of the infrared source point with high precision and efficiency, enabling fast, stable capture of the target object. A sketch of one possible distortion correction follows.
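The patent does not name the empirical lens-distortion model. A common empirical choice is a radial polynomial (Brown-Conrady style); the sketch below applies one to a normalized image coordinate before the azimuth conversion, with coefficients that are invented placeholders a real system would obtain by calibrating its wide-angle lenses:

```python
# Assumed radial coefficients, fitted in the distorted-to-corrected
# direction so a single evaluation corrects a point. Placeholder values;
# the patent's actual empirical model is not specified.
K1, K2 = -0.28, 0.09

def correct_normalized(xn, yn):
    """Correct a normalized image coordinate (u - cx)/f, (v - cy)/f so
    that wide-angle bending does not skew the computed azimuths."""
    r2 = xn * xn + yn * yn
    scale = 1.0 + K1 * r2 + K2 * r2 * r2
    return xn * scale, yn * scale

print(correct_normalized(0.40, 0.10))  # an off-axis point moves inward
```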
The method of the present invention further includes: comparing the relative positions of the multiple infrared light sources on the accessory device (i.e., their real-space coordinates relative to the camera) against models stored in the computer, to determine the type of accessory device the light sources correspond to and the specific position of each light source on that device, and thereby the spatial position and posture of the accessory device; continuous coordinate acquisition also allows the movement trajectory and velocity of the device to be computed, and these serve as control means for human-computer interaction.
The method of the present invention further includes: after decoding the two video signals, the computer displays them separately to the left and right eyes, forming a stereoscopic view of the surrounding real space; in this stereoscopic view, virtual objects or scenes are superimposed for the left and right eyes; at the same time, the computer extracts the X and Y coordinates of the infrared light source in the two video signals to obtain control commands (i.e., position and motion commands) for the accessory device in the field of view, achieving an interactive experience in an augmented-reality environment. For example, wearing a glove-shaped accessory device, the user can directly click on and control miniature virtual soldiers in a small battlefield on the desktop, and these click and touch actions can likewise trigger the aforementioned force-feedback events, greatly improving the user's sense of control; a small sketch of this touch-and-feedback loop follows.
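To make the control loop concrete, here is a small hypothetical sketch of the interaction step: the tracked fingertip position, mapped into the virtual space, is tested against a virtual object's bounding sphere, and a hit raises the force-feedback event described above. The scene content and the callback are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    center: tuple   # (x, y, z) position in the virtual space
    radius: float   # bounding-sphere radius used for the touch test

def check_touch(finger_xyz, objects, on_touch):
    """Fire on_touch(obj) for every object whose bounding sphere the
    fingertip is inside; on_touch would trigger the vibration command."""
    for obj in objects:
        dist2 = sum((f - c) ** 2 for f, c in zip(finger_xyz, obj.center))
        if dist2 <= obj.radius ** 2:
            on_touch(obj)

scene = [VirtualObject("stone", (0.05, 0.00, 0.70), 0.03)]
check_touch((0.04, 0.01, 0.71), scene,
            lambda obj: print("vibrate fingertip:", obj.name))
```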
A stereo vision-based motion acquisition and feedback system implementing the foregoing method comprises: a dual-camera device, a base, an accessory device, and a computer. The base is provided with an A Bluetooth module and a USB data cable, the USB data cable being connected to the computer; the accessory device is provided with a B Bluetooth module, wirelessly connected to the A Bluetooth module; the accessory device is further provided with an infrared LED, or with an infrared LED, a vibration module, and a driving box, the driving box being connected to the A Bluetooth module and to the vibration module respectively.
Preferably, the lenses of the dual-camera device are wide-angle lenses of 120 degrees or more, enabling motion capture over a large area.
Preferably, the system further comprises: a head-mounted display, with the dual-camera device fixed separately, left and right, on the head-mounted display.
In the present invention, the accessory device is a glove-shaped accessory device, a ring-shaped accessory device, a gun-shaped accessory device, or a handle accessory device, each of which is provided with an infrared LED; the glove-shaped accessory device is further provided with a vibration module and a driving box; the gun-shaped accessory device and the handle accessory device are further provided with a trigger and buttons, the trigger and buttons being connected to the B Bluetooth module.
In the foregoing stereo vision-based motion acquisition and feedback system, the dual-camera device includes: a CMOS or CCD sensor, an A/D converter, a DSP processor, and an encoding chip, connected in sequence; the encoding chip is connected to the computer through a data bus and the USB data cable.
In the present invention, the base further includes: a charging cable, which connects to the accessory device.
Compared with the prior art, the advantages of the present invention are:
1. Fast, accurate positioning and stable operation: stereo vision and the binocular-parallax principle described above are used to locate the infrared source point in space. Because an infrared source point stands out very clearly in the spatial environment, the positioning computation is simple and highly precise; moreover, the dual-camera device supports frame rates up to 120 Hz and maintains stable tracking even when the target moves at high speed, so the overall latency is very low and the user experience is good;
2. Extremely low hardware cost, simple structure, very small size: the invention captures the forward image with a dual-camera device and then computes the spatial position of the infrared light source on the accessory device in the space in front. The dual-camera device is built from miniature modular CMOS or CCD image sensors, so it is very inexpensive and highly integrated, and its size and weight are so small that it is no burden even when mounted on a head-mounted display;
3. Force feedback can be realized: force feedback greatly enhances the realism of interaction and improves the user experience;
4. Wide motion-capture range: because the lenses are wide-angle lenses of 120 degrees or more, motion can be captured over a large area;
5. Augmented-reality functionality realized simultaneously: the invention uses the dual-camera device as the sensor; besides spatial positioning, the stereo vision it captures can be output to the head-mounted display to reproduce real vision, onto which information or virtual objects can also be superimposed; in addition, the invention lets the user interact with that information or those objects through hand motion, realizing augmented-reality functionality.
Brief Description of the Drawings
Figure 1 is a schematic structural view of the ring-shaped accessory device;
Figure 2 is a schematic structural view of the gun-shaped accessory device;
Figure 3 is a schematic structural view of the handle accessory device;
Figure 4 is a schematic structural view of the base;
Figure 5 is a schematic structural view of the glove-shaped accessory device;
Figure 6 is a schematic diagram of how the dual-camera device connects to the head-mounted display;
Figure 7 is a flow chart of the method of an embodiment of the present invention;
Figure 8 is a schematic structural view of Embodiment 2;
Figure 9 is a schematic structural view of Embodiment 3;
Figure 10 is a schematic structural view of Embodiment 4;
Figure 11 is a schematic structural view of Embodiment 5;
Figure 12 is a schematic structural view of Embodiment 6.
Reference numerals: 1 - dual-camera device, 2 - head-mounted display, 3 - glove-shaped accessory device, 4 - vibration module, 5 - driving box, 6 - ring-shaped accessory device, 7 - gun-shaped accessory device, 8 - infrared LED, 9 - trigger and buttons, 10 - handle accessory device, 11 - base, 12 - charging cable, 13 - USB data cable, 14 - accessory device, 15 - computer, 16 - A Bluetooth module, 17 - B Bluetooth module, 18 - CMOS or CCD sensor, 19 - A/D converter, 20 - DSP processor, 21 - encoding chip, 22 - data bus.
The invention is further described below with reference to the drawings and specific embodiments.
Detailed Description of the Embodiments
Embodiment 1 of the present invention: a stereo vision-based motion acquisition and feedback method, as shown in Figure 7, comprising the following steps. Collecting the spatial position of the infrared light source on the accessory device 14 in the space in front: the dual-camera device 1 fixed to the front of the head-mounted display 2 captures forward image data through its left and right CMOS or CCD sensors 18; the image data passes through the A/D converter 19, becomes a digital signal, and enters the DSP processor 20 for exposure, gain, and white-balance processing; the processed digital signal enters the encoding chip 21 for signal encoding; after encoding, the two video signals enter the data bus 22, are merged into one stream, and are output to the computer 15 through the USB data cable supplied with the dual-camera device 1. The computer 15 decodes the two video signals and extracts the X and Y image coordinates of the infrared light source in each; the coordinates captured by each lens are converted into the azimuth angle of the light source relative to that lens; with the azimuth angles relative to the two lenses and the lens separation known, the X, Y, and Z coordinates of the light source are computed; after correction by the empirical lens-distortion model, the real-space coordinates of the light source relative to the camera are obtained. In the same way, the coordinates of the multiple infrared light sources on the accessory device 14 can be obtained, and their relative positions are matched against the models stored in the computer 15, so the type of accessory device 14 the light sources correspond to (glove-shaped accessory, ring-shaped accessory, and so on) and the specific position of each light source on that device can be determined, and hence the spatial position and posture of the accessory device 14; continuous coordinate acquisition then yields the movement trajectory and velocity of the accessory device 14. The spatial position, movement trajectory, and velocity of the accessory device are mapped into the virtual space to control the position and orientation of a virtual object; when the virtual object interacts with the virtual environment, the accessory device 14 is wirelessly driven to vibrate, simulating the sensation of force feedback. After decoding the two video signals, the computer 15 also displays them separately to the left and right eyes, forming a stereoscopic view of the surrounding real space; in this stereoscopic view, virtual objects or scenes are superimposed for the left and right eyes; at the same time, the computer 15 extracts the X and Y coordinates of the infrared light source in the two video signals to obtain position and motion commands for the accessory device 14 in the field of view, achieving interaction in an augmented-reality environment.
Embodiment 2: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 8, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a glove-shaped accessory device 3 (shown in Figure 5), further provided with an infrared LED 8, a vibration module 4, and a driving box 5, the driving box 5 being connected to the A Bluetooth module 16 and to the vibration module 4 respectively. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
Embodiment 3: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 9, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a ring-shaped accessory device 6 (shown in Figure 1), further provided with an infrared LED 8. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
Embodiment 4: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 10, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a gun-shaped accessory device 7 (shown in Figure 2), provided with an infrared LED 8 and a trigger and buttons 9, the trigger and buttons 9 being connected to the B Bluetooth module 17. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
Embodiment 5: a stereo vision-based motion acquisition and feedback system implementing the method described in Embodiment 1, as shown in Figure 11, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 (shown in Figure 4) is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the dual-camera device 1 is connected to the computer 15. The accessory device 14 is a handle accessory device 10 (shown in Figure 3), provided with an infrared LED 8 and a trigger and buttons 9, the trigger and buttons 9 being connected to the B Bluetooth module 17. The dual-camera device 1 comprises: a CMOS or CCD sensor 18, an A/D converter 19, a DSP processor 20, and an encoding chip 21, connected in sequence; the encoding chip 21 is connected to the computer 15 via the data bus 22 and the USB data cable 13. The system further comprises a head-mounted display 2, with the dual-camera device 1 fixed separately, left and right, to the front of the head-mounted display 2 (as shown in Figure 6).
Embodiment 6: a stereo vision-based motion acquisition and feedback system, as shown in Figure 12, comprises: a dual-camera device 1, a base 11, an accessory device 14, and a computer 15. The base 11 is provided with an A Bluetooth module 16 and a USB data cable 13, the USB data cable 13 being connected to the computer 15; the accessory device 14 is provided with a B Bluetooth module 17, wirelessly connected to the A Bluetooth module 16; the accessory device 14 is further provided with an infrared LED 8.
Working principle:
1. Glove-shaped accessory device 3: infrared LEDs 8 and vibration modules 4 are placed at key points such as the fingertips of the glove (each infrared LED 8 is co-located with a vibration module 4); vibration modules 4 such as miniature eccentric motors or ultrasonic vibration generators are embedded at the finger-pad positions; a driving box 5 sits on the back of the glove and contains a battery, the B Bluetooth module 17, a vibration-module driving circuit, and a micro charging port. After being charged through the micro charging port via the charging cable 12, the battery powers the device; the B Bluetooth module 17 is wirelessly connected to the A Bluetooth module 16 on the base 11; the computer 15 sends a signal through the A Bluetooth module 16 on the base 11 to the B Bluetooth module 17, which passes the control signal to the vibration-module driving circuit, which finally drives the vibration module 4 to vibrate;
2. Ring-shaped accessory device 6: contains only a battery, a micro charging port, a micro switch, and an infrared LED 8; after being charged through the micro charging port via the charging cable 12, the battery powers the infrared LED 8;
3. Gun-shaped accessory device 7: two or more infrared LEDs 8 are arranged along the spine of the gun, and the body carries a trigger and buttons 9; the buttons can drive operations such as walking in software, and the trigger can simulate firing. The body contains a battery, the B Bluetooth module 17, a vibration-module driving circuit, and a micro charging port; after being charged through the micro charging port via the charging cable 12, the battery powers the device, and the B Bluetooth module 17 is wirelessly connected to the A Bluetooth module 16 on the base 11. Communication runs both ways: first, the computer 15 sends a signal through the A Bluetooth module 16 on the base 11 to the B Bluetooth module 17, which passes the control signal to the vibration-module driving circuit, which finally drives the vibration module 4 to vibrate; second, when a button or the trigger on the body is pressed, the B Bluetooth module 17 transmits the key signal back to the computer 15 through the base 11;
4. Handle accessory device 10: when the application software environment requires both spatial positioning and complex button operation, the handle accessory device 10 or another specially shaped device can be used. The device carries infrared light sources for spatial-position calibration as well as various buttons for operation functions, and has a built-in force-feedback vibration module 4; it connects to the computer 15 through the B Bluetooth module 17 and the base 11, on the same principle as the other device types;
5. Base 11: the base 11 connects to the computer 15 via the USB data cable 13, charges the accessory devices by connecting to the charging cable 12 through the micro charging port, and communicates wirelessly with the accessory devices through the A Bluetooth module 16 to exchange signals or exert control;
6. Spatial positioning of the infrared source point: the dual-camera device 1 captures forward image data through its left and right CMOS or CCD sensors 18; the image data passes through the A/D converter 19, becomes a digital signal, and enters the DSP processor 20 for exposure, gain, and white-balance processing; the processed digital signal enters the encoding chip 21 for signal encoding; after encoding, the two video signals enter the data bus 22, are merged into one stream, and are output to the computer 15 through the USB data cable 13. On receiving the two video signals from the dual-camera device 1, the computer 15 decodes them and extracts the X and Y image coordinates of the infrared source point in each video stream (a sketch of this extraction step follows); the coordinates captured by each lens can be converted into the azimuth angle of the source relative to that lens, and with the azimuth angles relative to the two lenses and the lens separation known, the X, Y, and Z coordinates of the source can be computed; after correction by the empirical lens-distortion model, the real-space coordinates of the source point relative to the camera are obtained. The coordinates of multiple source points can be obtained in this way, and their relative positions can be matched against the models stored in the computer to determine whether the points belong to the glove or to another accessory device 14, and hence the spatial position and posture of the device; continuous coordinate acquisition yields the movement trajectory and velocity of the device, which serve as control means for human-computer interaction;
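The per-lens extraction of the source point's image coordinates can be illustrated with a short OpenCV sketch: with an IR LED (and, in practice, an IR-pass filter) the spot is by far the brightest region, so a fixed threshold plus an intensity centroid suffices. The threshold value here is an assumption, not a figure from the patent:

```python
import cv2
import numpy as np

def ir_spot_centroid(gray, thresh=230):
    """Return the (u, v) centroid of the bright IR spot in a grayscale
    frame, or None when no pixel clears the threshold."""
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

# Self-check with a synthetic frame containing one bright spot:
frame = np.zeros((480, 640), np.uint8)
cv2.circle(frame, (352, 256), 4, 255, -1)
print(ir_spot_centroid(frame))  # approximately (352.0, 256.0)
```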
7. Force-feedback principle: when the user controls a virtual character or virtual object in the virtual environment through the dual-camera device 1 and the accessory device 14, the character or object interacts with the virtual environment; when certain preset interactions occur, the computer 15 sends a signal to the accessory-device base 11 to make a particular vibration module 4 deliver force feedback. For example, if the virtual character's finger touches a stone in the virtual environment, the software engine signals the driver software of the base 11, and the base 11, through the A Bluetooth module 16, wirelessly commands the vibration module 4 on the corresponding finger of the glove-shaped accessory device 3 to vibrate, simulating the virtual sensation of touch;
8. Augmented-reality principle: when the computer 15 receives the two video signals from the dual-camera device 1, it decodes them and displays them separately to the left and right eyes on the head-mounted display 2, forming a stereoscopic view of the surrounding real space. In this stereoscopic view, virtual objects or scenes can be superimposed, such as a small battlefield displayed on the desktop; at the same time, the computer 15 extracts the X and Y coordinates of the infrared light source in the two video streams and, following the spatial-position calculation principle and the accessory-device operating principle described above, obtains the position and motion commands of the accessory device 14 in the field of view, realizing an interactive experience in an augmented-reality environment. For example, wearing the glove-shaped accessory device 3, the user can directly click on and control the miniature virtual soldiers in the small desktop battlefield, and these click and touch actions can likewise trigger the aforementioned force-feedback events, greatly improving the user's sense of control.

Claims (10)

  1. A stereo vision-based motion acquisition and feedback method, characterized by comprising the following steps: collecting the spatial position of an infrared light source on an accessory device (14) in the space in front, and mapping that position into a virtual space to control the position and orientation of a virtual object; when the virtual object interacts with the virtual environment, wirelessly driving the accessory device (14) to vibrate, simulating the sensation of force feedback.
  2. The stereo vision-based motion acquisition and feedback method according to claim 1, characterized in that collecting the spatial position of the infrared light source on the accessory device (14) in the space in front specifically includes: a dual-camera device (1) fixed on a head-mounted display (2) captures separate left and right images of the scene in front and transmits the two image streams to a computer (15); the computer (15) processes the two image streams to obtain the spatial position of the infrared light source on the accessory device (14).
  3. The stereo vision-based motion acquisition and feedback method according to claim 2, characterized in that the dual-camera device (1) capturing separate left and right images and transmitting the two image streams to the computer (15) specifically includes: the dual-camera device (1) captures the forward image data through CMOS or CCD sensors (18) on its left and right sides; the image data passes through an A/D converter (19) to become a digital signal and enters a DSP processor (20) for exposure, gain, and white-balance processing; the processed digital signal enters an encoding chip (21) for signal encoding; after encoding, the two video signals are merged on a data bus (22) into a single stream and output to the computer (15) over a USB data cable (13).
  4. The stereo vision-based motion acquisition and feedback method according to claim 3, characterized in that the computer (15) processing the two image streams to obtain the spatial position of the infrared light source on the accessory device (14) specifically includes: decoding the two video signals and extracting the X and Y image coordinates of the infrared light source in each; converting the coordinate values captured by each lens into the azimuth angle of the light source relative to that lens; with the azimuth angles relative to the two lenses and the distance between the lenses known, computing the X, Y, and Z coordinates of the light source; and correcting the X, Y, and Z coordinates of the light source with an empirical lens-distortion model to obtain the real-space coordinates of the light source relative to the camera.
  5. The stereo vision-based motion acquisition and feedback method according to claim 4, characterized in that the method further includes: comparing the relative positions of the multiple infrared light sources on the accessory device (14) against models stored in the computer (15) to determine the type of accessory device (14) the light sources correspond to.
  6. The stereo vision-based motion acquisition and feedback method according to claim 4, characterized in that the method further includes: after the computer (15) decodes the two video signals, displaying them separately to the left and right eyes to form a stereoscopic view of the surrounding real space; superimposing virtual objects or scenes in this stereoscopic view; and, at the same time, the computer (15) extracting the X and Y coordinates of the infrared light source in the two video signals to obtain control commands for the accessory device (14) in the field of view, achieving interaction in an augmented-reality environment.
  7. A stereo vision-based motion acquisition and feedback system implementing the method of any one of claims 1 to 6, characterized by comprising: a dual-camera device (1), a base (11), an accessory device (14), and a computer (15); the base (11) is provided with an A Bluetooth module (16) and a USB data cable (13), the USB data cable (13) being connected to the computer (15); the accessory device (14) is provided with a B Bluetooth module (17), the B Bluetooth module (17) being wirelessly connected to the A Bluetooth module (16); the accessory device (14) is further provided with an infrared LED (8), or with an infrared LED (8), a vibration module (4), and a driving box (5), the driving box (5) being connected to the A Bluetooth module (16) and to the vibration module (4) respectively.
  8. The stereo vision-based motion acquisition and feedback system according to claim 7, characterized by further comprising: a head-mounted display (2), with the dual-camera device (1) fixed separately, left and right, on the head-mounted display (2).
  9. The stereo vision-based motion acquisition and feedback system according to claim 7 or 8, characterized in that the accessory device (14) is a glove-shaped accessory device (3), a ring-shaped accessory device (6), a gun-shaped accessory device (7), or a handle accessory device (10), each of which is provided with an infrared LED (8); the glove-shaped accessory device (3) is further provided with a vibration module (4) and a driving box (5); the gun-shaped accessory device (7) and the handle accessory device (10) are further provided with a trigger and buttons (9), the trigger and buttons (9) being connected to the B Bluetooth module (17).
  10. The stereo vision-based motion acquisition and feedback system according to claim 9, characterized in that the dual-camera device (1) includes: a CMOS or CCD sensor (18), an A/D converter (19), a DSP processor (20), and an encoding chip (21), connected in sequence; the encoding chip (21) is connected to the computer (15) through a data bus (22) and the USB data cable (13).
PCT/CN2015/070605 2014-05-30 2015-01-13 一种基于立体视觉的动作采集和反馈方法及系统 WO2015180497A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410240371 2014-05-30
CN201410240371.7 2014-05-30

Publications (1)

Publication Number Publication Date
WO2015180497A1 (zh)

Family

ID=51638381

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/070605 WO2015180497A1 (zh) 2014-05-30 2015-01-13 一种基于立体视觉的动作采集和反馈方法及系统

Country Status (2)

Country Link
CN (2) CN104090660B (zh)
WO (1) WO2015180497A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112025735A (zh) * 2020-09-10 2020-12-04 河南工业职业技术学院 基于视觉感知的被动柔顺机器人抛磨装置

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104090660B (zh) * 2014-05-30 2017-11-10 广东虚拟现实科技有限公司 一种基于立体视觉的动作采集和反馈方法及系统
CN104536579B (zh) * 2015-01-20 2018-07-27 深圳威阿科技有限公司 交互式三维实景与数字图像高速融合处理系统及处理方法
CN104539929B (zh) * 2015-01-20 2016-12-07 深圳威阿科技有限公司 带有运动预测的立体图像编码方法和编码装置
CN104699247B (zh) * 2015-03-18 2017-12-12 北京七鑫易维信息技术有限公司 一种基于机器视觉的虚拟现实交互系统及方法
CN104898669B (zh) * 2015-04-30 2019-01-11 广东虚拟现实科技有限公司 一种基于惯性传感器进行虚拟现实行走控制的方法及系统
CN104991650B (zh) * 2015-07-24 2018-08-03 广东虚拟现实科技有限公司 一种手势控制器及一种虚拟现实系统
CN105445937B (zh) * 2015-12-27 2018-08-21 深圳游视虚拟现实技术有限公司 基于标记点的多目标实时定位追踪装置、方法及系统
CN105721857A (zh) * 2016-04-08 2016-06-29 刘海波 一种具有双摄像头的头盔
CN106354253A (zh) * 2016-08-19 2017-01-25 上海理湃光晶技术有限公司 一种光标控制方法与基于该方法的ar眼镜与智能指环
CN106547458A (zh) * 2016-11-29 2017-03-29 北京小鸟看看科技有限公司 一种虚拟现实系统及其空间定位装置
WO2018072593A1 (zh) * 2016-10-21 2018-04-26 北京小鸟看看科技有限公司 虚拟现实系统及其空间定位装置、定位方法
CN106768361B (zh) * 2016-12-19 2019-10-22 北京小鸟看看科技有限公司 与vr头戴设备配套的手柄的位置追踪方法和系统
KR101767569B1 (ko) * 2017-02-20 2017-08-11 주식회사 유조이월드 디스플레이되는 영상컨텐츠와 관련된 증강현실 인터랙티브 시스템 및 상기 시스템의 운영방법
CN106899599A (zh) * 2017-03-09 2017-06-27 华东师范大学 一种工业环境实景增强式交互方法
CN110622219B (zh) * 2017-03-10 2024-01-19 杰创科增强现实有限公司 交互式增强现实
CN107168520B (zh) * 2017-04-07 2020-12-18 北京小鸟看看科技有限公司 基于单目摄像头的追踪方法、vr设备和vr头戴设备
CN109240483A (zh) * 2017-05-12 2019-01-18 上海华博信息服务有限公司 一种vr动作编辑系统
CN107392961B (zh) * 2017-06-16 2019-12-06 华勤通讯技术有限公司 基于增强现实的空间定位方法及装置
CN107368187A (zh) * 2017-07-12 2017-11-21 深圳纬目信息技术有限公司 一种双重交互控制的头戴式显示设备
CN108076339B (zh) * 2017-12-19 2019-07-05 歌尔股份有限公司 一种视野可连续延展的ar设备及使用方法
CN108205373B (zh) * 2017-12-25 2021-08-13 北京致臻智造科技有限公司 一种交互方法及系统
CN113296605B (zh) * 2021-05-24 2023-03-17 中国科学院深圳先进技术研究院 力反馈方法、力反馈装置及电子设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101387908A (zh) * 2007-09-10 2009-03-18 佳能株式会社 信息处理装置及信息处理方法
CN101808250A (zh) * 2009-02-13 2010-08-18 北京邮电大学 基于双路视觉的立体影像合成方法及系统
EP2600331A1 (en) * 2011-11-30 2013-06-05 Microsoft Corporation Head-mounted display based education and instruction
CN104090660A (zh) * 2014-05-30 2014-10-08 贺杰 一种基于立体视觉的动作采集和反馈方法及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9344612B2 (en) * 2006-02-15 2016-05-17 Kenneth Ira Ritchey Non-interference field-of-view support apparatus for a panoramic facial sensor
CN202870727U (zh) * 2012-10-24 2013-04-10 上海威镜信息科技有限公司 一种带有动作捕捉模块的显示单元设备

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101387908A (zh) * 2007-09-10 2009-03-18 佳能株式会社 信息处理装置及信息处理方法
CN101808250A (zh) * 2009-02-13 2010-08-18 北京邮电大学 基于双路视觉的立体影像合成方法及系统
EP2600331A1 (en) * 2011-11-30 2013-06-05 Microsoft Corporation Head-mounted display based education and instruction
CN104090660A (zh) * 2014-05-30 2014-10-08 贺杰 一种基于立体视觉的动作采集和反馈方法及系统

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112025735A (zh) * 2020-09-10 2020-12-04 河南工业职业技术学院 基于视觉感知的被动柔顺机器人抛磨装置

Also Published As

Publication number Publication date
CN104090660B (zh) 2017-11-10
CN104090660A (zh) 2014-10-08
CN203941499U (zh) 2014-11-12

Similar Documents

Publication Publication Date Title
WO2015180497A1 (zh) 一种基于立体视觉的动作采集和反馈方法及系统
JP7095602B2 (ja) 情報処理装置、情報処理方法及び記録媒体
KR102065687B1 (ko) 무선 손목 컴퓨팅과 3d 영상화, 매핑, 네트워킹 및 인터페이스를 위한 제어 장치 및 방법
CN104699247B (zh) 一种基于机器视觉的虚拟现实交互系统及方法
JP6344380B2 (ja) 画像処理装置および方法、並びにプログラム
CN105608746B (zh) 一种将现实进行虚拟实现的方法
US10996757B2 (en) Methods and apparatus for generating haptic interaction for virtual reality
US20160041391A1 (en) Virtual reality system allowing immersion in virtual space to consist with actual movement in actual space
US11086392B1 (en) Devices, systems, and methods for virtual representation of user interface devices
US20170285694A1 (en) Control device, control method, and program
JP2001356875A (ja) ポインタ表示システム
JP2021060627A (ja) 情報処理装置、情報処理方法、およびプログラム
JP2023507241A (ja) 随意のデュアルレンジ運動学を用いたプロキシコントローラスーツ
CN203899120U (zh) 真实感的遥控体验游戏系统
WO2017061890A1 (en) Wireless full body motion control sensor
JP5597087B2 (ja) 仮想物体操作装置
CN108062102A (zh) 一种手势控制具有辅助避障功能的移动机器人遥操作系统
US20220230357A1 (en) Data processing
CN106293012A (zh) 一种三维体感双向交互系统和方法
TW201517963A (zh) 環場虛擬射擊遊戲系統
CN108268126B (zh) 基于头戴式显示设备的交互方法及装置
CN116787422A (zh) 一种基于多维感知的机器人控制系统和方法
CN116700492A (zh) 触感反馈方法及装置、扩展现实设备和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15799162

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02/05/17)

122 Ep: pct application non-entry in european phase

Ref document number: 15799162

Country of ref document: EP

Kind code of ref document: A1