CN204480228U - motion sensing and imaging device - Google Patents

Motion sensing and imaging device

Info

Publication number: CN204480228U
Authority: CN (China)
Application number: CN201420453536.4U
Other languages: Chinese (zh)
Inventors: D·S·霍尔茨, N·罗伊, 何宏远
Original assignee: 厉动公司
Prior art keywords: information, image, described, sensor, characterized
Priority to US 62/035,008 (US201462035008P)
Application filed by 厉动公司
Application granted
Publication of CN204480228U

Classifications

    • H04N13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • G02B27/0093: Optical systems or apparatus with means for monitoring data relating to the user, e.g. head-tracking, eye-tracking
    • G02B27/017: Head-up displays; head mounted
    • G02B30/00: Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic images
    • G06F3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/017: Gesture based interaction, e.g. based on a set of recognized hand gestures
    • G06T19/006: Mixed reality (manipulating 3D models or images for computer graphics)
    • H04N13/257: Image signal generators; colour aspects
    • H04N13/296: Image signal generators; synchronisation or control thereof
    • H04N5/2253: Mounting of pick-up device, electronic image sensor, deviation or focusing coils
    • H04N5/2256: Cameras provided with illuminating means
    • H04N5/332: Multispectral imaging comprising at least a part of the infrared region
    • G02B2027/0134: Head-up displays characterised by optical features comprising binocular systems of stereoscopic type
    • G02B2027/0138: Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • G02B2027/0187: Display position adjusting means slaved to motion of at least a part of the body of the user, e.g. head, eye
    • H04S2400/11: Positioning of individual sound objects, e.g. moving airplane, within a sound field
    • H04S2400/15: Aspects of sound capture and related signal processing for recording or reproduction
    • H04S7/304: Electronic adaptation of stereophonic sound to listener position or orientation; tracking of listener position or orientation; for headphones

Abstract

The disclosed technology relates to a motion sensing and imaging device comprising: multiple imaging sensors arranged to provide stereoscopic imaging information for a scene being viewed; one or more illumination sources arranged around the imaging sensors; and a controller coupled to the imaging sensors and illumination sources to control their operation, capture imaging information of the scene, and provide at least a near-real-time pass-through of the imaging information to a user. The motion sensing and imaging device can capture imaging information of a scene and provide at least a near-real-time pass-through of the imaging information to the user. The sensing and imaging device can be used alone or coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtual or created presentation information.

Description

Motion sensing and imaging device

Technical field

The disclosed technology relates to highly functional, highly accurate sensing and imaging devices, usable in wearable sensor systems, that can use imaging or other sensors to detect gestures in a three-dimensional (3D) sensing space and present a 3D augmented reality to the user.

Background technology

One class of devices (for example, Google Glass) provides the ability to present information superimposed on a see-through screen worn by the user. Another class of devices (for example, Oculus Rift) provides a virtual reality display to the user that is isolated from information about the real world around the user. Neither type of device, however, can adequately integrate virtual (e.g., computational) information into a real-time image stream reflecting the environment surrounding the wearer. There is therefore a need for a highly functional sensing and imaging device that can capture imaging information of a scene and provide at least a near-real-time pass-through of that imaging information to the user. Ideally, such a sensing and imaging device could be coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtual or created presentation information. Devices known heretofore do not provide these capabilities.

Utility model content

There is a need for a highly functional motion sensing and imaging device that can capture imaging information of a scene and provide at least a near-real-time pass-through of the imaging information to a user. Ideally, such a motion sensing and imaging device could be coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtual or created presentation information.

According to various embodiments of the present utility model, a motion sensing and imaging device is disclosed, characterized in that it comprises: multiple imaging sensors arranged to provide stereoscopic imaging information for a scene being viewed; one or more illumination sources arranged around the imaging sensors; and a controller coupled to the imaging sensors and illumination sources to control their operation, capture imaging information of the scene, and provide at least a near-real-time pass-through of the imaging information to a user.

In this motion sensing and imaging device, the controller further provides for: capturing imaging information for a control object within the field of view of the imaging sensors, wherein the imaging information for the control object of interest is used to determine gesture information indicating a command to a machine under control.

In this motion sensing and imaging device, the capturing further comprises: separating information received from pixels sensitive to IR light from information received from pixels responsive to visible light, such as RGB; processing the image information from the IR sensor for use in gesture recognition; and processing the image information from the RGB sensor to be provided as a live video feed via a presentation interface.

In this motion sensing and imaging device, the processing of image information from the RGB sensor further comprises: using RGB pixels that respectively capture the red, green and blue components of the illumination in the scene to extract coarse features of the corresponding real-world space.

In this motion sensing and imaging device, the processing of image information from the IR sensor further comprises: using IR pixels that capture the infrared component of the illumination in the scene to extract fine features of the corresponding real-world space.

In this motion sensing and imaging device, the fine features of the corresponding real-world space comprise surface textures of the corresponding real-world space.

In this motion sensing and imaging device, the fine features of the corresponding real-world space comprise edges of the corresponding real-world space.

In this motion sensing and imaging device, the fine features of the corresponding real-world space comprise curvatures of the corresponding real-world space.

In this motion sensing and imaging device, the fine features of the corresponding real-world space comprise surface textures of objects in the corresponding real-world space.

In this motion sensing and imaging device, the fine features of the corresponding real-world space comprise edges of objects in the corresponding real-world space.

In this motion sensing and imaging device, the fine features of the corresponding real-world space comprise curvatures of objects in the corresponding real-world space.

In this motion sensing and imaging device, the controller further provides for: determining an ambient lighting condition; adjusting the display output based on the determined condition; and determining first positional information and second positional information of the sensor relative to a fixed point at a first time and a second time.

This motion sensing and imaging device further comprises a motion sensor, and the controller further provides for: determining difference information between the first positional information and the second positional information from the motion sensor; and computing movement information for the device relative to the fixed point based on the difference information.

This motion sensing and imaging device further comprises: one or more fasteners that fasten the imaging sensors and the illumination sources to a mounting surface in a wearable display device.

This motion sensing and imaging device further comprises: one or more fasteners that fasten the imaging sensors and the illumination sources to a cavity in a wearable display device.

This motion sensing and imaging device further comprises: one or more fasteners that fasten the imaging sensors and the illumination sources to a mounting surface in a portable display device.

This motion sensing and imaging device further comprises: one or more fasteners that fasten the imaging sensors and the illumination sources to a cavity in a portable display device.

Implementations of the disclosed technology address these and other problems by providing a motion sensing and imaging device that can capture imaging information of a scene and provide at least a near-real-time pass-through of the imaging information to a user. The sensing and imaging device can be used alone or coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtual or created presentation information.

One implementation of the motion sensing and imaging device comprises: multiple imaging sensors arranged to provide stereoscopic imaging information for a scene being viewed; one or more illumination sources arranged around the imaging sensors; and a controller coupled to the imaging sensors and illumination sources to control their operation. The controller enables the device to capture imaging information of the scene and provide at least a near-real-time pass-through of the imaging information to a user. The device can be coupled to a wearable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtual or created presentation information.

In one implementation, the controller of the motion sensing and imaging device also provides for capturing imaging information for a control object (including, for example, a human hand) within the field of view of the imaging sensors. The imaging information for the control object of interest can be used to determine gesture information indicating a command to a machine under control. In implementations, the device supports sub-millimeter accuracy in detecting the position, pose and motion of objects around the wearer of the device, and provides this information for integration into the presentation provided to the wearer.

In one implementation, a motion sensing and imaging device includes the ability to separate information received from pixels sensitive to IR light from information received from pixels sensitive to visible light (e.g., RGB, i.e. red, green and blue), to process image information from the IR (infrared) sensor for gesture recognition, and to process image information from the RGB sensor to be provided as a live video feed via a presentation interface. For example, a camera having a set of RGB pixels and a set of IR pixels is used to capture a video stream that includes a sequence of images of a scene in the real world. The information from the IR-sensitive pixels is separated out and processed to recognize gestures. The information from the RGB-sensitive pixels is provided as a live video feed to the presentation interface of a wearable device (HUD, HMD, etc.) for presentation output. The presentation output is displayed to the user of the wearable device. One or more virtual objects can be integrated with the video stream images to form the presentation output. The device is thereby enabled to provide any of gesture recognition, a real-world presentation of real-world objects via a pass-through video feed, and/or an augmented reality including virtual objects integrated with a view of the real world.
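
As a rough illustration of this split, the following Python sketch (not part of the patent; the frame layout, function names and the trivial gesture stub are assumptions) routes the IR plane of each captured frame to a gesture-recognition step while forwarding the RGB plane as the pass-through video feed.

```python
import numpy as np

def split_ir_rgb(frame: np.ndarray):
    """Assume a 4-channel frame: R, G, B, IR (H x W x 4)."""
    rgb = frame[..., :3]          # visible-light pixels -> live video feed
    ir = frame[..., 3]            # IR-sensitive pixels -> gesture pipeline
    return rgb, ir

def recognize_gesture(ir_plane: np.ndarray) -> str:
    """Placeholder recognizer: reports whether a bright (IR-lit) object
    occupies the field of view. A real system would fit a hand model."""
    return "object-present" if (ir_plane > 200).mean() > 0.02 else "none"

def process_frame(frame, present, send_command):
    rgb, ir = split_ir_rgb(frame)
    gesture = recognize_gesture(ir)
    if gesture != "none":
        send_command(gesture)     # command to a machine under control
    present(rgb)                  # near-real-time pass-through to the user

# Example usage with a synthetic frame and stub sinks:
frame = np.random.randint(0, 255, (480, 640, 4), dtype=np.uint8)
process_frame(frame, present=lambda img: None, send_command=print)
```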

In one implementation, a motion sensing and imaging device can be used to track the motion of the device itself using a combination of the camera's RGB and IR pixels. Specifically, the RGB pixels are used to capture coarse or rough features of the corresponding real-world space and their corresponding feature values, and the IR pixels are used to capture fine or precise features of the corresponding real-world space and their corresponding feature values. Once captured, movement information of the wearable sensor system with respect to at least one feature of the scene is determined based on comparisons between feature values detected at different time instants. For example, if a feature of the real-world space is an object at a given orientation in that space, the feature value can be the three-dimensional (3D) coordinates of the object's location in the real-world space. If the coordinate values change between a pair of image frames or other image volumes, this can be used to determine movement information of the wearable sensory system relative to the object whose position changed between the frames.
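
A minimal sketch of this idea follows (assumptions, not the patent's algorithm: features are already matched between frames, expressed as 3D points in the device frame, and the motion is translation-only). If static scene features appear to shift by some offset between frames, the device itself has moved by the opposite offset.

```python
import numpy as np

def device_motion_from_features(points_t0: np.ndarray, points_t1: np.ndarray) -> np.ndarray:
    """Estimate device translation from 3D feature coordinates observed at two times.

    points_t0, points_t1: (N, 3) arrays of the same static scene features,
    expressed in the device/camera frame at times t0 and t1. For a static scene,
    an apparent shift of the features by d means the device moved by -d
    (translation-only approximation)."""
    apparent_shift = (points_t1 - points_t0).mean(axis=0)
    return -apparent_shift

# Example: the whole scene appears to shift 2 cm along -x, so the device moved +2 cm along x.
t0 = np.array([[0.10, 0.00, 0.50], [0.20, 0.05, 0.60], [-0.10, 0.02, 0.55]])
t1 = t0 + np.array([-0.02, 0.0, 0.0])
print(device_motion_from_features(t0, t1))   # approximately [0.02, 0., 0.]
```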

In another example, the feature of the real-world space is a wall in that space, and the corresponding feature value is the orientation of the wall as perceived by a viewer engaged with the wearable sensor system. In this example, if a change in the orientation of the wall is registered between subsequent image frames captured by the cameras electronically coupled to the wearable sensor system, this can indicate a change in the position of the wearable sensor system viewing the wall.

According to one implementation, information from the camera's RGB pixels can be used to identify objects in the real-world space from an image or image sequence, together with salient or coarse features of those objects (such as object contours, shapes, volumetric models, skeletal models, silhouettes, and the overall arrangement and structure of objects in the real-world space). This can be implemented by measuring the average pixel intensity of a region or the texture variation within a region. The RGB pixels thus allow a coarse estimate of the real-world space and/or of the objects in it to be captured.

In addition, data from the IR pixels can be used to capture fine or precise features of the real-world space that enhance the data extracted from the RGB pixels. Examples of fine features include the surface textures, edges, curvatures and other subtle features of the real-world space and of objects within it. In one example, while the RGB pixels capture a solid model of a hand, the IR pixels are used to capture the hand's vein and/or artery patterns, or fingerprints.
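
The sketch below (an illustration under assumed inputs, not the patent's method) contrasts the two kinds of measurement: a coarse region statistic taken from the RGB plane, and a fine edge-strength map taken from the IR plane.

```python
import numpy as np

def coarse_features_rgb(rgb: np.ndarray) -> dict:
    """Coarse cues from RGB: average intensity and texture variance of a region."""
    gray = rgb.mean(axis=-1)
    return {"mean_intensity": float(gray.mean()),
            "texture_variance": float(gray.var())}

def fine_features_ir(ir: np.ndarray) -> np.ndarray:
    """Fine cues from IR: a simple gradient-magnitude edge map
    (finite differences; a real system might use a richer detector)."""
    gy, gx = np.gradient(ir.astype(float))
    return np.hypot(gx, gy)

rgb = np.random.randint(0, 255, (120, 160, 3), dtype=np.uint8)
ir = np.random.randint(0, 255, (120, 160), dtype=np.uint8)
print(coarse_features_rgb(rgb))
print(fine_features_ir(ir).shape)   # (120, 160) edge-strength map
```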

Some other implementations can capture image data by using the RGB and IR pixels in various combinations and permutations. For example, one implementation can activate the RGB and IR pixels simultaneously, performing a full acquisition of image data without distinguishing between coarse and fine features. Another implementation can use the RGB and IR pixels intermittently. Yet another implementation can activate the RGB and IR pixels according to a quadratic or Gaussian function. Still other implementations can perform a first scan using the IR pixels, followed by an RGB scan, and vice versa.

In one implementation, ambient lighting conditions can be determined and used to adjust the display output. For example, under normal lighting conditions the information from the RGB pixel set is displayed, while under dim lighting conditions the information from the IR pixel set is displayed. Alternatively or additionally, information from the IR pixel set can be used to enhance the information from the RGB pixel set under low-light conditions, or vice versa. Some implementations receive a selection from the user indicating a preferred display chosen from the color image from the RGB pixels, the IR image from the IR pixels, or a combination of the two. Alternatively or additionally, the device itself can dynamically switch between displaying video information captured with the RGB-sensitive pixels and video information captured with the IR-sensitive pixels, depending on environmental conditions, user preferences, situational awareness, other factors, or combinations thereof.
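
A simple way to picture this switching logic (a sketch with an assumed luminance threshold and frame layout; the patent does not specify particular values) is to estimate scene brightness from the RGB plane and fall back to the IR plane when it drops too low:

```python
import numpy as np

DIM_THRESHOLD = 40.0   # assumed mean-luminance threshold on a 0-255 scale

def select_display_source(rgb, ir, user_choice=None):
    """Pick the plane to display: honor an explicit user choice, otherwise
    switch to IR when the ambient lighting (mean RGB luminance) is dim."""
    if user_choice in ("rgb", "ir"):
        return rgb if user_choice == "rgb" else ir
    return rgb if rgb.mean() >= DIM_THRESHOLD else ir

rgb = np.full((10, 10, 3), 12, dtype=np.uint8)   # dim scene
ir = np.full((10, 10), 180, dtype=np.uint8)
chosen = select_display_source(rgb, ir)
print("showing IR" if chosen is ir else "showing RGB")
```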

In one implementation, the information from the IR-sensitive pixels is separated out and processed to recognize gestures, while the information from the RGB-sensitive pixels is provided to the output as a live video feed; bandwidth is thereby saved for gesture-recognition processing. In gesture processing, features in the images corresponding to objects in the real world can be detected. The features of the objects are correlated across multiple images to determine changes in motion that can be correlated to gestures. The gesture motion can be used to determine command information to a machine under control, an application resident on it, or a combination of the two.

In one implementation, a motion sensor and/or other types of sensors are coupled to the motion capture system to monitor motion of at least the sensor of the motion capture system, such as motion produced by the user's touch. Information from the motion sensor can be used to determine first positional information and second positional information of the sensor relative to a fixed point at a first time and a second time. Difference information between the first and second positional information is determined. Movement information for the sensor relative to the fixed point is computed based on the difference information. The movement information for the sensor is applied to the apparent environment information sensed by the sensor to remove the sensor's own motion from it, producing actual environment information that can be communicated. Control information can be communicated to a system configured to provide a virtual-reality or augmented-reality experience via a portable device, and/or to systems that control machinery or the like based on motion capture information for objects moving in space, where that information is obtained from the sensor and adjusted to remove the motion of the sensor itself. In some applications, the virtual device experience can be augmented by the addition of haptic, audio and/or visual projectors.
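
A toy sketch of that subtraction follows (assumptions: translation-only motion, no rotation, and positions expressed along common world-aligned axes; none of these names come from the patent). The sensor's own motion shifts what it senses; compensating for that shift leaves the object's actual motion.

```python
import numpy as np

def actual_object_motion(obj_rel_t0, obj_rel_t1, sensor_t0, sensor_t1):
    """Remove the sensor's own movement from apparently-sensed object motion.

    obj_rel_t0/t1: object position measured in the sensor frame at two times.
    sensor_t0/t1: sensor position relative to a fixed point (from the motion sensor).
    Under pure translation, the sensor's motion shifts the sensor-frame
    measurements by -sensor_motion; compensating leaves the object's motion
    in the fixed (world) frame."""
    apparent_motion = np.asarray(obj_rel_t1) - np.asarray(obj_rel_t0)
    sensor_motion = np.asarray(sensor_t1) - np.asarray(sensor_t0)
    return apparent_motion + sensor_motion

# A wall 0.5 m ahead appears to drift 2 cm left only because the device moved 2 cm right;
# after compensation the wall is correctly reported as stationary.
print(actual_object_motion([0, 0, 0.5], [-0.02, 0, 0.5], [0, 0, 0], [0.02, 0, 0]))
# -> [0. 0. 0.]
```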

In one implementation, the sensor of the motion capture system is used to capture apparent environment information from positional information of an object portion at the first time and the second time. Movement information for the object portion relative to the fixed point at the first and second times is computed based on the difference information and on the movement information for the sensor.

In a further implementation, the path of an object is computed by repeatedly determining movement information for the sensor at successive times using the motion sensor, repeatedly determining movement information for the object portion using the sensor, and analyzing the sequence of movement information to determine the path of the object portion relative to the fixed point. The path can be compared with templates to identify trajectories. Trajectories of body parts can be identified as gestures. Gestures can indicate command information to be communicated to a system. Some gestures communicate commands to change the operating mode of a system (e.g., zoom in, zoom out, pan, show more detail, show the next display page, and so on).
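
A compact sketch of template comparison (illustrative only; the template set, the resampling step and the distance threshold are assumptions, not the patent's method): resample the observed path, measure its distance to each stored template path, and map the best match to a command.

```python
import numpy as np

TEMPLATES = {          # assumed example templates: unit-scale 2D paths
    "swipe_right": np.column_stack([np.linspace(0, 1, 16), np.zeros(16)]),
    "swipe_up":    np.column_stack([np.zeros(16), np.linspace(0, 1, 16)]),
}
COMMANDS = {"swipe_right": "next page", "swipe_up": "zoom in"}

def resample(path, n=16):
    """Resample a polyline to n points, evenly spaced by cumulative arc length."""
    path = np.asarray(path, dtype=float)
    seg = np.linalg.norm(np.diff(path, axis=0), axis=1)
    s = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0, s[-1], n)
    return np.column_stack([np.interp(t, s, path[:, d]) for d in range(path.shape[1])])

def classify_path(path, threshold=0.3):
    p = resample(path)
    p = p - p[0]                              # translate to a common origin
    best, best_d = None, np.inf
    for name, tmpl in TEMPLATES.items():
        d = np.linalg.norm(p - (tmpl - tmpl[0]), axis=1).mean()
        if d < best_d:
            best, best_d = name, d
    return COMMANDS.get(best) if best_d < threshold else None

observed = [(0.0, 0.0), (0.3, 0.02), (0.7, -0.01), (1.0, 0.0)]
print(classify_path(observed))   # -> "next page"
```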

Advantageously, some implementations can provide an improved user experience, greater safety and improved functionality for users of wearable devices. Some implementations also provide motion capture systems with the ability to recognize gestures, allowing the user to perform intuitive gestures that involve virtual contact with virtual objects. For example, the device can be provided with the ability to distinguish motion of objects from motion of the device itself, in order to facilitate proper gesture recognition. Some implementations can provide improved interfacing with a variety of portable or wearable machines (e.g., smart phones, portable computing systems (including laptop computers, tablet computing devices, personal digital assistants), special-purpose virtualized computers (including head-up displays (HUDs) such as those used in aircraft or automobiles), virtual and/or augmented reality systems (including Google Glass and the like), graphics processors, embedded microcontrollers, game consoles, and so on, as well as wired or wireless networks of one or more of the foregoing and/or combinations thereof), thereby avoiding or reducing the need for contact-based input devices such as a mouse, joystick, touch pad or touch screen. Some implementations can provide improved interfaces with computing and/or other machinery relative to interfaces possible with heretofore-known techniques. In some implementations, a richer human-machine interface experience can be provided.

According to various embodiments of the present utility model, a highly functional motion sensing and imaging device is achieved that can capture imaging information of a scene and provide at least a near-real-time pass-through of the imaging information to a user. The motion sensing and imaging device can ideally be coupled to a wearable or portable device to create a wearable sensory system capable of presenting to the wearer the imaging information augmented with virtual or created presentation information.

Other aspects and advantages of this technology can be understood upon review of the accompanying drawings, the detailed description and the claims.

Brief description of the drawings

Fig. 1 shows an example motion sensing and imaging device.

Fig. 2 shows an example wearable sensory system based on the motion sensing and imaging device.

Fig. 3 shows a simplified block diagram of a computer system.

Fig. 4 shows the basic operations and functional units involved in motion capture and image analysis.

Fig. 5 illustrates an augmented reality presentation provided via pass-through by an example device capable of motion sensing and imaging.

Detailed description

The disclosed technology relates to a motion sensing and imaging device capable of capturing images of a scene in real time or near-real time, detecting gestures in a 3D sensing space and interpreting the gestures as commands to a system or machine under control, and providing the captured image information and the commands where appropriate.

Implementations provide a "pass-through" in which live video, alone or in combination with the display of one or more virtual objects, is provided to the user of a virtual reality device, allowing the user to perceive the real world directly. For example, the user can be allowed to see a real desktop environment as well as virtual applications or objects intermixed with it. Gesture recognition and sensing enable implementations to provide the user with the ability to grasp or interact with virtual objects (e.g., a virtual document floating above the surface of the user's real desk) alongside other real objects (e.g., the user's soda can). In some implementations, information from different spectral sources is used selectively to drive one or another aspect of the experience. For example, information from IR-sensitive sensors can be used to detect the user's hand motions and recognize gestures, while information from the visible-light range can drive the pass-through video presentation, creating a real-world presentation of real and virtual objects. In another example, a combination of image information from multiple sources can be used; the system, or the user, selects between the IR imagery and the visible-light imagery based on situation, conditions, environment or other factors, or combinations thereof. For example, the device can switch from visible-light imaging to IR imaging when ambient lighting conditions warrant it. The user can also be given the ability to control the imaging source. In a further example, information from one type of sensor can be used to augment, correct or corroborate information from another type of sensor. Information from IR sensors can be used to correct the display of imaging derived from visible-light-responsive sensors, and vice versa. In low-light or other situations unfavorable to optical imaging, where free-form gestures cannot be recognized optically with a sufficient degree of reliability, audio signals or vibrational waves can be detected and used to supply the direction and position of the object, as described further herein.

The disclosed technology can be applied to enhance the user experience in immersive virtual reality environments using wearable sensor systems. Examples of systems, apparatus and methods according to the disclosed implementations are described in a "wearable sensor system" context. The examples of "wearable sensor systems" are given only to add context and aid in understanding the disclosed implementations. In other instances, examples of gesture-based interactions in other contexts, such as automobiles, robots or other machines, can be applied to virtual games, virtual applications, virtual programs, virtual operating systems, and so on. Other applications are possible, so the following examples should not be read as defining or limiting in scope, context or setting. It will thus be apparent to one skilled in the art that implementations can be practiced in, or outside of, the "wearable sensor system" context.

Referring first to Fig. 1, which shows an example motion sensing device 100, the motion sensing device 100 includes an illumination board 172 that can be coupled to a main board 182 with threaded fasteners or otherwise. Cable routing (not shown in Fig. 1 for clarity) creates electrical interconnections between the illumination board 172 and the main board 182, allowing signals and power to flow between them. These parts can also be fastened to a mounting surface A of a wearable or portable electronic device (e.g., an HMD, HUD, smart phone, etc.) by fasteners securing the main board 182 (a first portion) and the illumination board 172 (a second portion). The mounting surface A can be a surface (interior or exterior) of the wearable or portable electronic device. Alternatively, the device can be installed in a cavity or receptacle of the wearable or portable electronic device using a friction fit, fasteners or any combination thereof. The device 100 can be embedded in any of a variety of wearable or portable electronic devices to meet the design requirements of a wide variety of applications.

The illumination board 172 has a number of individually controllable illumination sources 115, 117 embedded on it, which can be, for example, LEDs. Two cameras 102, 104 provide stereoscopic image-based sensing of the scene being viewed and, in the illustrated implementation, reside on the main board 182 of the device 100. The main board 182 can also include a processor for basic image processing and for controlling the cameras 102, 104 and the LEDs of the board 172. Various modifications of the design shown in Fig. 1 are possible; for example, the number and arrangement of LEDs, photodetectors and cameras can vary, and the scanning and imaging hardware can be integrated on a single board or on two boards depending on the requirements of a particular application.

The stereoscopic imaging information provided by the cameras 102, 104 can be provided selectively or continuously to a user wearing or carrying the wearable or portable electronic device. The device 100 can provide a live, real-time or near-real-time feed of image information from the cameras; real-time or near-real-time image information augmented with computer-generated images, information, icons or other virtualized presentations; a virtualized representation of the scene being viewed; or time-varying combinations selected from these. Gestures made by the user can also be sensed by the cameras 102, 104 of the sensor device 100, and the resulting imaging information can be provided to a motion capture system to identify and determine commands to any system under gesture control (including the wearable or portable device itself). Advantageously, integrating gesture recognition and imaging capabilities into a single motion sensing device 100 provides a highly functional, flexible and compact device well suited to installation in wearable or portable electronic devices and the like.

Some of the illumination sources 115, 117 can have associated focusing optics (not shown in Fig. 1 for clarity) in some implementations. The boards 172 or 182 can also include sockets, not shown in Fig. 1 for clarity, for coupling photodetectors (or other sensors). Reflectance-change information sensed by the photodetectors indicates the presence or absence of an object within a region of space into which an illumination source (e.g., an LED) emits light while the illumination source "scans" that region of space.

Referring now to Fig. 2, there is shown a system 200 for capturing image data according to one implementation of the disclosed technology. The system 200 is preferably coupled to a wearable device 201, which can be a personal head-mounted display (HMD) having a goggle form factor such as that shown in Fig. 2 or a helmet form factor, or which can be incorporated into or coupled with a wristwatch, smart phone or another type of portable device to form a wearable sensory system.

In various implementations, the systems and methods for capturing 3D motion of objects described herein can be integrated with other applications (such as head-mounted devices or mobile devices). Referring again to Fig. 2, a head-mounted device 201 can include an optical assembly 203 that displays the surrounding environment or a virtual environment to the user; incorporating the motion capture system 200 into the head-mounted device 201 allows the user to interactively control the displayed environment. For example, the virtual environment can include virtual objects that can be manipulated by the user's hand gestures, which are tracked by the motion capture system 200. In one implementation, the motion capture system 200 integrated with the head-mounted device 201 detects the position and shape of the user's hand and projects it onto the display of the head-mounted device 201, so that the user can see her gestures and interactively control the objects in the virtual environment. This can be applied, for example, to gaming or internet browsing.

The system 200 includes a number of cameras 102, 104 coupled to a sensory processing system 206. The cameras 102, 104 can be any type of camera, including cameras sensitive across the visible spectrum or having enhanced sensitivity to a confined wavelength band (e.g., the infrared (IR) or ultraviolet bands); more generally, the term "camera" herein refers to any device (or combination of devices) capable of capturing an image of an object and representing that image in the form of digital data. For example, line sensors or line cameras can be used instead of conventional devices that capture a two-dimensional (2D) image. The term "light" is used generally to mean any electromagnetic radiation, which may or may not be within the visible spectrum, and may be broadband (e.g., white light) or narrowband (e.g., a single wavelength or a narrow band of wavelengths).

The cameras 102, 104 are preferably capable of capturing video images (i.e., successive image frames at a substantially constant rate of about 15 frames per second or so), although no particular frame rate is required. The capabilities of the cameras 102, 104 are not critical to the disclosed technology, and the cameras can vary as to frame rate, image resolution (e.g., pixels per image), color or intensity resolution (e.g., number of bits of intensity data per pixel), focal length of lenses, depth of field, and so forth. In general, for a particular application, any camera capable of focusing on objects within a spatial volume of interest can be used. For instance, to capture the motion of the hand of an otherwise stationary person, the volume of interest might be defined as a cube approximately one meter on a side.

As shown, the cameras 102, 104 can be oriented, in part by motion of the device 201, toward a region of interest 212 in order to view a virtually rendered or virtually augmented view of the region of interest 212, which can include a plurality of virtual objects 216 as well as an object of interest 214 (in this example, one or more hands) moving within the region of interest 212. One or more sensors 208, 210 capture the motion of the device 201. In some implementations, one or more light sources 115, 117 are arranged to illuminate the region of interest 212. In some implementations, one or more of the cameras 102, 104 are disposed opposite the motion to be detected, e.g., where the hand 214 is expected to move. This is an optimal location because the amount of information recorded about the hand is proportional to the number of pixels it occupies in the camera images, and the hand occupies more pixels when the camera's angle relative to the hand's "pointing direction" is as close to perpendicular as possible. The sensory processing system 206, which can be, for example, a computer system, can control the operation of the cameras 102, 104 to capture images of the region of interest 212, and of the sensors 208, 210 to capture motions of the device 201. Information from the sensors 208, 210 can be applied to models of the images taken by the cameras 102, 104 to cancel out the effects of motions of the device 201, providing greater accuracy to the virtual experience rendered by the device 201. Based on the captured images and the motions of the device 201, the sensory processing system 206 determines the position and/or motion of the object 214 and renders representations of it to the user via the assembly 203.

For example, as an action in determining the motion of the object 214, the sensory processing system 206 can determine which pixels of the various images captured by the cameras 102, 104 contain portions of the object 214. In some implementations, any pixel in an image can be classified as an "object" pixel or a "background" pixel depending on whether that pixel contains a portion of the object 214. Object pixels can thus be readily distinguished from background pixels based on brightness. Further, edges of the object can also be readily detected based on differences in brightness between adjacent pixels, allowing the position of the object within each image to be determined. In some implementations, silhouettes of the object are extracted from one or more images of the object that reveal information about the object as seen from different vantage points. While silhouettes can be obtained using a number of different techniques, in some implementations the silhouettes are obtained by using cameras to capture images of the object and analyzing the images to detect object edges. Correlating object positions between images from the cameras 102, 104, and cancelling out the captured motion of the device 201 from the sensors 208, 210, allows the sensory processing system 206 to determine the location of the object 214 in 3D space, and analyzing sequences of images allows the sensory processing system 206 to reconstruct the 3D motion of the object 214 using conventional motion algorithms or other techniques. See, for example, U.S. Patent Application No. 13/414,485 (filed March 7, 2012) and U.S. Provisional Patent Applications No. 61/724,091 (filed November 8, 2012) and No. 61/587,554 (filed January 7, 2012), the entire disclosures of which are incorporated herein by reference.
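
The following sketch (illustrative, with an assumed brightness threshold; the patent does not prescribe specific values) shows the kind of per-pixel classification and brightness-difference edge test described above, applied to a single grayscale frame in which the illuminated object appears brighter than the background.

```python
import numpy as np

def classify_pixels(gray: np.ndarray, threshold: float = 128.0) -> np.ndarray:
    """Label each pixel as object (True) or background (False) by brightness."""
    return gray.astype(float) > threshold

def edge_pixels(gray: np.ndarray, min_step: float = 60.0) -> np.ndarray:
    """Mark pixels where brightness jumps sharply relative to a horizontal or
    vertical neighbor, a simple proxy for object edges."""
    g = gray.astype(float)
    dx = np.abs(np.diff(g, axis=1, prepend=g[:, :1]))
    dy = np.abs(np.diff(g, axis=0, prepend=g[:1, :]))
    return np.maximum(dx, dy) > min_step

# Synthetic frame: a bright "object" block on a dark background.
frame = np.full((60, 80), 30, dtype=np.uint8)
frame[20:40, 30:60] = 220
mask = classify_pixels(frame)
edges = edge_pixels(frame)
print(mask.sum(), "object pixels;", edges.sum(), "edge pixels")
```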

The presentation interface 220 uses sensing-based tracking in combination with projection techniques to present visual (or virtualized-reality) objects (visual, audio, haptic, etc.) created by applications that can be loaded into, or implemented in coordination with, the optical assembly 203 of the device 201, providing a personal virtual experience to the user of the device. The projection can include an image or other visual representation of an object.

Implementations use motion sensors and/or other types of sensors coupled to a motion capture system to monitor motion in the actual environment. Virtual objects integrated into an augmented rendering of the actual environment can be projected to the user of the portable device 201. Movement information for a body part of the user can be determined based at least in part on sensory information received from the imaging devices 102, 104 or from acoustic or other sensory devices. Control information is communicated to a system based in part on a combination of the motion of the portable device 201 and the detected motion of the user, determined from the sensory information received from the imaging devices 102, 104 or from acoustic or other sensory devices. The virtual device experience can be augmented in some implementations by the addition of haptic, audio and/or other sensory information projectors. For example, with reference to Fig. 5, a visual projection assembly 504 can project an image of a page (e.g., virtual device 501) from a virtual book object superimposed on a real-world object (e.g., a desk 216 displayed to the user via the live video feed), thereby creating a virtual device experience of reading an actual book, or of reading an electronic book on a physical e-reader, even though neither book nor e-reader is present. A haptic projector 506 can project to the reader's finger the feel of the texture of the "virtual page" of the book. An audio projector 502 can project the sound of a page turning in response to detecting that the reader has made a swipe to turn the page. Because it is the real world that is being augmented, the back of the hand 214 is projected to the user, so that the scene appears to the user as if the user is looking at the user's own hand.

Referring again to Fig. 2, a plurality of sensors 208, 210 are coupled to the sensory processing system 206 to capture motions of the device 201. The sensors 208, 210 can be any type of sensor useful for obtaining signals from various motion parameters (acceleration, velocity, angular acceleration, angular velocity, position/location); more generally, the term "motion detector" herein refers to any device (or combination of devices) capable of converting mechanical motion into an electrical signal. Such devices can include, alone or in various combinations, accelerometers, gyroscopes and magnetometers, and are designed to sense motion through changes in orientation, magnetism or gravity. Many types of motion sensors exist, and implementation alternatives vary widely.

The illustrated system 200 can include any of various other sensors not shown in Fig. 2 for clarity, alone or in any combination, to enhance the virtual experience provided to the user of the device 201. For example, in low-light situations where free-form gestures cannot be recognized optically with a sufficient degree of reliability, the system 206 can switch to a touch mode in which touch gestures are recognized based on acoustic or vibration sensors. Alternatively, the system 206 can switch to the touch mode, or supplement image capture and processing with touch sensing, when signals from acoustic or vibration sensors are sensed. In yet another operating mode, a tap or touch gesture can serve as a "wake up" signal to bring the image and audio analysis system 206 from a standby mode into an operating mode. For example, the system 206 can enter the standby mode if optical signals from the cameras 102, 104 are absent for longer than a threshold interval.

It will be appreciated that the items shown in Fig. 2 are illustrative. In some implementations it may be desirable to house the system 200 in a differently shaped enclosure or to integrate it within a larger component or assembly. Furthermore, the number and type of image sensors, motion detectors, illumination sources and so forth are shown schematically for clarity, but neither the sizes nor the numbers are the same in all implementations.

Referring now to Fig. 3, it illustrates a simplified block diagram of a computer system 300 for implementing the sensory processing system 206. The computer system 300 includes a processor 302, a memory 304, a motion detector and camera interface 306, a presentation interface 220, a speaker 309, a microphone 310 and a wireless interface 311. The memory 304 can be used to store instructions to be executed by the processor 302, as well as input and/or output data associated with execution of the instructions. In particular, the memory 304 contains instructions, conceptually illustrated as a group of modules described in more detail below, that control the operation of the processor 302 and its interaction with the other hardware components. An operating system directs the execution of low-level, basic system functions such as memory allocation, file management and operation of mass storage devices. The operating system may be or include a variety of operating systems, such as the Microsoft WINDOWS operating system, the Unix operating system, the Linux operating system, the Xenix operating system, the IBM AIX operating system, the Hewlett Packard UX operating system, the Novell NETWARE operating system, the Sun Microsystems SOLARIS operating system, the OS/2 operating system, the BeOS operating system, the MACINTOSH operating system, the APACHE operating system, an OPENACTION operating system, iOS, Android or other mobile operating systems, or another operating system or platform.

The computing environment can also include other removable/non-removable, volatile/nonvolatile computer storage media. For example, a hard disk drive can read from or write to non-removable, nonvolatile magnetic media, and an optical disk drive can read from or write to a removable, nonvolatile optical disk such as a CD-ROM or other optical media. Other removable/non-removable, volatile/nonvolatile computer storage media that can be used in the exemplary operating environment include, but are not limited to, magnetic tape cassettes, flash memory cards, digital versatile disks, digital video tape, solid state RAM, solid state ROM, and the like. The storage media are typically connected to the system bus through a removable or non-removable memory interface.

The processor 302 can be a general-purpose microprocessor, but depending on the implementation it can alternatively be a microcontroller, a peripheral integrated circuit element, a CSIC (customer-specific integrated circuit), an ASIC (application-specific integrated circuit), a logic circuit, a digital signal processor, a programmable logic device such as an FPGA (field-programmable gate array), a PLD (programmable logic device) or a PLA (programmable logic array), an RFID processor, a smart chip, or any other device or arrangement of devices capable of implementing the actions of the processes of the disclosed technology.

The motion detector and camera interface 306 can include hardware and/or software that supports communication between the computer system 300 and the cameras 102, 104, as well as the sensors 208, 210 (see Fig. 2). Thus, for example, the motion detector and camera interface 306 can include one or more camera data ports 316, 318, illumination source ports 313, 315 and motion detector ports 317, 319 to which the cameras, illumination sources and motion detectors can be connected (via conventional plugs and jacks), as well as hardware and/or software signal processors that modify data signals received from the cameras and motion detectors (e.g., to reduce noise or reformat data) before providing those signals as inputs to a motion capture ("mocap") program 314 executing on the processor 302. In some implementations, the motion detector and camera interface 306 can also transmit signals to the cameras, illumination sources and sensors, e.g., to activate or deactivate them, to control camera settings (frame rate, image quality, sensitivity, etc.), to control illumination settings (intensity, duration, etc.), to control sensor settings (calibration, sensitivity levels, etc.), and the like. Such signals can be transmitted, for example, in response to control signals from the processor 302, which in turn may be generated in response to user input or other detected events.

Instructions defining the mocap program 314 are stored in the memory 304, and these instructions, when executed, perform motion capture analysis on images supplied from the cameras and on audio signals from the sensors connected to the motion detector and camera interface 306. In one implementation, the mocap program 314 includes various modules, such as an object analysis module 322 and a path analysis module 324. The object analysis module 322 can analyze images (e.g., images captured via the interface 306) to detect edges of an object therein and/or other information about the object's position. In some implementations, the object analysis module 322 can also analyze audio signals (e.g., audio signals captured via the interface 306) to localize the object by, for example, time-of-arrival differences, multilateration, and so forth. ("Multilateration" is a navigation technique based on the measurement of the difference in distance to two or more stations at known locations that broadcast signals at known times. See Wikipedia, http://en.wikipedia.org/w/index.php?title=Multilateration&oldid=523281858, November 16, 2012, 06:07 UTC.) The path analysis module 324 can track and predict object movements in 3D based on information obtained via the cameras. Some implementations include a virtual reality/augmented reality environment manager 326, which provides integration of virtual objects reflecting real objects (e.g., the hand 214) as well as synthesized objects 216 for presentation to the user of the device 201 via the presentation interface 220, to provide a personal virtual experience 213. One or more applications 328 can be loaded into the memory 304 (or otherwise made available to the processor 302) to augment or customize the functioning of the device 201, so that the system 200 can function as a platform. Successive camera images are analyzed at the pixel level to extract object movements and velocities. Audio signals place the object on a known surface, and the strength and variation of the signals can be used to detect the object's presence. If both audio and image information is available simultaneously, both types of information can be analyzed and reconciled to produce a more detailed and/or accurate path analysis. A video feed integrator 329 provides integration of the live video feed from the cameras 102, 104 with one or more virtual objects (e.g., 501 in Fig. 5). The video feed integrator 329 governs the processing of video information from cameras 102, 104 of dissimilar types. For example, information received from pixels sensitive to IR light can be separated by the integrator 329 from information received from pixels responsive to visible light (e.g., RGB) and processed differently. Image information from the IR sensors can be used for gesture recognition, while image information from the RGB sensors is provided as a live video feed via the presentation interface 220. Information from one type of sensor can be used to enhance, correct and/or corroborate information from the other type of sensor. Information from one type of sensor can be favored in some types of situations or environmental conditions (e.g., low light, fog, bright light, and so on). The device can select between providing a presentation output based on one or the other type of image information, either automatically or by receiving a selection from the user. The integrator 329, in conjunction with the VR/AR environment manager 326, controls the creation of the environment presented to the user via the presentation interface 220.
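
To make the multilateration reference concrete, here is a small sketch (an illustration only; the acoustic-tap scenario, the microphone positions and the use of scipy for the least-squares solve are all assumptions, not the patent's method). Each time-difference of arrival constrains the difference of distances to two known microphones, and the source position is recovered by minimizing the residuals.

```python
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0   # m/s in air (assumed propagation medium)

def locate_by_multilateration(mic_positions, tdoas, guess=(0.1, 0.1)):
    """Estimate a 2D source position from time-differences of arrival (TDOA).

    mic_positions: (N, 2) microphone coordinates; tdoas[i] is the arrival time
    at microphone i minus the arrival time at microphone 0 (so tdoas[0] == 0).
    Each TDOA fixes a difference of distances to two known stations, which is
    the essence of multilateration."""
    mics = np.asarray(mic_positions, dtype=float)

    def residuals(x):
        dists = np.linalg.norm(mics - x, axis=1)
        return (dists - dists[0]) - SPEED_OF_SOUND * np.asarray(tdoas)

    return least_squares(residuals, np.asarray(guess, dtype=float)).x

# Synthetic check: a tap at (0.30, 0.20) m heard by four microphones on a desk.
mics = [(0.0, 0.0), (0.6, 0.0), (0.0, 0.6), (0.6, 0.6)]
source = np.array([0.30, 0.20])
times = [np.linalg.norm(source - np.array(m)) / SPEED_OF_SOUND for m in mics]
tdoas = [t - times[0] for t in times]
print(locate_by_multilateration(mics, tdoas))   # approximately [0.30, 0.20]
```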

The presentation interface 220, speaker 309, microphone 310 and wireless network interface 311 can be used to facilitate user interaction with the computer system 300 via the device 201. These components can be of generally conventional design or modified as desired to provide any type of user interaction. In some implementations, results of motion capture using the motion detector and camera interface 306 and the mocap program 314 can be interpreted as user input. For example, a user can perform hand gestures or motions across a surface that are analyzed using the mocap program 314, and the results of this analysis can be interpreted as instructions to some other program executing on the processor 302 (e.g., a web browser, word processor or other application). Thus, by way of illustration, a user might use an upward or downward swipe gesture to "scroll" a web page currently displayed to the user of the device 201 via the presentation interface 220, use a rotating gesture to increase or decrease the volume of audio output from the speaker 309, and so on. The path analysis module 324 can represent the detected path as a vector and extrapolate it to predict the path, e.g., to improve the rendering of actions on the device 201 by the presentation interface 220 by anticipating movement.
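
A minimal sketch of this kind of interpretation layer (the gesture names, the binding table and the stub applications are assumptions for illustration): recognized gestures are dispatched as instructions to whichever program they are bound to.

```python
# Map recognized gestures to application commands (assumed example bindings).
GESTURE_BINDINGS = {
    "swipe_up": ("browser", "scroll_up"),
    "swipe_down": ("browser", "scroll_down"),
    "rotate_clockwise": ("audio", "volume_up"),
    "rotate_counterclockwise": ("audio", "volume_down"),
}

def dispatch_gesture(gesture, applications):
    """Interpret a recognized gesture as an instruction to another program."""
    binding = GESTURE_BINDINGS.get(gesture)
    if binding is None:
        return False
    app_name, command = binding
    applications[app_name](command)
    return True

# Stub applications standing in for a web browser and an audio mixer.
log = []
apps = {"browser": lambda cmd: log.append(("browser", cmd)),
        "audio": lambda cmd: log.append(("audio", cmd))}
dispatch_gesture("swipe_up", apps)
dispatch_gesture("rotate_clockwise", apps)
print(log)   # [('browser', 'scroll_up'), ('audio', 'volume_up')]
```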

It will be appreciated that computer system 300 is illustrative and that variations and modifications are possible. Computer systems can be implemented in a variety of form factors, including server systems, desktop systems, laptop systems, tablet computers, smartphones, personal digital assistants, and so on. A particular implementation can include other functionality not described herein, such as wired and/or wireless network interfaces, media playing and/or recording capability, and so on. In some implementations, one or more cameras and two or more microphones can be built into the computer rather than supplied as separate components. Further, an image or audio analyzer can be implemented using only a subset of the computer system components (e.g., as a processor executing program code, an ASIC, or a fixed-function digital signal processor, with suitable I/O interfaces to receive image data and output analysis results).

While computer system 300 is described herein with reference to particular blocks, it is to be understood that the blocks are defined for convenience of description and are not intended to imply a particular physical arrangement of component parts. Further, the blocks need not correspond to physically distinct components. To the extent that physically distinct components are used, connections between components (e.g., for data communication) can be wired and/or wireless as desired. Thus, for example, execution of object analysis module 322 by processor 302 can cause processor 302 to operate motion detector and camera interface 306 to capture images and/or audio signals of an object traveling across and in contact with a surface, and to detect its entrance by analyzing the image and/or audio data.

Fig. 4 depicts the basic operations and functional units 400 involved in motion capture and image analysis in accordance with implementations of the disclosed technology. As shown in Fig. 4, cameras 402, 404 record digital images 410 of a scene. Each digital image is captured by the associated camera's image sensor as an array of pixel values, and the digital images are transferred, either in "raw" format or following conventional preprocessing, to one or more frame buffers 415. A frame buffer is a partition or dedicated segment of volatile memory that stores a "bitmapped" image frame 420 corresponding to the pixel values of an image as output by the camera that recorded it. The bitmap is generally organized conceptually as a grid, with each pixel mapped one-to-one or otherwise to output elements of a display. It should be stressed, however, that the topology of how memory cells are physically organized within frame buffers 415 does not matter and need not conform directly to the conceptual organization.
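As an illustration only, and not drawn from the disclosure, the sketch below models a small set of frame buffers holding bitmapped frames as pixel-value arrays; the frame shape, the number of buffers, and the data type are assumed for the example.

    from collections import deque
    import numpy as np

    FRAME_SHAPE = (480, 640)            # assumed rows x columns of the sensor output
    frame_buffers = deque(maxlen=4)     # holds the frames analyzed concurrently

    def store_frame(raw_pixels):
        # The physical layout of the buffer need not mirror the conceptual
        # row/column grid; only the mapping onto that grid must be defined.
        frame_buffers.append(np.asarray(raw_pixels).reshape(FRAME_SHAPE))

    store_frame(np.zeros(480 * 640, dtype=np.uint8))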

The number of frame buffers included in a system generally reflects the number of images simultaneously analyzed by the analysis system or module 430, which is described in greater detail below. Briefly, analysis module 430 analyzes the pixel data in each of a sequence of image frames 420 to locate objects therein and track their movement over time (as indicated at 440). This analysis can take various forms, and the algorithm performing the analysis dictates how pixels in the image frames 420 are handled. For example, the algorithm implemented by analysis module 430 can process the pixels of each frame buffer on a line-by-line basis, i.e., each row of the pixel grid is successively analyzed. Other algorithms can analyze pixels in columns, tiled areas, or other organizational formats.
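For illustration only, the sketch below shows a row-by-row scan of a buffered frame that records sharp intensity transitions as candidate object edges. The threshold and the edge test are assumptions of the example, not the algorithm of analysis module 430.

    import numpy as np

    def find_row_edges(frame, threshold=40):
        # Scan the pixel grid one row at a time and record (row, column)
        # positions where adjacent pixel values differ sharply.
        edges = []
        for row_index, row in enumerate(frame):
            diffs = np.abs(np.diff(row.astype(int)))
            for col_index in np.flatnonzero(diffs > threshold):
                edges.append((row_index, int(col_index)))
        return edges

    demo = np.zeros((4, 8), dtype=np.uint8)
    demo[:, 4:] = 200                      # a vertical intensity step
    print(find_row_edges(demo))            # edge reported at column 3 in every row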

In various implementations, the motion captured in a series of camera images is used to compute a corresponding series of output images for display on display 220. For example, camera images of a moving hand can be translated by the processor into a wire-frame or other graphic depiction of the hand. Alternatively, hand gestures can be interpreted as input used to control a separate visual output; by way of illustration, a user can use upward or downward swiping gestures to "scroll" a webpage or other document currently displayed, or open and close her hand to zoom in and out of the page. In any case, the output images are generally stored in the form of pixel data in a frame buffer, e.g., one of frame buffers 415. A video display controller reads out the frame buffer to generate a data stream and associated control signals to output the images to assembly 230. The video display controller, which drives the video display provided by presentation interface 220, can be provided along with processor 302 and memory 304 on the motherboard of computer system 300, and can be integrated with processor 302 or implemented as a coprocessor that manipulates separate video memory. As noted, computer system 300 can be equipped with a separate graphics or video card that aids in generating the feed of output images for assembly 203. One implementation includes a video card generally having a graphics processing unit (GPU) and video memory, which is useful, in particular, for complex and computationally expensive image processing and rendering. The graphics card can include the frame buffer and the functionality of the video display controller (and the on-board video display controller can be disabled). In general, the image processing and motion capture functionality of the system can be distributed between the GPU and main processor 302 in various ways.

Suitable algorithms for motion capture program 314 are described below as well as, in more detail, in U.S. Patent Application Nos. 61/587,554, 13/414,485, 61/724,091, 13/724,357, and 13/742,953, filed on January 17, 2012, March 7, 2012, November 8, 2012, December 21, 2012, and January 16, 2013, respectively, which are hereby incorporated herein by reference in their entirety. The various modules can be programmed in any suitable programming language, including, without limitation, high-level languages such as C, C++, C#, OpenGL, Ada, Basic, Cobra, FORTRAN, Java, Lisp, Perl, Python, Ruby, or Object Pascal, or low-level assembly languages.

Referring again to Fig. 4, the operating mode of a device equipped with a motion sensing control device can determine, in accordance with entries in a performance database, the coarseness of the data provided to image analysis module 430, the coarseness of its analysis, or both. For example, during a wide-area operating mode, image analysis module 430 can operate on every image frame and on all data within a frame; capacity limitations can dictate reducing the amount of image data per frame (i.e., the resolution) or discarding some frames altogether if, for example, each of the frame buffers 415 is organized as a sequence of data lines. The manner in which data is dropped from the analysis can depend on the image analysis algorithm or on the uses to which the motion-capture output is put. In some implementations, data is dropped in a symmetric or uniform fashion, e.g., discarding every other line, every third line, and so on, up to the tolerance limit of the image analysis algorithm or of the application utilizing its output. In other implementations, the frequency of line dropping can increase toward the edges of the frame. Still other image acquisition parameters that can be varied include the frame size, the frame resolution, and the number of frames acquired per second. In particular, the frame size can be reduced, for example, by discarding edge pixels or by resampling to a lower resolution (and utilizing only a portion of the frame buffer capacity). Parameters relevant to the acquisition of image data (e.g., size, frame rate, and characteristics) are collectively referred to as "acquisition parameters," while parameters relevant to the operation of image analysis module 430 (e.g., in defining the contour of an object) are collectively referred to as "image analysis parameters." The foregoing examples of acquisition parameters and image analysis parameters are representative only, and not limiting.
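Purely as an illustration of such acquisition-parameter adjustments, the sketch below drops every Nth pixel line and resamples a frame to a lower resolution; the reduction factors and the block-averaging method are assumptions of the example, not values taken from the disclosure.

    import numpy as np

    def drop_lines(frame, keep_every=2):
        # Discard pixel lines uniformly, e.g. keep every 2nd or 3rd row.
        return frame[::keep_every, :]

    def downsample(frame, factor=2):
        # Resample to a lower resolution by block-averaging, so only part of
        # the frame-buffer capacity is used downstream.
        rows, cols = frame.shape
        rows -= rows % factor
        cols -= cols % factor
        blocks = frame[:rows, :cols].reshape(rows // factor, factor,
                                             cols // factor, factor)
        return blocks.mean(axis=(1, 3)).astype(frame.dtype)

    frame = np.zeros((480, 640), dtype=np.uint16)
    reduced = downsample(drop_lines(frame, keep_every=3), factor=2)
    print(frame.shape, "->", reduced.shape)   # (480, 640) -> (80, 320)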

Acquisition parameters can be applied to the cameras 402, 404 and/or to the frame buffers 415. The cameras 402, 404, for example, can be responsive to acquisition parameters by acquiring images at a commanded rate, or can instead limit the number of acquired frames passed (per unit time) to the frame buffers 415. Image analysis parameters can be applied to image analysis module 430 as numerical quantities that affect the operation of the contour-defining algorithm.

The desirable values for acquisition parameters and image analysis parameters appropriate to a given level of available resources can depend, for example, on the characteristics of image analysis module 430, the nature of the application utilizing the mocap output, and design preferences. Whereas some image processing algorithms can trade off contour approximation resolution against input frame resolution over a wide range, other algorithms may not exhibit much tolerance at all, requiring, for example, a minimum image resolution below which the algorithm fails altogether.
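As an illustrative sketch only, the following shows one way such a trade-off might be negotiated by lowering the acquired resolution to fit an available budget while respecting an assumed algorithm minimum; the numbers and the policy are examples, not values from the disclosure.

    def choose_resolution(full_resolution, budget_fraction, algorithm_minimum):
        # Trade acquired resolution against the available budget, but respect
        # the minimum below which the analysis algorithm fails altogether.
        requested = int(full_resolution * budget_fraction)
        if requested < algorithm_minimum:
            return algorithm_minimum   # cannot go lower without breaking the analysis
        return requested

    print(choose_resolution(640, 0.25, 320))   # -> 320
    print(choose_resolution(640, 0.75, 320))   # -> 480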

Some implementations can be applied to virtual reality or augmented reality applications. For example, reference is made to Fig. 5, which illustrates a system 500 for projecting a virtual device augmented reality experience 213 in accordance with one implementation of the disclosed technology, the augmented reality experience including views of real objects (e.g., a desktop surface medium 516) and of one or more virtual objects (e.g., object 501). System 500 includes a processing system 206 controlling a variety of sensors and projectors, such as, for example, one or more cameras 102, 104 (or other image sensors) and, optionally, some illumination sources 115, 117 comprising an imaging system. Optionally, a plurality of vibrational (or acoustical) sensors 508, 510 positioned for sensing contact with desktop 516 can be included. Optionally, projectors under the control of system 206 can augment the reality experienced by the user, such as an optional audio projector 502 to provide, for example, audible feedback, an optional video projector 504, and an optional haptic projector 506 to provide, for example, haptic feedback. For more information on projectors, reference may be had to "Visio-Tactile Projector" on YouTube (https://www.youtube.com/watch?v=Bb0hNMxxewg) (accessed January 15, 2014). In operation, the sensors and projectors are oriented toward a region of interest 212, which can include at least a portion of desktop 516, or a free space in which an object of interest 214 (in this example, a hand) moves along the indicated path 518. One or more applications 521 and 522 can be provided as virtual objects integrated into the presentation of the augmented reality 213. Thus, a user (e.g., the owner of hand 214) can interact with real objects (e.g., desktop 516, a cola 517) in the same environment as with virtual object 501.

In some implementations, a virtual device is projected to the user. A projection can include an image or other visual representation of an object. For example, virtual projection mechanism 504 of Fig. 5 can project a page (e.g., virtual device 501) from a book into the augmented reality environment 213 (e.g., surface portion 516 and/or surrounding space 212) of a reader, thereby creating a virtual device experience of reading an actual book, or of reading an electronic book on a physical e-reader, even though neither book nor e-reader is present. In some implementations, optional haptic projector 506 can project onto the reader's fingers the feel of the texture of the "virtual page" of the book. In some implementations, optional audio projector 502 can project the sound of a page turning in response to detecting the reader making a swipe to turn the page.
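By way of illustration only, the sketch below models that feedback loop: a detected swipe gesture triggers the audio projector's page-turn sound and the haptic projector's page texture. The projector interfaces and names are hypothetical placeholders, not APIs from this disclosure.

    class AudioProjector:
        def play(self, clip):
            print("audio:", clip)

    class HapticProjector:
        def render_texture(self, texture):
            print("haptic:", texture)

    def on_gesture(gesture, audio, haptics):
        if gesture == "swipe":                       # reader swipes to turn the page
            audio.play("page_turn")                  # audible page-turn feedback
            haptics.render_texture("paper_grain")    # texture of the "virtual page"

    on_gesture("swipe", AudioProjector(), HapticProjector())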

Claims (17)

1. A motion sensing and imaging device, characterized in that it comprises:
a plurality of imaging sensors arranged to provide stereoscopic imaging information for a scene being viewed;
one or more illumination sources arranged about the imaging sensors; and
a controller coupled to the imaging sensors and the illumination sources to control operation thereof, to acquire imaging information of the scene, and to provide at least a near-real-time pass-through of the imaging information to a user.
2. The device according to claim 1, characterized in that the controller further provides:
capturing imaging information for a control object within the field of view of the imaging sensors, wherein the imaging information for the control object of interest is used to determine gesture information indicating a command to a machine under control.
3. The device according to claim 2, characterized in that the capturing further comprises:
separating information received from pixels sensitive to IR light from information received from pixels sensitive to visible light, such as RGB;
processing image information from the IR sensor to be used for gesture recognition; and
processing image information from the RGB sensor to be provided as a live video feed via a presentation interface.
4. The device according to claim 3, characterized in that the processing of image information from the RGB sensor further comprises:
using RGB pixels that respectively capture the red, green, and blue components of the illumination in the scene to extract coarse features of the corresponding real-world space.
5. The device according to claim 3, characterized in that the processing of image information from the IR sensor further comprises:
using IR pixels that capture the infrared component of the illumination in the scene to extract fine features of the corresponding real-world space.
6. The device according to claim 5, characterized in that the fine features of the corresponding real-world space comprise surface textures of the corresponding real-world space.
7. The device according to claim 5, characterized in that the fine features of the corresponding real-world space comprise edges of the corresponding real-world space.
8. The device according to claim 5, characterized in that the fine features of the corresponding real-world space comprise curvatures of the corresponding real-world space.
9. The device according to claim 5, characterized in that the fine features of the corresponding real-world space comprise surface textures of objects in the corresponding real-world space.
10. The device according to claim 5, characterized in that the fine features of the corresponding real-world space comprise edges of objects in the corresponding real-world space.
11. The device according to claim 5, characterized in that the fine features of the corresponding real-world space comprise curvatures of objects in the corresponding real-world space.
12. The device according to claim 1, characterized in that the controller further provides:
determining ambient lighting conditions;
adjusting display output based on the determined conditions; and
determining first positional information and second positional information of the sensor relative to a fixed point at a first time and at a second time.
13. The device according to claim 1, characterized in that it further comprises:
a motion sensor; and wherein the controller further provides:
determining difference information between first positional information and second positional information from the motion sensor; and
computing movement information for the device relative to the fixed point based on the difference information.
14. The device according to claim 1, characterized in that it further comprises:
one or more fasteners fastening the imaging sensors and the illumination sources to a mounting surface of a wearable display device.
15. The device according to claim 1, characterized in that it further comprises:
one or more fasteners fastening the imaging sensors and the illumination sources within a cavity of a wearable display device.
16. The device according to claim 1, characterized in that it further comprises:
one or more fasteners fastening the imaging sensors and the illumination sources to a mounting surface of a portable display device.
17. The device according to claim 1, characterized in that it further comprises:
one or more fasteners fastening the imaging sensors and the illumination sources within a cavity of a portable display device.
CN201420453536.4U 2014-08-08 2014-08-12 motion sensing and imaging device CN204480228U (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US201462035008P true 2014-08-08 2014-08-08
US62/035,008 2014-08-08

Publications (1)

Publication Number Publication Date
CN204480228U true CN204480228U (en) 2015-07-15

Family

ID=51618846

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201420453536.4U CN204480228U (en) 2014-08-08 2014-08-12 motion sensing and imaging device

Country Status (4)

Country Link
US (2) US10349036B2 (en)
JP (1) JP2016038889A (en)
CN (1) CN204480228U (en)
DE (1) DE202014103729U1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106228614A (en) * 2016-07-29 2016-12-14 宇龙计算机通信科技(深圳)有限公司 A kind of scene reproduction method and apparatus

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9857919B2 (en) * 2012-05-17 2018-01-02 Hong Kong Applied Science And Technology Research Wearable device with intelligent user-input interface
US10007329B1 (en) 2014-02-11 2018-06-26 Leap Motion, Inc. Drift cancelation for portable object detection and tracking
US9754167B1 (en) 2014-04-17 2017-09-05 Leap Motion, Inc. Safety for wearable virtual reality devices via object detection and tracking
US10007350B1 (en) 2014-06-26 2018-06-26 Leap Motion, Inc. Integrated gestural interaction and multi-user collaboration in immersive virtual reality environments
CN107209960A (en) 2014-12-18 2017-09-26 脸谱公司 For system, the device and method of the user interface for providing reality environment
US9407865B1 (en) 2015-01-21 2016-08-02 Microsoft Technology Licensing, Llc Shared scene mesh data synchronization
US10616561B2 (en) * 2015-09-03 2020-04-07 Inuitive Ltd. Method and apparatus for generating a 3-D image
EP3179338A1 (en) * 2015-12-11 2017-06-14 Tata Consultancy Services Ltd. Hybrid reality based object interaction and control
CN105657494B (en) * 2015-12-31 2018-12-25 北京小鸟看看科技有限公司 A kind of virtual theater and its implementation
US10067636B2 (en) * 2016-02-09 2018-09-04 Unity IPR ApS Systems and methods for a virtual reality editor
JP2019113882A (en) 2016-03-23 2019-07-11 株式会社ソニー・インタラクティブエンタテインメント Head-mounted device
US20180005437A1 (en) * 2016-06-30 2018-01-04 Glen J. Anderson Virtual manipulator rendering
JP2018022292A (en) 2016-08-02 2018-02-08 キヤノン株式会社 Information processing apparatus, method for controlling information processing apparatus, and program
EP3494447A1 (en) 2016-08-04 2019-06-12 Reification Inc. Methods for simultaneous localization and mapping (slam) and related apparatus and systems
US9983687B1 (en) * 2017-01-06 2018-05-29 Adtile Technologies Inc. Gesture-controlled augmented reality experience using a mobile communications device
US10679669B2 (en) 2017-01-18 2020-06-09 Microsoft Technology Licensing, Llc Automatic narration of signal segment
US10606814B2 (en) 2017-01-18 2020-03-31 Microsoft Technology Licensing, Llc Computer-aided tracking of physical entities
US10482900B2 (en) 2017-01-18 2019-11-19 Microsoft Technology Licensing, Llc Organization of signal segments supporting sensed features
US10637814B2 (en) 2017-01-18 2020-04-28 Microsoft Technology Licensing, Llc Communication routing based on physical status
US10635981B2 (en) 2017-01-18 2020-04-28 Microsoft Technology Licensing, Llc Automated movement orchestration
US10437884B2 (en) 2017-01-18 2019-10-08 Microsoft Technology Licensing, Llc Navigation of computer-navigable physical feature graph
FR3062488A1 (en) * 2017-02-01 2018-08-03 Peugeot Citroen Automobiles Sa Analysis device for determining a latency time of an immersive system of virtual reality
CN108399633A (en) * 2017-02-06 2018-08-14 罗伯团队家居有限公司 Method and apparatus for stereoscopic vision
WO2018236601A1 (en) * 2017-06-19 2018-12-27 Get Attached, Inc. Context aware digital media browsing and automatic digital media interaction feedback

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2301216A (en) 1995-05-25 1996-11-27 Philips Electronics Uk Ltd Display headset
US10019962B2 (en) * 2011-08-17 2018-07-10 Microsoft Technology Licensing, Llc Context adaptive user interface for augmented reality display
US8693731B2 (en) * 2012-01-17 2014-04-08 Leap Motion, Inc. Enhanced contrast for object detection and characterization by optical imaging

Also Published As

Publication number Publication date
US10349036B2 (en) 2019-07-09
US20160044298A1 (en) 2016-02-11
JP2016038889A (en) 2016-03-22
DE202014103729U1 (en) 2014-09-09
US20190335158A1 (en) 2019-10-31

Similar Documents

Publication Publication Date Title
US9934580B2 (en) Enhanced contrast for object detection and characterization by optical imaging based on differences between images
US10269222B2 (en) System with wearable device and haptic output device
US10203765B2 (en) Interactive input system and method
US10395116B2 (en) Dynamically created and updated indoor positioning map
US10354449B2 (en) Augmented reality lighting effects
US10564799B2 (en) Dynamic user interactions for display control and identifying dominant gestures
US20200050281A1 (en) Machine responsiveness to dynamic user movements and gestures
CN104871084B (en) Self adaptation projector
US20160224128A1 (en) Display control apparatus, display control method, and display control program
US10620775B2 (en) Dynamic interactive objects
US10019074B2 (en) Touchless input
US9858722B2 (en) System and method for immersive and interactive multimedia generation
US9465443B2 (en) Gesture operation input processing apparatus and gesture operation input processing method
US10293252B2 (en) Image processing device, system and method based on position detection
US20170068326A1 (en) Imaging surround system for touch-free display control
US20160048725A1 (en) Automotive and industrial motion sensory device
US10134120B2 (en) Image-stitching for dimensioning
US10638036B2 (en) Adjusting motion capture based on the distance between tracked objects
KR101879478B1 (en) Method to extend laser depth map range
US10587864B2 (en) Image processing device and method
DE112013000590B4 (en) Improved contrast for object detection and characterization by optical imaging
Berman et al. Sensors for gesture recognition systems
US10097754B2 (en) Power consumption in motion-capture systems with audio and optical signals
US8660362B2 (en) Combined depth filtering and super resolution
DE112015002463T5 (en) Systems and methods for gestural interacting in an existing computer environment

Legal Events

Date Code Title Description
C14 Grant of patent or utility model
GR01 Patent grant