CN107577334A - Somatosensory operation method and device for a mobile terminal - Google Patents
Somatosensory operation method and device for a mobile terminal
- Publication number
- CN107577334A CN107577334A CN201610517335.XA CN201610517335A CN107577334A CN 107577334 A CN107577334 A CN 107577334A CN 201610517335 A CN201610517335 A CN 201610517335A CN 107577334 A CN107577334 A CN 107577334A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- User Interface Of Digital Computer (AREA)
- Image Analysis (AREA)
- Position Input By Displaying (AREA)
Abstract
The invention discloses a somatosensory operation method and device for a mobile terminal, the mobile terminal being provided with a color camera, an infrared camera and an infrared transmitter. The method includes: acquiring color video stream information collected by the color camera and depth data stream information collected by the infrared camera; determining, according to the color video stream information and the depth data stream information, the currently input somatosensory operation; and determining and performing, according to a correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation. By integrating the color camera, infrared transmitter and infrared camera into the mobile terminal, and determining the currently input somatosensory operation from the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera so as to perform the corresponding operation action, the invention enables somatosensory operation of the mobile terminal anytime and anywhere without a hand-held sensing prop; the operation is simple, and the user experience is greatly improved.
Description
Technical field
The present invention relates to the field of mobile terminal intelligence, and in particular to a somatosensory operation method and device for a mobile terminal.
Background art
With the development and maturation of somatosensory technology, it has been applied to the field of computer input, providing users with a more comfortable and natural way of interacting with computers. Microsoft's Kinect and Leap Motion's Leap 3D are the two main brands in the current somatosensory input field: without the operator holding any sensing prop, the operator's actions are captured and recognized through cameras and sensors. Somatosensory technology has been well refined and developed in the computer field, and has achieved notable results in the gaming field: by capturing external signals, resolving them into control signals, and sending them to an information processing device through a wireless device, control of an application program can be realized. At present, the two mainstream somatosensory cameras on the market, the Kinect and the LeEco somatosensory camera, are mainly based on three devices: a color camera, an infrared camera and an infrared transmitter. Using the depth data stream collected by the infrared transmitter and infrared camera together with the color video stream collected by the color camera, somatosensory and gesture control can be realized through algorithms. However, somatosensory capture currently requires purchasing a dedicated somatosensory camera; a user would have to carry a separate somatosensory camera at all times in order to perform somatosensory operation anywhere and anytime, so the user experience is poor.
Summary of the invention
The invention provides a somatosensory operation method and device for a mobile terminal, solving the prior-art problems that a mobile terminal cannot perform somatosensory operation anytime and anywhere and that the user experience is poor.
According to one aspect of the present invention, a somatosensory operation method for a mobile terminal is provided, the mobile terminal being provided with a color camera, an infrared camera and an infrared transmitter; the somatosensory operation method includes:
acquiring color video stream information collected by the color camera and depth data stream information collected by the infrared camera;
determining, according to the color video stream information and the depth data stream information, the currently input somatosensory operation;
determining and performing, according to a correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation.
The step of acquiring the depth data stream information collected by the infrared camera includes:
acquiring reference speckle image information formed by the infrared transmitter irradiating a particular space;
when a specified movement occurs in the particular space, acquiring the speckle image information collected by the infrared camera;
performing a cross-correlation operation between the speckle image information and the reference speckle image information to obtain the three-dimensional depth data stream information of the particular space.
The step of determining the currently input somatosensory operation according to the color video stream information and the depth data stream information includes:
performing a superposition operation on the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data;
determining the three-dimensional model data as the currently input somatosensory operation.
The step of determining the three-dimensional model data as the currently input somatosensory operation includes:
extracting the joint points in the three-dimensional model data to build a skeleton model;
determining the currently input somatosensory operation according to changes of the skeleton model.
The step of determining and performing, according to the correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation includes:
detecting whether the currently input somatosensory operation is identical to the somatosensory operation corresponding to a specific operation action;
if identical, determining that the operation action corresponding to the currently input somatosensory operation is the specific operation action, and performing the specific operation action.
The step of performing the specific operation action includes:
calculating the spatial coordinates of the left hand or the right hand in the currently input somatosensory operation;
mapping the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relation between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal;
mapping the specific operation action occurring at the spatial coordinates onto the display screen coordinates and performing it.
According to another aspect of the present invention, a somatosensory operation device for a mobile terminal is also provided, the mobile terminal being provided with a color camera, an infrared camera and an infrared transmitter; the somatosensory operation device includes:
an acquisition module, configured to acquire the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera;
a first processing module, configured to determine the currently input somatosensory operation according to the color video stream information and the depth data stream information;
a second processing module, configured to determine and perform, according to the correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation.
The acquisition module includes:
a first acquisition unit, configured to acquire the reference speckle image information formed by the infrared transmitter irradiating a particular space;
a second acquisition unit, configured to acquire, when a specified movement occurs in the particular space, the speckle image information collected by the infrared camera;
a first operation unit, configured to perform a cross-correlation operation between the speckle image information and the reference speckle image information to obtain the three-dimensional depth data stream information of the particular space.
The first processing module includes:
a second operation unit, configured to perform a superposition operation on the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data;
a first processing unit, configured to determine the three-dimensional model data as the currently input somatosensory operation.
The first processing unit includes:
a building subunit, configured to extract the joint points in the three-dimensional model data to build a skeleton model;
a first processing subunit, configured to determine the currently input somatosensory operation according to changes of the skeleton model.
The second processing module includes:
a detection unit, configured to detect whether the currently input somatosensory operation is identical to the somatosensory operation corresponding to a specific operation action;
a second processing unit, configured to, when it is detected that the currently input somatosensory operation is identical to the somatosensory operation corresponding to the specific operation action, determine that the operation action corresponding to the currently input somatosensory operation is the specific operation action, and perform the specific operation action.
The second processing unit includes:
a calculation subunit, configured to calculate the spatial coordinates of the left hand or the right hand in the currently input somatosensory operation;
a mapping subunit, configured to map the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relation between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal;
a second processing subunit, configured to map the specific operation action occurring at the spatial coordinates onto the display screen coordinates and perform it.
The beneficial effects of the embodiments of the invention are as follows:
by integrating a color camera, an infrared transmitter and an infrared camera into the mobile terminal, and determining the currently input somatosensory operation according to the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera so as to perform the corresponding operation action, somatosensory operation of the mobile terminal can be realized anytime and anywhere without a hand-held sensing prop; the operation is simple, and the user experience is greatly improved.
Brief description of the drawings
Fig. 1 is a structural diagram of the mobile terminal of the present invention;
Fig. 2 is a flow chart of the somatosensory operation method of the mobile terminal of the present invention;
Fig. 3 is a flow chart of the somatosensory operation method of example one in embodiment one of the present invention;
Fig. 4 is a flow chart of the somatosensory operation method of example two in embodiment one of the present invention;
Fig. 5 is a structural diagram of the somatosensory operation device of the mobile terminal of the present invention.
Detailed description of the embodiments
Exemplary embodiments of the present invention are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the present invention, it should be understood that the present invention may be realized in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the present invention will be thoroughly understood and so that the scope of the present invention can be fully conveyed to those skilled in the art.
Embodiment one
As depicted in figs. 1 and 2, the embodiment provides a kind of somatosensory operation method of mobile terminal, specifically,
As shown in figure 1, colour imagery shot 1, infrared transmitter 2 and infrared camera 3 are provided with the mobile terminal.
Wherein, infrared transmitter 2 can use similar flash lamp and colour imagery shot 1 with the combining form of infrared camera 3
Combination.Before infrared camera 3 gathers each frame, infrared transmitter 2 launches infrared laser and " illuminates " target subject, red
What outer camera 3 collected is the hot spot that Infrared laser emission device impinges upon the multiple special shapes formed in target subject.Wherein,
Infrared transmitter 2 by special chip (such as:BOOST power supply chips) instantaneous high pressure pulsed drive is provided.Infrared camera 3 can lead to
Cross MIPI or I2C buses and be connected row data communication of going forward side by side with mobile phone base band chip.
Further, based on the above hardware configuration, as shown in Fig. 2, the somatosensory operation method of this embodiment includes:
Step 201: acquiring the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera.
A somatosensory mode can be set on the mobile terminal. In the somatosensory mode, when somatosensory-related software is opened (for example a somatosensory game, a smart-TV simulated somatosensory camera, a virtual touch screen, etc.), the somatosensory function is started in the background of the mobile terminal, and the color camera, infrared transmitter and infrared camera can be automatically started into working state.
Further, the process by which the color camera acquires color video stream information is similar to the prior-art process of shooting photos or videos with a color camera, and is therefore not discussed in detail. This embodiment introduces the process by which the mobile terminal collects the depth data stream using the infrared transmitter and the infrared camera. The collection of depth data stream information is based on light-coding technology, which, as the name suggests, encodes the space to be measured with light from a light source, for example using the laser speckle of light-coding technology. Laser speckle is the random diffraction pattern formed when laser light irradiates a rough object or passes through frosted glass. These speckles are highly random, and their pattern changes with distance; that is to say, the speckle patterns at any two places in the space are different. As long as such structured light is projected into the space, the whole space is marked; when an object is placed into this space, the speckle pattern on the object reveals where the object is.
Specifically, the step of acquiring the depth data stream information collected by the infrared camera includes: acquiring the reference speckle image information formed by the infrared transmitter irradiating a particular space; when a specified movement occurs in the particular space, acquiring the speckle image information collected by the infrared camera; and performing a cross-correlation operation between the speckle image information and the reference speckle image information to obtain the three-dimensional depth data stream information of the particular space.
The speckle image information projected by the laser speckle onto planes at certain distances in the particular space is recorded in advance. Assuming that the user activity space ranges from 1 meter to 4 meters from the camera, and a reference plane is taken every 10 cm, then 30 speckle images can be recorded in advance and defined as the reference speckle image information of the particular space. When the user moves in the particular space, there is motion in that space and the space needs to be measured: a speckle image of the scene to be measured is captured, and a cross-correlation operation is performed in turn between this image and the 30 saved reference images, yielding 30 correlation images. At positions where an object exists in the space, a peak appears in the corresponding correlation image. Stacking these peaks layer by layer and applying some interpolation yields the three-dimensional shape of the whole scene, and thus the three-dimensional depth data stream information of the particular space can be obtained.
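The peak-picking idea above can be sketched in a few lines. The sketch below is a deliberately simplified, hypothetical illustration (one-dimensional patterns instead of images, a plain zero-mean correlation instead of full 2-D cross-correlation): the captured speckle pattern is correlated against each pre-recorded reference plane, and the plane whose correlation peaks is taken as the depth. None of the names or values come from the patent itself.

```python
# Depth from speckle matching, reduced to 1-D patterns for illustration.
import random

def correlate(a, b):
    """Zero-mean cross-correlation score of two equal-length patterns."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b))

def estimate_depth(captured, references):
    """references: {distance_cm: reference_pattern} recorded in advance,
    e.g. one plane every 10 cm between 100 cm and 400 cm (30 planes).
    Returns the distance whose reference correlates best (the peak)."""
    scores = {d: correlate(captured, ref) for d, ref in references.items()}
    return max(scores, key=scores.get)

# Build 30 random reference speckle patterns, one per 10 cm plane.
random.seed(0)
references = {100 + 10 * i: [random.random() for _ in range(64)]
              for i in range(30)}

# An object at 250 cm reflects (a noisy copy of) that plane's pattern.
captured = [v + random.gauss(0, 0.05) for v in references[250]]
print(estimate_depth(captured, references))
```

Because the reference patterns are random and nearly orthogonal, the true plane's correlation dominates even with measurement noise, which is exactly why speckle patterns at different distances can serve as a spatial code.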
Step 202: determining, according to the color video stream information and the depth data stream information, the currently input somatosensory operation.
The two-dimensional color video stream and the three-dimensional depth data stream information are combined to establish three-dimensional model data. Specifically, a superposition operation is performed on the color video stream information and the depth data stream information to obtain the corresponding three-dimensional model data, and the three-dimensional model data is determined as the currently input somatosensory operation.
Determining the three-dimensional model data as the currently input somatosensory operation specifically includes: extracting the joint points in the three-dimensional model data to build a skeleton model, and determining the currently input somatosensory operation according to changes of the skeleton model. That is, after the collected two-dimensional color video stream is superposed with the three-dimensional depth data stream information to calculate the user's three-dimensional model data, the three-dimensional model data is simplified into a skeleton model composed of dozens of joint points. The skeleton model or the collected gesture motion is compared against reference models or reference gestures pre-recorded in the system, thereby determining the currently input somatosensory operation. In addition, the user's skeleton model can be stored, and after the user's identity is confirmed by means of face recognition at the next use, the corresponding model data is directly retrieved for use.
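The comparison of a skeleton pose against pre-recorded reference gestures can be sketched as below. This is a minimal illustration under stated assumptions: a toy 2-D skeleton with three named joints and a hand-picked distance threshold stand in for the dozens of 3-D joint points and whatever matching criterion a real system would use.

```python
# Match an observed skeleton pose against pre-recorded reference gestures.
import math

def pose_distance(a, b):
    """Mean Euclidean distance between matching joints of two poses."""
    return sum(math.dist(a[j], b[j]) for j in a) / len(a)

def classify_gesture(pose, references, threshold=0.2):
    """Return the name of the closest pre-recorded reference gesture,
    or None if nothing is close enough to count as a match."""
    name, ref = min(references.items(),
                    key=lambda kv: pose_distance(pose, kv[1]))
    return name if pose_distance(pose, ref) <= threshold else None

# Hypothetical reference gestures, each a {joint: (x, y)} pose.
references = {
    "swipe_right": {"shoulder": (0.0, 1.0), "elbow": (0.4, 1.0), "hand": (0.8, 1.0)},
    "raise_hand":  {"shoulder": (0.0, 1.0), "elbow": (0.1, 1.4), "hand": (0.1, 1.8)},
}

# A noisy observation of the arm extended to the right.
observed = {"shoulder": (0.02, 1.0), "elbow": (0.42, 0.98), "hand": (0.78, 1.02)}
print(classify_gesture(observed, references))
```

The threshold gives the "meets the requirements" check mentioned in the text: a pose far from every reference produces no operation at all rather than a spurious match.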
Step 203: determining and performing, according to the correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation.
Specifically, it is detected whether the currently input somatosensory operation is identical to the somatosensory operation corresponding to a specific operation action; if identical, it is determined that the operation action corresponding to the currently input somatosensory operation is the specific operation action, and the specific operation action is performed. For example, it is detected whether the currently input somatosensory operation is identical to the somatosensory operation corresponding to a slide operation; if identical, the slide operation is performed.
Further, the step of performing the specific operation action includes: calculating the spatial coordinates of the left hand or the right hand in the currently input somatosensory operation; mapping the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relation between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal; and mapping the specific operation action occurring at the spatial coordinates onto the display screen coordinates and performing it. This involves determining the relative distance and coordinates between the finger and the spatial background: a transparent virtual finger or other marker displayed on the screen of the mobile terminal lets the user know the relative coordinates of the finger, and the shadow of the virtual finger or other marker lets the user know the relative distance between the finger and the virtual screen, so that the finger's operation on the virtual screen directly issues touch-screen commands, realizing virtual touch-screen operation.
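The mapping from hand coordinates in space to display screen coordinates can be sketched as a linear rescaling. The interaction-volume bounds and the 1080x1920 screen size below are illustrative assumptions, not values given in the patent; a real system would calibrate the mapping to the camera's field of view.

```python
# Map a point in the tracked interaction volume to display pixels.
def to_screen(x, y, volume, screen):
    """Linearly map (x, y) in the interaction volume
    (x_min, x_max, y_min, y_max) to pixel coordinates on a screen of
    (width, height), clamping to the screen edges."""
    x_min, x_max, y_min, y_max = volume
    width, height = screen
    px = (x - x_min) / (x_max - x_min) * (width - 1)
    py = (y - y_min) / (y_max - y_min) * (height - 1)
    clamp = lambda v, hi: max(0, min(hi - 1, round(v)))
    return clamp(px, width), clamp(py, height)

volume = (-0.5, 0.5, 0.0, 1.0)   # metres, hand space in front of the camera
screen = (1080, 1920)            # pixels, assumed portrait display

print(to_screen(0.0, 0.5, volume, screen))   # hand centred -> screen centre
```

Clamping keeps the virtual-finger marker on screen even when the hand wanders out of the calibrated volume, which is the usual design choice for this kind of cursor mapping.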
Here, by integrating the color camera, infrared transmitter and infrared camera into the mobile terminal, and determining the currently input somatosensory operation according to the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera so as to perform the corresponding operation action, somatosensory operation of the mobile terminal can be realized anytime and anywhere without a hand-held sensing prop; the operation is simple, and the user experience is greatly improved.
The somatosensory operation method of the embodiment of the present invention is further described below in combination with two application scenarios: somatosensory control applications and the virtual touch screen.
Example one: as shown in Fig. 3, the somatosensory operation method for somatosensory-control applications specifically includes the following steps:
Step 301: opening the somatosensory function and starting the color camera, infrared transmitter and infrared camera. When somatosensory-related software is opened (for example a somatosensory game, a smart-TV simulated somatosensory camera, a virtual touch screen, etc.), the somatosensory function is started in the background, and the somatosensory-related hardware enters working state.
Step 302: collecting color video stream information and depth data stream information. The color video stream information is collected by the color camera, and the depth data stream information is collected by the infrared transmitter and infrared camera.
Step 303: establishing three-dimensional model data according to the color video stream information and the depth data stream information. The two-dimensional color video stream is combined with the three-dimensional depth data stream to establish the three-dimensional model data.
Step 304: establishing a skeleton model according to the three-dimensional model data. The collected human body model is simplified into a skeleton model composed of dozens of joint points; the user's model data can be stored, and after the user's identity is confirmed by face recognition next time, the model data of the specific user is directly retrieved for use.
Step 305: outputting operation information according to the comparison between the skeleton model and a preset model. The skeleton model or the collected gesture motion is compared against the models or gestures pre-recorded in the system; if the match meets the requirements, the corresponding data or operation is output, finally realizing somatosensory human-machine interaction.
In this way, the mobile terminal is interconnected with a smart TV or smart set-top box through the home network and turns into a dedicated somatosensory camera, realizing functions such as gesture remote control and somatosensory games.
Example two: as shown in Fig. 4, the somatosensory operation method for the virtual touch screen application specifically includes the following steps:
Step 401: opening the somatosensory function and starting the color camera, infrared transmitter and infrared camera. When somatosensory-related software is opened (for example a somatosensory game, a smart-TV simulated somatosensory camera, a virtual touch screen, etc.), the somatosensory function is started in the background, and the somatosensory-related hardware enters working state.
Step 402: collecting color video stream information and depth data stream information. The color video stream information is collected by the color camera, and the depth data stream information is collected by the infrared transmitter and infrared camera.
Step 403: establishing three-dimensional model data according to the color video stream information and the depth data stream information. The two-dimensional color video stream is combined with the three-dimensional depth data stream to establish the three-dimensional model data.
Step 404: calculating, according to the three-dimensional model data, the relative position and coordinates of the finger with respect to the background of the particular space, and determining the virtual touch-screen command. The relative distance and coordinates between the finger and the background are determined: a transparent virtual finger or other marker displayed on the screen lets the user know the relative coordinates of the finger, and the shadow of the virtual finger or other marker lets the user know the relative distance between the finger and the virtual screen.
Step 405: sending the virtual touch-screen command and realizing the virtual touch operation. The finger's operation on the virtual screen directly issues touch-screen commands, realizing virtual touch-screen operation.
In this way, with the somatosensory camera technique, the input of operation commands can be realized by capturing the finger's actions or gestures on a background in the particular space (such as a desktop, a wall or a thigh).
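Steps 404 and 405 can be sketched as a simple depth comparison: the fingertip counts as touching the virtual screen once its measured depth comes within a small threshold of the background surface (desktop, wall, etc.), at which point a touch command is issued for its position. The threshold value, event names and coordinates below are illustrative assumptions, not the patent's actual protocol.

```python
# Turn fingertip depth samples into virtual touch-screen commands.
def touch_events(frames, background_depth, threshold=0.02):
    """frames: per-frame (x, y, depth) of the fingertip, depths in metres.
    Emits ("down", x, y) when the finger reaches the background surface
    and ("up", x, y) when it lifts off again."""
    touching = False
    events = []
    for x, y, depth in frames:
        pressed = (background_depth - depth) <= threshold
        if pressed and not touching:
            events.append(("down", x, y))
        elif not pressed and touching:
            events.append(("up", x, y))
        touching = pressed
    return events

# Finger approaches a desktop 0.60 m away, taps it, then lifts off.
frames = [(0.1, 0.2, 0.50), (0.1, 0.2, 0.55), (0.1, 0.2, 0.59),
          (0.1, 0.2, 0.60), (0.1, 0.2, 0.52)]
print(touch_events(frames, background_depth=0.60))
```

Tracking the previous touching state is what turns a continuous depth stream into discrete down/up commands, mirroring how a physical touch screen reports contact transitions rather than raw proximity.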
Embodiment two
The above embodiment explains the somatosensory operation method of the mobile terminal of the present invention; this embodiment further describes the corresponding device.
As shown in Fig. 5, this embodiment provides a somatosensory operation device for a mobile terminal, the mobile terminal being provided with a color camera, an infrared camera and an infrared transmitter; the somatosensory operation device includes:
an acquisition module 51, configured to acquire the color video stream information collected by the color camera and the depth data stream information collected by the infrared camera;
a first processing module 52, configured to determine the currently input somatosensory operation according to the color video stream information and the depth data stream information;
a second processing module 53, configured to determine and perform, according to the correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation.
The acquisition module 51 includes:
a first acquisition unit, configured to acquire the reference speckle image information formed by the infrared transmitter irradiating a particular space;
a second acquisition unit, configured to acquire, when a specified movement occurs in the particular space, the speckle image information collected by the infrared camera;
a first operation unit, configured to perform a cross-correlation operation between the speckle image information and the reference speckle image information to obtain the three-dimensional depth data stream information of the particular space.
The first processing module 52 includes:
a second operation unit, configured to perform a superposition operation on the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data;
a first processing unit, configured to determine the three-dimensional model data as the currently input somatosensory operation.
The first processing unit includes:
a building subunit, configured to extract the joint points in the three-dimensional model data to build a skeleton model;
a first processing subunit, configured to determine the currently input somatosensory operation according to changes of the skeleton model.
The second processing module 53 includes:
a detection unit, configured to detect whether the currently input somatosensory operation is identical to the somatosensory operation corresponding to a specific operation action;
a second processing unit, configured to, when it is detected that the currently input somatosensory operation is identical to the somatosensory operation corresponding to the specific operation action, determine that the operation action corresponding to the currently input somatosensory operation is the specific operation action, and perform the specific operation action.
The second processing unit includes:
a calculation subunit, configured to calculate the spatial coordinates of the left hand or the right hand in the currently input somatosensory operation;
a mapping subunit, configured to map the spatial coordinates to the display screen coordinates of the mobile terminal according to the mapping relation between the spatial coordinates of the somatosensory operation and the display screen coordinates of the mobile terminal;
a second processing subunit, configured to map the specific operation action occurring at the spatial coordinates onto the display screen coordinates and perform it.
It should be noted that this device corresponds to the somatosensory operation method of the above mobile terminal; all implementations in the above method embodiment are also applicable to the embodiment of this device and can achieve the same technical effect.
The above are preferred embodiments of the present invention. It should be pointed out that, for a person of ordinary skill in the art, various improvements and modifications can also be made without departing from the principles of the present invention, and these improvements and modifications also fall within the protection scope of the present invention.
Claims (12)
1. A somatosensory operation method for a mobile terminal, characterized in that the mobile terminal is provided with a color camera, an infrared camera and an infrared transmitter; the somatosensory operation method comprises:
acquiring color video stream information collected by the color camera and depth data stream information collected by the infrared camera;
determining, according to the color video stream information and the depth data stream information, the currently input somatosensory operation;
determining and performing, according to a correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation.
2. The somatosensory operation method for the mobile terminal according to claim 1, wherein the step of obtaining the depth data stream information collected by the infrared camera comprises:
obtaining reference speckle image information formed by the infrared transmitter irradiating a particular space;
when a specified action occurs in the particular space, obtaining speckle image information collected by the infrared camera; and
performing a cross-correlation operation on the speckle image information and the reference speckle image information to obtain three-dimensional depth data stream information of the particular space.
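For illustration only (not part of the claims): the cross-correlation step of claim 2 can be sketched as a normalized cross-correlation search that finds how far a speckle patch has shifted between the captured image and the reference image, followed by triangulation of that disparity into depth. The intrinsics (`focal_px`, `baseline_m`, `ref_depth_m`) and the depth relation below are assumptions in the style of Kinect-like structured-light sensors, not values taken from the patent.

```python
import numpy as np

def patch_disparity(reference, captured, row, col, patch=9, search=16):
    """Find the horizontal shift of a speckle patch between the captured
    image and the reference image via normalized cross-correlation."""
    h = patch // 2
    template = captured[row - h:row + h + 1, col - h:col + h + 1].astype(float)
    template -= template.mean()
    best_shift, best_score = 0, -np.inf
    for d in range(-search, search + 1):
        window = reference[row - h:row + h + 1,
                           col - h + d:col + h + 1 + d].astype(float)
        window -= window.mean()
        denom = np.sqrt((template ** 2).sum() * (window ** 2).sum())
        score = (template * window).sum() / denom if denom else -np.inf
        if score > best_score:
            best_score, best_shift = score, d
    return best_shift

def disparity_to_depth(d, focal_px=580.0, baseline_m=0.075, ref_depth_m=1.0):
    """Triangulate depth from speckle disparity relative to a reference plane,
    using the structured-light relation 1/Z = 1/Z0 + d/(f*b) (assumed model)."""
    return 1.0 / (1.0 / ref_depth_m + d / (focal_px * baseline_m))

# Synthetic check: shift a random speckle pattern left by 3 px and recover it.
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
cap = np.roll(ref, -3, axis=1)
shift = patch_disparity(ref, cap, row=32, col=32)   # recovers a shift of 3
```

In a real sensor this search runs per pixel and the per-pixel disparities form the depth data stream; here a single interior patch is enough to show the principle.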
3. The somatosensory operation method for the mobile terminal according to claim 1, wherein the step of determining the currently input somatosensory operation according to the color video stream information and the depth data stream information comprises:
performing a superposition operation on the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data; and
determining the currently input somatosensory operation from the three-dimensional model data.
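As a non-authoritative sketch of the superposition step in claim 3: once the depth map is registered pixel-for-pixel with the color image, the two streams can be back-projected into a colored point cloud, one plausible form of the claimed "three-dimensional model data". The camera intrinsics (`fx`, `fy`, `cx`, `cy`) are hypothetical placeholders.

```python
import numpy as np

def depth_to_point_cloud(depth, rgb, fx=580.0, fy=580.0, cx=None, cy=None):
    """Back-project a registered depth map (meters) into a colored 3-D point
    cloud. Assumes depth and rgb are pixel-aligned; intrinsics are illustrative."""
    h, w = depth.shape
    cx = w / 2.0 if cx is None else cx
    cy = h / 2.0 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx          # pinhole back-projection
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    colors = rgb.reshape(-1, 3)
    valid = points[:, 2] > 0       # drop pixels with no depth reading
    return points[valid], colors[valid]
```

Running this per frame turns the two synchronized streams into a per-frame 3-D model from which gestures can be extracted.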
4. The somatosensory operation method for the mobile terminal according to claim 3, wherein the step of determining the currently input somatosensory operation from the three-dimensional model data comprises:
extracting joint points in the three-dimensional model data to build a skeleton model; and
determining the currently input somatosensory operation according to changes of the skeleton model.
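Claim 4's skeleton-based determination might be illustrated as follows: track one joint of the skeleton model across frames and classify the operation from its net displacement. The joint set, threshold and gesture labels are invented for this sketch and are not defined by the patent.

```python
import numpy as np

# Hypothetical joint indexing for a minimal upper-body skeleton model.
JOINTS = {"head": 0, "right_shoulder": 1, "right_elbow": 2, "right_hand": 3}

def classify_gesture(frames, joint="right_hand", threshold=0.25):
    """Classify a somatosensory operation from the motion of one skeleton joint.
    `frames` is a sequence of (num_joints, 3) arrays of joint positions in meters."""
    track = np.array([f[JOINTS[joint]] for f in frames])
    dx, dy, dz = track[-1] - track[0]          # net displacement over the window
    if abs(dx) < threshold and abs(dy) < threshold and abs(dz) < threshold:
        return "none"
    axis = np.argmax(np.abs([dx, dy, dz]))     # dominant axis of motion
    if axis == 0:
        return "swipe_right" if dx > 0 else "swipe_left"
    if axis == 1:
        return "swipe_up" if dy > 0 else "swipe_down"
    # Sign convention assumed: negative z means moving toward the camera.
    return "push" if dz < 0 else "pull"
```

A production recognizer would of course use the full joint set and temporal models rather than a single displacement, but the change-of-skeleton idea is the same.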
5. The somatosensory operation method for the mobile terminal according to claim 1, wherein the step of determining and performing, according to the correspondence between somatosensory operations and preset operation actions, the operation action corresponding to the currently input somatosensory operation comprises:
detecting whether the currently input somatosensory operation is identical to a somatosensory operation corresponding to a specific operation action; and
if identical, determining that the operation action corresponding to the currently input somatosensory operation is the specific operation action, and performing the specific operation action.
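The correspondence of claim 5 could be as simple as a table keyed by the recognized operation; when the lookup succeeds, the matched preset action is performed. The operation and action names below are hypothetical.

```python
# Hypothetical correspondence between recognized somatosensory operations
# and preset operation actions; entries are illustrative only.
ACTION_MAP = {
    "swipe_left": "page_forward",
    "swipe_right": "page_back",
    "push": "confirm",
}

def dispatch(current_op):
    """Return the preset operation action whose somatosensory operation is
    identical to the current one, or None when nothing matches."""
    return ACTION_MAP.get(current_op)
```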
6. The somatosensory operation method for the mobile terminal according to claim 5, wherein the step of performing the specific operation action comprises:
calculating space coordinates of a left hand or a right hand in the currently input somatosensory operation;
mapping the space coordinates to display screen coordinates of the mobile terminal according to a mapping relationship between space coordinates of somatosensory operations and the display screen coordinates of the mobile terminal; and
mapping the specific operation action occurring at the space coordinates onto the display screen coordinates and performing it.
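A minimal sketch of claim 6's coordinate mapping, under two stated assumptions: a fixed rectangular interaction volume in front of the cameras, and a portrait 1080×1920 display. The mapping normalizes the hand's space coordinates into that volume, clamps, scales to pixels, and flips the y axis (screen y grows downward while camera y grows upward).

```python
def space_to_screen(x, y, x_range=(-0.5, 0.5), y_range=(-0.4, 0.4),
                    screen_w=1080, screen_h=1920):
    """Map a hand's space coordinates (meters, camera frame) to display screen
    pixel coordinates. Interaction-volume bounds and screen size are illustrative."""
    def scale(v, lo, hi, size):
        t = (v - lo) / (hi - lo)        # normalize into [0, 1]
        t = min(max(t, 0.0), 1.0)       # clamp to the interaction volume
        return int(round(t * (size - 1)))
    px = scale(x, *x_range, screen_w)
    py = screen_h - 1 - scale(y, *y_range, screen_h)   # invert y for screen space
    return px, py
```

The specific operation action (e.g. a tap or drag) would then be injected at `(px, py)` through whatever input-event interface the terminal's operating system provides.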
7. A somatosensory operation device for a mobile terminal, wherein the mobile terminal is provided with a color camera, an infrared camera and an infrared transmitter, the somatosensory operation device comprising:
an acquisition module, configured to obtain color video stream information collected by the color camera and depth data stream information collected by the infrared camera;
a first processing module, configured to determine a currently input somatosensory operation according to the color video stream information and the depth data stream information; and
a second processing module, configured to determine and perform, according to a correspondence between somatosensory operations and preset operation actions, an operation action corresponding to the currently input somatosensory operation.
8. The somatosensory operation device for the mobile terminal according to claim 7, wherein the acquisition module comprises:
a first acquisition unit, configured to obtain reference speckle image information formed by the infrared transmitter irradiating a particular space;
a second acquisition unit, configured to obtain, when a specified action occurs in the particular space, speckle image information collected by the infrared camera; and
a first operation unit, configured to perform a cross-correlation operation on the speckle image information and the reference speckle image information to obtain three-dimensional depth data stream information of the particular space.
9. The somatosensory operation device for the mobile terminal according to claim 7, wherein the first processing module comprises:
a second operation unit, configured to perform a superposition operation on the color video stream information and the depth data stream information to obtain corresponding three-dimensional model data; and
a first processing unit, configured to determine the currently input somatosensory operation from the three-dimensional model data.
10. The somatosensory operation device for the mobile terminal according to claim 9, wherein the first processing unit comprises:
a building subunit, configured to extract joint points in the three-dimensional model data to build a skeleton model; and
a first processing subunit, configured to determine the currently input somatosensory operation according to changes of the skeleton model.
11. The somatosensory operation device for the mobile terminal according to claim 7, wherein the second processing module comprises:
a detection unit, configured to detect whether the currently input somatosensory operation is identical to a somatosensory operation corresponding to a specific operation action; and
a second processing unit, configured to, when it is detected that the currently input somatosensory operation is identical to the somatosensory operation corresponding to the specific operation action, determine that the operation action corresponding to the currently input somatosensory operation is the specific operation action, and perform the specific operation action.
12. The somatosensory operation device for the mobile terminal according to claim 11, wherein the second processing unit comprises:
a calculation subunit, configured to calculate space coordinates of a left hand or a right hand in the currently input somatosensory operation;
a mapping subunit, configured to map the space coordinates to display screen coordinates of the mobile terminal according to a mapping relationship between space coordinates of somatosensory operations and the display screen coordinates of the mobile terminal; and
a second processing subunit, configured to map the specific operation action occurring at the space coordinates onto the display screen coordinates and perform it.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610517335.XA CN107577334A (en) | 2016-07-04 | 2016-07-04 | A kind of somatosensory operation method and device of mobile terminal |
PCT/CN2016/096407 WO2018006481A1 (en) | 2016-07-04 | 2016-08-23 | Motion-sensing operation method and device for mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610517335.XA CN107577334A (en) | 2016-07-04 | 2016-07-04 | A kind of somatosensory operation method and device of mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107577334A true CN107577334A (en) | 2018-01-12 |
Family
ID=60901650
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610517335.XA Pending CN107577334A (en) | 2016-07-04 | 2016-07-04 | A kind of somatosensory operation method and device of mobile terminal |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN107577334A (en) |
WO (1) | WO2018006481A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111640176A (en) * | 2018-06-21 | 2020-09-08 | 华为技术有限公司 | Object modeling movement method, device and equipment |
CN112102934A (en) * | 2020-09-16 | 2020-12-18 | 南通市第一人民医院 | Nurse standardized training examination scoring method and system |
WO2021008589A1 * | 2019-07-18 | 华为技术有限公司 | Application running method and electronic device |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112308015A (en) * | 2020-11-18 | 2021-02-02 | 盐城鸿石智能科技有限公司 | Novel depth recovery scheme based on 3D structured light |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202870727U (en) * | 2012-10-24 | 2013-04-10 | 上海威镜信息科技有限公司 | Display unit device with motion capture module |
KR20130107093A (en) * | 2012-03-21 | 2013-10-01 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
CN103914129A (en) * | 2013-01-04 | 2014-07-09 | 云联(北京)信息技术有限公司 | Man-machine interactive system and method |
CN203950270U * | 2014-01-22 | 2014-11-19 | 南京信息工程大学 | Somatosensory recognition device and human-computer interaction system using it to control mouse and keyboard operations |
CN104252231A (en) * | 2014-09-23 | 2014-12-31 | 河南省辉耀网络技术有限公司 | Camera based motion sensing recognition system and method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101373285B1 (en) * | 2009-12-08 | 2014-03-11 | 한국전자통신연구원 | A mobile terminal having a gesture recognition function and an interface system using the same |
KR20150042039A (en) * | 2013-10-10 | 2015-04-20 | 엘지전자 주식회사 | Mobile terminal and operating method thereof |
CN204481940U (en) * | 2015-04-07 | 2015-07-15 | 北京市商汤科技开发有限公司 | Binocular camera is taken pictures mobile terminal |
2016
- 2016-07-04 CN CN201610517335.XA patent/CN107577334A/en active Pending
- 2016-08-23 WO PCT/CN2016/096407 patent/WO2018006481A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130107093A (en) * | 2012-03-21 | 2013-10-01 | 엘지전자 주식회사 | Mobile terminal and control method thereof |
CN202870727U (en) * | 2012-10-24 | 2013-04-10 | 上海威镜信息科技有限公司 | Display unit device with motion capture module |
CN103914129A (en) * | 2013-01-04 | 2014-07-09 | 云联(北京)信息技术有限公司 | Man-machine interactive system and method |
CN203950270U * | 2014-01-22 | 2014-11-19 | 南京信息工程大学 | Somatosensory recognition device and human-computer interaction system using it to control mouse and keyboard operations |
CN104252231A (en) * | 2014-09-23 | 2014-12-31 | 河南省辉耀网络技术有限公司 | Camera based motion sensing recognition system and method |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111640176A (en) * | 2018-06-21 | 2020-09-08 | 华为技术有限公司 | Object modeling movement method, device and equipment |
US11436802B2 (en) | 2018-06-21 | 2022-09-06 | Huawei Technologies Co., Ltd. | Object modeling and movement method and apparatus, and device |
WO2021008589A1 * | 2019-07-18 | 2021-01-21 | 华为技术有限公司 | Application running method and electronic device |
US11986726B2 (en) | 2019-07-18 | 2024-05-21 | Honor Device Co., Ltd. | Application running method and electronic device |
CN112102934A (en) * | 2020-09-16 | 2020-12-18 | 南通市第一人民医院 | Nurse standardized training examination scoring method and system |
Also Published As
Publication number | Publication date |
---|---|
WO2018006481A1 (en) | 2018-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN101715581B (en) | Volume recognition method and system | |
CN103914152B (en) | Multi-point touch and the recognition methods and system that catch gesture motion in three dimensions | |
CN102822862B (en) | Calculation element interface | |
CN103135754B (en) | Adopt interactive device to realize mutual method | |
CN105528082A (en) | Three-dimensional space and hand gesture recognition tracing interactive method, device and system | |
WO2018076437A1 (en) | Method and apparatus for human facial mapping | |
CN104423578B (en) | Interactive input system and method | |
US11244511B2 (en) | Augmented reality method, system and terminal device of displaying and controlling virtual content via interaction device | |
CN107577334A (en) | A kind of somatosensory operation method and device of mobile terminal | |
CN103955316B (en) | A kind of finger tip touching detecting system and method | |
CN103793060A (en) | User interaction system and method | |
CN106325509A (en) | Three-dimensional gesture recognition method and system | |
CN104035557B (en) | Kinect action identification method based on joint activeness | |
CN103064514A (en) | Method for achieving space menu in immersive virtual reality system | |
CN104346816A (en) | Depth determining method and device and electronic equipment | |
CN101393497A (en) | Multi-point touch method based on binocular stereo vision | |
KR20130099317A (en) | System for implementing interactive augmented reality and method for the same | |
CN109839827B (en) | Gesture recognition intelligent household control system based on full-space position information | |
JP2017530447A (en) | System and method for inputting a gesture in a 3D scene | |
CN107423392A (en) | Word, dictionaries query method, system and device based on AR technologies | |
CN106201173A (en) | The interaction control method of a kind of user's interactive icons based on projection and system | |
CN113672099A (en) | Electronic equipment and interaction method thereof | |
CN106774938A (en) | Man-machine interaction integrating device based on somatosensory device | |
CN106814963A (en) | A kind of human-computer interaction system and method based on 3D sensor location technologies | |
CN202159302U (en) | Augment reality system with user interaction and input functions |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180112 |