CN107145822A - Method and system for calibrating user somatosensory interaction with an offset depth camera - Google Patents
Method and system for calibrating user somatosensory interaction with an offset depth camera
- Publication number
- CN107145822A CN107145822A CN201710184711.2A CN201710184711A CN107145822A CN 107145822 A CN107145822 A CN 107145822A CN 201710184711 A CN201710184711 A CN 201710184711A CN 107145822 A CN107145822 A CN 107145822A
- Authority
- CN
- China
- Prior art keywords
- depth
- depth camera
- facing
- image
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/80—Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Social Psychology (AREA)
- Psychiatry (AREA)
- Oral & Maxillofacial Surgery (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides a method and system for calibrating user somatosensory interaction with a depth camera that is offset from the main device. The method identifies the user's facing direction and then transforms the acquired depth image, or the extracted posture or action, into a depth image or posture or action aligned with that facing direction. The transformed image, posture, or action is therefore unaffected by the orientation of the depth camera, so the user experience is preserved. The system comprises at least one depth camera and at least one main device; the depth camera includes an image acquisition unit, a depth calculation unit, and an interface unit, and the main device includes a processor, a memory, and an interface unit, so that the method of the invention can be applied in different scenarios.
Description
Technical field
The present invention relates to the field of computer applications, and in particular to a method and system for calibrating user somatosensory interaction with an offset depth camera.
Background technology
A depth camera (3D camera) can acquire a depth image of a human body, from which a human posture or action can be recognized to realize human-computer interaction. In common setups, a depth camera (such as a Kinect or an Astra) is connected to a host device (such as a game console or a smart TV). When a human target is within the measurable range of the depth camera, the camera acquires a depth image containing the human body and transfers it to the host device; after the processor recognizes a human posture or action (such as a gesture) from the depth image, the recognized gesture is matched against preset host commands and the corresponding command is triggered, realizing somatosensory interaction.
When interacting with a main device such as a TV or a robot, the user usually faces the device, while the depth camera is placed on, or integrated into, the device. In other applications, however, the depth camera is separate from the main device and located in a different direction, for example when a single depth camera controls several main devices in different positions. In that case, when the user faces the main device during somatosensory interaction, the user is turned away from the depth camera: a left-right wave, for instance, no longer appears as a left-right wave in the depth image acquired by the camera. This creates a gap between the user's intention and the recognized motion.
Summary of the Invention
To solve the problem that accurate instructions cannot be obtained when the depth camera is offset from the main device, the present invention provides a method and system for calibrating user somatosensory interaction with an offset depth camera.
To solve the above problem, the technical solution adopted by the present invention is as follows:
A method for calibrating user somatosensory interaction with an offset depth camera comprises the following steps:
S1: receiving a first depth image, in the depth camera coordinate system, acquired by the depth camera when a human body facing the main device issues an instruction;
S2: identifying the facing direction of the human body from the first depth image;
S3: converting the first depth image into a second depth image in the main device coordinate system according to the facing direction;
S4: recognizing the instruction represented by the second depth image and outputting the instruction.
Preferably, the facing direction of the human body refers to the direction indicated by the front of the face or the front of the torso of the human body.
Preferably, step S3 comprises the following steps:
S31: determining a facing-direction vector from the facing direction and calculating the orientation angles of that vector in the depth camera coordinate system;
S32: calculating a transformation matrix from the orientation angles;
S33: converting the first depth image into the second depth image according to the transformation matrix.
Preferably, recognizing the instruction represented by the second depth image in step S4 comprises:
S41: extracting the corresponding posture or action from the second depth image;
S42: recognizing the instruction corresponding to the posture or action;
S43: outputting the instruction.
The technical solution adopted by the present invention also includes a method for calibrating user somatosensory interaction with an offset depth camera, comprising the following steps:
T1: receiving a first depth image, in the depth camera coordinate system, acquired by the depth camera when a human body facing the main device issues an instruction, and extracting a first posture or action from the first depth image;
T2: identifying the facing direction of the human body from the first depth image;
T3: converting the first posture or action into a second posture or action in the main device coordinate system according to the facing direction;
T4: recognizing the instruction represented by the second posture or action and outputting the instruction.
Preferably, the posture or action refers to a gesture or a hand motion.
Preferably, the facing direction of the human body refers to the direction indicated by the front of the face or the front of the torso of the human body.
Preferably, step T3 comprises the following steps:
T31: calculating a facing-direction vector from the facing direction and calculating the orientation angles of that vector in the depth camera coordinate system;
T32: calculating a transformation matrix from the orientation angles;
T33: converting the first posture or action into the second posture or action according to the transformation matrix.
The technical solution adopted by the present invention also includes a system applying any of the above methods for calibrating user somatosensory interaction with an offset depth camera, comprising at least one depth camera and at least one main device, where the depth camera includes an image acquisition unit, a depth calculation unit, and an interface unit, and the main device includes a processor, a memory, and an interface unit.
Also provided is a computer-readable storage medium storing a computer program for calibrating user somatosensory interaction with an offset depth camera; when executed by a processor, the program implements any of the methods described above. The beneficial effects of the present invention are: a method for calibrating user somatosensory interaction with an offset depth camera, which extracts the facing direction of the human body from the first depth image acquired by the depth camera when the human body, facing the main device, issues an instruction, and according to that facing direction re-calibrates either the first depth image itself or the first posture or action extracted from it, thereby obtaining the accurate instruction issued by the human body. In this way, a good interactive experience is achieved regardless of where the depth camera is placed.
Brief Description of the Drawings
Fig. 1 is a schematic diagram of a somatosensory interaction scenario according to an embodiment of the present invention.
Fig. 2 is a schematic diagram of a method for calibrating user somatosensory interaction with an offset depth camera according to an embodiment of the present invention.
Fig. 3 is a schematic diagram of converting the first depth image into the second depth image in the method of Fig. 2.
Fig. 4 is a schematic diagram of recognizing and outputting the instruction in the method of Fig. 2.
Fig. 5 is a schematic diagram of a system for calibrating user somatosensory interaction with an offset depth camera according to an embodiment of the present invention.
Fig. 6 is a schematic diagram of another method for calibrating user somatosensory interaction with an offset depth camera according to an embodiment of the present invention.
Fig. 7 is a schematic diagram of converting the first posture or action into the second posture or action in the method of Fig. 6.
In the figures: 1, television; 2, depth camera; 3, user.
Detailed Description
The present invention is described in detail below through specific embodiments with reference to the accompanying drawings, for a better understanding of the invention; the following embodiments, however, do not limit its scope. It should also be noted that the drawings provided with the following embodiments illustrate the basic concept of the invention only schematically: they show only the components relevant to the invention rather than the actual number, shapes, and sizes of components in a real implementation, where the form, quantity, and proportion of each component may vary arbitrarily and the component layout may be more complex.
Fig. 1 is a schematic diagram of a somatosensory interaction scenario according to an embodiment of the present invention. In the figure, a depth camera 2 is connected to a television 1 and is used to acquire a depth image of the target area.
Current depth cameras 2 mainly come in three forms: depth cameras based on binocular vision, depth cameras based on structured light, and depth cameras based on TOF (time of flight). They are briefly described below; any of these forms may be used in the embodiments.
A depth camera based on binocular vision uses binocular vision technology: two cameras at different viewing angles photograph the same space. The pixel offset of the same object between the two captured images is directly related to the depth of the object, so depth information is obtained by computing the pixel disparity with image processing techniques.
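The disparity-to-depth relation just described can be sketched as follows; this is a minimal illustration of classic binocular triangulation, not code from the patent, and the example parameter values are assumptions:

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Classic binocular triangulation: depth is inversely
    proportional to the pixel disparity between the two views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# A 10-pixel disparity with a 600-pixel focal length and a
# 7.5 cm baseline corresponds to a depth of 4.5 m.
print(depth_from_disparity(10.0, 600.0, 0.075))  # 4.5
```

Note how nearby objects (large disparity) give small depths, which is why disparity resolution limits the camera's range accuracy.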
A depth camera based on structured light projects a coded structured-light pattern into the target space, then uses a camera to capture an image of the target space containing the pattern; processing this image, for example by matching it against a reference structured-light image, directly yields the depth information.
A depth camera based on TOF emits laser pulses into the target space; after a pulse is reflected by the target, a receiving unit records its round-trip time, and the depth information of the target is computed from that time.
The first of these three methods typically uses color cameras, so it is affected by illumination, and the computation required to obtain depth is relatively heavy. The latter two typically use infrared light, are not affected by illumination, and require relatively little computation. In an indoor environment, a structured-light or TOF depth camera is the better choice.
The television 1 in Fig. 1 is a smart TV or digital TV in the general sense, and can be regarded as a computing device containing a display, a processor, and various interfaces; most current smart TVs run the ANDROID operating system. The depth camera 2 generally has interfaces such as USB for connecting to the computing device, through which it can also be powered. In Fig. 1, the depth camera 2 is connected to the television 1 and transfers the acquired depth images to the TV, where software stored in the television 1 processes them, for example image denoising and pre-processing, skeleton extraction, and so on; the processing results are then turned into corresponding instructions to control applications in the television 1, such as moving a cursor, selecting, or turning a page.
In alternative embodiments, the television may be replaced by another main device, such as a display, a computer, or a robot.
In this embodiment, the user 3 performs somatosensory interaction facing the main device, television 1; interaction modes include somatosensory games, gesture-controlled cursors, and so on. Traditional somatosensory interaction generally requires the depth camera 2 to be fixed in front of or on top of the television 1, which has the advantage of keeping the front of the depth camera 2 parallel to the front of the television 1. In some scenarios, however, as shown in Fig. 1, the depth camera 2 and the television 1 are in different directions, so there is a certain angle between the front of the depth camera 2 and the front of the television 1.
Two vectors are drawn in Fig. 1 for convenience of explanation: vector a represents the user's facing direction, and vector A represents the direction of the depth camera; the xyz coordinate system in the figure is the coordinate system of the depth camera. The present invention addresses the situation where there is a certain angle between vectors a and A.
In this situation, when the user 3 makes a parallel translation gesture while facing the main device, television 1, the operation is a two-dimensional, in-plane operation with respect to the television 1, but a three-dimensional movement with respect to the depth camera 2. When the three-dimensional movement recognized in the depth camera coordinate system is output to the television 1, the target in the television 1 also performs a three-dimensional operation rather than the two-dimensional operation the user 3 intended. This creates a gap between the user's intention and the actual interaction result, making the interactive experience poor.
As shown in Fig. 2, this embodiment uses the method of the present invention for calibrating user somatosensory interaction with an offset depth camera, which can accurately recognize the instructions of the user 3 and specifically comprises the following steps:
S1: receiving a first depth image, in the depth camera coordinate system, acquired by the depth camera when a human body facing the main device issues an instruction;
By default, the coordinates of each pixel in the first depth image are in the coordinate system of the depth camera.
S2: identifying the facing direction of the human body from the first depth image;
In alternative embodiments, the facing direction differs depending on the application. For applications in which the torso always faces the main device, taking the front of the torso as the facing direction is reasonable; for applications in which the face, rather than the torso, always faces the main device, taking the front of the face as the facing direction is more reasonable. There are many ways to compute the facing-direction vector. For a face, for example, the coordinates of three points, the two eyes and the mouth, can be recognized to determine the face plane, and the vector perpendicular to that plane is taken as the facing-direction vector of the face.
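The face-plane construction above can be sketched with a cross product; this is a minimal illustration under the three-landmark assumption described in the text, and the landmark coordinates are made up for the example:

```python
import numpy as np

def face_normal(eye_l, eye_r, mouth):
    """Facing-direction vector of a face plane defined by three
    landmarks (two eyes and the mouth), as described above.
    The normal is the cross product of two in-plane edges,
    normalized to unit length. Sign depends on landmark order."""
    eye_l, eye_r, mouth = map(np.asarray, (eye_l, eye_r, mouth))
    n = np.cross(eye_r - eye_l, mouth - eye_l)
    return n / np.linalg.norm(n)

# Three landmarks lying in the z = 1 plane give a normal along
# the z axis (sign depends on the ordering chosen above).
print(face_normal([-0.03, 0.0, 1.0], [0.03, 0.0, 1.0], [0.0, -0.06, 1.0]))
```

In practice the landmarks would come from a face detector applied to the first depth image, and the resulting unit vector is the facing-direction vector used in step S3.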
S3: converting the first depth image into a second depth image in the main device coordinate system according to the facing direction;
As shown in Fig. 3, step S3 comprises the following steps:
S31: determining a facing-direction vector from the facing direction and calculating the orientation angles of that vector in the depth camera coordinate system;
S32: calculating a transformation matrix from the orientation angles;
S33: converting the first depth image into the second depth image according to the transformation matrix. From the facing-direction vector, the angle θ between that vector and the direction vector of the depth camera can be obtained; its components about the three axes are (α, β, γ), from which the transformation matrix R is obtained, for example as the product of the elementary rotations about the x, y, and z axes by α, β, and γ.
Multiplying the three-dimensional point of each pixel in the first depth image by the transformation matrix yields the second depth image in the coordinate system of the main device. This depth image is no longer affected by the orientation of the depth camera, so better human-computer interaction can be achieved.
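Steps S31-S33 can be sketched as follows. This is an illustrative implementation, not the patent's: the matrix in the original is shown only as angle components (α, β, γ), so one common Euler-angle composition is assumed here, and the sample angles are made up:

```python
import numpy as np

def rotation_matrix(alpha, beta, gamma):
    """Transformation matrix R built from the orientation angles
    (alpha, beta, gamma) about the x, y, z axes; composed here as
    R = Rz @ Ry @ Rx, one common Euler-angle convention."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cb, sb = np.cos(beta), np.sin(beta)
    cg, sg = np.cos(gamma), np.sin(gamma)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])
    return rz @ ry @ rx

def to_main_device_frame(points, alpha, beta, gamma):
    """S33: multiply the 3-D point of each pixel of the first
    depth image by R to obtain the second depth image (rotation
    only; translation and scaling are ignored, as in this
    embodiment)."""
    return points @ rotation_matrix(alpha, beta, gamma).T

# A user 45 degrees off the camera axis about y: a point on the
# x axis acquires a z component in the main device frame.
pts = np.array([[1.0, 0.0, 0.0]])
print(to_main_device_frame(pts, 0.0, np.pi / 4, 0.0))
```

For a full depth image, `points` would be the (N, 3) array of back-projected pixels; the same matrix applies to every point, so the conversion is a single matrix multiply.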
S4: recognizing the instruction represented by the second depth image and outputting the instruction;
As shown in Fig. 4, recognizing the instruction represented by the second depth image in step S4 comprises:
S41: extracting the corresponding posture or action from the second depth image;
S42: recognizing the instruction corresponding to the posture or action;
S43: outputting the instruction.
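Steps S42 and S43 amount to matching a recognized posture against the preset host commands mentioned in the background section. A minimal sketch; the gesture labels and command names are illustrative assumptions, not taken from the embodiment:

```python
# Hypothetical mapping from recognized gestures to preset host
# commands; both the labels and the commands are assumptions.
GESTURE_COMMANDS = {
    "swipe_left": "PAGE_PREV",
    "swipe_right": "PAGE_NEXT",
    "push": "SELECT",
}

def recognize_and_output(gesture_label):
    """S42/S43: map a gesture extracted from the second depth
    image to its preset instruction and output it."""
    command = GESTURE_COMMANDS.get(gesture_label)
    if command is None:
        return None  # unrecognized gestures trigger nothing
    return command

print(recognize_and_output("swipe_right"))  # PAGE_NEXT
```

Because the lookup runs on the converted (second) depth image, a left-right wave maps to the same label regardless of where the camera sits.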
It is worth noting that the transformation matrix above accounts only for rotation and does not account for translation or scaling; in alternative embodiments, translation and scaling components can also be added to the transformation matrix.
In an alternative embodiment, a computer-readable storage medium stores a computer program for calibrating user somatosensory interaction with an offset depth camera; when executed by a processor, the program implements the methods shown in Fig. 2, Fig. 3, and Fig. 4.
Fig. 5 is a schematic diagram of the system of this embodiment for calibrating user somatosensory interaction with an offset depth camera. The system consists of a main device and a depth camera. The main device includes a processor, a memory, and an interface unit, and is also connected to an external display. The depth camera includes a depth calculation unit, an interface unit, and an image acquisition unit; for structured-light and TOF depth cameras the image acquisition unit includes a light projector and an image sensor, while for a binocular depth camera it includes two image sensors. The main device and the depth camera are connected through the interface units, here a wired USB connection.
Alternative embodiments may, for example, be as follows: the main device may include a display, toward which the user usually faces during somatosensory interaction; the computing device and the depth camera may also be connected by other wired forms, or wirelessly, for example over WIFI.
After the depth camera collects images of the target area, the depth calculation unit computes the depth image of the target area; the calculation differs for depth cameras of different principles. Taking a structured-light depth camera as an example: the structured-light projector in the camera projects a structured-light image into the space; the image captured by the image sensor is passed to the depth calculation unit, which computes the depth image from the structured-light image based on structured-light triangulation. Taking a speckle pattern as the structured-light image, a structured-light image of a plane at a known depth must be collected in advance as the reference image; the depth calculation unit then uses the currently acquired structured-light image and the reference image, computes the offset (deformation) Δ of each pixel with an image matching algorithm, and finally computes the depth using the triangulation principle, for example:
Z_D = B · f · Z_0 / (B · f + Z_0 · Δ)
where Z_D is the depth of the three-dimensional point from the acquisition module, i.e., the depth to be found; B is the distance between the acquisition camera and the structured-light projector; Z_0 is the depth of the reference image from the acquisition module; and f is the focal length of the lens in the acquisition camera. The reference image and parameters such as B and f are stored in the memory in advance.
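The per-pixel depth recovery just described can be sketched as follows. The sign convention for the offset Δ is an assumption (the original formula was lost in extraction, so the standard reference-plane triangulation relation is used), and the parameter values are made up:

```python
def speckle_depth(delta_px, baseline_m, focal_px, ref_depth_m):
    """Structured-light triangulation as described above: recover
    Z_D from the per-pixel offset delta between the current
    speckle image and the reference image at known depth Z_0.
    Positive delta shrinking the depth is an assumed convention."""
    return (baseline_m * focal_px * ref_depth_m /
            (baseline_m * focal_px + ref_depth_m * delta_px))

# Zero offset means the point lies exactly in the reference plane.
print(speckle_depth(0.0, 0.075, 600.0, 2.0))  # 2.0
```

Running this over the whole disparity map produced by the matching algorithm yields the depth image that the depth calculation unit sends to the main device.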
In this embodiment the memory is a flash memory; in alternative embodiments it may be another kind of non-volatile memory, and if the depth camera is integrated into the computing device, this memory may be merged with the memory of the computing device.
After the depth calculation unit computes the depth image of the target area, the depth image is transferred to the computing device via the interface unit. There it may be stored temporarily in the memory and then processed by the processor of the main device; the processing here includes depth image pre-processing, human body recognition, skeleton extraction, and so on.
In alternative embodiments of the present invention, the depth image may also be processed in real time directly by the processor of the main device; the processing programs (such as depth image pre-processing, human body recognition, and skeleton extraction) are stored in advance in the memory of the main device. The processor calls these programs to process the depth image and finally outputs instructions that control other applications.
In this embodiment, to solve the problem described above, that the user's real motion and the virtual manipulation of the target are inconsistent, the processor must further process the depth image acquired by the depth camera. The root cause of the problem is that the orientation of the depth camera is inconsistent with the orientation of the main device.
As shown in Fig. 6, this embodiment can also solve the above problem with the other method of the present invention, in which the depth image is first processed to obtain a posture or action, and the transformation is applied afterwards. It specifically comprises the following steps:
T1: receiving a first depth image, in the depth camera coordinate system, acquired by the depth camera when a human body facing the main device issues an instruction, and extracting a first posture or action from the first depth image; here the posture or action refers to a gesture or hand motion;
T2: identifying the facing direction of the human body from the first depth image;
In alternative embodiments, the facing direction differs depending on the application. For applications in which the torso always faces the main device, taking the front of the torso as the facing direction is reasonable; for applications in which the face, rather than the torso, always faces the main device, taking the front of the face as the facing direction is more reasonable. There are many ways to compute the facing-direction vector. For a face, for example, the coordinates of three points, the two eyes and the mouth, can be recognized to determine the face plane, and the vector perpendicular to that plane is taken as the facing-direction vector of the face.
T3: converting the first posture or action into a second posture or action in the main device coordinate system according to the facing direction;
As shown in Fig. 7, step T3 comprises the following steps:
T31: calculating a facing-direction vector from the facing direction and calculating the orientation angles of that vector in the depth camera coordinate system;
T32: calculating a transformation matrix from the orientation angles;
T33: converting the first posture or action into the second posture or action according to the transformation matrix.
T4: recognizing the instruction represented by the second posture or action and outputting the instruction.
Compared with the method shown in Fig. 2, the method shown in Fig. 6 requires less computation: after the depth image is obtained, the first posture or action is extracted from it; the transformation matrix is then computed; and finally the first posture or action is converted, according to the transformation matrix, into the second posture or action relative to the facing direction of the human body.
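The computational saving comes from rotating only a handful of skeleton joints instead of every pixel of the depth image. A minimal sketch, assuming a simple rotation about the y axis and a hypothetical pair of hand joints:

```python
import numpy as np

def rotate_y(beta):
    """Rotation about the y axis, standing in for the full
    transformation matrix of steps T31/T32."""
    cb, sb = np.cos(beta), np.sin(beta)
    return np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])

def transform_action(joints, beta):
    """T33: convert a first action (a small set of joint
    positions; these hand joints are illustrative) into the
    second action in the main device frame. Rotating ~20 joints
    is far cheaper than rotating every pixel of a 640x480 image."""
    return joints @ rotate_y(beta).T

hand = np.array([[0.2, 1.1, 2.0], [0.25, 1.15, 2.0]])  # made-up joints
print(transform_action(hand, np.pi / 4).shape)  # (2, 3)
```

The same matrix could equally be applied to the whole image as in the first method; applying it after skeleton extraction simply moves the cheap operation to where fewer points remain.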
The memory in the computing device is used to store the operating system and applications; the processor sends corresponding instructions after processing the depth image, and the applications are further controlled by those instructions. The display is used to show the applications.
In an alternative embodiment, a computer-readable storage medium stores a computer program for calibrating user somatosensory interaction with an offset depth camera; when executed by a processor, the program implements the methods shown in Fig. 6 and Fig. 7.
In another alternative embodiment, the memory of the main device stores an operating system and applications implementing the methods shown in Fig. 2, Fig. 3, Fig. 4 and in Fig. 6, Fig. 7, and the processor executes the steps of both methods in parallel, taking whichever instruction is output first; or the processor calibrates the instructions obtained by the two methods against each other before outputting the final instruction.
The above are some alternative embodiments; other adaptations that do not change the basic idea of the method of the present invention are regarded as falling within the scope of protection of the present invention.
In a large supermarket or shopping mall, a television seller needs to show customers a wall of televisions of different resolutions and configurations. To make the demonstration more striking, the differently configured televisions can all be made to display the same picture. The depth camera can then be placed in a direction different from the wall on which the televisions hang; using either of the two methods of the present invention, one depth camera controls all the televisions at once, for example sending an instruction to all of them to switch the picture.
The depth camera is connected to the televisions through interfaces. In use, the salesperson makes a gesture toward the wall of televisions; the depth camera captures the salesperson's depth image, either of the methods described above processes the depth image to obtain the instruction and output it, and the instruction then controls the applications of all the televisions. This controls all the televisions simultaneously, which is far more convenient than the traditional need to adjust each one individually with a remote control.
In alternative embodiments, the computing device and the depth camera may also be connected by other wired forms, or wirelessly, for example over WIFI. The televisions may also be controlled selectively by defining different instructions.
This embodiment is one usage scenario but is not limited to it; alternative embodiments can be extended to other usage scenarios, such as controlling multiple interfaces simultaneously or separately with one depth camera, all of which fall within the scope of protection of the present invention.
The above further describes the present invention in combination with specific preferred embodiments, but the specific implementation of the present invention is not limited to these descriptions. For those skilled in the art, several equivalent substitutions or obvious modifications with the same performance or use can be made without departing from the concept of the present invention, and all of them should be regarded as falling within the scope of protection of the present invention.
Claims (10)
1. A method for calibrating user somatosensory interaction with an offset depth camera, characterized by comprising the following steps:
S1: receiving a first depth image, in the depth camera coordinate system, acquired by the depth camera when a human body facing the main device issues an instruction;
S2: identifying the facing direction of the human body from the first depth image;
S3: converting the first depth image into a second depth image in the main device coordinate system according to the facing direction;
S4: recognizing the instruction represented by the second depth image and outputting the instruction.
2. The method for calibrating user somatosensory interaction with an offset depth camera according to claim 1, characterized in that the facing direction of the human body refers to the direction indicated by the front of the face or the front of the torso of the human body.
3. The method for calibrating user somatosensory interaction with an offset depth camera according to claim 1, characterized in that step S3 comprises the following steps:
S31: determining a facing-direction vector from the facing direction and calculating the orientation angles of that vector in the depth camera coordinate system;
S32: calculating a transformation matrix from the orientation angles;
S33: converting the first depth image into the second depth image according to the transformation matrix.
4. The user somatosensory interaction calibration method deviating from a depth camera according to claim 1, characterized in that recognizing the instruction represented by the second depth image in step S4 comprises:
S41: extracting the corresponding posture or action from the second depth image;
S42: recognizing the instruction corresponding to the posture or action;
S43: outputting the instruction.
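A minimal sketch of steps S41-S43 as a lookup from a recognized posture or action to an instruction. The gesture names and instruction codes below are hypothetical, chosen only to illustrate the recognize-then-output flow; the patent does not specify a concrete mapping.

```python
# Hypothetical gesture-to-instruction table (illustrative names only).
GESTURE_INSTRUCTIONS = {
    "swipe_left": "PREV_PAGE",
    "swipe_right": "NEXT_PAGE",
    "push": "SELECT",
}

def recognize_and_output(gesture, emit=print):
    """S42-S43 (sketch): look up the instruction corresponding to an
    extracted gesture and output it; unknown gestures yield nothing."""
    instruction = GESTURE_INSTRUCTIONS.get(gesture)
    if instruction is not None:
        emit(instruction)
    return instruction
```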
5. A user somatosensory interaction calibration method deviating from a depth camera, characterized by comprising the following steps:
T1: receiving a first depth image, in the depth camera coordinate system, acquired by the depth camera when a human body facing a main device issues an instruction, and extracting a first posture or action from the first depth image;
T2: identifying the facing direction of the human body according to the first depth image;
T3: converting the first posture or action into a second posture or action in the main device coordinate system according to the facing direction;
T4: recognizing the instruction represented by the second posture or action and outputting the instruction.
6. The user somatosensory interaction calibration method deviating from a depth camera according to claim 5, characterized in that the posture or action refers to a gesture or a hand motion.
7. The user somatosensory interaction calibration method deviating from a depth camera according to claim 5, characterized in that the facing direction of the human body refers to the direction indicated by the front of the human body's face or the front of the human body's trunk.
8. The user somatosensory interaction calibration method deviating from a depth camera according to claim 5, characterized in that step T3 comprises the following steps:
T31: calculating a direction vector of the facing direction from the facing direction, and calculating the offset angle of the direction vector in the depth camera coordinate system;
T32: calculating a transition matrix using the offset angle;
T33: converting the first posture or action into the second posture or action according to the transition matrix.
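Unlike claim 3, steps T31-T33 transform the extracted posture or action rather than the whole depth image. Under the assumption (not stated in the claims) that an action is represented as a sequence of 3D joint positions with shape (frames, joints, 3), step T33 might look like the sketch below, where R is the transition matrix of step T32.

```python
import numpy as np

def transform_action(trajectory, R):
    """T33 (sketch): apply the transition matrix R to every joint
    position of a first action (frames x joints x 3 array), giving the
    second action expressed in the main device coordinate system."""
    # result[t, k, i] = sum_j R[i, j] * trajectory[t, k, j]
    return np.einsum('ij,tkj->tki', R, trajectory)
```

Transforming only the joint trajectory instead of the full image is cheaper, since a skeleton has tens of points versus hundreds of thousands of depth pixels.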
9. A user somatosensory interaction calibration system deviating from a depth camera, using the method according to any one of claims 1-8, characterized by comprising at least one depth camera and at least one main device, wherein the depth camera comprises an image acquisition unit, a depth calculation unit, and an interface unit, and the main device comprises a processor, a memory, and an interface unit.
10. A computer-readable storage medium containing a computer program, the computer program being executed by a computer to implement the method according to any one of claims 1-8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710184711.2A CN107145822B (en) | 2017-03-24 | 2017-03-24 | User somatosensory interaction calibration method and system deviating from depth camera |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107145822A true CN107145822A (en) | 2017-09-08 |
CN107145822B CN107145822B (en) | 2021-01-22 |
Family
ID=59783466
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710184711.2A Active CN107145822B (en) | 2017-03-24 | 2017-03-24 | User somatosensory interaction calibration method and system deviating from depth camera |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107145822B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2011089538A1 (en) * | 2010-01-25 | 2011-07-28 | Naveen Chawla | A stereo-calibration-less multiple-camera human-tracking system for human-computer 3d interaction |
CN202003298U (en) * | 2010-12-27 | 2011-10-05 | 韩旭 | Three-dimensional uncalibrated display interactive device |
CN102221886A (en) * | 2010-06-11 | 2011-10-19 | 微软公司 | Interacting with user interface through metaphoric body |
CN102841679A (en) * | 2012-05-14 | 2012-12-26 | 乐金电子研发中心(上海)有限公司 | Non-contact man-machine interaction method and device |
2017-03-24: CN application CN201710184711.2A filed; granted as patent CN107145822B (status: Active)
Non-Patent Citations (1)
Title |
---|
GUO Xing et al.: "An implementation method for a large-screen human-computer interaction system," Computer Engineering and Applications (《计算机工程与应用》) *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108038885A (en) * | 2017-11-29 | 2018-05-15 | 深圳奥比中光科技有限公司 | More depth camera scaling methods |
WO2019218521A1 (en) * | 2018-05-14 | 2019-11-21 | Boe Technology Group Co., Ltd. | Gesture recognition apparatus, control method thereof, and display apparatus |
US11314334B2 (en) | 2018-05-14 | 2022-04-26 | Boe Technology Group Co., Ltd. | Gesture recognition apparatus, control method thereof, and display apparatus |
CN109126116A (en) * | 2018-06-01 | 2019-01-04 | 成都通甲优博科技有限责任公司 | A kind of body-sensing interactive approach and its system |
WO2019232719A1 (en) * | 2018-06-06 | 2019-12-12 | Li Xiuqiu | Artistic intelligence television apparatus |
CN113096193A (en) * | 2021-04-30 | 2021-07-09 | 维沃移动通信(杭州)有限公司 | Three-dimensional somatosensory operation identification method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN107145822B (en) | 2021-01-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11394950B2 (en) | Augmented reality-based remote guidance method and apparatus, terminal, and storage medium | |
US11308347B2 (en) | Method of determining a similarity transformation between first and second coordinates of 3D features | |
CN107145822A (en) | Deviate the method and system of user's body feeling interaction demarcation of depth camera | |
US10410089B2 (en) | Training assistance using synthetic images | |
JP7026825B2 (en) | Image processing methods and devices, electronic devices and storage media | |
KR20170031733A (en) | Technologies for adjusting a perspective of a captured image for display | |
US20130069876A1 (en) | Three-dimensional human-computer interaction system that supports mouse operations through the motion of a finger and an operation method thereof | |
CN112652016A (en) | Point cloud prediction model generation method, pose estimation method and device | |
CN104656893A (en) | Remote interaction control system and method for physical information space | |
CN107133984A (en) | The scaling method and system of depth camera and main equipment | |
US20190340773A1 (en) | Method and apparatus for a synchronous motion of a human body model | |
US20160014391A1 (en) | User Input Device Camera | |
KR101256046B1 (en) | Method and system for body tracking for spatial gesture recognition | |
WO2015093130A1 (en) | Information processing device, information processing method, and program | |
CN111860252A (en) | Image processing method, apparatus and storage medium | |
Schütt et al. | Semantic interaction in augmented reality environments for microsoft hololens | |
WO2018006481A1 (en) | Motion-sensing operation method and device for mobile terminal | |
Zhang et al. | Virtual reality aided high-quality 3D reconstruction by remote drones | |
US9767580B2 (en) | Apparatuses, methods, and systems for 2-dimensional and 3-dimensional rendering and display of plenoptic images | |
CN111179341B (en) | Registration method of augmented reality equipment and mobile robot | |
CN112073640A (en) | Panoramic information acquisition pose acquisition method, device and system | |
Chamzas et al. | 3D augmented reality tangible user interface using commodity hardware | |
CN109816723A (en) | Method for controlling projection, device, projection interactive system and storage medium | |
WO2016185634A1 (en) | Information processing device | |
JP5520772B2 (en) | Stereoscopic image display system and display method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
CB02 | Change of applicant information | |
Address after: 11-13/F, Joint Headquarters Building, High-tech Zone, 63 Xuefu Road, Yuehai Street, Nanshan District, Shenzhen, Guangdong 518000
Applicant after: Obi Zhongguang Technology Group Co., Ltd
Address before: A808, Zhongdi Building, Industry-University-Research Base, China University of Geosciences, No. 8 Yuexing Third Road, Nanshan District, Shenzhen, Guangdong 518000
Applicant before: SHENZHEN ORBBEC Co., Ltd.
|
GR01 | Patent grant | |