CN103810480A - Method for detecting gesture based on RGB-D image - Google Patents

Method for detecting gesture based on RGB-D image Download PDF

Info

Publication number
CN103810480A
CN103810480A (application CN201410073064.4A; granted as CN103810480B)
Authority
CN
China
Prior art keywords
image
rgb
gesture
depth
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410073064.4A
Other languages
Chinese (zh)
Other versions
CN103810480B (en)
Inventor
张维忠
丁洁玉
赵志刚
张峰
李明
王青林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Micagent Technology Co ltd
Original Assignee
QINGDAO ANIMATION
Qingdao Broadcasting And Tv Wireless Media Group Co ltd
Qingdao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by QINGDAO ANIMATION, Qingdao Broadcasting And Tv Wireless Media Group Co ltd, Qingdao University filed Critical QINGDAO ANIMATION
Priority to CN201410073064.4A priority Critical patent/CN103810480B/en
Publication of CN103810480A publication Critical patent/CN103810480A/en
Application granted granted Critical
Publication of CN103810480B publication Critical patent/CN103810480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Abstract

The invention provides a method for detecting a gesture based on an RGB-D image. The method comprises the following steps: step 1, acquiring the RGB-D image; step 2, segmenting the hands from the background; step 3, identifying the gesture; step 4, finding the optimal segmentation of the gesture. The gesture detection based on an RGB-D image provided by the invention can effectively segment human hand regions with accurate segmentation, obtains a good gesture segmentation even when the hand is partially self-occluded or other people interfere in the background, and the algorithm is robust.

Description

Method for detecting a gesture based on an RGB-D image
Technical field
The present invention relates to the field of digital image processing, and in particular to a method for detecting a gesture based on an RGB-D image.
Background technology
A human-machine interface should be as intuitive and natural as possible. Users should be able to interact with a machine without cumbersome equipment (such as color markers or gloves) or devices such as remote controls, mice, and keyboards. Gestures offer a simple way of communicating that combines naturally with machine intelligence, and successfully applied gesture systems can be found in many research and industrial fields, for example game control, virtual environments, smart homes, and sign language recognition.
The quality of gesture segmentation directly affects the precision and accuracy of subsequent gesture feature extraction, tracking, and recognition. In recent years, researchers at home and abroad have proposed a variety of gesture segmentation methods, mainly template matching, image differencing, skin-color segmentation, and constrained methods. Template matching is built on a hand-shape database: the gesture image is compared against the templates in the database. The hand is a non-rigid object, so the matching process is computationally expensive and difficult, and it is hard to meet real-time requirements. Constrained methods rely on wearing gloves of different colors, or otherwise heightening the contrast between hand and background, to simplify the separation of the gesture region (foreground) from the background; however, such constraints sacrifice the convenience and freedom of gesture input. Image differencing subtracts a static background image from the moving gesture image; its defect is that it cannot overcome offsets of corresponding image points. Skin-color segmentation exploits the clustering property of skin color, but is strongly affected by the angle of the gesture relative to the light source. For vision-based gesture recognition that must be fast, simple, and practical, each of these methods used alone has limitations and cannot segment gestures accurately and effectively in real time, seriously degrading the segmentation result. Patent CN103226708A also combines a depth image with a color image for gesture segmentation, but it presupposes that the hand is the part of the human body closest to the camera. Others have proposed similar approaches that first require calibrating the RGB camera against the depth camera, which adds complexity and tedium to the algorithm.
Summary of the invention
The technical problem to be solved by the invention is to overcome the defects of the gesture detection methods described above and provide a method for detecting a gesture based on an RGB-D image that can effectively segment the human hand region, segments accurately, obtains a good gesture segmentation even when the hand is partially self-occluded or other people interfere in the background, and whose algorithm is robust.
To solve the above technical problem, the invention provides a method for detecting a gesture based on an RGB-D image, which comprises:
Step 1: acquiring an RGB-D image;
Step 2: segmenting the hand from the background;
Step 3: identifying the gesture;
Step 4: finding the optimal segmentation of the gesture.
In step 1, a depth sensor is used to acquire a color image (RGB Image) stream and a depth image (Depth Image) stream, i.e. an RGB-D data stream, which is converted into individual frames for subsequent image processing.
In step 2, the hand position is mapped onto the depth image through the pixel ratio between the skeleton map and the depth image, and the depth information is used to segment the hand from the background.
In step 3, convex optimization is used to segment the RGB-D gesture image, so that the gesture is identified quickly and accurately.
In step 4, a minimization functional and its constraints are used, the model is solved with the fast Split Bregman algorithm, and the optimal segmentation of the RGB-D image is found.
Beneficial effects of the invention:
The RGB-D gesture detection method provided by the invention can effectively segment the human hand region with accurate segmentation, obtains a good gesture segmentation even when the hand is partially self-occluded or other people interfere in the background, and the algorithm is robust.
Brief description of the drawings
Figs. 1a-1e show segmentation results based on the color image, the depth image, and the RGB-D image; Fig. 1a: color image; Fig. 1b: depth image; Fig. 1c: segmentation result of the color image; Fig. 1d: segmentation result of the depth image; Fig. 1e: segmentation result of the RGB-D image;
Figs. 2a-2e show segmentation results based on the color image, the depth image, and the RGB-D image in a second situation; Fig. 2a: color image; Fig. 2b: depth image; Fig. 2c: segmentation result of the color image; Fig. 2d: segmentation result of the depth image; Fig. 2e: segmentation result of the RGB-D image.
Embodiment
The invention provides a method for detecting a gesture based on an RGB-D image, which comprises:
Step 1: acquiring an RGB-D image;
Step 2: segmenting the hand from the background;
Step 3: identifying the gesture;
Step 4: finding the optimal segmentation of the gesture.
In step 1, a depth sensor is used to acquire a color image (RGB Image) stream and a depth image (Depth Image) stream, i.e. an RGB-D data stream, which is converted into individual frames for subsequent image processing.
A depth sensor can acquire depth image and RGB color image data simultaneously, supports real-time whole-body and skeleton tracking, and can recognize a range of postures and actions; in this application it is used to obtain gesture data.
The goal of gesture detection is to segment the hand region effectively from the original image, that is, to distinguish the human hand region (foreground) in the image from everything else (background); this is an essential first task of gesture recognition. The depth sensor can analyze depth data and detect the silhouette of a person or player. Through it, color and depth data streams are obtained and converted into individual frames for subsequent image processing. The input RGB image and the depth image are required to be pixel-aligned and time-synchronized. Once images meeting these conditions have been obtained, the input is pre-processed, for example by filtering, to suppress noise.
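As a concrete illustration of this acquisition and pre-processing step, the sketch below simulates a pixel-aligned, time-synchronized RGB/depth frame pair and suppresses depth noise with a small median filter. It is a minimal NumPy sketch: a real sensor (e.g. a Kinect) is driven through its own SDK, and `acquire_frame` here merely fabricates data for illustration.

```python
import numpy as np

def acquire_frame(h=4, w=4, seed=0):
    """Stand-in for a depth-sensor frame grab: returns a pixel-aligned,
    time-synchronized (RGB, depth) pair. Purely synthetic data."""
    rng = np.random.default_rng(seed)
    rgb = rng.integers(0, 256, size=(h, w, 3), dtype=np.uint8)
    depth = rng.uniform(0.5, 4.0, size=(h, w))   # depth in metres
    return rgb, depth

def median_filter3(depth):
    """3x3 median filter to suppress depth noise (the pre-processing step);
    borders are handled by edge replication."""
    padded = np.pad(depth, 1, mode="edge")
    h, w = depth.shape
    windows = [padded[i:i + h, j:j + w] for i in range(3) for j in range(3)]
    return np.median(np.stack(windows), axis=0)

rgb, depth = acquire_frame()
assert rgb.shape[:2] == depth.shape   # the required pixel alignment
smooth = median_filter3(depth)
```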
In step 2, the hand position is mapped onto the depth image through the pixel ratio between the skeleton map and the depth image, and the depth information is used to segment the hand from the background.
Both the color image and the depth image can be used for gesture segmentation. The color image is sharp but contains only two-dimensional information and is weakly robust to interference. The depth image has lower resolution than the color image but carries three-dimensional information and is strongly robust to interference. Because the skeleton map tracks the coordinates of the hands, the hand's position in the skeleton map is easy to determine. The hand position is then mapped onto the depth image through the pixel ratio between the skeleton map and the depth image, and the depth information is used to segment the hand from the background. Because the depth image has low resolution and is disturbed by other objects at the same depth, this segmentation alone is unsatisfactory. Therefore, this application proposes a detection method combining the depth image and the color image.
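The mapping-and-thresholding step just described can be sketched as follows. The joint coordinate, the pixel-ratio `scale`, and the ±10 cm depth band are illustrative assumptions of this sketch, not values from the patent:

```python
import numpy as np

def segment_hand(depth, hand_rc, scale, band=0.10):
    """Map a skeleton-space hand coordinate to depth-image pixels via the
    pixel ratio `scale`, then keep pixels whose depth lies within `band`
    metres of the hand joint's depth. Names and values are illustrative."""
    row, col = int(hand_rc[0] * scale), int(hand_rc[1] * scale)
    d_hand = depth[row, col]
    return np.abs(depth - d_hand) <= band

# Toy scene: background wall at 3 m, a 2x2 "hand" patch at 1 m.
depth = np.full((6, 6), 3.0)
depth[1:3, 1:3] = 1.0
mask = segment_hand(depth, hand_rc=(2.0, 2.0), scale=1.0)
```

As the surrounding text notes, such a mask also captures any other object in the same depth band, which is why the method goes on to fuse color and depth cues.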
In step 3, convex optimization is used to segment the RGB-D gesture image, so that the gesture is identified quickly and accurately.
For the segmentation optimization, we formulate the partition of the image as the minimization of a functional:

E(u) = ∫_Ω f(x)u(x) dx + ∫_Ω |Du(x)|   (1)

where u ∈ BV(ℝ^d; {0,1}) is a binary indicator function of bounded variation; u = 1 and u = 0 denote the inside and outside of a surface in ℝ^d, i.e. a set of closed boundaries in the two-dimensional case or a set of closed surfaces in the three-dimensional case. The second term of (1) is the total variation, where Du denotes the distributional derivative; for a differentiable u it reduces to ∫_Ω |∇u(x)| dx. Relaxing the binary constraint so that u takes values between 0 and 1 turns the problem into minimizing the convex functional (1) over the convex set BV(ℝ^d; [0,1]).
By convex optimization followed by thresholding, with the functional posed in a spatially continuous setting, global optimization can be achieved. A thresholding theorem guarantees that the relaxed solution u*, once thresholded, remains globally optimal for the original binary labeling problem: compute the global minimizer u* of (1) over the convex set BV(ℝ^d; [0,1]) and threshold it at any value θ ∈ (0, 1).
Because the RGB-D image supplies additional depth information, the boundary length can be measured in world coordinates, weighted by the depth d(x), rather than in the image domain. Functional (1) is then generalized to:

E(u) = ∫_Ω f(x)u(x) dx + ∫_Ω d(x)|Du(x)|   (2)

where the depth value d : Ω → ℝ compensates for the distortion caused by perspective projection (the farther an object is from the camera, the smaller its image appears).
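On a discrete grid, the depth-weighted functional (2) can be evaluated as below. The forward-difference gradient and the replicated boundary are our own discretization choices, not specified in the patent:

```python
import numpy as np

def grad_mag(u):
    """Forward-difference gradient magnitude with replicated (Neumann) boundary."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return np.hypot(gx, gy)

def energy(u, f, d):
    """Discrete E(u) = sum f*u + sum d*|Du|, i.e. formula (2);
    taking d = 1 everywhere recovers the unweighted functional (1)."""
    return float(np.sum(f * u) + np.sum(d * grad_mag(u)))

# Boundary term of a 2x2 indicator, unweighted vs. weighted by depth 2 m:
u = np.zeros((4, 4)); u[1:3, 1:3] = 1.0
E1 = energy(u, f=np.zeros((4, 4)), d=np.ones((4, 4)))
E2 = energy(u, f=np.zeros((4, 4)), d=2.0 * np.ones((4, 4)))
```

Doubling d doubles the boundary cost, which is exactly the compensation for perspective the text describes: a region twice as far away, and thus half as long in the image, receives the same world-scale boundary penalty.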
In step 4, a minimization functional and its constraints are used, the model is solved with the fast Split Bregman algorithm, and the optimal segmentation of the RGB-D image is found.
For the function constraints on the RGB-D image, we explain how moment constraints on the segmentation, derived from the depth information, enter the convex optimization. We represent a segmentation by a bounded-variation binary labeling function u ∈ B = BV(Ω; [0,1]) defined over the whole image domain Ω.
Area constraint: the zeroth-order moment of the region described by u can be computed by

Area(u) := ∫_Ω d²(x)u(x) dx   (3)

where d(x) gives the depth of pixel x. Suppose d(x) = K·D(x), where K is the focal length of the camera and D(x) is the measured depth of the pixel. Then d²(x) is the size of the patch in 3D space that projects onto pixel x, so the integral measures surface area in space rather than the projected region in the image. Following the method of Grenander, U., Chow, Y., Keenan, D.M.: Hands: A Pattern Theoretic Study of Biological Shapes. Springer, New York (1991), all pixels are treated in the same way.
The absolute area of the shape u is confined between constants c₁ and c₂ by restricting u to the set defined in (4):

C₀ = {u ∈ B | c₁ ≤ Area(u) ≤ c₂}   (4)

Since Area(u) depends linearly on u, the set C₀ is convex for any constants c₂ ≥ c₁ ≥ 0.
Usually, the area is fixed exactly by setting c₁ = c₂, or bounded from above and below, or a soft area constraint is applied by extending functional (1) as in (5):

E_total(u) = E(u) + λ(∫_Ω d²(x)u(x) dx − c)²   (5)

The soft-constraint weight λ > 0 pushes the estimated area of the shape toward c ≥ 0. Formula (5) is still a convex functional.
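A numerical reading of the area constraint: the snippet below evaluates Area(u) of formula (3) and the softly constrained energy of formula (5) on a toy mask. The discretization (a plain sum over pixels) is our assumption:

```python
import numpy as np

def area(u, d):
    """Formula (3): Area(u) = sum d(x)^2 * u(x). A pixel at depth d backs onto
    a surface patch whose size grows like d^2, so this measures physical area."""
    return float(np.sum(d ** 2 * u))

def soft_constrained_energy(E_u, u, d, c, lam):
    """Formula (5): E_total = E(u) + lam * (Area(u) - c)^2. Area(u) is linear
    in u, so the added quadratic penalty keeps the functional convex."""
    return E_u + lam * (area(u, d) - c) ** 2

# 2x2 mask at uniform depth 2 m: each pixel contributes 2^2 = 4, so Area = 16.
u = np.zeros((4, 4)); u[1:3, 1:3] = 1.0
d = np.full((4, 4), 2.0)
A = area(u, d)
```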
The fast Split Bregman algorithm exploits the equivalence between maximizing a likelihood function and maximizing its natural logarithm. This application first applies the splitting method to RGB-D image segmentation, setting up the following general model:

min_{ω, u∈{0,1}} E(ω, u) = α₁∫_Ω Q₁(x, ω₁)u dxdy + α₂∫_Ω Q₂(x, ω₂)(1 − u) dxdy + γ∫_Ω |∇u| dxdy   (7)

where Qᵢ = −ln Pᵢ (i = 1, 2) are the negative log-likelihoods of the two regions, ω = (μ, σ) are the region parameters that maximize Pᵢ, and u is the binary labeling function representing the evolving curve.
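The data terms Qᵢ = −ln Pᵢ can be computed from per-region Gaussian models whose parameters ω = (μ, σ) are the maximum-likelihood estimates from sample pixels. The sketch below, with made-up sample values, shows that Q is small where a pixel fits the region model and large where it does not:

```python
import numpy as np

def neg_log_gaussian(I, mu, sigma):
    """Q(x) = -ln P(I(x) | mu, sigma) for a scalar Gaussian region model.
    For RGB-D feature vectors this would be summed over channels."""
    return 0.5 * np.log(2 * np.pi * sigma ** 2) + (I - mu) ** 2 / (2 * sigma ** 2)

# ML parameters estimated from (made-up) foreground sample pixels:
fg = np.array([0.9, 1.0, 1.1])
mu, sigma = fg.mean(), fg.std()

I = np.array([1.0, 3.0])          # one pixel like the region, one unlike it
Q1 = neg_log_gaussian(I, mu, sigma)
```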
This application incorporates the idea of the Split Bregman algorithm into the general RGB-D segmentation model. On the basis of the splitting method, a splitting variable w = [w₁, w₂]ᵀ and a Bregman variable b = (b₁, b₂)ᵀ are introduced, converting the extremal problem of (7) into:

b^{k+1} = b^k + ∇u^k − w^k   (8)

(u^{k+1}, w^{k+1}) = argmin_{w, u∈[0,1]} E(u, w) = γ∫_Ω |w| dxdy + (θ/2)∫_Ω (w − ∇u − b^{k+1})² dxdy + ∫_Ω r(u₁, u₂)u dxdy   (9)
where r(u₁, u₂) = α₁Q₁(x, ω₁) − α₂Q₂(x, ω₂). Formula (9) seeks the extremum of an energy functional of two variables and is usually solved by alternating optimization. First, with w held fixed, the problem reduces to an extremal problem in u:

min_u E(u) = (θ/2)∫_Ω (w − ∇u − b^{k+1})² dxdy + ∫_Ω r(u₁, u₂)u dxdy   (10)
Then, with u held fixed, the extremal problem in w is solved:

min_w E(w) = γ∫_Ω |w| dxdy + (θ/2)∫_Ω (w − ∇u − b^{k+1})² dxdy   (11)
The calculus of variations yields the Euler-Lagrange equation of energy functional (10):

r(u₁, u₂) − θ∇·(∇u + b^{k+1} − w^k) = 0  in Ω,   (∇u + b^{k+1} − w^k)·n = 0  on ∂Ω   (12)
Formula (12) can be solved with a fast Gauss-Seidel iteration. Because the range of u after convex relaxation is [0, 1], the following projection is used to keep u within this range:

u^{k+1} = max(min(u^{k+1}, 1), 0)   (13)
After energy functional (10) has been solved, energy functional (11) is solved. The Euler-Lagrange equation of (11) is:

w = ∇u^{k+1} + b^{k+1} − (γ/θ)·w/|w|   (14)
Its analytic solution is obtained from the generalized soft-thresholding formula:

w^{k+1} = max(|∇u^{k+1} + b^{k+1}| − γ/θ, 0) · (∇u^{k+1} + b^{k+1}) / |∇u^{k+1} + b^{k+1}|   (15)
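Putting formulas (8)-(15) together, the following toy implementation alternates the Bregman update (8), a few Jacobi sweeps for the u-subproblem (12) (standing in for the Gauss-Seidel iteration), the projection (13), and the soft-thresholding (15), then thresholds u at 0.5 per the thresholding theorem. The data term r and all parameter values are fabricated for illustration, and the boundary handling mixes replicated and periodic conditions for brevity:

```python
import numpy as np

def grad(u):
    # Forward differences with replicated (Neumann) boundary.
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return np.stack([gx, gy])

def div(p):
    # Backward-difference divergence, the (negative) adjoint of `grad`.
    px, py = p
    dx = np.empty_like(px); dx[:, 0] = px[:, 0]; dx[:, 1:] = np.diff(px, axis=1)
    dy = np.empty_like(py); dy[0, :] = py[0, :]; dy[1:, :] = np.diff(py, axis=0)
    return dx + dy

def split_bregman_segment(r, gamma=1.0, theta=2.0, outer=40, inner=25):
    """Minimize sum(r*u) + gamma*TV(u) over u in [0,1] via Split Bregman,
    then threshold at 0.5. A toy sketch of formulas (8)-(15)."""
    u = np.zeros_like(r)
    w = np.zeros((2,) + r.shape)
    b = np.zeros_like(w)
    for _ in range(outer):
        b = b + grad(u) - w                                   # formula (8)
        for _ in range(inner):                                # Jacobi sweeps for (12)
            nbrs = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                    + np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u = (nbrs + div(b - w) - r / theta) / 4.0
            u = np.clip(u, 0.0, 1.0)                          # projection (13)
        g = grad(u) + b
        mag = np.maximum(np.sqrt((g ** 2).sum(axis=0)), 1e-12)
        w = np.maximum(mag - gamma / theta, 0.0) * g / mag    # shrinkage (15)
    return (u > 0.5).astype(np.uint8)

# Toy data term r: negative (favoring u = 1) inside a 4x4 block.
r = np.ones((8, 8)); r[2:6, 2:6] = -1.0
seg = split_bregman_segment(r)
```

In the full method, r would be the difference of the region negative log-likelihoods from formula (7), computed on the RGB-D features.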
The following embodiment describes the implementation of the invention in detail, so that how the invention applies technical means to solve the technical problem and achieve its technical effect can be fully understood and reproduced.
The experiments compare this method with the alternatives. The segmentation method is demonstrated on the two scenes of Fig. 1 and Fig. 2; the goal is to segment an individual gesture out of a crowd. The figures show that RGB-D gesture segmentation outperforms segmentation based on the color image or the depth image alone. As shown in Fig. 1(c), using only RGB color information the algorithm segments the hand, the face, and part of the wall, and fails to isolate the required gesture. As shown in Fig. 1(d), using only depth information it segments the hand together with the body parts at the same depth. Thus, when only one of the two cues is considered, the segmentation is unsatisfactory. As shown in Fig. 1(e), when RGB and depth information are considered together, the hand region is segmented on its own and the segmentation difficulty is resolved. The algorithm is also robust in complex scenes, as shown in Fig. 2: even when a new person at a different depth enters the scene, the target gesture is still segmented well.
The first implementation of the above intellectual property does not restrict other forms of implementing this new product and/or new method. Those skilled in the art may use this information to modify the foregoing and realize similar implementations, but all such modifications or transformations belong to the rights reserved on the basis of the new product of the present invention.
The above is only a preferred embodiment of the present invention and does not limit the present invention in any other form; any person skilled in the art may use the technical content disclosed above to produce equivalent embodiments with equivalent variations. However, any simple modification, equivalent variation, or remodeling of the above embodiment made according to the technical spirit of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (5)

1. A method for detecting a gesture based on an RGB-D image, characterized by comprising:
Step 1: acquiring an RGB-D image;
Step 2: segmenting the hand from the background;
Step 3: identifying the gesture;
Step 4: finding the optimal segmentation of the gesture.
2. The gesture detection method of claim 1, characterized in that step 1 comprises using a depth sensor to acquire a color image (RGB Image) stream and a depth image (Depth Image) stream, i.e. an RGB-D data stream, and converting it into individual frames for subsequent image processing.
3. The gesture detection method of claim 1 or 2, characterized in that step 2 comprises mapping the hand position onto the depth image through the pixel ratio between the skeleton map and the depth image, and using the depth information to segment the hand from the background.
4. The gesture detection method of any one of claims 1 to 3, characterized in that step 3 comprises using convex optimization to segment the RGB-D gesture image, thereby identifying the gesture quickly and accurately.
5. The gesture detection method of any one of claims 1 to 4, characterized in that step 4 comprises using a minimization functional and its constraints, solving the model with the fast Split Bregman algorithm, and finding the optimal segmentation of the RGB-D image.
CN201410073064.4A 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image Active CN103810480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410073064.4A CN103810480B (en) 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410073064.4A CN103810480B (en) 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image

Publications (2)

Publication Number Publication Date
CN103810480A true CN103810480A (en) 2014-05-21
CN103810480B CN103810480B (en) 2017-01-18

Family

ID=50707222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410073064.4A Active CN103810480B (en) 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image

Country Status (1)

Country Link
CN (1) CN103810480B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346816A (en) * 2014-10-11 2015-02-11 BOE Technology Group Co., Ltd. Depth determining method and device and electronic equipment
CN106600640A (en) * 2016-12-12 2017-04-26 杭州视氪科技有限公司 RGB-D camera-based face recognition assisting eyeglass
CN108073851A (en) * 2016-11-08 2018-05-25 Ricoh Co., Ltd. Grabbing gesture recognition method and device and electronic equipment
CN108122239A (en) * 2016-11-29 2018-06-05 SAP SE Object detection in image data using depth segmentation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7706610B2 (en) * 2005-11-29 2010-04-27 Microsoft Corporation Segmentation of objects by minimizing global-local variational energy
CN102903110A (en) * 2012-09-29 2013-01-30 Ningbo University Segmentation method for image with depth image information
CN102982560A (en) * 2011-11-02 2013-03-20 Microsoft Corp. Surface segmentation according to RGB and depth image
US20130230237A1 (en) * 2012-03-05 2013-09-05 Thomson Licensing Method and apparatus for bi-layer segmentation

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7706610B2 (en) * 2005-11-29 2010-04-27 Microsoft Corporation Segmentation of objects by minimizing global-local variational energy
CN102982560A (en) * 2011-11-02 2013-03-20 Microsoft Corp. Surface segmentation according to RGB and depth image
US20130230237A1 (en) * 2012-03-05 2013-09-05 Thomson Licensing Method and apparatus for bi-layer segmentation
CN102903110A (en) * 2012-09-29 2013-01-30 Ningbo University Segmentation method for image with depth image information

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
THOMAS POCK et al.: "A Convex Formulation of Continuous Multi-label Problems", ECCV 2008, Part III, LNCS 5304 *
TING LU et al.: "Image Matting with Color and Depth Information", 21st International Conference on Pattern Recognition (ICPR 2012) *
ZHOU REN et al.: "Robust Part-Based Hand Gesture Recognition Using Kinect Sensor", IEEE Transactions on Multimedia *
沈世宏 et al.: "Research on a Kinect-based somatosensory gesture recognition system", Proceedings of the 8th Joint Conference on Harmonious Human-Machine Environment (HHME2012), CHCI *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346816A (en) * 2014-10-11 2015-02-11 BOE Technology Group Co., Ltd. Depth determining method and device and electronic equipment
CN104346816B (en) * 2014-10-11 2017-04-19 BOE Technology Group Co., Ltd. Depth determining method and device and electronic equipment
US9704251B2 (en) 2014-10-11 2017-07-11 Boe Technology Group Co., Ltd. Depth determination method, depth determination device and electronic device
CN108073851A (en) * 2016-11-08 2018-05-25 Ricoh Co., Ltd. Grabbing gesture recognition method and device and electronic equipment
CN108073851B (en) * 2016-11-08 2021-12-28 Ricoh Co., Ltd. Grabbing gesture recognition method and device and electronic equipment
CN108122239A (en) * 2016-11-29 2018-06-05 SAP SE Object detection in image data using depth segmentation
CN108122239B (en) * 2016-11-29 2020-12-01 SAP SE Object detection in image data using depth segmentation
CN106600640A (en) * 2016-12-12 2017-04-26 杭州视氪科技有限公司 RGB-D camera-based face recognition assisting eyeglass
CN106600640B (en) * 2016-12-12 2020-03-20 杭州视氪科技有限公司 Face recognition auxiliary glasses based on RGB-D camera

Also Published As

Publication number Publication date
CN103810480B (en) 2017-01-18

Similar Documents

Publication Publication Date Title
KR101865655B1 (en) Method and apparatus for providing service for augmented reality interaction
US9189855B2 (en) Three dimensional close interactions
CN104317391B (en) A kind of three-dimensional palm gesture recognition exchange method and system based on stereoscopic vision
US10466797B2 (en) Pointing interaction method, apparatus, and system
CN101398886B (en) Rapid three-dimensional face identification method based on bi-eye passiveness stereo vision
US9058661B2 (en) Method for the real-time-capable, computer-assisted analysis of an image sequence containing a variable pose
US20170371403A1 (en) Gesture recognition using multi-sensory data
US20150206003A1 (en) Method for the Real-Time-Capable, Computer-Assisted Analysis of an Image Sequence Containing a Variable Pose
CN110443205A (en) A kind of hand images dividing method and device
EP3345123B1 (en) Fast and robust identification of extremities of an object within a scene
JP6487642B2 (en) A method of detecting a finger shape, a program thereof, a storage medium of the program, and a system for detecting a shape of a finger.
CN110413816A (en) Colored sketches picture search
CN102663762B (en) The dividing method of symmetrical organ in medical image
CN103810480A (en) Method for detecting gesture based on RGB-D image
Chansri et al. Reliability and accuracy of Thai sign language recognition with Kinect sensor
CN106952292A (en) The 3D motion object detection method clustered based on 6DOF scene flows
CN104574435B (en) Based on the moving camera foreground segmentation method of block cluster
CN105488802A (en) Fingertip depth detection method and system
CN103093211A (en) Human motion tracking method based on deep nuclear information image feature
US10140509B2 (en) Information processing for detection and distance calculation of a specific object in captured images
KR101614798B1 (en) Non-contact multi touch recognition method and system using color image analysis
CN106599901B (en) Collaboration Target Segmentation and Activity recognition method based on depth Boltzmann machine
Itkarkar et al. A study of vision based hand gesture recognition for human machine interaction
Xu et al. MultiView-based hand posture recognition method based on point cloud
CN116704587B (en) Multi-person head pose estimation method and system integrating texture information and depth information

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201221

Address after: 518109 Room 301, building 2, Nanke Chuangyuan Valley, Gaofeng community, Dalang street, Longhua District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN MICAGENT TECHNOLOGY Co.,Ltd.

Address before: 266101 2319, room 23, block B, 1 Wei Yi Road, Laoshan District, Qingdao, Shandong.

Patentee before: Qingdao fruit science and technology service platform Co.,Ltd.

Effective date of registration: 20201221

Address after: 266101 2319, room 23, block B, 1 Wei Yi Road, Laoshan District, Qingdao, Shandong.

Patentee after: Qingdao fruit science and technology service platform Co.,Ltd.

Address before: 266071 Shandong city of Qingdao province Ningxia City Road No. 308

Patentee before: QINGDAO University

Patentee before: QINGDAO BROADCASTING AND TV WIRELESS MEDIA GROUP Co.,Ltd.

Patentee before: QINGDAO ANIMATION

TR01 Transfer of patent right