CN103810480B - Method for detecting gesture based on RGB-D image - Google Patents

Method for detecting gesture based on RGB-D image

Info

Publication number
CN103810480B
CN103810480B CN201410073064.4A CN201410073064A
Authority
CN
China
Prior art keywords
image
segmentation
rgb
function
omega
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410073064.4A
Other languages
Chinese (zh)
Other versions
CN103810480A (en)
Inventor
张维忠
丁洁玉
赵志刚
张峰
李明
王青林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Micagent Technology Co ltd
Original Assignee
QINGDAO ANIMATION
Qingdao Broadcasting And Tv Wireless Media Group Co ltd
Qingdao University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by QINGDAO ANIMATION, Qingdao Broadcasting And Tv Wireless Media Group Co ltd, Qingdao University filed Critical QINGDAO ANIMATION
Priority to CN201410073064.4A priority Critical patent/CN103810480B/en
Publication of CN103810480A publication Critical patent/CN103810480A/en
Application granted granted Critical
Publication of CN103810480B publication Critical patent/CN103810480B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a method for detecting a gesture based on an RGB-D image. The method comprises the following steps: step 1, acquiring the RGB-D image; step 2, segmenting the hands from the background; step 3, identifying the gesture; step 4, finding the optimal segmentation of the gesture. The gesture detection based on the RGB-D image provided by the invention effectively segments the human-hand region, segments accurately, obtains a good gesture segmentation even when the hands are partially self-occluded or other people interfere in the background, and is robust.

Description

Gesture detection method based on RGB-D image
Technical field
The present invention relates to the field of digital image processing, and more particularly to a gesture detection method based on an RGB-D image.
Background technology
Human-machine interfaces should be as intuitive and natural as possible. Ideally, the user interacts with the machine without cumbersome equipment (such as color markers or gloves) or devices such as remote controls, mice and keyboards. Gestures provide a simple way of communicating with machine intelligence, and gesture systems have been applied successfully in many research and industrial fields, for example game control, virtual environments, smart homes and sign language recognition.
The quality of hand gesture segmentation directly affects the precision and accuracy of subsequent gesture feature extraction, tracking and recognition. In recent years, researchers at home and abroad have proposed many methods for gesture segmentation, mainly including template matching, difference methods, skin-color segmentation and constrained methods. Template matching is built on a hand-shape database: the hand in the gesture image is compared with the templates in the database. The hand is a non-rigid object, so the comparison is computationally expensive and can hardly meet real-time requirements. Constrained methods simplify the separation of the gesture region (foreground) from the background by having the user wear gloves of a distinctive color, or by otherwise enhancing the contrast between hand and background; however, such constraints sacrifice the convenience and freedom of gesture interaction. Image differencing segments the gesture by subtracting a static background image from the moving gesture image; its defect is that it cannot overcome the pixel offsets that arise between images. Skin-color segmentation relies on the clustering property of skin color, but the apparent skin color is strongly affected by the angle of the gesture relative to the light source. For fast, convenient and practical vision-based gesture recognition, each of these methods alone has limitations and cannot segment the gesture accurately and effectively in real time, which severely degrades the segmentation result. Patent CN 103226708 A also combines the depth image with the color image for gesture segmentation, but it presupposes that the hand is the part of the body closest to the camera. Other work proposes a similar approach but requires the RGB camera and the depth camera to be calibrated first, which increases the complexity and tedium of the algorithm.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the above-mentioned defects of existing gesture detection methods by providing a gesture detection method based on an RGB-D image that effectively segments the human-hand region, segments accurately, obtains a good gesture segmentation even when the hand is partially self-occluded or other people interfere in the background, and is robust.
To solve the above technical problem, the invention provides a gesture detection method based on an RGB-D image, comprising:
Step 1, acquiring an RGB-D image;
Step 2, segmenting the hand from the background;
Step 3, optimizing the segmentation with a convex function;
Step 4, finding the optimal segmentation of the gesture.
In step 1, a depth sensor is used to acquire a color image (RGB image) stream and a depth image stream, i.e. an RGB-D image data stream, which is converted into individual frames for subsequent image processing.
In step 2, the hand position is mapped to the depth image through the pixel ratio between the skeleton map and the depth image, and the hand is segmented from the background using the depth information.
In step 3, a convex function is used to optimize the segmentation of the RGB-D gesture image.
In step 4, the minimization functional and its constraints are solved by the fast split Bregman algorithm to find the optimal segmentation of the RGB-D image.
Beneficial effects of the present invention: the gesture detection method based on the RGB-D image provided by the invention effectively segments the human-hand region, segments accurately, obtains a good gesture segmentation even when the hand is partially self-occluded or other people interfere in the background, and is robust.
Brief description of the drawings
Figs. 1a-1e show segmentation results based on the color image, the depth image and the RGB-D image: Fig. 1a, color image; Fig. 1b, depth image; Fig. 1c, color-image segmentation result; Fig. 1d, depth-image segmentation result; Fig. 1e, RGB-D segmentation result.
Figs. 2a-2e show segmentation results based on the color image, the depth image and the RGB-D image in another scene: Fig. 2a, color image; Fig. 2b, depth image; Fig. 2c, color-image segmentation result; Fig. 2d, depth-image segmentation result; Fig. 2e, RGB-D segmentation result.
Detailed description of the embodiments
The invention provides a gesture detection method based on an RGB-D image, comprising:
Step 1, acquiring an RGB-D image;
Step 2, segmenting the hand from the background;
Step 3, optimizing the segmentation with a convex function;
Step 4, finding the optimal segmentation of the gesture.
In step 1, a depth sensor is used to acquire a color image (RGB image) stream and a depth image stream, i.e. an RGB-D image data stream, which is converted into individual frames for subsequent image processing.
The depth sensor provides depth and RGB color image data simultaneously, supports real-time full-body and skeleton tracking, and can recognize a series of postures and actions; in this application it is used to acquire the gesture data.
The purpose of gesture detection is to effectively segment the hand region from the original image, i.e. to distinguish the human-hand region (foreground) from everything else (background); it is an important basic task of gesture recognition. The depth sensor can analyze depth data and detect the outline of a human body or player. It delivers color and depth data streams simultaneously, which are converted into individual frames for subsequent image processing. The input images are required to be pixel-aligned and time-synchronized between the RGB image and the depth image. After images satisfying these conditions are obtained, the input image is preprocessed, e.g. filtered, to suppress noise.
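As a concrete illustration of the preprocessing step, a simple filter suppresses spurious depth readings before segmentation. The patent does not specify the sensor SDK or the particular filter, so the 3x3 median filter below is only one plausible choice, sketched with NumPy on a toy frame:

```python
import numpy as np

def median_filter_3x3(depth):
    """Simple 3x3 median filter to suppress depth noise (edges handled by padding)."""
    padded = np.pad(depth, 1, mode="edge")
    # Stack the nine shifted views and take the per-pixel median.
    stack = np.stack([padded[i:i + depth.shape[0], j:j + depth.shape[1]]
                      for i in range(3) for j in range(3)])
    return np.median(stack, axis=0)

# Toy 5x5 depth frame (values in mm) with one noisy spike.
depth = np.full((5, 5), 800.0)
depth[2, 2] = 8000.0           # spurious reading
smoothed = median_filter_3x3(depth)
print(smoothed[2, 2])          # spike replaced by the neighborhood median, 800.0
```

Any smoothing filter with similar edge-preserving behavior would serve the same noise-suppression purpose here.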
In step 2, the hand position is mapped to the depth image through the pixel ratio between the skeleton map and the depth image, and the hand is segmented from the background using the depth information.
Both the color image and the depth image can be used for gesture segmentation. The advantage of the color image is its clarity, but it contains only two-dimensional information and has weak resistance to interference. The depth image does not match the color image in resolution, but it contains three-dimensional information and resists interference well. Because the skeleton map tracks the coordinates of the human hands, the particular position of the hand in the skeleton map is easily determined. The hand position is then mapped to the depth image through the pixel ratio between the skeleton map and the depth image, and the hand is segmented from the background using the depth information. Because the depth image has low resolution and is easily disturbed by other objects at the same depth, this segmentation alone is unsatisfactory; this application therefore proposes a detection method that combines the depth image and the color image.
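The mapping by pixel ratio and the subsequent depth-based cut can be sketched as follows. This is an illustrative NumPy toy example: the resolutions, the hand coordinate and the depth window are made-up values, not figures from the patent:

```python
import numpy as np

def map_joint_to_depth(joint_xy, skel_res, depth_res):
    """Scale a skeleton-space joint coordinate into depth-image pixels
    using the pixel ratio between the two resolutions."""
    sx = depth_res[0] / skel_res[0]
    sy = depth_res[1] / skel_res[1]
    return int(joint_xy[0] * sx), int(joint_xy[1] * sy)

def segment_hand(depth, hand_px, window=150.0):
    """Keep pixels whose depth lies within +/-window (mm) of the hand depth."""
    z = depth[hand_px[1], hand_px[0]]
    return np.abs(depth - z) <= window

depth = np.full((120, 160), 2000.0)     # background at 2 m
depth[40:60, 70:90] = 750.0             # hand region at 0.75 m
hand = map_joint_to_depth((160, 80), skel_res=(320, 240), depth_res=(160, 120))
mask = segment_hand(depth, hand)
print(hand, mask.sum())                  # (80, 40) 400
```

As the description notes, such a depth-only cut also keeps other objects at the hand's depth, which is why the method goes on to combine color and depth in a joint optimization.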
In step 3, a convex function is used to optimize the segmentation of the RGB-D gesture image.
For the segmentation optimization, we formulate the image segmentation problem as the minimization of a functional:
E(u) = ∫Ω f(x)u(x)dx + ∫Ω |Du(x)| (1)
where u ∈ BV(ℝ^d; {0,1}) is a binary indicator function of bounded variation; u = 1 and u = 0 indicate the inside and the outside of a surface in ℝ^d, i.e. a closed boundary in the case of two-dimensional image segmentation or a closed surface in the case of three-dimensional segmentation. The second term of formula (1) is the total variation, where Du denotes the distributional derivative, which for a differentiable u reduces to ∫Ω |∇u(x)|dx. By relaxing the binary constraint so that u takes values between 0 and 1, the problem becomes the minimization of the convex formula (1) over the convex set BV(ℝ^d; [0,1]).
Global optimization is achieved by minimizing the relaxed convex functional over the spatially continuous domain and then thresholding. The thresholding theorem guarantees that the solution u* of the relaxed problem retains global optimality for the original binary labelling problem: compute the global minimizer u* of formula (1) over the convex set BV(ℝ^d; [0,1]) and threshold it at any value θ ∈ (0,1).
Because additional depth information is acquired from the RGB-D image, the boundary length can be measured in absolute world units, d(x)|Du(x)|, rather than in the image domain. Functional (1) can be generalized to:
E(u) = ∫Ω f(x)u(x)dx + ∫Ω d(x)|Du(x)| (2)
The depth value d: Ω → ℝ in formula (2) compensates for the adverse effect of perspective projection: the farther an object is from the camera, the smaller it appears in the image.
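On a discrete pixel grid, the depth-weighted functional (2) can be evaluated directly. The sketch below is an illustration, not the patent's implementation: f is a made-up data term, d a uniform depth weight, and forward differences stand in for Du:

```python
import numpy as np

def energy(u, f, d):
    """Discrete form of E(u) = sum f*u + sum d*|grad u| (forward differences)."""
    data = np.sum(f * u)
    gx = np.diff(u, axis=1, append=u[:, -1:])   # forward difference in x
    gy = np.diff(u, axis=0, append=u[-1:, :])   # forward difference in y
    tv = np.sum(d * np.hypot(gx, gy))           # depth-weighted total variation
    return data + tv

u = np.zeros((4, 4)); u[1:3, 1:3] = 1.0         # small square segment
f = np.where(u > 0, -1.0, 1.0)                  # data term favouring the square
d = np.ones_like(u)                             # uniform depth weight
print(energy(u, f, d))
```

With a nonuniform d, boundary pixels at larger depth contribute more, which is exactly the perspective compensation the text describes.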
Described 4th step is specially using minimizing function and its function constraint, by split bregman fast algorithm Solve model, optimum segmentation is found to rgb-d image.
For the constraints on the RGB-D image, we constrain the moments of the segmentation using depth information, and describe how these constraints affect the convex feasible set into which the convex optimization is embedded. The segmentation is represented by a convex relaxation u defined on B = BV(Ω; [0,1]) of the binary labelling functions of bounded variation defined on the whole image domain Ω. Area constraint: the 0th-order moment corresponds to the area of the region u and can be computed by formula (3):
Area(u) := ∫Ω d²(x)u(x)dx (3)
where d(x) gives the depth of pixel x. Assume d(x) = k·D(x), where k is the focal length of the camera and D(x) is the measured pixel depth. Then d²(x) is the size of the area in 3D space onto which the pixel projects, so the integral measures surface area in space rather than the projected area in the image. Following the method of the document (Grenander, U., Chow, Y., Keenan, D.M.: Hands: A Pattern Theoretic Study of Biological Shapes. Springer, New York (1991)), all pixels are processed in the same way.
The absolute area of the shape u is limited between constants c₁ ≤ c₂ by constraining u to the set defined in formula (4):
C₀ = {u ∈ B | c₁ ≤ Area(u) ≤ c₂} (4)
The set C₀ is defined by constraints that are linear in u and is therefore convex for any constants c₂ ≥ c₁ ≥ 0.
Generally, one either determines the exact area by setting c₁ = c₂, or applies upper and lower bounds on the region, or applies a soft area constraint, lifting functional (1) as in formula (5):
E_total(u) = E(u) + λ(∫Ω d²(x)u(x)dx − c)² (5)
Formula (5) adds a soft-constraint weight λ > 0 so that the estimated area stays close to c ≥ 0. Formula (5) is still a convex function.
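The depth-weighted area of formula (3) and the soft constraint of formula (5) are straightforward to evaluate on a pixel grid. The toy values below (uniform depth 2, target area c = 20, weight λ = 0.5) are made up for illustration:

```python
import numpy as np

def area(u, d):
    """0th-order moment Area(u) = sum d^2 * u (discrete form of formula (3))."""
    return np.sum(d**2 * u)

def soft_area_penalty(u, d, c, lam):
    """Soft area constraint lam * (Area(u) - c)^2 from formula (5)."""
    return lam * (area(u, d) - c)**2

u = np.zeros((4, 4)); u[1:3, 1:3] = 1.0    # 4 foreground pixels
d = np.full_like(u, 2.0)                   # uniform depth of 2, so d^2 = 4
print(area(u, d))                          # 4 pixels * 4 = 16.0
print(soft_area_penalty(u, d, c=20.0, lam=0.5))   # 0.5 * (16 - 20)^2 = 8.0
```

Because the penalty is a squared affine function of u, adding it keeps the overall objective convex, as the text states.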
The fast split Bregman algorithm exploits the fact that maximizing a likelihood function is equivalent to maximizing its natural logarithm. This application applies the split method to RGB-D image segmentation and establishes the following general model:
min_{ω, u∈{0,1}} { E(ω, u) = α₁∫Ω q₁(x, ω₁)u dxdy + α₂∫Ω q₂(x, ω₂)(1 − u)dxdy + γ∫Ω |∇u|dxdy } (7)
where qᵢ = −ln pᵢ, ωᵢ = (μᵢ, σᵢ) = arg max pᵢ, i = 1, 2, and u is a binary labelling function used to represent the motion of the curve.
The split Bregman idea is incorporated into the general model of RGB-D image segmentation: on the basis of the split method, the splitting variable w = [w₁, w₂]ᵀ is first introduced, then the Bregman distance b = (b₁, b₂)ᵀ, and the extremal problem of functional (7) is converted into:
b^(k+1) = b^k + ∇u^k − w^k (8)
(u^(k+1), w^(k+1)) = arg min_{w, u∈[0,1]} { E(u, w) = γ∫Ω |w|dxdy + (θ/2)∫Ω (w − ∇u − b^(k+1))²dxdy + ∫Ω r(u₁, u₂)u dxdy } (9)
where r(u₁, u₂) = α₁q₁(x, ω₁) − α₂q₂(x, ω₂). Formula (9) is an extremal problem of an energy functional in two variables and is usually solved by alternating optimization. First, w is held constant, and the problem above reduces to an extremal problem in u:
min_u E(u) = (θ/2)∫Ω (w − ∇u − b^(k+1))²dxdy + ∫Ω r(u₁, u₂)u dxdy (10)
Then u is held constant, and the extremal problem in w is solved:
min_w E(w) = γ∫Ω |w|dxdy + (θ/2)∫Ω (w − ∇u − b^(k+1))²dxdy (11)
The Euler-Lagrange equation of energy functional (10) is obtained by the calculus of variations:
r(u₁, u₂) − θ∇·(∇u + b^(k+1) − w^k) = 0 in Ω
(∇u + b^(k+1) − w^k)·n = 0 on ∂Ω (12)
Formula (12) can be solved by fast Gauss-Seidel iteration. Because the convex relaxation technique restricts the range of u to [0, 1], u is constrained to this range by the following projection:
u^(k+1) = max(min(u^(k+1), 1), 0) (13)
After energy functional (10) has been solved, energy functional (11) is solved. The Euler-Lagrange equation of formula (11) is:
w = ∇u^(k+1) + b^(k+1) − (γ/θ)·w/|w| (14)
Its analytic solution is obtained by the generalized soft-thresholding formula, in the form:
w^(k+1) = max(|∇u^(k+1) + b^(k+1)| − γ/θ, 0) · (∇u^(k+1) + b^(k+1)) / |∇u^(k+1) + b^(k+1)| (15)
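The two closed-form update steps, the projection of formula (13) and the soft-thresholding of formula (15), can be sketched as follows. This is a NumPy illustration on toy one-dimensional arrays, not the full segmentation pipeline:

```python
import numpy as np

def project_unit(u):
    """Projection of formula (13): clip the relaxed label u back into [0, 1]."""
    return np.clip(u, 0.0, 1.0)

def shrink(v, t):
    """Generalized soft-threshold of formula (15): max(|v| - t, 0) * v/|v|,
    applied component-wise; np.sign(v) plays the role of v/|v| (and is 0 at v = 0)."""
    return np.maximum(np.abs(v) - t, 0.0) * np.sign(v)

u = np.array([-0.2, 0.4, 1.3])
print(project_unit(u))            # values clipped to 0.0, 0.4, 1.0

v = np.array([-2.0, 0.3, 1.5])    # stands for grad(u) + b at one iteration
print(shrink(v, t=0.5))           # values shrunk to -1.5, 0.0, 1.0
```

These two cheap per-pixel operations are what make the split Bregman iteration fast: the only global work per iteration is the Gauss-Seidel solve of formula (12).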
The embodiments below describe the implementation of the invention in detail, so that how the invention applies technical means to solve the technical problem and achieve the technical effect can be fully understood and put into practice.
The experiments compare this method with the other methods. The segmentation is demonstrated on the two scenes of Fig. 1 and Fig. 2; the experiments aim to segment the gesture of an individual from a crowd. As can be seen from the figures, gesture segmentation based on the RGB-D image outperforms segmentation based on the color image or the depth image alone. As shown in Fig. 1(c), when only the RGB color information is used, the algorithm segments the hand, the face and part of the wall, and fails to isolate the desired gesture. As shown in Fig. 1(d), when only the depth information is used, the hand is segmented together with the body parts at the same depth as the hand. Thus, when only one of the two cues is considered, the segmentation is unsatisfactory in both cases. As shown in Fig. 1(e), when RGB and depth information are considered simultaneously, i.e. based on the RGB-D image, the hand region is segmented on its own and the segmentation difficulty is resolved. The algorithm is also robust in complex scenes, as shown in Fig. 2: when a new person at a different depth is added to the scene, the target gesture is still segmented well.
The above primarily implements this intellectual property and does not limit the implementation of this new product and/or new method in other forms. Those skilled in the art may use this information and modify the above to realize similar implementations; however, all such modifications or transformations belong to the rights reserved on the basis of the new product of the present invention.
The above is only a preferred embodiment of the present invention and is not a restriction of the present invention in any other form. Any person skilled in the art may use the technical content disclosed above to make changes or equivalent modifications; however, any simple modification, equivalent variation and adaptation of the above embodiments made according to the technical spirit of the present invention, without departing from the content of the technical solution of the present invention, still falls within the protection scope of the technical solution of the present invention.

Claims (1)

1. A gesture detection method based on an RGB-D image, characterized by comprising:
step 1, acquiring an RGB-D image;
step 2, segmenting the hand from the background;
step 3, optimizing the segmentation with a convex function;
step 4, finding the optimal segmentation of the gesture;
wherein in step 1, a depth sensor is used to acquire a color image (RGB image) stream and a depth image stream, i.e. an RGB-D image data stream, which is converted into individual frames for subsequent image processing;
in step 2, the hand position is mapped to the depth image through the pixel ratio between the skeleton map and the depth image, and the hand is segmented from the background using the depth information;
in step 3, a convex function is used to optimize the segmentation of the RGB-D gesture image;
for the segmentation optimization, image segmentation is defined as the minimization of a functional:
E(u) = ∫Ω f(x)u(x)dx + ∫Ω d(x)|Du(x)| (2)
wherein u ∈ BV(ℝ^d; {0,1}) is a binary indicator function of bounded variation; u = 1 and u = 0 indicate the inside and the outside of a surface in ℝ^d, i.e. a closed boundary in the case of two-dimensional image segmentation or a closed surface in the case of three-dimensional segmentation; the second term of formula (2) is the total variation, where Du denotes the distributional derivative, which for a differentiable u reduces to ∫Ω |∇u(x)|dx; by relaxing the binary constraint, the function u takes values between 0 and 1, and the segmentation is optimized by minimizing the convex formula (2) over the convex set BV(ℝ^d; [0,1]); the depth value is d: Ω → ℝ;
in step 4, the minimization functional and its constraints are solved by the fast split Bregman algorithm to find the optimal segmentation of the RGB-D image;
the fast split Bregman algorithm exploits the equivalence between maximizing a likelihood function and maximizing its natural logarithm; the split method is applied to RGB-D image segmentation by establishing the following general model:
min_{ω, u∈{0,1}} { E(ω, u) = α₁∫Ω q₁(x, ω₁)u dxdy + α₂∫Ω q₂(x, ω₂)(1 − u)dxdy + γ∫Ω |∇u|dxdy } (7)
wherein qᵢ = −ln pᵢ, ωᵢ = (μᵢ, σᵢ) = arg max pᵢ, i = 1, 2, and u is a binary labelling function used to represent the motion of the curve.
CN201410073064.4A 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image Active CN103810480B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410073064.4A CN103810480B (en) 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410073064.4A CN103810480B (en) 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image

Publications (2)

Publication Number Publication Date
CN103810480A CN103810480A (en) 2014-05-21
CN103810480B true CN103810480B (en) 2017-01-18

Family

ID=50707222

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410073064.4A Active CN103810480B (en) 2014-02-28 2014-02-28 Method for detecting gesture based on RGB-D image

Country Status (1)

Country Link
CN (1) CN103810480B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104346816B (en) * 2014-10-11 2017-04-19 京东方科技集团股份有限公司 Depth determining method and device and electronic equipment
CN108073851B (en) * 2016-11-08 2021-12-28 株式会社理光 Grabbing gesture recognition method and device and electronic equipment
US10282639B2 (en) * 2016-11-29 2019-05-07 Sap Se Object detection in image data using depth segmentation
CN106600640B (en) * 2016-12-12 2020-03-20 杭州视氪科技有限公司 Face recognition auxiliary glasses based on RGB-D camera

Citations (3)

Publication number Priority date Publication date Assignee Title
US7706610B2 (en) * 2005-11-29 2010-04-27 Microsoft Corporation Segmentation of objects by minimizing global-local variational energy
CN102903110A (en) * 2012-09-29 2013-01-30 宁波大学 Segmentation method for image with deep image information
CN102982560A (en) * 2011-11-02 2013-03-20 微软公司 Surface segmentation according to RGB and depth image

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
EP2637139A1 (en) * 2012-03-05 2013-09-11 Thomson Licensing Method and apparatus for bi-layer segmentation


Non-Patent Citations (4)

Title
"A Convex Formulation of Continuous Multi-label Problems";Thomas Pock el at;《ECCV 2008, Part III, LNCS 5304》;20081231;全文 *
"Image Matting with Color and Depth Information";Ting Lu el at;《21st International Conference on Pattern Recognition (ICPR 2012)》;20121115;全文 *
"基于Kinect的体感手势识别系统的研究";沈世宏 等;《第八届和谐人机环境联合学术会议(HHME2012)论文集CHCI》;20130410;参见第3.1小节 *
《Robust Part-Based Hand Gesture Recognition Using Kinect Sensor》;Zhou Ren el at;《IEEE TRANSACTIONS ON MULTIMEDIA》;20130831;第15卷(第5期);全文 *

Also Published As

Publication number Publication date
CN103810480A (en) 2014-05-21

Similar Documents

Publication Publication Date Title
CN106055091B (en) A kind of hand gestures estimation method based on depth information and correcting mode
CN105739702B (en) Multi-pose finger tip tracking for natural human-computer interaction
Yao et al. Contour model-based hand-gesture recognition using the Kinect sensor
CN104063677B (en) For estimating the device and method of human body attitude
JP5887775B2 (en) Human computer interaction system, hand-to-hand pointing point positioning method, and finger gesture determination method
Hasan et al. RETRACTED ARTICLE: Static hand gesture recognition using neural networks
CN103941866B (en) Three-dimensional gesture recognizing method based on Kinect depth image
CN104115192B (en) Three-dimensional closely interactive improvement or associated improvement
CN102609683B (en) Automatic labeling method for human joint based on monocular video
Kulshreshth et al. Poster: Real-time markerless kinect based finger tracking and hand gesture recognition for HCI
CN103714322A (en) Real-time gesture recognition method and device
CN102567703A (en) Hand motion identification information processing method based on classification characteristic
CN103810480B (en) Method for detecting gesture based on RGB-D image
CN103995595A (en) Game somatosensory control method based on hand gestures
CN105929947A (en) Scene situation perception based man-machine interaction method
Sokhib et al. A combined method of skin-and depth-based hand gesture recognition.
Davis et al. Toward 3-D gesture recognition
Elakkiya et al. Intelligent system for human computer interface using hand gesture recognition
Chen et al. Depth-based hand gesture recognition using hand movements and defects
CN107918507A (en) A kind of virtual touchpad method based on stereoscopic vision
Kondori et al. Direct hand pose estimation for immersive gestural interaction
Półrola et al. Real-time hand pose estimation using classifiers
KR101614798B1 (en) Non-contact multi touch recognition method and system using color image analysis
CN108108648A (en) A kind of new gesture recognition system device and method
Itkarkar et al. A study of vision based hand gesture recognition for human machine interaction

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20201221

Address after: 518109 Room 301, building 2, Nanke Chuangyuan Valley, Gaofeng community, Dalang street, Longhua District, Shenzhen City, Guangdong Province

Patentee after: SHENZHEN MICAGENT TECHNOLOGY Co.,Ltd.

Address before: 266101 2319, room 23, block B, 1 Wei Yi Road, Laoshan District, Qingdao, Shandong.

Patentee before: Qingdao fruit science and technology service platform Co.,Ltd.

Effective date of registration: 20201221

Address after: 266101 2319, room 23, block B, 1 Wei Yi Road, Laoshan District, Qingdao, Shandong.

Patentee after: Qingdao fruit science and technology service platform Co.,Ltd.

Address before: 266071 Shandong city of Qingdao province Ningxia City Road No. 308

Patentee before: QINGDAO University

Patentee before: QINGDAO BROADCASTING AND TV WIRELESS MEDIA GROUP Co.,Ltd.

Patentee before: QINGDAO ANIMATION

TR01 Transfer of patent right