CN102058983A - Intelligent toy based on video analysis - Google Patents

Intelligent toy based on video analysis Download PDF

Info

Publication number
CN102058983A
Authority
CN
China
Prior art keywords
module
head pose
people
face
expression
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN 201010538313
Other languages
Chinese (zh)
Other versions
CN102058983B (en
Inventor
谢东海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Zhonggan Microelectronics Co Ltd
Original Assignee
Wuxi Vimicro Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Vimicro Corp filed Critical Wuxi Vimicro Corp
Priority to CN201010538313A priority Critical patent/CN102058983B/en
Publication of CN102058983A publication Critical patent/CN102058983A/en
Application granted granted Critical
Publication of CN102058983B publication Critical patent/CN102058983B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses an intelligent toy comprising a motion parameter database, a driving module, a mechanical motion module, a video capture module, a face detection module and an expression recognition module. The motion parameter database stores preset motion parameters corresponding to a plurality of expressions. The video capture module captures video images. The face detection module performs face detection on the captured images. The expression recognition module performs expression recognition on the detected face information. The driving module looks up the motion parameter corresponding to the expression in the motion parameter database according to the expression recognition result, and drives the mechanical motion module to perform the corresponding action according to that motion parameter. The interactivity and entertainment value of the toy are thereby enhanced.

Description

Intelligent toy based on video analysis
[technical field]
The present invention relates to the field of intelligent toys, and in particular to an intelligent toy based on video analysis.
[background technology]
Intelligent toys are a branch of the toy field that combines information technology with traditional toys, giving toys richer functions and even enabling simple interaction with people.
One class of intelligent toys on the market has preset playback functions and can act according to a stored program. For example, sounds or action programs are stored in a built-in chip; when a child presses one of the function buttons on the toy, the toy plays the corresponding sound or performs the corresponding action, attracting the child's attention, stimulating the child's interest in the toy, and training the child's hearing and powers of observation.
Another class of intelligent toys has recognition capability, at present mostly based on speech recognition: a speech recognition chip is embedded in the toy, which can recognize simple commands and then execute the corresponding program stored in the chip. For example, a toy car with speech recognition can, once switched on, advance, reverse, turn left, turn right or stop according to the child's spoken commands. Because speech recognition can only distinguish a few spoken commands, the recognition function is limited, and the interaction with children is accordingly limited as well.
To achieve better interaction with children, more recognition functions need to be added to intelligent toys, so that children gain enjoyment and develop their intelligence through the exchange and interaction with the toy.
Therefore, it is necessary to propose an improved intelligent toy to solve the above problems.
[summary of the invention]
The object of the present invention is to provide an intelligent toy based on video analysis that can achieve a better interactive effect with children.
To achieve the object of the present invention, according to one aspect thereof, the invention provides an intelligent toy comprising a motion parameter database, a driving module, a mechanical motion module, a video capture module, a face detection module and an expression recognition module. The motion parameter database stores motion parameters corresponding to a plurality of expressions; the video capture module captures video images; the face detection module performs face detection on the captured images and provides the detected face information to the expression recognition module; the expression recognition module performs expression recognition on the detected face information and provides the expression recognition result to the driving module; and the driving module looks up, according to the expression recognition result, the motion parameter corresponding to the expression in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
Further, the intelligent toy is a teddy bear or a rag doll.
Further, the expressions include happiness, anger, sadness and surprise.
Further, the motion parameters corresponding to each expression comprise eye parameters, mouth parameters, nose parameters and eyebrow parameters.
Further, the toy also comprises a head pose recognition module; the motion parameter database stores the motion parameter corresponding to each head pose; the head pose recognition module performs head pose recognition on the detected face information and provides the head pose recognition result to the driving module; and the driving module looks up, according to the head pose recognition result, the motion parameter corresponding to the head pose in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
According to another aspect of the present invention, the invention provides an intelligent toy comprising a motion parameter database, a driving module, a mechanical motion module, a video capture module, a face detection module and a head pose recognition module. The motion parameter database stores motion parameters corresponding to a plurality of head poses; the video capture module captures video images; the face detection module performs face detection on the captured images and provides the detected face information to the head pose recognition module; the head pose recognition module performs head pose recognition on the detected face information and provides the head pose recognition result to the driving module; and the driving module looks up, according to the head pose recognition result, the motion parameter corresponding to the head pose in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
Further, the intelligent toy is a teddy bear or a rag doll.
Further, the head poses include the four directions left, right, up and down.
Further, the toy also comprises an expression recognition module; the motion parameter database stores motion parameters corresponding to a plurality of expressions; the expression recognition module performs expression recognition on the detected face information and provides the expression recognition result to the driving module; and the driving module looks up, according to the expression recognition result, the motion parameter corresponding to the expression in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
Compared with the prior art, the present invention introduces video capture and video analysis technology into an intelligent toy: an intelligent video analysis module built into the toy recognizes information such as a child's expression and head pose and then makes a corresponding reaction, thereby enhancing the interactivity and entertainment value of the toy.
[description of drawings]
The present invention will be better understood with reference to the following detailed description taken together with the accompanying drawings, in which like reference numerals denote like structural members, and in which:
Fig. 1 is a block diagram of one embodiment of the intelligent toy based on video analysis of the present invention; and
Fig. 2 is a schematic diagram of the expressions of the intelligent toy of the present invention.
[specific embodiment]
To make the above objects, features and advantages of the present invention more apparent, the invention is described in further detail below with reference to the drawings and specific embodiments.
Referring to Fig. 1, a block diagram of one embodiment of the intelligent toy 100 based on video analysis of the present invention is shown. The intelligent toy comprises a video capture module 110, a face detection module 120, an expression recognition module 130, a head pose recognition module 140, a driving module 150, a motion parameter database 160 and a mechanical motion module 170. In one embodiment, the intelligent toy may include only one of the expression recognition module 130 and the head pose recognition module 140, and not the other.
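The wiring of the modules of Fig. 1 is not given in code by the patent, but one processing cycle can be sketched as follows. This is an illustrative sketch only: the callables standing in for blocks 110-170 are hypothetical placeholders, not part of the disclosure.

```python
# Hedged sketch of one cycle of the Fig. 1 pipeline:
# capture (110) -> face detection (120) -> recognition (130/140)
# -> parameter lookup (160) -> mechanical motion (170), coordinated by 150.

class IntelligentToy:
    """Pipeline skeleton; each collaborator is an injected callable."""

    def __init__(self, capture, detect_face, recognize, database, mechanics):
        self.capture = capture          # video capture module 110
        self.detect_face = detect_face  # face detection module 120
        self.recognize = recognize      # expression/pose recognition 130/140
        self.database = database        # motion parameter database 160
        self.mechanics = mechanics      # mechanical motion module 170

    def step(self):
        """One cycle of the driving module 150; returns the recognized label."""
        frame = self.capture()
        face = self.detect_face(frame)
        if face is None:
            return None                 # no face: nothing to imitate
        label = self.recognize(face)
        params = self.database.get(label)
        if params is not None:
            self.mechanics(params)      # drive the toy's movable components
        return label
```

In use, `database` would hold the per-expression motion parameters and `mechanics` would command the toy's actuators; here both are stand-ins.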
The video capture module 110 photographs a region in front of the toy to obtain moving images of the user, and may be built from a camera chip together with the relevant transport protocol. Camera chip technology is mature, and a USB camera can achieve a good imaging effect. In one embodiment, this module is placed at a suitable position on the toy; for example, the camera may be embedded in a relatively concealed position such as the toy's eyes, mouth or nose.
The face detection module 120 detects the position of faces in the captured image and extracts face information from it, in preparation for the subsequent intelligent analysis. In one embodiment, a face/non-face classifier is trained with an AdaBoost-based method and used to extract face positions from the image. AdaBoost requires a fairly large number of face and non-face samples to be collected in advance as positive and negative samples; before training, the positive and negative samples are preprocessed and the sample pictures normalized, feature vectors are then extracted from the normalized pictures, and the features with the strongest discriminative power are selected from those vectors to form the classifier. Face detection technology is very mature and can achieve real-time detection and tracking. In another embodiment, the tracking step is implemented as follows: before a tracking target is obtained, every frame is searched to detect whether a face exists. If one or more faces are detected in a frame, those faces are tracked in the next two frames, and the tracked faces in those two frames are detected and verified to judge whether the earlier detection was a real face; only when a face has been detected at the same position in three consecutive frames does the algorithm conclude that a face exists there, and it then continues to analyze and identify the face image. In this tracking step, if several faces are present in the scene, one of them is selected for tracking. The face is tracked in subsequent frames; if, between adjacent frames, the similarity of one frame with the tracking result of the previous frame is too low, tracking stops.
If a tracked region yields no frontal upright face for a long time, the target is considered of little tracking value and tracking of it stops. After the previous target is dropped, face detection is performed again on subsequent images until a new face is found; the new face is then tracked and the face tracking steps are repeated.
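The three-frame confirmation rule described above can be sketched as a small state machine. This is an illustrative sketch under stated assumptions: `detection` and `similarity_to_last` stand in for a real AdaBoost detector and a patch-similarity measure, neither of which is specified in code by the patent.

```python
# Hypothetical sketch of the tracking logic: confirm a face only after it is
# verified in three consecutive frames; drop it when inter-frame similarity
# falls below a threshold. The threshold values are illustrative only.

class FaceTracker:
    def __init__(self, sim_threshold=0.5, confirm_frames=3):
        self.sim_threshold = sim_threshold
        self.confirm_frames = confirm_frames
        self.hits = 0          # consecutive frames with a verified detection
        self.target = None     # descriptor of the current candidate face

    def update(self, detection, similarity_to_last):
        """detection: face descriptor or None; similarity_to_last in [0, 1].
        Returns 'searching', 'verifying', or 'tracking'."""
        if detection is None or (
            self.target is not None and similarity_to_last < self.sim_threshold
        ):
            # face lost or mismatched: reset and fall back to full-frame search
            self.hits, self.target = 0, None
            return "searching"
        self.target = detection
        self.hits += 1
        return "tracking" if self.hits >= self.confirm_frames else "verifying"
```

A face is thus reported as present only after `confirm_frames` consecutive verified detections, matching the three-frame criterion in the text.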
The expression recognition module 130 performs expression recognition on the face information detected by the face detection module 120. Expression recognition methods are generally statistical: feature vectors are extracted from the face image and a classifier is trained on them. Feature extraction is the key to recognition. The features currently used for expression recognition fall into two kinds: local features and holistic features. Local-feature-based facial expression recognition exploits differences in individual facial characteristics, extracting features such as the position and size of the eyebrows, eyes, mouth and nose, their relative arrangement, and the face contour. Holistic-feature-based recognition starts from the whole face image and extracts features reflecting the face as a whole. Current facial expression recognition generally distinguishes a few common expressions: neutral, laughing, angry, sad and surprised. Before recognition, a large number of samples are first collected; expression videos of subjects can be recorded with a USB camera, and images containing facial expressions are extracted from the video files as the initial samples used for training.
The published patent application No. 200510135670.5 describes a video-based facial expression recognition method and apparatus. It is based on the holistic features of the face: a standard face is generated from the automatically extracted chin contour of the face, and the AdaBoost algorithm is then used to select the most effective features, yielding a robust recognition result. The method comprises the following steps:
collecting facial expression image data of the face from the video input of a USB camera, and preprocessing the image data;
extracting the position of the face in the preprocessed image in real time;
locating the eyes within the detected face using an eye classifier;
extracting the image region containing the face according to the located eye positions and a face classifier, and normalizing it;
locating the facial organs;
determining the chin position of the face from the located organs, and hence the face region in the image; marking several parallel lines from top to bottom on the extracted face contour, one of which fixes the position of the chin, while the standard face in the face classifier likewise carries the same number of parallel lines from top to bottom, a corresponding one of which fixes the position of its chin;
resampling the detected face region, according to the computed tilt angle and the correspondence between the lines, into a face image consistent in size and angle with the standard face, i.e. generating a feature face used as a classification sample;
computing the Gabor features of the feature-face image from the classification sample;
selecting among the computed Gabor features with the AdaBoost algorithm;
constructing a support vector machine classifier from the selected features;
obtaining the facial expression recognition result from the constructed classifier.
In one embodiment, the expression recognition module performs expression recognition with the facial expression recognition technique of that patent application. In the toy system, the realization of this module is fairly simple: the trained model is loaded into the system, and the classifier is then used for discrimination.
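The Gabor feature stage of the pipeline above can be illustrated with a minimal sketch. This is not the patent's implementation: the kernel parameters and the crude per-orientation pooling are illustrative assumptions, and the AdaBoost selection and SVM stages are only described, not coded.

```python
# Hedged sketch of Gabor feature extraction, the first learning stage of the
# cited method. Assumed parameters (size, wavelength, sigma) are arbitrary.
import numpy as np

def gabor_kernel(size=9, theta=0.0, wavelength=4.0, sigma=2.0):
    """Real part of a Gabor filter: a Gaussian-windowed cosine grating."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def gabor_features(image, thetas=(0, np.pi/4, np.pi/2, 3*np.pi/4)):
    """Mean absolute filter response per orientation: a crude pooled feature.
    In the cited method, AdaBoost would select among many such responses and
    an SVM would classify the selected feature vector."""
    feats = []
    for theta in thetas:
        k = gabor_kernel(theta=theta)
        h, w = image.shape
        s = k.shape[0]
        # valid cross-correlation via an explicit sliding window (no SciPy)
        resp = [abs(np.sum(image[i:i + s, j:j + s] * k))
                for i in range(h - s + 1) for j in range(w - s + 1)]
        feats.append(float(np.mean(resp)))
    return np.array(feats)
```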
The head pose recognition module 140 performs head pose recognition on the face information detected by the face detection module 120. The role of a video-based pose recognition module is to identify the head pose of the person in the video, i.e. whether the head is turned to the left, turned to the right, facing up or facing down. There are generally two methods of pose recognition. The first is statistical: face information is collected for different poses, a multi-class classifier is trained (for example, one distinguishing the four directions left, right, up and down), and the pose is judged from the classification result. A training-based method can only recognize the four directions and cannot recognize the pose angle accurately. The second is based on geometric deformation: when the head pose changes, the facial feature points of the organs undergo corresponding deformation; the change of the feature points can be fitted with a deformation equation, and the pose judged from the fitting result. The published patent application No. 200610012233.9 discloses an AAM-based real-time head pose estimation system. It determines the initial face contour position in the video from the results of face tracking and facial feature point location, and then accurately locates the true organ positions with the AAM model. Because the contour location is holistic, the pose inverted from the location result is more accurate, and a continuous pose can be obtained. The system comprises the following steps:
(1) training an ASM model and an AAM gray-level model from collected face image samples of different head poses, the ASM model yielding a mean ASM contour face and the AAM gray-level model yielding a mean gray-level face;
(2) computing, from the ASM model and AAM gray-level model, the gradient matrix and Hessian matrix needed for face contour location, and obtaining a preprocessed model from the ASM model, AAM gray-level model, gradient matrix and Hessian matrix;
(3) acquiring the input face image sequence, performing face detection and tracking, obtaining the rough position of the face contour from the detection and tracking result, mapping the mean ASM contour face onto that rough position to obtain the initial contour position, and resampling the image sequence according to the initial contour position to obtain an image region whose size matches the mean ASM contour;
(4) within this image region, first locating the face contour by a global similarity transform using the gradient matrix and Hessian matrix of the preprocessed model, then locating the contour accurately based on the ASM model parameters and computing the corresponding ASM parameters;
(5) estimating the head pose from the ASM parameters and the face-angle relation determined during sample training.
In one embodiment, the head pose recognition module performs pose recognition with the head pose recognition technique of that patent application, and the head poses include turned left, turned right, facing up and facing down. In the toy system, the realization of this module is fairly simple: the trained model is loaded into the system, and the classifier is then used for discrimination.
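In the spirit of the deformation-based method, a very crude geometric pose estimate can be sketched from a few landmarks. This is an assumption-laden illustration only: real ASM/AAM fitting is far more involved, and the landmark choice, frontal ratio and thresholds below are invented for the sketch.

```python
# Illustrative sketch: classify head pose into the four directions from three
# landmarks (eye centers and nose tip). All constants are hypothetical.

def head_pose(left_eye, right_eye, nose, yaw_thresh=0.15, pitch_thresh=0.15):
    """Landmarks are (x, y) in image coordinates (y grows downward).
    Returns one of 'left', 'right', 'up', 'down', 'frontal'."""
    eye_mid_x = (left_eye[0] + right_eye[0]) / 2
    eye_mid_y = (left_eye[1] + right_eye[1]) / 2
    eye_dist = (right_eye[0] - left_eye[0]) or 1.0   # avoid division by zero
    # nose offset from the eye midline, normalized by inter-eye distance
    yaw = (nose[0] - eye_mid_x) / eye_dist
    pitch = (nose[1] - eye_mid_y) / eye_dist - 0.6   # 0.6: assumed frontal ratio
    if abs(yaw) > abs(pitch):
        if yaw < -yaw_thresh:
            return "left"
        if yaw > yaw_thresh:
            return "right"
    else:
        if pitch < -pitch_thresh:
            return "up"
        if pitch > pitch_thresh:
            return "down"
    return "frontal"
```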
The mechanical motion module 170 imitates the user's expression and head pose. It comprises the movable components of the toy, whose coordinated relative movements accomplish the specified action. Facial expression shows mainly as changes of the eyes, eyebrows, mouth and nose; in general, the facial organs form an organic whole and express an emotion by mutual coordination. Because the intelligent toy needs to imitate human expressions and head poses, the toy may have a face structure similar to a human face. For example, the intelligent toy may be a rag doll with eyes, eyebrows, a mouth and a face contour, i.e. facial components capable of forming different expressions. In one embodiment, the expression recognition module can recognize the four facial expressions laughing, angry, sad and surprised, and the rag doll can correspondingly make those four expressions; the eye, eyebrow and mouth motion components of the doll are set as the motion parameters of the facial expression, each component undergoing a certain change for each facial expression, so that different facial expressions correspond to different motion parameter values, as shown in Fig. 2. Because facial expression involves changes of the facial muscles, in another embodiment the intelligent toy is a highly lifelike rag doll fitted with simulated facial muscle components, and the facial muscle components may also be set as the motion parameters of the facial expression.
The same principle applies to the realization of the toy's pose. In one embodiment, the head pose recognition module can recognize the four head pose directions left, right, up and down, and the rag doll correspondingly comprises a joint assembly connecting the head and neck that can drive the head in those four directions; the head-neck joint assembly may be set as the motion parameter of the head pose. In another embodiment, the toy is a highly lifelike rag doll additionally fitted with simulated neck muscle and cervical vertebra components that can drive the head in the four directions, and the simulated neck muscle and cervical vertebra components may be set as the motion parameters of the head pose.
It should be noted here that the intelligent toy is not limited to a doll that can make facial expressions and head poses; it may also be a toy of any other shape capable of making facial expressions and head poses.
The motion parameter database 160 stores the predefined motion parameters of the toy components corresponding to each expression or head pose. When the toy is designed, the component motion parameters corresponding to each expression can be obtained in advance. In one embodiment, the database stores the names of the predefined expressions or head poses together with the corresponding motion parameters; for example, the motion parameters corresponding to each expression may comprise eye parameters, mouth parameters, nose parameters and eyebrow parameters.
The driving module 150 looks up, according to the recognition result of the expression recognition module or the head pose recognition module, the motion parameter corresponding to that expression or pose in the motion parameter database, and drives the mechanical motion module according to that parameter to make the corresponding expression or head pose.
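The lookup-and-drive step of the driving module can be sketched as follows. This is a hedged illustration: the database is modeled as a plain dictionary, and the component names and numeric values are invented, not taken from the patent.

```python
# Minimal sketch of driving module 150: look up the motion parameters for a
# recognized expression and drive each facial component. set_actuator(name,
# value) stands in for the real mechanical interface, which is not specified.

MOTION_PARAMETERS = {
    # expression -> per-component actuator settings (arbitrary units)
    "laugh":     {"eyes": 0.8, "eyebrows": 0.6,  "mouth": 1.0},
    "angry":     {"eyes": 0.2, "eyebrows": -0.8, "mouth": -0.4},
    "sad":       {"eyes": 0.3, "eyebrows": -0.3, "mouth": -0.8},
    "surprised": {"eyes": 1.0, "eyebrows": 0.9,  "mouth": 0.7},
}

def drive(recognized, set_actuator):
    """Return True if the expression was found and the components driven."""
    params = MOTION_PARAMETERS.get(recognized)
    if params is None:
        return False                      # unknown expression: do nothing
    for component, value in params.items():
        set_actuator(component, value)    # command one movable component
    return True
```

The same table-lookup shape would apply to head poses, with the head-neck joint assembly as the driven component.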
Because the intelligent toy can recognize the user's expression and head pose and make a corresponding expression and head pose in response, it can achieve better interaction with children and arouse their interest and curiosity.
To make the imitation more vivid, a sound playback module can be added to the facial expression imitation. In one embodiment, a corresponding sound is stored for each expression; for example, a giggle is played when imitating laughing, and a sobbing sound when imitating crying.
The mechanical motion module 170 is composed of toy components which accomplish the specified action by cooperating with one another. In one embodiment, the mechanical module can make the four common expressions.
The above description fully discloses the specific embodiments of the present invention. It should be pointed out that any change that a person skilled in the art makes to the specific embodiments does not depart from the scope of the claims of the present invention; accordingly, the scope of the claims of the present invention is not limited to the described specific embodiments.

Claims (9)

1. An intelligent toy comprising a motion parameter database, a driving module and a mechanical motion module, characterized in that it further comprises a video capture module, a face detection module and an expression recognition module, wherein:
the motion parameter database stores motion parameters corresponding to a plurality of expressions;
the video capture module captures video images;
the face detection module performs face detection on the captured images and provides the detected face information to the expression recognition module;
the expression recognition module performs expression recognition on the detected face information and provides the expression recognition result to the driving module; and
the driving module looks up, according to the expression recognition result, the motion parameter corresponding to the expression in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
2. The intelligent toy of claim 1, characterized in that the intelligent toy is a teddy bear or a rag doll.
3. The intelligent toy of claim 1, characterized in that the expressions include happiness, anger, sadness and surprise.
4. The intelligent toy of claim 1, characterized in that the motion parameters corresponding to each expression comprise eye parameters, mouth parameters, nose parameters and eyebrow parameters.
5. The intelligent toy of claim 1, characterized in that it further comprises a head pose recognition module, wherein:
the motion parameter database stores the motion parameter corresponding to each head pose;
the head pose recognition module performs head pose recognition on the detected face information and provides the head pose recognition result to the driving module; and
the driving module looks up, according to the head pose recognition result, the motion parameter corresponding to the head pose in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
6. An intelligent toy comprising a motion parameter database, a driving module and a mechanical motion module, characterized in that it further comprises a video capture module, a face detection module and a head pose recognition module, wherein:
the motion parameter database stores motion parameters corresponding to a plurality of head poses;
the video capture module captures video images;
the face detection module performs face detection on the captured images and provides the detected face information to the head pose recognition module;
the head pose recognition module performs head pose recognition on the detected face information and provides the head pose recognition result to the driving module; and
the driving module looks up, according to the head pose recognition result, the motion parameter corresponding to the head pose in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
7. The intelligent toy of claim 6, characterized in that the intelligent toy is a teddy bear or a rag doll.
8. The intelligent toy of claim 6, characterized in that the head poses include the four directions left, right, up and down.
9. The intelligent toy of claim 6, characterized in that it further comprises an expression recognition module, wherein:
the motion parameter database stores motion parameters corresponding to a plurality of expressions;
the expression recognition module performs expression recognition on the detected face information and provides the expression recognition result to the driving module; and
the driving module looks up, according to the expression recognition result, the motion parameter corresponding to the expression in the motion parameter database, and drives the mechanical motion module to perform the corresponding action according to that motion parameter.
CN201010538313A 2010-11-10 2010-11-10 Intelligent toy based on video analysis Active CN102058983B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201010538313A CN102058983B (en) 2010-11-10 2010-11-10 Intelligent toy based on video analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201010538313A CN102058983B (en) 2010-11-10 2010-11-10 Intelligent toy based on video analysis

Publications (2)

Publication Number Publication Date
CN102058983A true CN102058983A (en) 2011-05-18
CN102058983B CN102058983B (en) 2012-08-29

Family

ID=43994598

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201010538313A Active CN102058983B (en) 2010-11-10 2010-11-10 Intelligent toy based on video analysis

Country Status (1)

Country Link
CN (1) CN102058983B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020019193A1 (en) * 2000-02-28 2002-02-14 Maggiore Albert P. Expression-varying device
US20030162472A1 (en) * 2000-05-18 2003-08-28 Jacqui Dancer Doll
CN1794265A (en) * 2005-12-31 2006-06-28 北京中星微电子有限公司 Method and device for distinguishing face expression based on video frequency
CN1866271A (en) * 2006-06-13 2006-11-22 北京中星微电子有限公司 AAM-based head pose real-time estimating method and system
CN201551834U (en) * 2009-12-15 2010-08-18 广东骅威玩具工艺股份有限公司 Growing sound identification emotional doll

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102955565A (en) * 2011-08-31 2013-03-06 德信互动科技(北京)有限公司 Man-machine interaction system and method
CN102722246A (en) * 2012-05-30 2012-10-10 南京邮电大学 Human face information recognition-based virtual pet emotion expression method
CN103861287A (en) * 2012-12-12 2014-06-18 李晓 Toy synchronization method, toy synchronization file and toy
CN104436692A (en) * 2013-11-25 2015-03-25 安徽寰智信息科技股份有限公司 Intelligent toy based on target expression detection and recognizing method
CN104008677A (en) * 2014-06-05 2014-08-27 孟奇志 Remote teaching system and method for developing infant intelligence
CN104767980A (en) * 2015-04-30 2015-07-08 深圳市东方拓宇科技有限公司 Real-time emotion demonstrating method, system and device and intelligent terminal
CN105031938A (en) * 2015-07-07 2015-11-11 安徽瑞宏信息科技有限公司 Intelligent toy based on target expression detection and recognition method
CN105159452A (en) * 2015-08-28 2015-12-16 成都通甲优博科技有限责任公司 Control method and system based on estimation of human face posture
CN105159452B (en) * 2015-08-28 2018-01-12 成都通甲优博科技有限责任公司 A kind of control method and system based on human face modeling
WO2017143948A1 (en) * 2016-02-23 2017-08-31 芋头科技(杭州)有限公司 Method for awakening intelligent robot, and intelligent robot
CN107102540A (en) * 2016-02-23 2017-08-29 芋头科技(杭州)有限公司 A kind of method and intelligent robot for waking up intelligent robot
CN106326981A (en) * 2016-08-31 2017-01-11 北京光年无限科技有限公司 Method and device of robot for automatically creating personalized virtual robot
CN106326980A (en) * 2016-08-31 2017-01-11 北京光年无限科技有限公司 Robot and method for simulating human facial movements by robot
CN106326981B (en) * 2016-08-31 2019-03-26 北京光年无限科技有限公司 Robot automatically creates the method and device of individualized virtual robot
CN106444586A (en) * 2016-11-29 2017-02-22 深圳市赛亿科技开发有限公司 Intelligent playmate robot
CN108673498A (en) * 2018-05-04 2018-10-19 安徽三弟电子科技有限责任公司 A kind of dance robot control system based on camera monitoring identification
CN110639213A (en) * 2019-11-19 2020-01-03 徐州华邦益智工艺品有限公司 Plush teddy bear capable of laughing
CN115089975A (en) * 2022-02-24 2022-09-23 上海怡佳雨科技有限公司 Intelligent toy

Also Published As

Publication number Publication date
CN102058983B (en) 2012-08-29

Similar Documents

Publication Publication Date Title
CN102058983B (en) Intelligent toy based on video analysis
US9943755B2 (en) Device for identifying and tracking multiple humans over time
CA2748037C (en) Method and system for gesture recognition
Jalal et al. Shape and motion features approach for activity tracking and recognition from kinect video camera
JP4764273B2 (en) Image processing apparatus, image processing method, program, and storage medium
CN105809144B (en) A kind of gesture recognition system and method using movement cutting
CN100397410C (en) Method and device for distinguishing face expression based on video frequency
US9333409B2 (en) Virtual golf simulation apparatus and sensing device and method used for the same
Jalal et al. Depth Silhouettes Context: A new robust feature for human tracking and activity recognition based on embedded HMMs
Jalal et al. Improved behavior monitoring and classification using cues parameters extraction from camera array images
US8929612B2 (en) System for recognizing an open or closed hand
CN109460734B (en) Video behavior identification method and system based on hierarchical dynamic depth projection difference image representation
US10186041B2 (en) Apparatus and method for analyzing golf motion
Rudoy et al. Viewpoint selection for human actions
CN102576466A (en) Systems and methods for tracking a model
JP7373589B2 (en) Pose similarity discrimination model generation method and pose similarity discrimination model generation device
WO2012117392A1 (en) Device, system and method for determining compliance with an instruction by a figure in an image
CN106648078A (en) Multimode interaction method and system applied to intelligent robot
CN111860394A (en) Gesture estimation and gesture detection-based action living body recognition method
Liang et al. 3D motion trail model based pyramid histograms of oriented gradient for action recognition
CN116030516A (en) Micro-expression recognition method and device based on multi-task learning and global circular convolution
CN104751144A (en) Frontal face quick evaluation method for video surveillance
Mansur et al. Action recognition using dynamics features
Matta et al. A behavioural approach to person recognition
CN211349295U (en) Interactive motion system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CP01 Change in the name or title of a patent holder

Address after: 214028 10-storey Building 530 Qingjia Road, Taihu International Science Park, Wuxi New District, Jiangsu Province

Patentee after: WUXI ZHONGGAN MICROELECTRONIC CO., LTD.

Address before: 214028 10-storey Building 530 Qingjia Road, Taihu International Science Park, Wuxi New District, Jiangsu Province

Patentee before: Wuxi Vimicro Co., Ltd.
