CN106558052A - Interaction data processing and output method for an intelligent robot, and robot - Google Patents

Interaction data processing and output method for an intelligent robot, and robot

Info

Publication number
CN106558052A
Authority
CN
China
Prior art keywords
user
clothes
color
garment coordination
robot
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610884639.XA
Other languages
Chinese (zh)
Inventor
畅敬佩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Guangnian Wuxian Technology Co Ltd
Original Assignee
Beijing Guangnian Wuxian Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Guangnian Wuxian Technology Co Ltd filed Critical Beijing Guangnian Wuxian Technology Co Ltd
Priority to CN201610884639.XA priority Critical patent/CN106558052A/en
Publication of CN106558052A publication Critical patent/CN106558052A/en
Pending legal-status Critical Current

Landscapes

  • Manipulator (AREA)

Abstract

The present invention proposes an interaction data processing and output method for an intelligent robot, and an intelligent robot. The method of the present invention includes: receiving and parsing multi-modal interactive input sent by a user, and judging whether the multi-modal interactive input contains a garment coordination evaluation request; when the multi-modal interactive input contains a garment coordination evaluation request, scanning the user's clothes to obtain current clothes parameters; and parsing the current clothes parameters to generate and output a garment coordination suggestion. According to the method of the present invention, the robot can evaluate the user's garment coordination and output a corresponding garment coordination suggestion. Compared with the prior art, the method of the present invention greatly enhances the interaction capability of the robot, giving the robot a better user experience.

Description

Interaction data processing and output method for an intelligent robot, and robot
Technical field
The invention belongs to the field of intelligent robotics, and more particularly to an interaction data processing and output method for an intelligent robot, and a robot.
Background technology
With the continuous development of robot technology, intelligent robots are increasingly employed in human family life.
As intelligent robots are applied ever more deeply in human family life, users' demands on robot interaction keep rising, and the kinds of interaction they desire are varied. Therefore, to further expand the range of application of robots and improve the user experience, robots need to continuously improve their own interaction capabilities to meet users' demands. In particular, robots need to keep adding application functions suited to family life.
Summary of the invention
The present invention proposes an interaction data processing and output method for an intelligent robot, including:
receiving and parsing multi-modal interactive input sent by a user, and judging whether the multi-modal interactive input contains a garment coordination evaluation request;
when the multi-modal interactive input contains a garment coordination evaluation request, scanning the clothes of the user to obtain current clothes parameters;
parsing the current clothes parameters, and generating and outputting a garment coordination suggestion.
In one embodiment, the current clothes parameters include the color of the clothes, and obtaining the current clothes parameters includes:
obtaining current image data;
recognizing and extracting a user image from the current image data;
parsing the user image to obtain the color of the user's current clothes.
In one embodiment, parsing the user image to obtain the color of the user's current clothes includes:
dividing the user image into a top half and a bottom half;
performing color detection on the top half and the bottom half respectively, to obtain the colors of the clothes on the user's upper body and lower body.
In one embodiment, parsing the current clothes parameters and generating and outputting a garment coordination suggestion includes:
judging, according to preset collocation rules, whether the colors of the upper-body and lower-body clothes match;
if they match, generating and outputting a corresponding multi-modal garment coordination evaluation;
if they do not match, generating and outputting a corresponding multi-modal garment coordination evaluation and a further multi-modal garment coordination suggestion.
In one embodiment, in recognizing and extracting the user image from the current image data:
when extraction of the user image fails, a prompt is output to the user.
The invention also proposes an intelligent robot, the robot including:
a garment coordination instruction parsing module, configured to receive and parse multi-modal interactive input sent by a user, and judge whether the multi-modal interactive input contains a garment coordination evaluation request;
a clothes parameter collection module, configured to scan the clothes of the user when the multi-modal interactive input contains a garment coordination evaluation request, to obtain current clothes parameters;
a garment coordination evaluation output module, configured to parse the current clothes parameters, and generate and output a garment coordination suggestion.
In one embodiment, the clothes parameter collection module is configured to obtain current clothes parameters that include clothing color, and the clothes parameter collection module includes:
an image acquisition unit, configured to obtain current image data;
a user recognition unit, configured to recognize and extract a user image from the current image data;
a clothing color acquisition unit, configured to parse the user image to obtain the color of the user's current clothes.
In one embodiment, the clothing color acquisition unit includes:
an image divider, configured to divide the user image into a top half and a bottom half;
a color detector, configured to perform color detection on the top half and the bottom half respectively, to obtain the colors of the clothes on the user's upper body and lower body.
In one embodiment, the garment coordination evaluation output module includes:
a color-match judging unit, configured to judge, according to preset collocation rules, whether the colors of the upper-body and lower-body clothes match;
an output unit, configured to:
generate and output a corresponding multi-modal garment coordination evaluation when the judgment result of the color-match judging unit is a match;
generate and output a corresponding multi-modal garment coordination evaluation and a further multi-modal garment coordination suggestion when the judgment result of the color-match judging unit is a mismatch.
In one embodiment, the user recognition unit is configured to output a prompt to the user when extraction of the user image fails.
According to the method of the present invention, the robot can evaluate the user's garment coordination and output a corresponding garment coordination suggestion. Compared with the prior art, the method of the present invention greatly enhances the interaction capability of the robot, giving the robot a better user experience.
Further features or advantages of the present invention will be set forth in the following description. Some features or advantages of the present invention will be apparent from the description, or may be learned by practicing the present invention. The objects and some advantages of the present invention may be realized or obtained through the steps specifically pointed out in the description, the claims and the accompanying drawings.
Description of the drawings
The accompanying drawings are provided for a further understanding of the present invention and constitute a part of the description; together with the embodiments of the present invention they serve to explain the present invention, and are not to be construed as limiting the present invention. In the drawings:
Fig. 1 is a method flow diagram according to an embodiment of the invention;
Fig. 2 to Fig. 5 are partial flow diagrams of methods according to embodiments of the present invention;
Fig. 6 is a sketch of a robot system structure according to an embodiment of the invention;
Fig. 7 to Fig. 9 are sketches of partial structures of robot systems according to embodiments of the present invention.
Detailed description of the embodiments
Embodiments of the present invention are described in detail below with reference to the drawings and examples, so that those implementing the invention can fully understand how the present invention applies technical means to solve technical problems and achieve technical effects, and can carry out the present invention accordingly. It should be noted that, as long as no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with each other, and the resulting technical solutions all fall within the protection scope of the present invention.
As intelligent robots are applied ever more deeply in human family life, users' demands on robot interaction keep rising, and the kinds of interaction they desire are varied. Therefore, to further expand the range of application of robots and improve the user experience, robots need to continuously improve their own interaction capabilities to meet users' demands. In particular, robots need to keep adding application functions suited to family life.
To expand the application functions of the robot and improve its interaction capability, the present invention proposes an interaction data processing and output method for an intelligent robot. In the prior art, most robots have an image acquisition function (for example, a camera) and an interaction output function (for example, a voice output module). Therefore, in embodiments of the present invention, new application functions of the robot are constructed on the basis of the robot's existing image acquisition and interaction output functions, expanding the robot's interaction capability while adding as little as possible to the robot's hardware cost.
Specifically, in daily life, a person sometimes wants others to evaluate whether his or her current clothing matching is appropriate, and hopes they will provide further clothing matching suggestions. In embodiments of the present invention, this person-to-person interaction scenario is simulated by replacing the evaluating party with a robot; that is, the robot evaluates whether the garment coordination is appropriate and provides further clothing matching suggestions. In this way, evaluation and suggestion of garment coordination can be realized even on occasions where no other users are present, which is convenient for the user; moreover, the range of application of the robot is expanded and the interaction capability of the robot is improved, so that the robot's level of personification is further raised, thereby effectively improving the user experience.
Next, the detailed flow of methods according to embodiments of the present invention is described with reference to the accompanying drawings. The steps shown in the flow charts of the drawings may be executed in a computer system containing, for example, a set of computer-executable instructions. Although a logical order of the steps is shown in the flow charts, in some cases the steps shown or described may be performed in an order different from that given here.
In an embodiment of the present invention, as shown in Fig. 1, the robot receives multi-modal interactive input sent by a user (voice input, gesture input, text input, etc.) (step S110); after receiving the user input, it parses the user input (step S120) to determine the concrete meaning of the user input; and it judges whether the user input contains a garment coordination evaluation request (step S130).
When the user's multi-modal interactive input does not contain a garment coordination evaluation request (for example, the user is discussing things other than clothes), the robot responds to the user input according to an ordinary interaction strategy (step S111) and returns to step S110 to continue receiving new multi-modal interactive input from the user.
When the user's multi-modal interactive input contains a garment coordination evaluation request (for example, the user asks by voice, "How is my outfit today?"), the robot enters the garment coordination evaluation application function (calling the corresponding garment coordination evaluation application module). Specifically, it first scans the user's clothes to obtain current clothes parameters (step S140); then parses the current clothes parameters (step S150); generates a garment coordination suggestion according to the parsing result (step S160); and finally outputs the garment coordination suggestion to the user (step S170).
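For concreteness, the flow of Fig. 1 can be rendered as the following minimal Python sketch; the function names and the keyword-based intent check are hypothetical stand-ins introduced here for illustration, not structures specified by the patent.

```python
# A minimal sketch of the Fig. 1 interaction loop (steps S110-S170).
# All helpers are hypothetical placeholders, not the patent's API.

def contains_evaluation_request(text: str) -> bool:
    # S120-S130: a trivial keyword check stands in for real multi-modal parsing.
    return any(k in text.lower() for k in ("outfit", "garment", "clothes"))

def scan_clothes() -> dict:
    # S140: placeholder for the camera-based scan described below.
    return {"upper_color": "blue", "lower_color": "grey"}

def evaluate_coordination(params: dict) -> str:
    # S150-S160: placeholder; a rule-based version is sketched further on.
    return f"{params['upper_color']} on top and {params['lower_color']} below look fine."

def interaction_loop() -> None:
    while True:
        user_input = input("user> ")                      # S110: receive input
        if not contains_evaluation_request(user_input):   # S130: judge request
            print("(ordinary interaction response)")      # S111: back to S110
            continue
        print(evaluate_coordination(scan_clothes()))      # S140-S170

if __name__ == "__main__":
    interaction_loop()
```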
In the flow shown in Fig. 1, one of the key steps is obtaining the current clothes parameters. The current clothes parameters describe the user's current garment coordination state. In daily life, garment coordination is mainly the matching of clothing colors; therefore, in an embodiment of the present invention, the current clothes parameters mainly describe the user's current clothing colors.
Specifically, as shown in Fig. 2, in an embodiment of the present invention, in the process of obtaining the current clothes parameters, current image data is obtained first (step S241); a user image is then extracted from the current image data (step S242); finally the user image is parsed to obtain the color of the user's current clothes (step S243).
Further, in a real environment, the robot's extraction of the user image in step S242 of Fig. 2 may fail. For example:
the user is in the robot's visual dead angle or is occluded by other objects, so the image data acquired by the robot contains no user image, and the robot cannot extract a user image from the acquired image data;
part of the user is occluded by other objects, or the user is currently in a posture that is difficult to recognize, so the robot cannot extract from the acquired image data a user image sufficient for clothing color analysis.
For the case where the robot fails to recognize the user, in an embodiment of the present invention, when extraction of the user image fails, a prompt is output to the user, reminding the user to change position or posture.
In one embodiment, as shown in Fig. 3, the robot first obtains current image data (step S341); it then extracts a user image from the current image data (step S342); when step S342 succeeds (the user image is successfully extracted), it parses the user image to obtain the color of the user's current clothes (step S344); when step S342 fails (the user image cannot be extracted), it outputs a prompt to the user, reminding the user to change position or posture (step S343), and then reacquires current image data (returning to step S341).
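A minimal sketch of this acquire-extract-retry loop follows; the frame source and the person detector below are hypothetical stand-ins, since the patent does not name concrete APIs.

```python
# Sketch of the Fig. 3 loop: acquire (S341), extract (S342), prompt and
# retry on failure (S343). The capture and detection stubs are invented
# here for illustration only.

import random
from typing import Optional

def capture_frame() -> bytes:
    # S341: stand-in for reading current image data from the camera.
    return b"raw-frame"

def extract_user_image(frame: bytes) -> Optional[bytes]:
    # S342: stand-in person detector; returns None when extraction fails.
    return frame if random.random() > 0.3 else None

def acquire_user_image(max_attempts: int = 5) -> Optional[bytes]:
    for _ in range(max_attempts):
        user_img = extract_user_image(capture_frame())   # S341-S342
        if user_img is not None:
            return user_img                              # proceed to S344
        # S343: prompt the user, then reacquire (back to S341)
        print("I can't see you; please change your position or posture.")
    return None
```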
When clothing colors are actually matched, it is usually the color of the upper-body clothing that is matched with the color of the lower-body clothing. Therefore, in an embodiment of the present invention, the current clothes parameters are configured to describe the clothing color of the user's upper body and the clothing color of the user's lower body. That is, in one embodiment, in step S243 of Fig. 2, parsing the user image involves dividing the user image into a top half and a bottom half, and then obtaining the clothing colors of the two parts respectively.
Specifically, as shown in Fig. 4, after the user image is extracted (step S442), the user image is divided into a top half and a bottom half (step S443). Color detection is then performed on the top half and the bottom half respectively, to obtain the colors of the clothes on the user's upper body and lower body (step S444).
Further, in one embodiment, in step S443 the user image is divided into two pictures in a ratio of 4 parts for the upper portion to 6 parts for the lower portion.
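Under the assumption that the detected color of each half is taken as its average color (the patent does not fix a particular color detector), the 4:6 split and per-half detection can be sketched with Pillow and NumPy as follows.

```python
# Sketch of step S443 (4:6 split) and step S444 (per-half color detection).
# Averaging pixel values is an illustrative assumption, not the patent's
# stated detection method.

import numpy as np
from PIL import Image

def split_and_detect_colors(user_image: Image.Image):
    w, h = user_image.size
    cut = int(h * 0.4)                            # upper 4 parts : lower 6 parts
    upper = user_image.crop((0, 0, w, cut))       # upper-body picture
    lower = user_image.crop((0, cut, w, h))       # lower-body picture

    def mean_color(img: Image.Image) -> tuple:
        arr = np.asarray(img.convert("RGB"), dtype=np.float32)
        return tuple(int(c) for c in arr.reshape(-1, 3).mean(axis=0))

    return mean_color(upper), mean_color(lower)   # (upper RGB, lower RGB)

# Hypothetical usage:
# upper_rgb, lower_rgb = split_and_detect_colors(Image.open("user.jpg"))
```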
Specifically, in an application example, as shown in Fig. 5, the robot carries out human-machine interaction with the user based on an interactive application. During interaction, the interactive application parses the user's interactive input and obtains a garment coordination evaluation instruction (step S541); the interactive application then makes the robot's main system application start clothes detection (step S542).
After clothing color detection starts, picture information is obtained first (step S543). Specifically, in the present embodiment, preview picture information is obtained from the robot's camera; the raw data is a byte array in YUV format, which is then converted into a bitmap (picture) in a suitable format. The above manner is not limiting.
Next, humanoid detection is called to detect whether the current picture contains a humanoid (step S544). Specifically, the humanoid detection returns a set: when no human body is detected, the set is empty; when a human body is detected, the returned set contains a human-body position matrix.
It is judged whether the set returned by the humanoid detection is empty (step S545). When the set is empty, a prompt is output to the user (for example, "I can't see you; please change your position or posture") (step S546); when the set is not empty, the picture bitmap2 (the user image) of the human body is cropped out of the current picture (bitmap) according to the matrix coordinates of the human-body position matrix (step S547).
Bitmap2 (the user image) is divided into two pictures (upper body and lower body) in a ratio of 4 parts for the upper portion to 6 parts for the lower portion (step S548). The upper-body picture and the lower-body picture are each passed to the color detection interface, obtaining the colors of the upper-body and lower-body clothes respectively (step S549).
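The YUV-to-bitmap conversion and humanoid detection of steps S543 to S547 can be sketched with OpenCV as below; treating the HOG pedestrian detector as the "humanoid detection" and NV21 as the preview format are assumptions made here for illustration.

```python
# Sketch of steps S543-S547: YUV preview -> bitmap, humanoid detection
# returning a (possibly empty) set of boxes, and cropping bitmap2.

import cv2
import numpy as np

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def yuv_preview_to_bgr(yuv_bytes: bytes, width: int, height: int) -> np.ndarray:
    # S543: an NV21 byte array holds height * 3/2 rows of width bytes.
    yuv = np.frombuffer(yuv_bytes, dtype=np.uint8).reshape(height * 3 // 2, width)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_NV21)

def crop_user_image(bitmap: np.ndarray):
    rects, _ = hog.detectMultiScale(bitmap)   # S544: set of body-position boxes
    if len(rects) == 0:                       # S545: empty set
        return None                           # caller outputs the prompt (S546)
    x, y, w, h = rects[0]                     # first body-position "matrix"
    return bitmap[y:y + h, x:x + w]           # S547: bitmap2, the user image
```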
Further, in one embodiment, after obtaining the user's upper-body clothing color and lower-body clothing color, the robot generates and outputs a garment coordination suggestion according to those colors. Specifically, it judges, according to preset collocation rules, whether the colors of the upper-body and lower-body clothes match. If they match, it generates and outputs a corresponding multi-modal garment coordination evaluation (for example, "Your outfit today is very chic" or "Your outfit today is perfect"); if they do not match, it generates and outputs a corresponding multi-modal garment coordination evaluation and a further multi-modal garment coordination suggestion (for example, "Your outfit today is a bit unsuitable; you should change the jacket to color XX").
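A table of matching color pairs is one simple way to realize such "preset collocation rules"; the pairs and phrasings below are invented for illustration, since the patent does not enumerate concrete rules.

```python
# Sketch of the match/mismatch branches driven by hypothetical preset
# collocation rules.

MATCHING_PAIRS = {                       # invented example rules
    ("white", "black"), ("blue", "grey"), ("white", "blue"),
}

SUGGESTED_TOP = {"black": "white", "grey": "blue"}   # invented fallbacks

def evaluate_outfit(upper: str, lower: str) -> str:
    if (upper, lower) in MATCHING_PAIRS or (lower, upper) in MATCHING_PAIRS:
        return "Your outfit today is very chic!"          # matched branch
    hint = SUGGESTED_TOP.get(lower, "a better-matching color")
    return (f"Your outfit today is a bit unsuitable; "    # mismatched branch
            f"you could change the {upper} top to {hint}.")

print(evaluate_outfit("white", "black"))   # -> evaluation only
print(evaluate_outfit("red", "green"))     # -> evaluation + suggestion
```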
According to the method of the present invention, the robot can evaluate and make suggestions on the matching of the user's clothing colors, thereby enhancing the interaction capability of the robot and greatly improving the user experience.
Based on the method of the present invention, the invention also proposes an intelligent robot. As shown in Fig. 6, in one embodiment, the robot includes:
a garment coordination instruction parsing module 610, configured to receive and parse multi-modal interactive input sent by a user, and judge whether the multi-modal interactive input sent by the user contains a garment coordination evaluation request;
a clothes parameter collection module 620, configured to scan the user's clothes when the multi-modal interactive input sent by the user contains a garment coordination evaluation request, to obtain current clothes parameters;
a garment coordination evaluation output module 630, configured to parse the user's current clothes parameters, and generate and output a garment coordination suggestion.
Further, in one embodiment, clothing color is used as the clothes parameter. Correspondingly, the clothes parameter collection module is configured to obtain current clothes parameters that include clothing color. As shown in Fig. 7, the clothes parameter collection module 720 includes:
an image acquisition unit 721, configured to obtain current image data;
a user recognition unit 722, configured to recognize and extract a user image from the current image data;
a clothing color acquisition unit 723, configured to parse the user image to obtain the color of the user's current clothes.
Further, the user recognition unit 722 is configured to output a prompt to the user when extraction of the user image fails.
Further, in one embodiment, the clothing colors of the user's upper body and lower body are obtained respectively. Specifically, as shown in Fig. 8, the clothing color acquisition unit 830 includes:
an image divider 831, configured to divide the user image into a top half and a bottom half;
a color detector 832, configured to perform color detection on the top half and the bottom half respectively, to obtain the colors of the clothes on the user's upper body and lower body.
Further, in one embodiment, the robot carries out garment coordination evaluation by judging whether the clothing colors of the user's upper body and lower body match. Specifically, as shown in Fig. 9, the garment coordination evaluation output module 940 includes:
a color-match judging unit 941, configured to judge, according to preset collocation rules, whether the colors of the upper-body and lower-body clothes match;
an output unit 942, configured to:
generate and output a corresponding multi-modal garment coordination evaluation when the judgment result of the color-match judging unit 941 is a match;
generate and output a corresponding multi-modal garment coordination evaluation and a further multi-modal garment coordination suggestion when the judgment result of the color-match judging unit 941 is a mismatch.
The robot of the present invention can evaluate the user's garment coordination and further provide garment coordination suggestions. Compared with robots of the prior art, the interaction capability of the robot of the present invention is further enhanced, providing a better user experience.
Although embodiments of the present invention are disclosed as above, the described content is only an embodiment adopted to facilitate understanding of the present invention and is not intended to limit the present invention. The method of the present invention may also have various other embodiments. Without departing from the essence of the present invention, those of ordinary skill in the art can make various corresponding changes or variations according to the present invention, and all such corresponding changes or variations shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. An interaction data processing and output method for an intelligent robot, including:
receiving and parsing multi-modal interactive input sent by a user, and judging whether the multi-modal interactive input contains a garment coordination evaluation request;
when the multi-modal interactive input contains a garment coordination evaluation request, scanning the clothes of the user to obtain current clothes parameters;
parsing the current clothes parameters, and generating and outputting a garment coordination suggestion.
2. The method according to claim 1, characterized in that the current clothes parameters include the color of the clothes, and obtaining the current clothes parameters includes:
obtaining current image data;
recognizing and extracting a user image from the current image data;
parsing the user image to obtain the color of the user's current clothes.
3. The method according to claim 2, characterized in that parsing the user image to obtain the color of the user's current clothes includes:
dividing the user image into a top half and a bottom half;
performing color detection on the top half and the bottom half respectively, to obtain the colors of the clothes on the user's upper body and lower body.
4. The method according to claim 3, characterized in that parsing the current clothes parameters and generating and outputting a garment coordination suggestion includes:
judging, according to preset collocation rules, whether the colors of the upper-body and lower-body clothes match;
if they match, generating and outputting a corresponding multi-modal garment coordination evaluation;
if they do not match, generating and outputting a corresponding multi-modal garment coordination evaluation and a further multi-modal garment coordination suggestion.
5. The method according to any one of claims 2 to 4, characterized in that, in recognizing and extracting the user image from the current image data:
when extraction of the user image fails, a prompt is output to the user.
6. An intelligent robot, characterized in that the robot includes:
a garment coordination instruction parsing module, configured to receive and parse multi-modal interactive input sent by a user, and judge whether the multi-modal interactive input contains a garment coordination evaluation request;
a clothes parameter collection module, configured to scan the clothes of the user when the multi-modal interactive input contains a garment coordination evaluation request, to obtain current clothes parameters;
a garment coordination evaluation output module, configured to parse the current clothes parameters, and generate and output a garment coordination suggestion.
7. The robot according to claim 6, characterized in that the clothes parameter collection module is configured to obtain current clothes parameters that include clothing color, the clothes parameter collection module including:
an image acquisition unit, configured to obtain current image data;
a user recognition unit, configured to recognize and extract a user image from the current image data;
a clothing color acquisition unit, configured to parse the user image to obtain the color of the user's current clothes.
8. The robot according to claim 7, characterized in that the clothing color acquisition unit includes:
an image divider, configured to divide the user image into a top half and a bottom half;
a color detector, configured to perform color detection on the top half and the bottom half respectively, to obtain the colors of the clothes on the user's upper body and lower body.
9. The robot according to claim 8, characterized in that the garment coordination evaluation output module includes:
a color-match judging unit, configured to judge, according to preset collocation rules, whether the colors of the upper-body and lower-body clothes match;
an output unit, configured to:
generate and output a corresponding multi-modal garment coordination evaluation when the judgment result of the color-match judging unit is a match;
generate and output a corresponding multi-modal garment coordination evaluation and a further multi-modal garment coordination suggestion when the judgment result of the color-match judging unit is a mismatch.
10. The robot according to any one of claims 7 to 9, characterized in that the user recognition unit is configured to output a prompt to the user when extraction of the user image fails.
CN201610884639.XA 2016-10-10 2016-10-10 Interaction data processing and output method for an intelligent robot, and robot Pending CN106558052A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610884639.XA CN106558052A (en) 2016-10-10 2016-10-10 Interaction data processing and output method for an intelligent robot, and robot

Publications (1)

Publication Number Publication Date
CN106558052A (en) 2017-04-05

Family

ID=58418331

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610884639.XA Pending CN106558052A (en) Interaction data processing and output method for an intelligent robot, and robot

Country Status (1)

Country Link
CN (1) CN106558052A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007055525A1 (en) * 2005-11-10 2007-05-18 Lg Electronics Inc. Record media written with data structure for recognizing a user and method for recognizing a user
CN102426650A (en) * 2011-09-30 2012-04-25 宇龙计算机通信科技(深圳)有限公司 Method and device of character image analysis
CN103984315A (en) * 2014-05-15 2014-08-13 成都百威讯科技有限责任公司 Domestic multifunctional intelligent robot
CN205193829U (en) * 2015-11-30 2016-04-27 北京光年无限科技有限公司 Intelligent robot system
CN105808774A (en) * 2016-03-28 2016-07-27 北京小米移动软件有限公司 Information providing method and device
CN105868827A (en) * 2016-03-25 2016-08-17 北京光年无限科技有限公司 Multi-mode interaction method for intelligent robot, and intelligent robot
CN105912530A (en) * 2016-04-26 2016-08-31 北京光年无限科技有限公司 Intelligent robot-oriented information processing method and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication
Application publication date: 20170405