CN104317389A - Method and device for identifying character role through movement - Google Patents


Info

Publication number
CN104317389A
CN104317389A (application CN201410490405.8A)
Authority
CN
China
Prior art keywords
movement locus
action
motion characteristic
locus model
database
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410490405.8A
Other languages
Chinese (zh)
Other versions
CN104317389B (en)
Inventor
郑战海
Current Assignee
Guangdong Genius Technology Co Ltd
Original Assignee
Guangdong Genius Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Genius Technology Co Ltd
Priority claimed from CN201410490405.8A
Publication of CN104317389A
Application granted
Publication of CN104317389B
Legal status: Active
Anticipated expiration

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 — Input arrangements for transferring data to be processed into a form capable of being handled by the computer; output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 — Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011 — Arrangements for interaction with the human body, e.g. for user immersion in virtual reality

Abstract

The invention provides a method and a device for identifying a character role through movement, aiming to solve the technical problems that traditional intelligent wearable equipment has a single function and low value. The method comprises the following steps: obtaining the motion features of a movement when a user performs the movement; judging whether a movement trajectory model matching the motion features exists in a database; and, if such a model exists, displaying the media features of the character role corresponding to the matching movement trajectory model. Users, especially children, can thus imitate the classic behavior of a cartoon character, turning the intelligent wearable equipment into a motion prop for imitating cartoon characters, which enhances the function of the equipment and improves its use value.

Description

Method and apparatus for identifying a character role through an action
Technical field
The invention belongs to the field of smart wearables, and in particular relates to a method and apparatus for identifying a character role through an action.
Background technology
A smart wearable device is the general name for everyday wearable items - such as glasses, gloves, watches, clothing, and shoes - that have been given intelligent functions through wearable technology. In the broad sense, smart wearable devices include fully featured, larger devices that can realize all or part of their functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on a single class of application and must be used together with other equipment such as a smartphone, for example the various smart bracelets for sign monitoring, smart jewelry, and so on. As technology progresses and user demands shift, the form and application focus of smart wearable devices keep changing.
In general, the individual components of a smart wearable device (for example, the screen) are small and often lack touch functionality, and the device as a whole is also small. As a result, existing smart wearable devices provide only their original functions, such as positioning, health monitoring, or event notification, but lack further functions such as entertainment, which prevents them from delivering more value.
Summary of the invention
The object of the present invention is to provide a method and apparatus for identifying a character role through an action, so as to solve the technical problem that existing smart wearable devices have a single function and low value.
The present invention is implemented as follows. A method for identifying a character role through an action comprises:
obtaining a motion feature of an action when a user performs the action;
judging whether a movement trajectory model matching the motion feature exists in a database; and
if a movement trajectory model matching the motion feature exists in the database, displaying the media feature of the character role corresponding to the matching movement trajectory model.
Another object of the present invention is to provide an apparatus for identifying a character role through an action, the apparatus comprising:
an acquisition module for obtaining a motion feature of an action when a user performs the action;
a judging module for judging whether a movement trajectory model matching the motion feature exists in a database; and
a display module for displaying, if a movement trajectory model matching the motion feature exists in the database, the media feature of the character role corresponding to the matching movement trajectory model.
As can be seen from the above embodiments of the invention, by obtaining the motion feature of an action performed by the user and, when a matching movement trajectory model exists in the database, displaying the media feature of the corresponding character role, users - especially children - can imitate the classic behavior of a cartoon character. The smart wearable device thus becomes a motion prop for imitating cartoon characters, which enhances its function and improves its use value.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the method for identifying a character role through an action provided by Embodiment 1 of the present invention;
Fig. 2 is a schematic flowchart of the method for identifying a character role through an action provided by Embodiment 2 of the present invention;
Fig. 3 is a schematic flowchart of the method for identifying a character role through an action provided by Embodiment 3 of the present invention;
Fig. 4 is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 4 of the present invention;
Fig. 5 is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 5 of the present invention;
Fig. 6 is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 6 of the present invention;
Fig. 7 is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 7 of the present invention;
Fig. 8-a is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 8 of the present invention;
Fig. 8-b is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 9 of the present invention;
Fig. 8-c is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 10 of the present invention;
Fig. 8-d is a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 11 of the present invention.
Detailed description of the embodiments
To make the object, technical solution, and beneficial effects of the present invention clearer, the present invention is further described below in conjunction with the drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
An embodiment of the present invention provides a method for identifying a character role through an action. The method comprises: obtaining a motion feature of an action when a user performs the action; judging whether a movement trajectory model matching the motion feature exists in a database; and, if such a model exists, displaying the media feature of the character role corresponding to the matching movement trajectory model. An embodiment of the present invention also provides a corresponding apparatus for identifying a character role through an action. Both are described in detail below.
Referring to Fig. 1, the implementation flow of the method for identifying a character role through an action provided by Embodiment 1 of the present invention mainly comprises the following steps S101 to S103:
S101: obtain a motion feature of the action when the user performs an action.
In this embodiment, the motion feature of the user's action does not refer to all components of the action, but to the combination of key sub-actions in the movement trajectory of the action.
S102: judge whether a movement trajectory model matching the motion feature obtained in step S101 exists in the database.
In this embodiment, movement trajectory models of certain actions are stored in the database in advance - for example, the trajectory model of the fairy's wand-waving action, the trajectory model of the bald woodcutter character's back-and-forth sawing action, or the trajectory models of classic moves from music videos such as "Gangnam Style" and "griggles".
It should be noted that the movement trajectory models in the database may be stored in a local database, or in the database of a server on the network side.
S103: if a movement trajectory model matching the motion feature exists in the database, display the media feature of the character role corresponding to the matching movement trajectory model.
In this embodiment, the media feature of the character role corresponding to a movement trajectory model may be a picture, audio, video, or image of that character role.
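The S101-S103 flow can be sketched as a simple lookup, under the assumption that a motion feature is represented as an ordered sequence of key sub-action names. All names here (`MODEL_DB`, `lookup_media`, the sub-action labels, the media file names) are illustrative and do not come from the patent:

```python
# Hypothetical database mapping movement trajectory models (as tuples of
# key sub-actions) to the media feature of the corresponding character role.
MODEL_DB = {
    ("draw_circle", "point_at_center"): {"character": "fairy", "media": "fairy_wand.mp4"},
    ("saw_forward", "saw_back"): {"character": "woodcutter", "media": "sawing.gif"},
}

def lookup_media(motion_feature):
    """Return the media feature of the matching model, or None if no model matches."""
    return MODEL_DB.get(tuple(motion_feature))

print(lookup_media(["draw_circle", "point_at_center"]))  # match -> fairy media
print(lookup_media(["jump"]))                            # no match -> None
```

In practice step S102 would use a similarity threshold rather than exact equality (see Embodiment 2); the exact-match dictionary above only illustrates the data flow.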
As can be seen from the method illustrated in Fig. 1, by obtaining the motion feature of an action performed by the user and, when a matching movement trajectory model exists in the database, displaying the media feature of the corresponding character role, users - especially children - can imitate the classic behavior of a cartoon character. The smart wearable device thus becomes a motion prop for imitating cartoon characters, which enhances its function and improves its use value.
Referring to Fig. 2, the implementation flow of the method for identifying a character role through an action provided by Embodiment 2 of the present invention mainly comprises the following steps S201 to S204:
S201: detect the movement trajectory of the action when the user performs an action.
In this embodiment, the movement trajectory of the user's action includes information such as the path, the number of repetitions, and the speed of the action; in concrete detection, sensors such as an acceleration sensor may be used.
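As a toy illustration of this detection step, the sketch below derives path length and mean speed from raw accelerometer samples. Real devices would calibrate the sensor and filter noise before integrating; the crude integration and all names here are hypothetical, not the patent's actual processing:

```python
import math

def summarize_trajectory(samples, dt=0.02):
    """Integrate (ax, ay, az) samples into speed and accumulate path length."""
    vx = vy = vz = 0.0
    path = 0.0
    speeds = []
    for ax, ay, az in samples:
        vx += ax * dt  # crude velocity integration, for illustration only
        vy += ay * dt
        vz += az * dt
        speed = math.sqrt(vx * vx + vy * vy + vz * vz)
        speeds.append(speed)
        path += speed * dt
    return {"path_length": path, "mean_speed": sum(speeds) / len(speeds)}

# Constant 1 m/s^2 acceleration along x for 5 one-second steps:
print(summarize_trajectory([(1.0, 0.0, 0.0)] * 5, dt=1.0))
# -> {'path_length': 15.0, 'mean_speed': 3.0}
```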
S202: decompose the movement trajectory and extract the combination of key sub-actions in the trajectory as the motion feature of the action.
As mentioned above, the motion feature of the user's action does not refer to all components of the action, but to the combination of key sub-actions in its movement trajectory. In other words, for a user's action - particularly a complex one - the whole movement trajectory is not taken as the motion feature; instead, the combination of a few important, classic sub-actions is taken as the motion feature, and the transitional movements between those sub-actions can be ignored.
In this embodiment, the motion feature can be extracted by decomposing the movement trajectory. For example, suppose the user intends to perform the fairy's wand-waving action. In the cartoon, this action generally includes two important, classic sub-actions: first drawing a circle, then pointing once toward the middle of the circle. The movement trajectory of the user's action is therefore decomposed, and the combination of key sub-actions - "first draw a circle, then point once toward the middle of the circle" - is extracted as the motion feature of the action.
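Assuming the trajectory has already been segmented into named sub-actions, the extraction step reduces to keeping the key sub-actions and dropping the transitional ones. The labels below (`raise_arm`, `draw_circle`, and so on) are invented for illustration:

```python
# Hypothetical set of key sub-actions known to the system.
KEY_SUB_ACTIONS = {"draw_circle", "point_at_center", "saw_forward", "saw_back"}

def extract_motion_feature(sub_actions):
    """Keep the key sub-actions in order; transitional moves are ignored."""
    return [s for s in sub_actions if s in KEY_SUB_ACTIONS]

# "raise_arm" and "lower_arm" are transitional movements and are discarded:
feature = extract_motion_feature(["raise_arm", "draw_circle", "lower_arm", "point_at_center"])
print(feature)  # ['draw_circle', 'point_at_center']
```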
S203: judge whether a movement trajectory model matching the motion feature exists in the database.
In this embodiment, this judgment may be made by comparing the combination of key sub-actions in the movement trajectory with the movement trajectory models in the database: if the similarity between the combination of key sub-actions and a movement trajectory model in the database reaches a preset threshold, it is determined that a movement trajectory model matching the motion feature exists in the database.
Taking the wand-waving example of step S202 again: the combination of key sub-actions - "first draw a circle, then point once toward the middle of the circle" - is compared with the movement trajectory models in the database. If it is found to be identical to the fairy's wand-waving trajectory model in the database, or its deviation from that model is within the preset threshold - for example, the circle the user draws is not perfectly round but roughly circular, or the user points not at the exact center of the circle but at a location near the center - then it is determined that the database contains a movement trajectory model matching the motion feature of the user's action, namely the fairy's wand-waving trajectory model.
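The threshold test can be sketched as follows, under the assumption that both the extracted feature and the stored model are equal-length sequences of 2-D points. The similarity metric and the threshold value are invented for illustration; the patent does not specify either:

```python
import math

def similarity(feature_points, model_points):
    """1 / (1 + mean point distance): equals 1.0 when the trajectories coincide."""
    d = sum(math.dist(p, q) for p, q in zip(feature_points, model_points))
    return 1.0 / (1.0 + d / len(model_points))

def matches(feature_points, model_points, threshold=0.8):
    """A roughly-drawn circle still matches; a different action does not."""
    return similarity(feature_points, model_points) >= threshold

model = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]
near = [(0.1, 0.0), (1.1, 0.0), (1.1, 1.0)]   # imperfect but roughly the same shape
far = [(5.0, 5.0), (6.0, 5.0), (6.0, 6.0)]    # a different action entirely
print(matches(model, model))  # True (identical)
print(matches(near, model))   # True (deviation within the threshold)
print(matches(far, model))    # False
```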
S204: if a movement trajectory model matching the motion feature exists in the database, display the media feature of the character role corresponding to the matching movement trajectory model.
In this embodiment, the media feature of the character role corresponding to a movement trajectory model may be a picture, audio, video, or image of that character role.
As can be seen from the method illustrated in Fig. 2, by obtaining the motion feature of an action performed by the user and, when a matching movement trajectory model exists in the database, displaying the media feature of the corresponding character role, users - especially children - can imitate the classic behavior of a cartoon character. The smart wearable device thus becomes a motion prop for imitating cartoon characters, which enhances its function and improves its use value.
Referring to Fig. 3, the implementation flow of the method for identifying a character role through an action provided by Embodiment 3 of the present invention mainly comprises the following steps S301 to S305:
S301: establish movement trajectory models.
The actions of certain classic character roles - for example, the fairy's wand-waving action or the bald woodcutter character's back-and-forth sawing action - as well as the moves of contemporary pop music videos such as "Gangnam Style" and "griggles", follow action patterns with which most people have a certain familiarity. Therefore, in this embodiment, the trajectories of these actions can be analyzed and their key sub-actions abstracted and extracted to carry out movement trajectory modeling, obtaining the movement trajectory models of these actions.
Further, a movement trajectory model can be associated with the media feature of a character role, so that each movement trajectory model corresponds one-to-one with the media feature of one character role; the media feature may be a picture, audio, video, or image of the character role.
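The one-to-one association can be sketched as a small container in which binding a model to a new media feature replaces the previous binding. The class and method names, and the example data, are hypothetical:

```python
class TrajectoryModelDB:
    """Toy store binding each movement trajectory model to exactly one media feature."""

    def __init__(self):
        self._media_by_model = {}

    def associate(self, model, media):
        """Bind one model to one media feature; a later call overwrites the binding."""
        self._media_by_model[tuple(model)] = media

    def media_for(self, model):
        return self._media_by_model.get(tuple(model))

db = TrajectoryModelDB()
db.associate(["draw_circle", "point_at_center"], "fairy_wand.mp4")
print(db.media_for(["draw_circle", "point_at_center"]))  # fairy_wand.mp4
print(db.media_for(["saw_forward"]))                     # None: no model registered
```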
S302: store the movement trajectory models established in step S301 in a local database, or upload them to network storage.
After the movement trajectory models are established, they are stored in the local database or uploaded to network storage; models stored on the network can be downloaded to the local database when motion features need to be matched against the trajectory models.
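The download-on-demand behaviour can be sketched as a local cache backed by a simulated network side. The classes below are illustrative stand-ins, not the patent's actual design:

```python
class LocalModelCache:
    """Local database that pulls trajectory models from the network on demand."""

    def __init__(self, network_side):
        self._network = dict(network_side)  # stand-in for a remote server's database
        self._local = {}

    def get(self, name):
        """Serve from the local database, downloading from the network if needed."""
        if name not in self._local and name in self._network:
            self._local[name] = self._network[name]  # simulated download
        return self._local.get(name)

cache = LocalModelCache({"wand_wave": ("draw_circle", "point_at_center")})
print(cache.get("wand_wave"))  # fetched from the "network" on first use
print(cache.get("unknown"))    # None: neither local nor on the network
```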
It should be noted that the established movement trajectory models need not all be models of the actions of classic character roles. In this embodiment, the user may also invent an action of their own; the smart wearable device records this action together with its synchronized background music and associates the two, after which the action is likewise saved in the local database as a movement trajectory model, or uploaded to network storage.
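This self-made-action path can be sketched as a store that saves the recorded trajectory together with its synchronized background music, optionally marking it for upload. All names here are hypothetical:

```python
class CustomActionStore:
    """Toy store for user-invented actions paired with their background music."""

    def __init__(self):
        self.local = {}
        self.uploaded = []

    def save(self, name, trajectory, background_music, upload=False):
        record = {"trajectory": trajectory, "music": background_music}
        self.local[name] = record          # saved locally as a trajectory model
        if upload:
            self.uploaded.append(name)     # stand-in for a network upload
        return record

store = CustomActionStore()
store.save("my_dance", ["spin", "clap"], "my_song.mp3", upload=True)
print("my_dance" in store.local, store.uploaded)  # True ['my_dance']
```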
S303: obtain a motion feature of the action when the user performs an action.
In this embodiment, the motion feature of the user's action does not refer to all components of the action, but to the combination of key sub-actions in the movement trajectory of the action.
S304: judge whether a movement trajectory model matching the motion feature obtained in step S303 exists in the database.
In this embodiment, movement trajectory models of certain actions are stored in the database in advance - for example, the trajectory model of the fairy's wand-waving action, the trajectory model of the bald woodcutter character's back-and-forth sawing action, or the trajectory models of classic moves from music videos such as "Gangnam Style" and "griggles".
It should be noted that the movement trajectory models in the database may be stored in a local database, or in the database of a server on the network side.
S305: if a movement trajectory model matching the motion feature exists in the database, display the media feature of the character role corresponding to the matching movement trajectory model.
In this embodiment, the media feature of the character role corresponding to a movement trajectory model may be a picture, audio, video, or image of that character role.
As can be seen from the method illustrated in Fig. 3, by obtaining the motion feature of an action performed by the user and, when a matching movement trajectory model exists in the database, displaying the media feature of the corresponding character role, users - especially children - can imitate the classic behavior of a cartoon character. The smart wearable device thus becomes a motion prop for imitating cartoon characters, which enhances its function and improves its use value.
Considering that objective factors may occasionally cause the action the user performs to differ greatly from the movement trajectory models, in the methods of Figs. 1 to 3, if it is judged that no movement trajectory model in the database matches the motion feature of the user's action, the method further comprises prompting the user to perform the action again, so as to judge anew whether the database contains a movement trajectory model matching the motion feature of the repeated action; the prompt may be a text prompt or a voice prompt.
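The retry behaviour described above can be sketched as a loop that prompts after each miss and re-checks the next captured action. `recognize`, `match_fn`, `prompt_fn`, and the bounded retry count are illustrative names and assumptions, not the patent's terms:

```python
def recognize(captured_features, match_fn, prompt_fn, max_tries=3):
    """Return the matched media, prompting for a retry after each miss."""
    for tries, feature in enumerate(captured_features, start=1):
        media = match_fn(feature)
        if media is not None:
            return media
        if tries >= max_tries:
            break
        prompt_fn("No matching action found - please try again")  # text or voice
    return None

prompts = []
db = {("draw_circle", "point_at_center"): "fairy_wand.mp4"}
result = recognize(
    [("wobbly",), ("draw_circle", "point_at_center")],  # first attempt fails
    match_fn=db.get,
    prompt_fn=prompts.append,  # stand-in for a text/voice prompt
)
print(result, len(prompts))  # fairy_wand.mp4 1
```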
Referring to Fig. 4, a schematic structural diagram of the apparatus for identifying a character role through an action provided by Embodiment 4 of the present invention is shown. For convenience of explanation, only the parts relevant to the embodiment of the present invention are illustrated. The apparatus of Fig. 4 mainly comprises an acquisition module 401, a judging module 402, and a display module 403; each functional module is described in detail as follows.
The acquisition module 401 is used for obtaining a motion feature of the action when the user performs an action.
In this embodiment, the motion feature of the user's action does not refer to all components of the action, but to the combination of key sub-actions in the movement trajectory of the action.
The judging module 402 is used for judging whether a movement trajectory model matching the motion feature obtained by the acquisition module 401 exists in the database.
In this embodiment, movement trajectory models of certain actions are stored in the database in advance - for example, the trajectory model of the fairy's wand-waving action, the trajectory model of the bald woodcutter character's back-and-forth sawing action, or the trajectory models of classic moves from music videos such as "Gangnam Style" and "griggles".
It should be noted that the movement trajectory models in the database may be stored in a local database, or in the database of a server on the network side.
The display module 403 is used for displaying, if a movement trajectory model matching the motion feature exists in the database, the media feature of the character role corresponding to the matching movement trajectory model.
In this embodiment, the media feature of the character role corresponding to a movement trajectory model may be a picture, audio, video, or image of that character role.
It should be noted that in the apparatus of Fig. 4, the division into functional modules is only an example. In practical applications, the above functions may be allocated to different functional modules as required - for example, in view of the configuration requirements of the corresponding hardware or the convenience of software implementation - that is, the internal structure of the apparatus may be divided into different functional modules to complete all or part of the functions described above. Moreover, in practice, a functional module in this embodiment may be implemented by corresponding hardware, or by corresponding hardware executing corresponding software. For example, the aforementioned acquisition module may be hardware, such as an acquirer, that performs the step of obtaining the motion feature of the action when the user performs it, or a general-purpose processor or other hardware device capable of executing a corresponding computer program to complete that function. Likewise, the aforementioned judging module may be hardware, such as a decider, that performs the step of judging whether a movement trajectory model matching the motion feature exists in the database, or a general-purpose processor or other hardware device capable of executing a corresponding computer program to complete that function. (This principle of description applies to every embodiment provided in this specification.)
The acquisition module 401 of Fig. 4 may comprise a detection unit 501 and a feature extraction unit 502, as in the apparatus for identifying a character role through an action provided by Embodiment 5 of the present invention and shown in Fig. 5, wherein:
The detection unit 501 is used for detecting the movement trajectory of the action when the user performs the action.
In this embodiment, the movement trajectory of the user's action includes information such as the path, the number of repetitions, and the speed of the action; in concrete detection, the detection unit 501 may use sensors such as an acceleration sensor.
The feature extraction unit 502 is used for decomposing the movement trajectory and extracting the combination of key sub-actions in the trajectory as the motion feature of the action.
As mentioned above, the motion feature of the user's action does not refer to all components of the action, but to the combination of key sub-actions in its movement trajectory. In other words, for a user's action - particularly a complex one - the whole movement trajectory is not taken as the motion feature; instead, the combination of a few important, classic sub-actions is taken as the motion feature, and the transitional movements between those sub-actions can be ignored.
In this embodiment, the feature extraction unit 502 can extract the motion feature by decomposing the movement trajectory. For example, suppose the user intends to perform the fairy's wand-waving action. In the cartoon, this action generally includes two important, classic sub-actions: first drawing a circle, then pointing once toward the middle of the circle. The feature extraction unit 502 therefore decomposes the movement trajectory of the user's action and extracts the combination of key sub-actions - "first draw a circle, then point once toward the middle of the circle" - as the motion feature of the action.
The judging module 402 of Fig. 5 may comprise a comparison unit 601 and a determination unit 602, as in the apparatus for identifying a character role through an action provided by Embodiment 6 of the present invention and shown in Fig. 6, wherein:
The comparison unit 601 is used for comparing the combination of key sub-actions in the movement trajectory with the movement trajectory models in the database.
The determination unit 602 is used for determining that a movement trajectory model matching the motion feature exists in the database if the similarity between the combination of key sub-actions in the movement trajectory and a movement trajectory model reaches a preset threshold.
Taking the wand-waving example of Fig. 5 again: the comparison unit 601 compares the combination of key sub-actions - "first draw a circle, then point once toward the middle of the circle" - with the movement trajectory models in the database. If it is found to be identical to the fairy's wand-waving trajectory model in the database, or its deviation from that model is within the preset threshold - for example, the circle the user draws is not perfectly round but roughly circular, or the user points not at the exact center of the circle but at a location near the center - the determination unit 602 can determine that the database contains a movement trajectory model matching the motion feature of the user's action, namely the fairy's wand-waving trajectory model.
The device of accompanying drawing 4 example can also comprise model building module 701 and model storage module 702, as shown in Figure 7 the device by action recognition character that provides of the embodiment of the present invention seven, wherein:
Model building module 701, sets up movement locus model before obtaining the motion characteristic of described action when user makes action for acquisition module 401.
Due to the action of some classical characters, such as, the action of pulling back is sawed wood by force in the action that beautiful faery waves one's magic wand, shaven head, or the action of contemporary pop music TV, such as Music Television (MTV) " south of the River Style ", " griggles " etc., possess action rule majority being had to certain familiarity.Therefore, in the present embodiment, model building module 701 can be analyzed the track of these actions, and movement locus modeling is carried out in abstract extraction classons action wherein, obtains the movement locus model of these actions.Further, movement locus model can associate with the media characteristic of character by model building module 701, make the media characteristic one_to_one corresponding of a movement locus model and a character, media characteristic can be the picture of character, audio frequency, video or image etc.
Model storage module 702, for the movement locus model storage set up by model building module 701 in local data base or be uploaded to the network storage.
After model building module 701 sets up movement locus model, can by set up movement locus model storage in local data base or be uploaded to the network storage, wherein, be stored in the movement locus model of network, when needs carry out motion characteristic and movement locus Model Matching, can from web download to local data base.
It should be noted that, the movement locus model that above-mentioned model building module 701 is set up also may not be the movement locus model of the action of some classical characters completely.In the present embodiment, the action of user's do-it-yourself, Intelligent worn device is recorded by This move and with the synchronous background music of this action, and action is associated with synchronous background music, then, This move is also saved in local data base as a movement locus model or is uploaded to the network storage by model building module 701.
The device illustrated in FIG. 4 to FIG. 7 may further comprise a reminding module 801, as shown in FIG. 7, in the device for recognizing a character through an action provided by any of Embodiments 8 to 11 of the present invention. Reminding module 801 is configured to prompt the user to make the action again when judge module 402 judges that no movement locus model matching the motion characteristic exists in the database. This embodiment takes into account that occasional objective factors may cause the action made by the user to differ greatly from the movement locus models, so that the device momentarily fails to match a suitable movement locus model. When judge module 402 judges that no matching movement locus model exists in the database, reminding module 801 prompts the user to make the action again, so that judge module 402 can judge anew whether the database contains a movement locus model matching the motion characteristic of the action made again by the user. The prompt may be a text prompt or a voice prompt.
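The match-or-retry behaviour of judge module 402 and reminding module 801 can be sketched as follows. The patent specifies only that similarity against a preset threshold decides the match; the concrete similarity measure (`difflib.SequenceMatcher` here) and the threshold value 0.8 are assumptions for illustration:

```python
from difflib import SequenceMatcher

THRESHOLD = 0.8  # "preset threshold" from the embodiment; value is illustrative

def similarity(sub_actions, model):
    """Similarity between two ordered combinations of sub-actions (0..1)."""
    return SequenceMatcher(None, sub_actions, model).ratio()

def match_or_prompt(sub_actions, database, prompt):
    """Return the name of the best-matching movement locus model, or
    prompt the user to make the action again when nothing matches."""
    best = max(database,
               key=lambda name: similarity(sub_actions, database[name]),
               default=None)
    if best is not None and similarity(sub_actions, database[best]) >= THRESHOLD:
        return best
    prompt("Please make the action again")  # text or voice prompt
    return None

db = {"wand": ["raise", "flick", "circle"],
      "saw": ["push", "pull", "push", "pull"]}
msgs = []
assert match_or_prompt(["raise", "flick", "circle"], db, msgs.append) == "wand"
assert match_or_prompt(["jump"], db, msgs.append) is None
assert msgs == ["Please make the action again"]
```

Passing the prompt as a callback keeps the sketch agnostic about whether the device delivers it as text or as voice.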
It should be noted that the information interaction and implementation details among the modules/units of the above device are based on the same conception as the method embodiments of the present invention and bring the same technical effects; for specifics, refer to the description in the method embodiments, which is not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps in the methods of the above embodiments can be completed by relevant hardware under the instruction of a program, and the program may be stored in a computer-readable storage medium. The storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The method and apparatus for recognizing a character through an action provided by the embodiments of the present invention have been described in detail above. Specific examples are applied herein to set forth the principles and implementations of the present invention, and the description of the above embodiments is intended only to help understand the method of the present invention and its core idea. Meanwhile, those of ordinary skill in the art may, according to the idea of the present invention, make changes to the specific implementations and the scope of application. In summary, the contents of this description should not be construed as limiting the present invention.

Claims (10)

1. A method for recognizing a character through an action, characterized in that the method comprises:
obtaining a motion characteristic of an action when a user makes the action;
judging whether a movement locus model matching the motion characteristic exists in a database; and
if a movement locus model matching the motion characteristic exists in the database, displaying a media characteristic of the character corresponding to the movement locus model matching the motion characteristic.
2. the method for claim 1, is characterized in that, the motion characteristic of described action when described acquisition user makes action, comprising:
Detect the movement locus of described action when described user makes action;
Described movement locus is decomposed, extracts the motion characteristic of combination as described action of canonical dissection in described movement locus.
3. The method of claim 2, characterized in that the judging whether a movement locus model matching the motion characteristic exists in a database comprises:
comparing the combination of canonical sub-actions in the movement locus with the movement locus models in the database; and
if the similarity between the combination of canonical sub-actions in the movement locus and a movement locus model reaches a preset threshold, determining that a movement locus model matching the motion characteristic exists in the database.
4. the method for claim 1, is characterized in that, before the motion characteristic of described action when described acquisition user makes action, also comprises:
Set up described movement locus model;
By described movement locus model storage in local data base or be uploaded to the network storage.
5. The method of any one of claims 1 to 4, characterized in that, if it is judged that no movement locus model matching the motion characteristic exists in the database, the method further comprises:
prompting the user to make the action again.
6. A device for recognizing a character through an action, characterized in that the device comprises:
an acquisition module, configured to obtain a motion characteristic of an action when a user makes the action;
a judge module, configured to judge whether a movement locus model matching the motion characteristic exists in a database; and
a display module, configured to display, if a movement locus model matching the motion characteristic exists in the database, a media characteristic of the character corresponding to the movement locus model matching the motion characteristic.
7. The device of claim 6, characterized in that the acquisition module comprises:
a detecting unit, configured to detect a movement locus of the action when the user makes the action; and
a feature extraction unit, configured to decompose the movement locus and extract a combination of canonical sub-actions in the movement locus as the motion characteristic of the action.
8. The device of claim 7, characterized in that the judge module comprises:
a contrast unit, configured to compare the combination of canonical sub-actions in the movement locus with the movement locus models in the database; and
a determining unit, configured to determine, if the similarity between the combination of canonical sub-actions in the movement locus and a movement locus model reaches a preset threshold, that a movement locus model matching the motion characteristic exists in the database.
9. The device of claim 6, characterized in that the device further comprises:
a model building module, configured to build the movement locus model before the motion characteristic of the action is obtained when the user makes the action; and
a model storage module, configured to store the movement locus model in a local database or upload it to network storage.
10. The device of any one of claims 6 to 9, characterized in that the device further comprises:
a reminding module, configured to prompt the user to make the action again when the judge module judges that no movement locus model matching the motion characteristic exists in the database.
CN201410490405.8A 2014-09-23 2014-09-23 Method and apparatus for recognizing a character through an action Active CN104317389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410490405.8A CN104317389B (en) Method and apparatus for recognizing a character through an action

Publications (2)

Publication Number Publication Date
CN104317389A true CN104317389A (en) 2015-01-28
CN104317389B CN104317389B (en) 2017-12-26

Family

ID=52372628

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410490405.8A Active CN104317389B (en) Method and apparatus for recognizing a character through an action

Country Status (1)

Country Link
CN (1) CN104317389B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102970606A (en) * 2012-12-04 2013-03-13 深圳Tcl新技术有限公司 Television program recommending method and device based on identity identification
CN103248933A (en) * 2013-04-03 2013-08-14 深圳Tcl新技术有限公司 Interactive method and remote control device based on user identity for interaction
CN103544497A (en) * 2013-04-16 2014-01-29 Tcl集团股份有限公司 Identification method and identification system for mode of intelligent equipment
CN103576902A (en) * 2013-09-18 2014-02-12 酷派软件技术(深圳)有限公司 Method and system for controlling terminal equipment
CN103646425A (en) * 2013-11-20 2014-03-19 深圳先进技术研究院 A method and a system for body feeling interaction
CN104050461A (en) * 2014-06-30 2014-09-17 苏州大学 Complex 3D motion recognition method and device

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850651B (en) * 2015-05-29 2019-06-18 小米科技有限责任公司 Information uploading method and device and information-pushing method and device
CN104850651A (en) * 2015-05-29 2015-08-19 小米科技有限责任公司 Information reporting method and device and information pushing method and device
CN105607095A (en) * 2015-07-31 2016-05-25 宇龙计算机通信科技(深圳)有限公司 Terminal control method and terminal
CN105975054A (en) * 2015-11-23 2016-09-28 乐视网信息技术(北京)股份有限公司 Method and device for information processing
CN107016347A (en) * 2017-03-09 2017-08-04 腾讯科技(深圳)有限公司 A kind of body-sensing action identification method, device and system
CN106931968A (en) * 2017-03-27 2017-07-07 广东小天才科技有限公司 A kind of method and device for monitoring student classroom performance
CN107146386A (en) * 2017-05-05 2017-09-08 广东小天才科技有限公司 A kind of anomaly detection method and device, user equipment
CN108897231A (en) * 2018-02-08 2018-11-27 深圳迈睿智能科技有限公司 behavior prediction system and behavior prediction method
CN108815845A (en) * 2018-05-15 2018-11-16 百度在线网络技术(北京)有限公司 The information processing method and device of human-computer interaction, computer equipment and readable medium
CN108815845B (en) * 2018-05-15 2019-11-26 百度在线网络技术(北京)有限公司 The information processing method and device of human-computer interaction, computer equipment and readable medium
CN108986191A (en) * 2018-07-03 2018-12-11 百度在线网络技术(北京)有限公司 Generation method, device and the terminal device of figure action
CN108986191B (en) * 2018-07-03 2023-06-27 百度在线网络技术(北京)有限公司 Character action generation method and device and terminal equipment
CN111665770A (en) * 2020-07-01 2020-09-15 武汉华自阳光科技有限公司 Rural safe drinking equipment control system and method

Also Published As

Publication number Publication date
CN104317389B (en) 2017-12-26

Similar Documents

Publication Publication Date Title
CN104317389A (en) Method and device for identifying character role through movement
CN110418208B (en) Subtitle determining method and device based on artificial intelligence
US10210002B2 (en) Method and apparatus of processing expression information in instant communication
CN104252861B (en) Video speech conversion method, device and server
US20180130496A1 (en) Method and system for auto-generation of sketch notes-based visual summary of multimedia content
CN108958610A (en) Special efficacy generation method, device and electronic equipment based on face
WO2016062073A1 (en) Instant messaging terminal and information translation method and apparatus therefor
CN110246512A (en) Sound separation method, device and computer readable storage medium
CN105224581B (en) The method and apparatus of picture are presented when playing music
CN107864410B (en) Multimedia data processing method and device, electronic equipment and storage medium
CN104933028A (en) Information pushing method and information pushing device
JP2022541186A (en) Video processing method, device, electronic device and storage medium
US8924491B2 (en) Tracking message topics in an interactive messaging environment
CN104681023A (en) Information processing method and electronic equipment
JP2017515134A (en) Rich multimedia in response and response of digital personal digital assistant by replication
CN106653037B (en) Audio data processing method and device
US20140013192A1 (en) Techniques for touch-based digital document audio and user interface enhancement
US20220047954A1 (en) Game playing method and system based on a multimedia file
CN104346147A (en) Method and device for editing rhythm points of music games
CN104866275B (en) Method and device for acquiring image information
CN101105943A (en) Language aided expression system and its method
CN110750996A (en) Multimedia information generation method and device and readable storage medium
CN107291704A (en) Treating method and apparatus, the device for processing
WO2018098340A1 (en) Intelligent graphical feature generation for user content
US20230298628A1 (en) Video editing method and apparatus, computer device, and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant