CN104317389B - Method and apparatus for recognizing a character by its actions - Google Patents
Method and apparatus for recognizing a character by its actions
- Publication number
- CN104317389B CN104317389B CN201410490405.8A CN201410490405A CN104317389B CN 104317389 B CN104317389 B CN 104317389B CN 201410490405 A CN201410490405 A CN 201410490405A CN 104317389 B CN104317389 B CN 104317389B
- Authority
- CN
- China
- Prior art keywords
- action
- motion
- trajectory model
- motion trajectory
- characteristic
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Processing Or Creating Images (AREA)
Abstract
The present invention provides a method and apparatus for recognizing a character by its actions, intended to solve the technical problem that existing smart wearable devices are single-purpose and of limited value. The method includes: obtaining the motion features of an action when the user performs the action; determining whether a database contains a motion trajectory model that matches the motion features; and, if the database contains a motion trajectory model that matches the motion features, displaying the media feature of the character corresponding to the matching motion trajectory model. The present invention allows users, especially children, to imitate the classic behavior of cartoon characters, turning the smart wearable device into a prop for imitating cartoon characters, thereby enhancing the device's functionality and improving its use value.
Description
Technical field
The invention belongs to the field of smart wearables, and in particular relates to a method and apparatus for recognizing a character by its actions.
Background technology
Smart wearable devices are the general term for wearable equipment, such as glasses, gloves, watches, clothing, and shoes, developed by applying wearable technology to the intelligent design of everyday wear. Smart wearables in the broad sense include full-featured, larger devices that can realize all or part of their functions without relying on a smartphone, for example smart watches or smart glasses, as well as devices that focus on a single class of application and must be used together with other equipment such as a smartphone, for example fitness bands and smart jewelry for monitoring vital signs. As technology advances and user demands evolve, the form and application focus of smart wearables keep changing.
In general, the individual components of a smart wearable device (for example, the screen) are small and often lack touch capability, and the device as a whole is also compact. Consequently, existing smart wearables provide only their basic functions, such as positioning, health monitoring, or event reminders, and are not equipped with further functions such as entertainment; as a result, they have yet to deliver greater value.
Summary of the invention
An object of the present invention is to provide a method and apparatus for recognizing a character by its actions, intended to solve the technical problem that existing smart wearable devices are single-purpose and of limited value.
The present invention is achieved as follows. A method for recognizing a character by its actions, the method including:
obtaining the motion features of an action when the user performs the action;
determining whether a database contains a motion trajectory model that matches the motion features;
if the database contains a motion trajectory model that matches the motion features, displaying the media feature of the character corresponding to the matching motion trajectory model.
Another object of the present invention is to provide an apparatus for recognizing a character by its actions, the apparatus including:
an acquisition module for obtaining the motion features of an action when the user performs the action;
a judgment module for determining whether a database contains a motion trajectory model that matches the motion features;
a display module for displaying, when the database contains a motion trajectory model that matches the motion features, the media feature of the character corresponding to the matching motion trajectory model.
As can be seen from the embodiments of the present invention, by obtaining the motion features of the user's action and, when the database contains a matching motion trajectory model, displaying the media feature of the character corresponding to that model, the invention allows users, especially children, to imitate the classic behavior of cartoon characters. The smart wearable device becomes a prop for imitating cartoon characters, which enhances its functionality and improves its use value.
Brief description of the drawings
Fig. 1 is a flow diagram of the method for recognizing a character by its actions provided by embodiment one of the present invention;
Fig. 2 is a flow diagram of the method for recognizing a character by its actions provided by embodiment two of the present invention;
Fig. 3 is a flow diagram of the method for recognizing a character by its actions provided by embodiment three of the present invention;
Fig. 4 is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment four of the present invention;
Fig. 5 is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment five of the present invention;
Fig. 6 is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment six of the present invention;
Fig. 7 is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment seven of the present invention;
Fig. 8-a is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment eight of the present invention;
Fig. 8-b is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment nine of the present invention;
Fig. 8-c is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment ten of the present invention;
Fig. 8-d is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment eleven of the present invention.
Detailed description of the embodiments
To make the purpose, technical solution, and beneficial effects of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the present invention and are not intended to limit it.
An embodiment of the present invention provides a method for recognizing a character by its actions. The method includes: obtaining the motion features of an action when the user performs the action; determining whether a database contains a motion trajectory model that matches the motion features; and, if it does, displaying the media feature of the character corresponding to the matching motion trajectory model. Embodiments of the present invention also provide a corresponding apparatus for recognizing a character by its actions. Each is described in detail below.
Referring to Fig. 1, which shows the implementation flow of the method for recognizing a character by its actions provided by embodiment one of the present invention; the method mainly includes steps S101 to S103:
S101: obtain the motion features of an action when the user performs the action.
In this embodiment, the motion features of the user's action do not refer to all of the component parts of the action, but to the combination of typical parts within the action's motion trajectory.
S102: determine whether the database contains a motion trajectory model that matches the motion features obtained in step S101.
In this embodiment, motion trajectory models of certain actions are stored in the database in advance, for example, the trajectory model of the fairy's wand-waving action, the trajectory model of Logger Vick's back-and-forth sawing action, and trajectory models of classic dance moves from music videos such as "Gangnam Style" and "Griggles".
Note that a motion trajectory model in the database may be stored either in a local database or in the database of a network-side server.
S103: if the database contains a motion trajectory model that matches the motion features, display the media feature of the character corresponding to the matching motion trajectory model.
In this embodiment, the media feature of the character corresponding to a motion trajectory model may be a picture, audio clip, video, or animation of the character.
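The flow of steps S101 to S103 can be pictured as a simple lookup. The following is a minimal sketch, not the patented implementation; the feature representation (a tuple of sub-action labels), the database contents, and all names are assumptions made for illustration:

```python
# Minimal sketch of steps S101-S103: obtain motion features, match them
# against stored trajectory models, and return the character's media feature.
# The model/feature representations here are illustrative assumptions.

# Pre-stored "database": motion trajectory model -> character media feature
TRAJECTORY_DB = {
    ("draw_circle", "tap_center"): {"character": "fairy", "media": "fairy_wand.mp4"},
    ("saw_forward", "saw_back"): {"character": "Logger Vick", "media": "sawing.png"},
}

def obtain_motion_features(raw_trajectory):
    """S101: reduce a raw trajectory to its combination of typical parts.
    Here the 'typical parts' are simply the non-transitional sub-actions."""
    return tuple(seg for seg in raw_trajectory if seg != "transition")

def recognize_character(raw_trajectory, db=TRAJECTORY_DB):
    """S102 + S103: match the features against the database; if a model
    matches, return the media feature of the corresponding character."""
    features = obtain_motion_features(raw_trajectory)
    entry = db.get(features)          # S102: is there a matching model?
    if entry is not None:             # S103: display the media feature
        return entry["media"]
    return None

media = recognize_character(["draw_circle", "transition", "tap_center"])
```

Here exact-key lookup stands in for model matching; embodiment two below describes the threshold-based comparison an actual device would need.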
As can be seen from the method for recognizing a character by its actions illustrated in Fig. 1, by obtaining the motion features of the user's action and, when the database contains a matching motion trajectory model, displaying the media feature of the character corresponding to that model, the method allows users, especially children, to imitate the classic behavior of cartoon characters. The smart wearable device becomes a prop for imitating cartoon characters, which enhances its functionality and improves its use value.
Referring to Fig. 2, which shows the implementation flow of the method for recognizing a character by its actions provided by embodiment two of the present invention; the method mainly includes steps S201 to S204:
S201: detect the motion trajectory of an action when the user performs the action.
In this embodiment, the motion trajectory of the user's action includes information such as the path of the action, the number of repetitions, and the speed; the detection itself can be carried out with sensors such as an acceleration sensor.
S202: decompose the motion trajectory and extract the combination of typical parts within it as the motion features of the action.
As stated earlier, the motion features of the user's action do not refer to all of the parts of the action, but to the combination of typical parts within the action's motion trajectory. In other words, for a user's action, particularly a complex one, it is not the whole motion trajectory that serves as the motion features, but the combination of a few important, classic sub-actions within the action; the transitional movements between these sub-actions can be ignored.
In this embodiment, the combination of typical parts within the motion trajectory can be extracted as the action's motion features by decomposing the trajectory. For example, suppose the user intends to perform the fairy's wand-waving move. Since the wand-waving action in the cartoon usually consists of first drawing a circle and then tapping toward the middle of the circle, two important, classic sub-actions, the motion trajectory of the user's action can be decomposed to extract the combination of typical parts, "first draw a circle, then tap toward its middle", as the motion features of the action.
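One way to picture step S202 is to segment a sensor trace into windows and keep only the significant ones as "typical parts". This is an illustrative sketch under stated assumptions, not the patent's method: the energy-above-threshold rule, the window size, and the use of a 1-D magnitude trace are all assumptions:

```python
# Illustrative sketch of S202: decompose a motion trajectory into segments
# and keep only 'typical parts' (here: windows with enough movement energy,
# standing in for the important, classic sub-actions). The threshold rule
# is an assumption made for demonstration.

def segment(samples, window=4):
    """Split a 1-D list of acceleration magnitudes into fixed windows."""
    return [samples[i:i + window] for i in range(0, len(samples), window)]

def extract_motion_features(samples, energy_threshold=2.0, window=4):
    """Keep the windows whose mean magnitude reaches the threshold;
    low-energy windows are treated as ignorable transitional movement."""
    features = []
    for seg in segment(samples, window):
        energy = sum(abs(x) for x in seg) / len(seg)
        if energy >= energy_threshold:
            features.append(tuple(seg))
    return features

# A trace with a strong "circle" stroke, a weak transition, and a strong "tap"
trace = [3.0, 3.2, 2.9, 3.1,   0.2, 0.1, 0.3, 0.2,   4.0, 4.1, 3.8, 4.2]
features = extract_motion_features(trace)   # keeps the two strong segments
```

A real device would segment by gesture boundaries rather than fixed windows, but the principle, discarding transitional movement and keeping classic sub-actions, is the same.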
S203: determine whether the database contains a motion trajectory model that matches the motion features.
In this embodiment, the determination can be made by comparing the combination of typical parts in the motion trajectory with the motion trajectory models in the database; if the similarity between the combination of typical parts and a motion trajectory model in the database reaches a preset threshold, it is determined that the database contains a motion trajectory model matching the motion features.
Taking the wand-waving example from step S202 again: the combination of typical parts in the motion trajectory, "first draw a circle, then tap toward its middle", is compared with the motion trajectory models in the database. If it is found to be identical to the fairy's wand-waving trajectory model in the database, or its similarity to that model reaches the preset threshold, for example, the circle the user drew is not perfectly round but is roughly circular, and the tap lands not exactly at the center of the circle but close to it, then it is determined that the database contains a motion trajectory model matching the motion features of the user's action, namely the fairy's wand-waving trajectory model.
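The threshold comparison in S203 can be sketched with any similarity measure; the patent does not specify one, so the cosine similarity, the 0.9 threshold, and the feature vectors below are assumptions chosen for illustration:

```python
# Sketch of S203: compare extracted features with stored trajectory models
# and accept a match when similarity reaches a preset threshold. Cosine
# similarity and the 0.9 threshold are illustrative assumptions.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def find_matching_model(features, models, threshold=0.9):
    """Return the name of the first model whose similarity to the
    features reaches the threshold, or None if nothing matches."""
    for name, model in models.items():
        if cosine_similarity(features, model) >= threshold:
            return name
    return None

MODELS = {
    "fairy_wand_wave": [1.0, 0.0, 1.0, 0.5],     # hypothetical model vectors
    "logger_vick_sawing": [0.0, 1.0, 0.0, 1.0],
}

# The user's circle is imperfect, so the vector only approximates the model,
# yet the similarity still clears the threshold.
match = find_matching_model([0.9, 0.1, 1.0, 0.6], MODELS)
```

This captures the key behavior the text describes: an imperfect circle and an off-center tap can still match, as long as the preset threshold is reached.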
S204: if the database contains a motion trajectory model that matches the motion features, display the media feature of the character corresponding to the matching motion trajectory model.
In this embodiment, the media feature of the character corresponding to a motion trajectory model may be a picture, audio clip, video, or animation of the character.
As can be seen from the method for recognizing a character by its actions illustrated in Fig. 2, by obtaining the motion features of the user's action and, when the database contains a matching motion trajectory model, displaying the media feature of the character corresponding to that model, the method allows users, especially children, to imitate the classic behavior of cartoon characters. The smart wearable device becomes a prop for imitating cartoon characters, which enhances its functionality and improves its use value.
Referring to Fig. 3, which shows the implementation flow of the method for recognizing a character by its actions provided by embodiment three of the present invention; the method mainly includes steps S301 to S305:
S301: establish motion trajectory models.
The actions of certain classic characters, for example, the fairy's wand-waving action and Logger Vick's back-and-forth sawing action, as well as the dance moves of contemporary pop music videos such as "Gangnam Style" and "Griggles", follow action patterns that most people are familiar with to some degree. Therefore, in this embodiment, the trajectories of these actions can be analyzed, the classic sub-actions within them abstracted and extracted, and motion trajectory modeling performed, yielding the motion trajectory models of these actions.
Further, a motion trajectory model can be associated with the media feature of a character so that each motion trajectory model corresponds one-to-one with a character's media feature; the media feature may be a picture, audio clip, video, or animation of the character.
S302: store the motion trajectory models established in step S301 in a local database, or upload them to network storage.
After a motion trajectory model is established, it can be stored in a local database or uploaded to network storage. A model stored on the network can be downloaded to the local database when motion features need to be matched against motion trajectory models.
Note that the motion trajectory models established above need not all model the actions of classic characters. In this embodiment, the user may also design an action of their own: the smart wearable device records the action together with its synchronized background music, associates the action with the music, and then saves the action as a motion trajectory model in the local database or uploads it to network storage.
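Steps S301 and S302, building a model (optionally with the user's own recorded music) and persisting it locally or remotely, can be sketched as below. The dictionary layout, the JSON-file "local database", and all names are assumptions for illustration, not the patent's storage format:

```python
# Sketch of S301/S302: build a trajectory model associated one-to-one with
# a character media feature, then store it in a local database (a JSON file
# here stands in for the database; upload would serialize the same data).
import json
import os
import tempfile

def build_model(name, sub_actions, media_feature, background_music=None):
    """S301: associate a trajectory model with a character media feature;
    a user-designed action may also carry its synchronized music."""
    return {
        "name": name,
        "sub_actions": list(sub_actions),   # the extracted classic sub-actions
        "media_feature": media_feature,     # picture/audio/video of the character
        "background_music": background_music,
    }

def store_model(model, directory):
    """S302: persist the model locally, one JSON file per model."""
    path = os.path.join(directory, model["name"] + ".json")
    with open(path, "w") as f:
        json.dump(model, f)
    return path

tmp = tempfile.mkdtemp()
model = build_model("fairy_wand_wave", ["draw_circle", "tap_center"], "fairy.mp4")
saved = store_model(model, tmp)
```

A networked variant would send the same serialized structure to a server and download it again when matching is needed, as the text describes.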
S303: obtain the motion features of an action when the user performs the action.
In this embodiment, the motion features of the user's action do not refer to all of the component parts of the action, but to the combination of typical parts within the action's motion trajectory.
S304: determine whether the database contains a motion trajectory model that matches the motion features obtained in step S303.
In this embodiment, motion trajectory models of certain actions are stored in the database in advance, for example, the trajectory model of the fairy's wand-waving action, the trajectory model of Logger Vick's back-and-forth sawing action, and trajectory models of classic dance moves from music videos such as "Gangnam Style" and "Griggles".
Note that a motion trajectory model in the database may be stored either in a local database or in the database of a network-side server.
S305: if the database contains a motion trajectory model that matches the motion features, display the media feature of the character corresponding to the matching motion trajectory model.
In this embodiment, the media feature of the character corresponding to a motion trajectory model may be a picture, audio clip, video, or animation of the character.
As can be seen from the method for recognizing a character by its actions illustrated in Fig. 3, by obtaining the motion features of the user's action and, when the database contains a matching motion trajectory model, displaying the media feature of the character corresponding to that model, the method allows users, especially children, to imitate the classic behavior of cartoon characters. The smart wearable device becomes a prop for imitating cartoon characters, which enhances its functionality and improves its use value.
Considering that objective factors may occasionally cause the user's action to differ significantly from the motion trajectory models, in the methods for recognizing a character by its actions illustrated in Figs. 1 to 3, if it is determined that the database contains no motion trajectory model matching the motion features of the user's action, the method may further include prompting the user to perform the action again, so that the database can once more be checked for a motion trajectory model matching the motion features of the repeated action; the prompt may be a text prompt or a voice prompt.
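The retry behavior described above can be sketched as a small loop; the matcher, the prompt callback, and the attempt limit are stand-ins assumed for illustration:

```python
# Sketch of the retry behavior: when no trajectory model matches, prompt
# the user (text or voice) and attempt recognition again. All interfaces
# here are illustrative assumptions.

def recognize_with_retry(get_features, match, prompt, max_attempts=3):
    """Repeatedly obtain motion features and match them, prompting the
    user to perform the action again after each failed attempt."""
    for _ in range(max_attempts):
        result = match(get_features())
        if result is not None:
            return result
        prompt("No matching action found; please perform the action again.")
    return None

# Simulated session: the first attempt fails, the second succeeds.
attempts = iter([("wobbly",), ("draw_circle", "tap_center")])
prompts = []
result = recognize_with_retry(
    get_features=lambda: next(attempts),
    match=lambda f: "fairy_wand_wave" if f == ("draw_circle", "tap_center") else None,
    prompt=prompts.append,     # stands in for a text or voice prompt
)
```

On a device, `prompt` would drive the screen or speaker rather than append to a list; the control flow is the same.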
Referring to Fig. 4, which is a schematic structural diagram of the apparatus for recognizing a character by its actions provided by embodiment four of the present invention. For ease of description, only the parts relevant to this embodiment of the invention are shown. The apparatus illustrated in Fig. 4 mainly includes an acquisition module 401, a judgment module 402, and a display module 403; each functional module is described in detail below:
Acquisition module 401: obtains the motion features of an action when the user performs the action.
In this embodiment, the motion features of the user's action do not refer to all of the component parts of the action, but to the combination of typical parts within the action's motion trajectory.
Judgment module 402: determines whether the database contains a motion trajectory model that matches the motion features obtained by acquisition module 401.
In this embodiment, motion trajectory models of certain actions are stored in the database in advance, for example, the trajectory model of the fairy's wand-waving action, the trajectory model of Logger Vick's back-and-forth sawing action, and trajectory models of classic dance moves from music videos such as "Gangnam Style" and "Griggles".
Note that a motion trajectory model in the database may be stored either in a local database or in the database of a network-side server.
Display module 403: if the database contains a motion trajectory model that matches the motion features, displays the media feature of the character corresponding to the matching motion trajectory model.
In this embodiment, the media feature of the character corresponding to a motion trajectory model may be a picture, audio clip, video, or animation of the character.
Note that in the embodiment of the apparatus for recognizing a character by its actions illustrated in Fig. 4, the division into functional modules is only illustrative. In practical applications, the functions above may be allocated to different functional modules as needed, for example to meet the configuration requirements of the hardware or for convenience of software implementation; that is, the internal structure of the apparatus may be divided into different functional modules that together complete all or part of the functions described above. Moreover, in practical applications, a functional module in this embodiment may be realized by corresponding hardware, or by corresponding hardware executing corresponding software. For example, the acquisition module may be hardware that performs the aforementioned obtaining of the motion features of the user's action, such as an acquirer, or it may be a general-purpose processor or other hardware device capable of executing a corresponding computer program to complete the foregoing function; likewise, the judgment module may be hardware that determines whether the database contains a motion trajectory model matching the motion features, such as a determiner, or it may be a general-purpose processor or other hardware device capable of executing a corresponding computer program (each embodiment provided in this specification may apply the principle described above).
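The module division of Fig. 4 can be sketched as three small classes, one per module. This is a software rendering of one possible division, under assumed interfaces, not the apparatus itself, which may equally be realized in hardware as the text notes:

```python
# Hedged sketch of the Fig. 4 module division: acquisition module 401,
# judgment module 402, and display module 403. Interfaces are assumptions.

class AcquisitionModule:                      # module 401
    def obtain(self, raw_trajectory):
        """Obtain the motion features (combination of typical parts)."""
        return tuple(s for s in raw_trajectory if s != "transition")

class JudgmentModule:                         # module 402
    def __init__(self, db):
        self.db = db
    def match(self, features):
        """Determine whether the database has a matching trajectory model."""
        return self.db.get(features)

class DisplayModule:                          # module 403
    def show(self, entry):
        """Display the media feature of the corresponding character."""
        return f"displaying {entry['media']}" if entry else "no match"

db = {("draw_circle", "tap_center"): {"character": "fairy", "media": "fairy.mp4"}}
acq, judge, disp = AcquisitionModule(), JudgmentModule(db), DisplayModule()
entry = judge.match(acq.obtain(["draw_circle", "transition", "tap_center"]))
output = disp.show(entry)
```

Splitting module 401 into a detection unit and a feature extraction unit, as embodiment five does below, would only refine `obtain` into two steps.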
The acquisition module 401 illustrated in Fig. 4 may include a detection unit 501 and a feature extraction unit 502, as in the apparatus for recognizing a character by its actions provided by embodiment five of the present invention and shown in Fig. 5, wherein:
Detection unit 501: detects the motion trajectory of the action when the user performs the action.
In this embodiment, the motion trajectory of the user's action includes information such as the path of the action, the number of repetitions, and the speed; for the detection itself, detection unit 501 can use sensors such as an acceleration sensor.
Feature extraction unit 502: decomposes the motion trajectory and extracts the combination of typical parts within it as the motion features of the action.
As stated earlier, the motion features of the user's action do not refer to all of the parts of the action, but to the combination of typical parts within the action's motion trajectory. In other words, for a user's action, particularly a complex one, it is not the whole motion trajectory that serves as the motion features, but the combination of a few important, classic sub-actions within the action; the transitional movements between these sub-actions can be ignored.
In this embodiment, feature extraction unit 502 can extract the combination of typical parts within the motion trajectory as the action's motion features by decomposing the trajectory. For example, suppose the user intends to perform the fairy's wand-waving move. Since the wand-waving action in the cartoon usually consists of first drawing a circle and then tapping toward the middle of the circle, two important, classic sub-actions, feature extraction unit 502 can decompose the motion trajectory of the user's action and extract the combination of typical parts, "first draw a circle, then tap toward its middle", as the motion features of the action.
The judgment module 402 illustrated in Fig. 5 may include a comparison unit 601 and a determination unit 602, as in the apparatus for recognizing a character by its actions provided by embodiment six of the present invention and shown in Fig. 6, wherein:
Comparison unit 601: compares the combination of typical parts in the motion trajectory with the motion trajectory models in the database.
Determination unit 602: if the similarity between the combination of typical parts in the motion trajectory and a motion trajectory model reaches a preset threshold, determines that the database contains a motion trajectory model matching the motion features.
Taking the wand-waving example from Fig. 5 again: comparison unit 601 compares the combination of typical parts in the motion trajectory, "first draw a circle, then tap toward its middle", with the motion trajectory models in the database. If it is found to be identical to the fairy's wand-waving trajectory model in the database, or its similarity to that model reaches the preset threshold, for example, the circle the user drew is not perfectly round but is roughly circular, and the tap lands not exactly at the center of the circle but close to it, then determination unit 602 can determine that the database contains a motion trajectory model matching the motion features of the user's action, namely the fairy's wand-waving trajectory model.
The apparatus illustrated in Fig. 4 may further include a model building module 701 and a model storage module 702, as in the apparatus for recognizing a character by its actions provided by embodiment seven of the present invention and shown in Fig. 7, wherein:
Model building module 701: establishes motion trajectory models before acquisition module 401 obtains the motion features of the user's action.
The actions of certain classic characters, for example, the fairy's wand-waving action and Logger Vick's back-and-forth sawing action, as well as the dance moves of contemporary pop music videos such as "Gangnam Style" and "Griggles", follow action patterns that most people are familiar with to some degree. Therefore, in this embodiment, model building module 701 can analyze the trajectories of these actions, abstract and extract the classic sub-actions within them, and perform motion trajectory modeling to obtain the motion trajectory models of these actions. Further, model building module 701 can associate a motion trajectory model with the media feature of a character so that each motion trajectory model corresponds one-to-one with a character's media feature; the media feature may be a picture, audio clip, video, or animation of the character.
Model storage module 702: stores the motion trajectory models established by model building module 701 in a local database, or uploads them to network storage.
After model building module 701 establishes a motion trajectory model, the model can be stored in a local database or uploaded to network storage. A model stored on the network can be downloaded to the local database when motion features need to be matched against motion trajectory models.
Note that the motion trajectory models established by model building module 701 need not all model the actions of classic characters. In this embodiment, the user may also design an action of their own: the smart wearable device records the action together with its synchronized background music and associates the action with the music; model building module 701 then saves the action as a motion trajectory model in the local database or uploads it to network storage.
As shown in Figure 7, the devices illustrated in Figures 4 to 7, i.e., the device for recognizing a character by action provided in any of Embodiment 8 to Embodiment 11 of the present invention, can also include a reminding module 801. Reminding module 801 is configured to prompt the user to make the action again when judge module 402 judges that the database contains no motion trajectory model matching the motion characteristic. This embodiment allows for occasional objective factors that cause the action made by the user to differ greatly from the motion trajectory models, so that the device temporarily fails to match a suitable motion trajectory model. When judge module 402 judges that no motion trajectory model in the database matches the motion characteristic, reminding module 801 prompts the user to make the action again, so that judge module 402 can re-judge whether the database contains a motion trajectory model matching the motion characteristic of the action the user makes again; the prompt can be a text prompt or a voice prompt.
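The judge-and-remind flow described above might look like the following sketch; the similarity measure and the 0.8 threshold are illustrative assumptions, since the patent only requires that similarity reach a preset threshold:

```python
# Hypothetical sketch of the judge/remind flow: compare the extracted
# sub-action combination against each stored model; if no similarity
# reaches the preset threshold, prompt the user to make the action again.

def similarity(features, model):
    """Toy similarity: fraction of aligned positions whose sub-actions agree."""
    if not model:
        return 0.0
    hits = sum(a == b for a, b in zip(features, model))
    return hits / max(len(features), len(model))

def match_or_prompt(features, database, threshold=0.8):
    best = max(database,
               key=lambda m: similarity(features, m["sub_actions"]),
               default=None)
    if best and similarity(features, best["sub_actions"]) >= threshold:
        return best["media"]                   # display this media characteristic
    return "Please make the action again"      # text (or voice) prompt

db = [{"sub_actions": ["a", "b", "c"], "media": "character_picture.png"}]
print(match_or_prompt(["a", "b", "c"], db))  # matched -> media characteristic
print(match_or_prompt(["x", "y"], db))       # no match -> retry prompt
```

After the prompt, the device simply re-runs the same matching on the newly captured motion characteristic.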
It should be noted that matters such as the information exchange between, and the execution processes of, the modules/units of the above device are based on the same concept as the method embodiments of the present invention, and the technical effects they bring are the same as those of the method embodiments. For details, refer to the description in the method embodiments of the present invention, which is not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, which can include: a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and the like.
The method and apparatus for recognizing a character by action provided by the embodiments of the present invention have been introduced in detail above. Specific cases are used herein to set forth the principle and embodiments of the present invention; the explanation of the above embodiments is only intended to help understand the method of the present invention and its core concept. Meanwhile, those of ordinary skill in the art will, according to the idea of the present invention, make changes in specific embodiments and application scope. In summary, the content of this specification should not be interpreted as a limitation of the present invention.
Claims (10)
- 1. A method for recognizing a character by action, characterized in that the method includes: obtaining the motion characteristic of an action when a user makes the action, the motion characteristic of the action made by the user being the combination of canonical sub-actions in the motion trajectory of the action made by the user; judging whether a database contains a motion trajectory model that matches the motion characteristic; and, if the database contains a motion trajectory model that matches the motion characteristic, displaying the media characteristic of the character corresponding to the motion trajectory model that matches the motion characteristic, the media characteristic of the character corresponding to the motion trajectory model including a picture, audio, video, or image of the character.
- 2. The method according to claim 1, characterized in that obtaining the motion characteristic of an action when a user makes the action includes: detecting the motion trajectory of the action when the user makes the action; and decomposing the motion trajectory and extracting the combination of canonical sub-actions in the motion trajectory as the motion characteristic of the action.
- 3. The method according to claim 2, characterized in that judging whether a database contains a motion trajectory model that matches the motion characteristic includes: comparing the combination of canonical sub-actions in the motion trajectory with the motion trajectory models in the database; and, if the similarity between the combination of canonical sub-actions in the motion trajectory and a motion trajectory model reaches a preset threshold, determining that the database contains a motion trajectory model that matches the motion characteristic.
- 4. The method according to claim 1, characterized in that, before obtaining the motion characteristic of an action when a user makes the action, the method further includes: establishing the motion trajectory model; and storing the motion trajectory model in a local database or uploading it to network storage.
- 5. The method according to any one of claims 1 to 4, characterized in that, if it is judged that the database contains no motion trajectory model that matches the motion characteristic, the method further includes: prompting the user to make the action again.
- 6. A device for recognizing a character by action, characterized in that the device includes: an acquisition module, configured to obtain the motion characteristic of an action when a user makes the action, the motion characteristic of the action made by the user being the combination of canonical sub-actions in the motion trajectory of the action made by the user; a judge module, configured to judge whether a database contains a motion trajectory model that matches the motion characteristic; and a display module, configured to display, if the database contains a motion trajectory model that matches the motion characteristic, the media characteristic of the character corresponding to the motion trajectory model that matches the motion characteristic, the media characteristic of the character corresponding to the motion trajectory model including a picture, audio, video, or image of the character.
- 7. The device according to claim 6, characterized in that the acquisition module includes: a detection unit, configured to detect the motion trajectory of the action when the user makes the action; and a feature extraction unit, configured to decompose the motion trajectory and extract the combination of canonical sub-actions in the motion trajectory as the motion characteristic of the action.
- 8. The device according to claim 7, characterized in that the judge module includes: a comparison unit, configured to compare the combination of canonical sub-actions in the motion trajectory with the motion trajectory models in the database; and a determining unit, configured to determine, if the similarity between the combination of canonical sub-actions in the motion trajectory and a motion trajectory model reaches a preset threshold, that the database contains a motion trajectory model that matches the motion characteristic.
- 9. The device according to claim 6, characterized in that the device further includes: a model building module, configured to establish the motion trajectory model before the motion characteristic of an action is obtained when a user makes the action; and a model storage module, configured to store the motion trajectory model in a local database or upload it to network storage.
- 10. The device according to any one of claims 6 to 9, characterized in that the device further includes: a reminding module, configured to prompt the user to make the action again when the judge module judges that the database contains no motion trajectory model that matches the motion characteristic.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410490405.8A CN104317389B (en) | 2014-09-23 | 2014-09-23 | A kind of method and apparatus by action recognition character |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410490405.8A CN104317389B (en) | 2014-09-23 | 2014-09-23 | A kind of method and apparatus by action recognition character |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104317389A CN104317389A (en) | 2015-01-28 |
CN104317389B true CN104317389B (en) | 2017-12-26 |
Family
ID=52372628
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410490405.8A Active CN104317389B (en) | 2014-09-23 | 2014-09-23 | A kind of method and apparatus by action recognition character |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104317389B (en) |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850651B (en) * | 2015-05-29 | 2019-06-18 | 小米科技有限责任公司 | Information uploading method and device and information-pushing method and device |
CN105607095A (en) * | 2015-07-31 | 2016-05-25 | 宇龙计算机通信科技(深圳)有限公司 | Terminal control method and terminal |
CN105975054A (en) * | 2015-11-23 | 2016-09-28 | 乐视网信息技术(北京)股份有限公司 | Method and device for information processing |
CN107016347A (en) * | 2017-03-09 | 2017-08-04 | 腾讯科技(深圳)有限公司 | A kind of body-sensing action identification method, device and system |
CN106931968A (en) * | 2017-03-27 | 2017-07-07 | 广东小天才科技有限公司 | A kind of method and device for monitoring student classroom performance |
CN107146386B (en) * | 2017-05-05 | 2019-12-31 | 广东小天才科技有限公司 | Abnormal behavior detection method and device, and user equipment |
CN208314197U (en) * | 2018-02-08 | 2019-01-01 | 深圳迈睿智能科技有限公司 | beam emitter |
CN108815845B (en) * | 2018-05-15 | 2019-11-26 | 百度在线网络技术(北京)有限公司 | The information processing method and device of human-computer interaction, computer equipment and readable medium |
CN108986191B (en) * | 2018-07-03 | 2023-06-27 | 百度在线网络技术(北京)有限公司 | Character action generation method and device and terminal equipment |
CN111665770A (en) * | 2020-07-01 | 2020-09-15 | 武汉华自阳光科技有限公司 | Rural safe drinking equipment control system and method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102970606A (en) * | 2012-12-04 | 2013-03-13 | 深圳Tcl新技术有限公司 | Television program recommending method and device based on identity identification |
CN103544497A (en) * | 2013-04-16 | 2014-01-29 | Tcl集团股份有限公司 | Identification method and identification system for mode of intelligent equipment |
CN103646425A (en) * | 2013-11-20 | 2014-03-19 | 深圳先进技术研究院 | A method and a system for body feeling interaction |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103248933A (en) * | 2013-04-03 | 2013-08-14 | 深圳Tcl新技术有限公司 | Interactive method and remote control device based on user identity for interaction |
CN103576902A (en) * | 2013-09-18 | 2014-02-12 | 酷派软件技术(深圳)有限公司 | Method and system for controlling terminal equipment |
CN104050461B (en) * | 2014-06-30 | 2017-05-03 | 苏州大学 | Complex 3D motion recognition method and device |
- 2014-09-23 CN CN201410490405.8A patent/CN104317389B/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102970606A (en) * | 2012-12-04 | 2013-03-13 | 深圳Tcl新技术有限公司 | Television program recommending method and device based on identity identification |
CN103544497A (en) * | 2013-04-16 | 2014-01-29 | Tcl集团股份有限公司 | Identification method and identification system for mode of intelligent equipment |
CN103646425A (en) * | 2013-11-20 | 2014-03-19 | 深圳先进技术研究院 | A method and a system for body feeling interaction |
Also Published As
Publication number | Publication date |
---|---|
CN104317389A (en) | 2015-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104317389B (en) | A kind of method and apparatus by action recognition character | |
CN108958610A (en) | Special efficacy generation method, device and electronic equipment based on face | |
CN110163054A (en) | A kind of face three-dimensional image generating method and device | |
US9584455B2 (en) | Method and apparatus of processing expression information in instant communication | |
CN106710590A (en) | Voice interaction system with emotional function based on virtual reality environment and method | |
CN111194465B (en) | Audio activity tracking and summarization | |
CN107888843A (en) | Sound mixing method, device, storage medium and the terminal device of user's original content | |
CN108345385A (en) | Virtual accompany runs the method and device that personage establishes and interacts | |
CN108492817A (en) | A kind of song data processing method and performance interactive system based on virtual idol | |
CN107293300A (en) | Audio recognition method and device, computer installation and readable storage medium storing program for executing | |
CN102157007A (en) | Performance-driven method and device for producing face animation | |
CN206711600U (en) | The voice interactive system with emotive function based on reality environment | |
US11948558B2 (en) | Messaging system with trend analysis of content | |
CN109343695A (en) | Exchange method and system based on visual human's behavioral standard | |
CN109429078A (en) | Method for processing video frequency and device, for the device of video processing | |
US20180027090A1 (en) | Information processing device, information processing method, and program | |
CN109300469A (en) | Simultaneous interpretation method and device based on machine learning | |
CN115496550A (en) | Text generation method and device | |
CN110501673A (en) | A kind of binaural sound source direction in space estimation method and system based on multitask time-frequency convolutional neural networks | |
CN115348458A (en) | Virtual live broadcast control method and system | |
CN108415561A (en) | Gesture interaction method based on visual human and system | |
CN109961152A (en) | Personalized interactive method, system, terminal device and the storage medium of virtual idol | |
CN108268139A (en) | Virtual scene interaction method and device, computer installation and readable storage medium storing program for executing | |
CN105551504B (en) | A kind of method and device based on crying triggering intelligent mobile terminal functional application | |
CN105261053A (en) | Image rendering method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |