CN105847987A - Method and system for correcting human body actions through television and body feeling accessory component - Google Patents
Method and system for correcting human body actions through television and body feeling accessory component
- Publication number
- CN105847987A CN105847987A CN201610173854.9A CN201610173854A CN105847987A CN 105847987 A CN105847987 A CN 105847987A CN 201610173854 A CN201610173854 A CN 201610173854A CN 105847987 A CN105847987 A CN 105847987A
- Authority
- CN
- China
- Prior art keywords
- action
- human
- human body
- sensing accessory
- simulated
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
- G06V40/23—Recognition of whole body movements, e.g. for sport training
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0003—Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
- A63B24/0006—Computerised comparison for qualitative assessment of motion sequences or the course of a movement
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/47214—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for content reservation or setting reminders; for requesting event notification, e.g. of sport results or stock market
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4882—Data services, e.g. news ticker for displaying messages, e.g. warnings, reminders
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0003—Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
- A63B24/0006—Computerised comparison for qualitative assessment of motion sequences or the course of a movement
- A63B2024/0012—Comparing movements or motion sequences with a registered reference
- A63B2024/0015—Comparing movements or motion sequences with computerised simulations of movements or motion sequences, e.g. for generating an ideal template as reference to be achieved by the user
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Signal Processing (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Databases & Information Systems (AREA)
- Physical Education & Sports Medicine (AREA)
- Business, Economics & Management (AREA)
- Finance (AREA)
- Strategic Management (AREA)
- Image Analysis (AREA)
Abstract
An embodiment of the invention provides a method and a system for correcting human body actions through a television and a body-sensing accessory, applicable to the field of motion-sensing recognition technology. The method comprises the steps of: the body-sensing accessory identifying the human body action in the current frame picture, thereby obtaining a simulated action corresponding to the human body action; the body-sensing accessory comparing the obtained simulated action with a preset action sample and determining the difference between the simulated action and the action sample; and the body-sensing accessory generating correction prompt information according to the determined difference, the correction prompt information being played through the television. The method and the system can automatically monitor human body actions and issue correction prompts.
Description
Technical field
Embodiments of the present invention relate to the technical field of motion-sensing (somatosensory) recognition, and in particular to a method and system for correcting human body actions through a television and a body-sensing accessory.

Background technology
With the continuous improvement in quality of life, the pressures of daily life that people bear also keep growing. To relieve this pressure, keep fit, or build a better physique, people often take part in various sports, for example running, yoga, or fitness exercises.

Because environmental problems have become increasingly serious in recent years and outdoor air quality is often poor, people are more willing to exercise indoors. For example, they can go to a gymnasium for sports such as running, yoga, or swimming, or they can follow the fitness actions shown on a television at home.

At present, if a person chooses to go to a gymnasium, the price is often high, which increases the exerciser's daily expenses; moreover, the number of coaches in a gymnasium is limited, so a suitable exercise plan cannot be formulated, nor athletic guidance given, for every exerciser. If a person instead chooses to follow the fitness actions on a television at home, he or she often cannot observe his or her own movements, and actions that are sometimes not performed properly greatly reduce the efficiency of the exercise.

In view of the above, an exerciser may place a full-length mirror beside the television so that, while following the actions on the television, he or she can see whether his or her own actions are standard. In the course of realizing the present invention, the inventors found that this approach is cumbersome: it occupies indoor space, so the exerciser cannot move about freely. Meanwhile, the exerciser must simultaneously watch the standard action on the television and his or her own action in the mirror, which is tiring, and a naked-eye comparison cannot make one's own action fully consistent with the standard action.

Therefore, the prior art urgently needs a method for correcting human body actions.
Summary of the invention
Embodiments of the present invention provide a method and system for correcting human body actions through a television and a body-sensing accessory, which can automatically monitor human body actions and issue correction prompts.

An embodiment of the present invention provides a method for correcting human body actions through a television and a body-sensing accessory, the television and the body-sensing accessory being communicatively connected. The method includes: the body-sensing accessory identifies the human body action in the current frame picture and obtains a simulated action corresponding to the human body action; the body-sensing accessory compares the obtained simulated action with a preset action sample and determines the difference between the simulated action and the action sample; the body-sensing accessory generates correction prompt information according to the determined difference, and the correction prompt information is played through the television.

Further, the simulated action includes a human skeleton diagram corresponding to the human body action. Identifying the human body action in the current frame picture and obtaining the simulated action corresponding to the human body action includes: using a preset human body part classifier to identify a preset number of target parts corresponding to the human body action in the current frame picture; clustering the pixels in the identified preset number of target parts according to a preset clustering algorithm to obtain a skeleton point corresponding to each target part; and composing the obtained skeleton points into the simulated action corresponding to the human body action.

Further, using the preset human body part classifier to identify the preset number of target parts corresponding to the human body action in the current frame picture includes: obtaining a human body part training set, the training set including a preset number of human body part sample pictures; extracting the eigenvalue vector of each human body part sample picture in the training set; calculating a classification condition for the sample pictures in the training set based on the extracted eigenvalue vectors; and identifying the preset number of target parts corresponding to the human body action in the current frame picture based on the classification condition.

Further, comparing the obtained simulated action with the preset action sample and determining the difference between the simulated action and the action sample includes: making the central point of the obtained simulated action coincide with the central point of the preset action sample; and determining the difference between the simulated action and the action sample at preset positions.

Further, the correction prompt information played through the television is voice information, text information, or image information.

Further, the method also includes: identifying a preset object in the current frame picture, and calculating the quantity of motion corresponding to the human body in the current frame picture according to the number of times the preset object reciprocates within a preset area.

An embodiment of the present invention provides a system for correcting human body actions through a television and a body-sensing accessory. The system includes: the body-sensing accessory, configured to identify the human body action in the current frame picture and obtain a simulated action corresponding to the human body action, to compare the obtained simulated action with a preset action sample and determine the difference between the simulated action and the action sample, and to generate correction prompt information according to the determined difference; and the television, communicatively connected with the body-sensing accessory, configured to display the action sample preset in the body-sensing accessory and to play the correction prompt information generated by the body-sensing accessory.

Further, the simulated action includes a human skeleton diagram corresponding to the human body action. The body-sensing accessory identifies the human body action in the current frame picture and obtains the corresponding simulated action specifically by: using a preset human body part classifier to identify a preset number of target parts corresponding to the human body action in the current frame picture; clustering the pixels in the identified preset number of target parts according to a preset clustering algorithm to obtain a skeleton point corresponding to each target part; and composing the obtained skeleton points into the simulated action corresponding to the human body action.

Further, the body-sensing accessory uses the preset human body part classifier to identify the preset number of target parts specifically by: obtaining a human body part training set, the training set including a preset number of human body part sample pictures; extracting the eigenvalue vector of each human body part sample picture in the training set; calculating a classification condition for the sample pictures in the training set based on the extracted eigenvalue vectors; and identifying the preset number of target parts corresponding to the human body action in the current frame picture based on the classification condition.

Further, the body-sensing accessory is also configured to: identify a preset object in the current frame picture, and calculate the quantity of motion corresponding to the human body in the current frame picture according to the number of times the preset object reciprocates within a preset area.

The method and system for correcting human body actions through a television and a body-sensing accessory provided by embodiments of the present invention use the body-sensing accessory to monitor human body actions and to identify the human body action in the current frame picture, thereby obtaining the simulated action corresponding to the human body action. By comparing the obtained simulated action with the standard action on the television, it can be determined whether the human body action is standard; when it is not, correction prompt information can be issued to the exerciser in front of the television. Thus, embodiments of the present invention can automatically monitor human body actions and issue targeted correction prompts.
Brief description of the drawings
In order to illustrate the technical solutions of the embodiments of the present invention or of the prior art more clearly, the accompanying drawings needed for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show some embodiments of the present invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.

Fig. 1 is a flow chart of a method for correcting human body actions through a television and a body-sensing accessory provided by an embodiment of the present invention;

Fig. 2 is a flow chart of a method for identifying the human skeleton diagram corresponding to the human body action in the current frame picture provided by an embodiment of the present invention;

Fig. 3 is a schematic diagram of calculating the classification condition through a support vector machine in an embodiment of the present invention.
Detailed description of the invention
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present invention.

Although the flows described below include multiple operations occurring in a particular order, it should be understood that these flows may include more or fewer operations, and that these operations may be performed sequentially or in parallel, for example using parallel processors or a multi-threaded environment.

Fig. 1 is a flow chart of a method for correcting human body actions through a television and a body-sensing accessory provided by an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step S1: the body-sensing accessory identifies the human body action in the current frame picture and obtains the simulated action corresponding to the human body action.

In an embodiment of the present invention, a body-sensing accessory can be installed on the television so that the human body actions of the exerciser in front of the television can be monitored; the body-sensing accessory can be, for example, a motion-sensing camera.

In an embodiment of the present invention, the body-sensing accessory can identify the human body action in the current frame picture to obtain the simulated action corresponding to the human body action. The identification process may be completed within the body-sensing accessory itself, or in a processor connected to the body-sensing accessory. For example, after capturing the current frame picture, the body-sensing accessory may send the picture to the connected processor, which then identifies the current frame picture.

In an embodiment of the present invention, the simulated action may be an action fully consistent with the human body action, or it may be only a human skeleton diagram corresponding to the human body action. Considering that a human body can be represented by 20 skeleton points, after the human body action in the current frame picture has been identified, a human skeleton diagram corresponding to the human body action can be generated; the skeleton diagram may include multiple skeleton points, for example at least 20. The skeleton diagram reflects the action of each part of the human body in front of the television, which makes it convenient to compare the human body action with the standard action on the television.
In an embodiment of the present invention, as shown in Fig. 2, the human skeleton diagram corresponding to the human body action in the current frame picture can specifically be identified through the following steps.

Step S11: use the preset human body part classifier to identify the preset number of target parts corresponding to the human body action in the current frame picture.

In an embodiment of the present invention, a human body part classifier can be preset. The classifier can analyze a picture of a human body and identify each part contained in it. For example, the human body can be divided into head, shoulders, arms, elbows, ankles, feet, wrists, hands, torso, legs, and knees, and each of these parts can be further divided into upper and lower portions in order to identify the human body more accurately.
In an embodiment of the present invention, the human body part classifier can be established by machine learning; that is, pictures of the various human body parts are used to train the classifier so that it generates classification conditions for dividing the different parts. A picture to be processed can then be input to the classifier, and each human body part in the picture can be identified according to the classification conditions.

Specifically, in an embodiment of the present invention, a human body part training set can be obtained in advance; the training set includes a preset number of human body part sample pictures. To ensure that the derived classification conditions are accurate, as many sample pictures as possible can be placed in the training set, covering each of the human body parts listed above. After the training set has been obtained, the eigenvalue vector of each human body part sample picture in the training set can be extracted. The eigenvalue vector can be the pixel-value vector corresponding to the sample picture. Since a sample picture consists of a number of pixels, the RGB value corresponding to each pixel can be extracted, and the extracted values arranged in order to form the eigenvalue vector. For example, the eigenvalue vector can be a series of values arranged in the following form:

(RGB(1,1), RGB(1,2), …, RGB(1,120), RGB(2,1), RGB(2,2), …, RGB(2,120), …, RGB(200,1), RGB(200,2), …, RGB(200,120))

where RGB(m, n) = (Ra, Gb, Bc), and m and n respectively denote the row and column of a pixel in the sample picture. For a picture of 200 × 120 pixels, m ranges from 1 to 200 and n from 1 to 120. Ra, Gb, and Bc are integers in the range 0–255 representing the red, green, and blue components of the pixel.
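The flattening described above can be sketched in a few lines (a minimal illustration only: the 200 × 120 size and row-major ordering follow the example in the text, while the random image is a stand-in for a real sample picture):

```python
import numpy as np

# Stand-in for a 200x120 RGB sample picture (rows x cols x channels).
height, width = 200, 120
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(height, width, 3), dtype=np.uint8)

# Arrange the RGB triples row by row, as in the text:
# (RGB(1,1), RGB(1,2), ..., RGB(1,120), RGB(2,1), ...).
feature_vector = image.reshape(-1)

print(feature_vector.shape)  # (72000,) = 200 * 120 * 3
```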
In an embodiment of the present invention, after the eigenvalue vector corresponding to each human body part sample picture has been extracted, the classification conditions for the sample pictures in the training set can be calculated based on the extracted eigenvalue vectors. Here, the support vector machine (SVM) algorithm is taken as an example to introduce how the classification conditions for the sample pictures are calculated. The support vector machine was first proposed by Cortes and Vapnik in 1995. It shows many distinctive advantages in solving small-sample, nonlinear, and high-dimensional pattern-recognition problems, and it can also be extended to other machine-learning problems such as function fitting. In general, a support vector machine can solve complex classification problems and derive classification criteria.

The linear classification example shown in Fig. 3 explains the basic principle of classification by the SVM algorithm. As shown in Fig. 3, the points in the left coordinate diagram represent the input training samples; in the right coordinate diagram, the points marked with crosses represent the computed class-C1 training samples, and the points marked with circles represent the computed class-C2 training samples. After the training samples have been processed by the SVM algorithm, the two classes of training samples C1 and C2 are obtained, together with the classification condition that divides the two classes.

For the linear classification of Fig. 3, the classification condition (the line vv' in the figure, i.e., the hyperplane) can be expressed by a linear function, for example:

f(x) = w·x + b

where w and b are the parameters obtained after the support vector machine computes over the set of eigenvalue vectors (referred to as "training" in SVM terminology), and x denotes the eigenvalue vector of a picture. f(x) represents the mapping relationship in the support vector machine. When f(x) = 0, the eigenvalue vector x lies on the hyperplane. When f(x) > 0, x corresponds to the region above and to the right of the hyperplane in the right coordinate diagram of Fig. 3; when f(x) < 0, x corresponds to the region below and to the left of the hyperplane.

If the input eigenvalue vectors are, for example, two-dimensional, they correspond to the points on the coordinate diagrams of Fig. 3. The SVM algorithm keeps searching among the lines within the range of the input eigenvalue vectors and, by computing the distance from each candidate line to each eigenvalue vector (each point in the figure), finds the line for which the distance to the nearest eigenvalue vectors on both sides is maximal and equal. As shown in the right coordinate diagram of Fig. 3, the computed line vv' is the hyperplane; in the two-dimensional case it is a straight line whose distance L to the nearest eigenvalue vectors on both sides is maximal and equal.

In this way, the SVM algorithm yields the classification conditions that divide the different human body parts in the training samples. Then, in an embodiment of the present invention, the preset number of target parts corresponding to the human body action in the current frame picture can be identified based on the classification conditions. Specifically, the human body part classifier can use the classification conditions to classify the human body action in the current frame picture, thereby identifying the preset number of target parts that the human body action in the current frame picture contains.
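The linear-SVM training sketched above can be reproduced as follows (a hedged illustration using scikit-learn's `LinearSVC` rather than the patent's own implementation; the toy 2-D points stand in for eigenvalue vectors, and the two clusters play the roles of C1 and C2):

```python
import numpy as np
from sklearn.svm import LinearSVC

# Toy 2-D "eigenvalue vectors": two separable clusters standing in for C1 and C2.
X = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # class C1
              [6.0, 6.0], [6.5, 7.0], [7.0, 6.5]])  # class C2
y = np.array([0, 0, 0, 1, 1, 1])

clf = LinearSVC(C=1.0)
clf.fit(X, y)  # "training" computes w and b of f(x) = w.x + b

w, b = clf.coef_[0], clf.intercept_[0]
# The sign of f(x) tells which side of the hyperplane vv' a point falls on:
# negative for the C1 side, positive for the C2 side.
print(np.sign(X @ w + b))
```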
Step S12: cluster the pixels in the identified preset number of target parts according to a preset clustering algorithm to obtain the skeleton point corresponding to each target part.

In an embodiment of the present invention, after the multiple target parts corresponding to the human body action in the current frame picture have been obtained, the pixels in the identified preset number of target parts can be clustered according to a preset clustering algorithm to obtain the skeleton point corresponding to each target part. Specifically, the clustering algorithm can include at least one of the K-MEANS algorithm, the agglomerative hierarchical clustering algorithm, or the DBSCAN algorithm. The clustering algorithm gathers the pixels in an identified target part toward a single point, and that final aggregated point can serve as the skeleton point corresponding to the target part.

In this way, clustering is performed on every identified target part, and the skeleton point corresponding to each target part is obtained.
Step S13: compose the obtained skeleton points into the simulated action corresponding to the human body action.

In an embodiment of the present invention, after the skeleton point corresponding to each target part has been obtained, connecting these skeleton points in order yields the human skeleton diagram corresponding to the human body action, and this skeleton diagram can serve as the obtained simulated action.

In the simulated action, the line between two adjacent skeleton points forms part of the human body action. For example, the line between the left-shoulder skeleton point and the left-elbow skeleton point sketches the outline of the left upper arm, and this line can serve as the simulated action corresponding to the left upper arm of the human body action.
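The skeleton diagram can be represented as joint coordinates plus a list of adjacent-joint pairs whose connecting lines form the simulated action (the joint names and coordinates below are hypothetical placeholders, not values from the patent):

```python
# Hypothetical skeleton points (x, y) produced by the clustering step.
joints = {
    "head": (50, 10), "left_shoulder": (35, 30), "right_shoulder": (65, 30),
    "left_elbow": (25, 50), "right_elbow": (75, 50), "torso": (50, 55),
}

# Pairs of adjacent joints; each connecting line forms part of the action.
edges = [("head", "torso"), ("left_shoulder", "left_elbow"),
         ("right_shoulder", "right_elbow"),
         ("left_shoulder", "torso"), ("right_shoulder", "torso")]

# Each edge becomes a line segment, e.g. the left upper arm.
segments = [(joints[a], joints[b]) for a, b in edges]
print(segments[1])  # ((35, 30), (25, 50)) -- the left upper arm
```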
Step S2: the obtained simulated action is compared with a preset action template, and the difference between the simulated action and the action template is determined.
In the embodiments of the present invention, after the simulated action corresponding to the human action is obtained, it can be compared with the preset action template to judge whether the current human action is consistent with the template; that is, by comparing the simulated action with the action template, it can be determined whether the human action at the current moment is performed in place.
In the embodiments of the present invention, the centre point of the obtained simulated action may first be made to coincide with the centre point of the preset action template. The centre point may be the centre of the torso, for example the centre of the chest. After the centre point of the simulated action coincides with that of the template, it can be determined whether the other parts of the simulated action are consistent with the corresponding parts of the action template, and thus the difference between the simulated action and the action template at a preset position can be determined.
In the embodiments of the present invention, the preset position may be specified in advance for each action template. For example, a certain action template may focus on whether the positions of the arms and feet are accurate. In that case, the arms and feet of that template are designated as the preset positions; when the simulated action is compared with the template, only the arm and foot positions are compared, and the differences between the two at those positions are determined.
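A minimal sketch of this comparison, assuming 2-D joint coordinates keyed by hypothetical joint names, with "chest" playing the role of the torso centre; the tolerance value is illustrative, not from the patent.

```python
def compare_to_template(simulated, template, preset_sites, tol=15.0):
    """Translate the simulated action so its centre point coincides
    with the template's, then report the per-site offsets at the
    preset positions that exceed a tolerance (in pixels)."""
    # offset that moves the simulated centre onto the template centre
    dx = template["chest"][0] - simulated["chest"][0]
    dy = template["chest"][1] - simulated["chest"][1]
    diffs = {}
    for site in preset_sites:
        sx, sy = simulated[site]
        tx, ty = template[site]
        off = ((sx + dx - tx) ** 2 + (sy + dy - ty) ** 2) ** 0.5
        if off > tol:
            diffs[site] = off
    return diffs

sim = {"chest": (100, 100), "left_hand": (60, 40)}
tpl = {"chest": (200, 100), "left_hand": (160, 70)}
print(compare_to_template(sim, tpl, ["left_hand"]))  # {'left_hand': 30.0}
```

An empty result means the action is in place at every preset position; a non-empty result feeds the correction prompt of step S3.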
Step S3: correction prompt information is generated according to the determined difference and broadcast through the television; the correction prompt information is voice information, text information or image information.
In the embodiments of the present invention, when the simulated action differs from the action template at a preset position, correction prompt information can be generated according to the determined difference. The prompt corresponds to the preset position in step S2; for example, when the simulated action and the action template differ at the arm position, a prompt such as "arm position does not match" can be generated.
Further, in the embodiments of the present invention, a more specific correction prompt may also be generated. For example, when the simulated action is found to differ from the action template at a preset position, the positional relationship between the two at that position can be judged further, and a more detailed prompt generated from that relationship. For instance, when the arm position of the simulated action is inconsistent with that of the template, the positional relationship between the arm of the simulated action and the arm of the template can be judged; if the arm of the simulated action lies above the arm of the template, a prompt such as "please move your arm down" can be generated, reminding the person exercising more clearly which position to correct and in which direction.
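The detailed prompt can be sketched by comparing the vertical coordinates of one site (in image coordinates, y grows downward). The site name and tolerance below are illustrative assumptions.

```python
def correction_prompt(site, simulated_y, template_y, tol=10):
    """Phrase a direction-of-correction prompt from the vertical
    positional relationship between the simulated action and the
    action template at one preset site."""
    if abs(simulated_y - template_y) <= tol:
        return None  # within tolerance, no prompt needed
    if simulated_y < template_y:  # smaller y means higher in the image
        return f"please move your {site} down"
    return f"please move your {site} up"

print(correction_prompt("arm", 80, 120))  # arm is too high in the frame
```

The returned string would then be played by the television as voice, text or image information.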
In another embodiment of the present invention, besides correcting the human action, the amount of exercise of the person exercising can also be counted according to the exercise equipment being used. Specifically, in the embodiments of the present invention, a preset object in the current frame can be identified; the preset object may be, for example, a piece of exercise equipment such as a dumbbell or a barbell. The identification likewise uses the support vector machine method: different pieces of exercise equipment are learned to generate classification conditions for classifying them, and the equipment in the current frame is then identified by those classification conditions.
After the preset object in the current frame is identified, the amount of exercise corresponding to the human body in the frame can be calculated from the number of times the preset object moves back and forth within a preset region. The preset region can be determined in advance according to the preset object, and may be the range in which the object is located while being used by the person exercising. For example, the preset region of a dumbbell is typically the length of the exerciser's arm, while the preset region of a barbell may be the range of the exerciser's height. In the embodiments of the present invention, each back-and-forth movement of the preset object within the preset region is counted as one repetition, so the amount of exercise can be calculated by counting the object's reciprocations within the preset region.
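The repetition counting can be sketched as a threshold-crossing counter over the tracked vertical position of the identified equipment (e.g. a dumbbell) across frames. The threshold values are illustrative assumptions, not from the patent.

```python
def count_reps(y_positions, low=100, high=200):
    """Count back-and-forth movements of the tracked object within the
    preset region: one 'lifted past the top, returned past the bottom'
    cycle counts as a single repetition."""
    reps = 0
    raised = False
    for y in y_positions:
        if not raised and y <= low:    # object lifted past the top threshold
            raised = True
        elif raised and y >= high:     # object returned past the bottom
            raised = False
            reps += 1
    return reps

# per-frame vertical positions of the equipment over two lifts
trace = [210, 150, 90, 140, 220, 160, 80, 130, 230]
print(count_reps(trace))  # two full reciprocations
```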
Therefore, in the method for correcting human actions through a television and a motion-sensing accessory provided by the embodiments of the present invention, the motion-sensing accessory monitors the human action and identifies it in the current frame, thereby obtaining the corresponding simulated action; by comparing the obtained simulated action with the standard action on the television, it can be known whether the human action is standard, and when it is not, correction prompt information can be sent to the person exercising in front of the television.
The embodiments of the present invention also provide a system for correcting human actions through a television and a motion-sensing accessory. The system may include:
a motion-sensing accessory, configured to identify the human action in the current frame and obtain the simulated action corresponding to the human action; compare the obtained simulated action with a preset action template and determine the difference between the simulated action and the action template; and generate correction prompt information according to the determined difference;
a television, communicatively connected with the motion-sensing accessory, configured to display the action template preset in the motion-sensing accessory and play the correction prompt information generated by the motion-sensing accessory, the correction prompt information being voice information, text information or image information.
In a preferred embodiment of the present invention, the motion-sensing accessory may be a motion-sensing camera.
In a preferred embodiment of the present invention, the simulated action includes a human skeleton corresponding to the human action.
Correspondingly, the motion-sensing accessory identifies the human action in the current frame and obtains the simulated action corresponding to the human action, specifically as follows:
the motion-sensing accessory uses a preset human-part classifier to identify a predetermined number of target sites corresponding to the human action in the current frame; clusters the pixels in the identified target sites according to a preset clustering algorithm to obtain the skeleton point corresponding to each target site; and assembles the obtained skeleton points into the simulated action corresponding to the human action.
Wherein, the motion-sensing accessory uses the preset human-part classifier to identify the predetermined number of target sites corresponding to the human action in the current frame, specifically as follows: the motion-sensing accessory obtains a human-part training set, the training set including a predetermined number of human-part sample images; extracts the feature vectors of the human-part sample images in the training set; calculates classification conditions for the human-part sample images in the training set based on the extracted feature vectors; and identifies the predetermined number of target sites corresponding to the human action in the current frame based on the classification conditions.
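The train-then-classify flow described above can be sketched as follows. The description names a support vector machine; to stay self-contained, this sketch trains a simple perceptron instead as a stand-in linear classifier, and the feature vectors and part labels are toy assumptions rather than features from the patent.

```python
def train_linear_classifier(X, y, epochs=50, lr=0.1):
    """Learn a linear decision rule (the 'classification conditions')
    from labelled feature vectors via the perceptron update."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            if yi * score <= 0:  # misclassified sample: update the rule
                w = [wj + lr * yi * xj for wj, xj in zip(w, xi)]
                b += lr * yi
    return w, b

def classify(w, b, x):
    """Apply the learned classification conditions to a new feature vector."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b > 0 else -1

# toy feature vectors for two body parts (+1 = "head", -1 = "hand")
X = [[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]]
y = [-1, -1, 1, 1]
w, b = train_linear_classifier(X, y)
print(classify(w, b, [0.85, 0.15]))  # 1, i.e. classified as "head"
```

In the patent's scheme, the same flow runs per target site over features extracted from the human-part sample images, with an SVM in place of the perceptron.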
The motion-sensing accessory compares the obtained simulated action with the preset action template and determines the difference between the simulated action and the action template, specifically as follows: the motion-sensing accessory makes the centre point of the obtained simulated action coincide with the centre point of the preset action template, and determines the difference between the simulated action and the action template at the preset position.
In addition, in another preferred embodiment of the present invention, the motion-sensing accessory is further configured to identify a preset object in the current frame and calculate the amount of exercise corresponding to the human body in the current frame according to the number of times the preset object moves back and forth within a preset region.
It should be noted that the specific implementation of each of the above functional modules is consistent with the description of steps S1 to S3 in the method for correcting human actions through a television and a motion-sensing accessory, and is not repeated here.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be completed by a program instructing the relevant hardware. The program is stored in a storage medium and includes instructions for causing a device (for example, a single-chip microcomputer or a chip) or a processor to perform all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk or an optical disc.
Therefore, in the method and system for correcting human actions through a television and a motion-sensing accessory provided by the embodiments of the present invention, the motion-sensing accessory monitors the human action and identifies it in the current frame, thereby obtaining the simulated action corresponding to the human action; by comparing the obtained simulated action with the standard action on the television, it can be known whether the human action is standard, and when it is not, correction prompt information can be sent to the person exercising in front of the television.
The above description of the embodiments of the present invention is provided to those skilled in the art for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the single embodiments disclosed. As described above, various alternatives and variations of the present invention will be apparent to those skilled in the relevant art. Thus, although some alternative embodiments have been discussed specifically, other embodiments will be apparent or may be derived relatively easily by those skilled in the art. The present invention is intended to cover all alternatives, modifications and variations discussed herein, as well as other embodiments falling within the spirit and scope of the above application.
The embodiments in this specification are described in a progressive manner; for identical or similar parts the embodiments may refer to one another, and each embodiment focuses on its differences from the others. Since the method embodiments are substantially similar to the system embodiments, their description is relatively brief, and for the relevant parts reference may be made to the description of the system embodiments.
Although the present invention has been described through its embodiments, those skilled in the art will appreciate that the present invention admits many variations and changes without departing from its spirit, and it is intended that the appended claims cover such variations and changes without departing from the spirit of the present invention.
Claims (10)
1. A method for correcting human actions through a television and a motion-sensing accessory, the television being communicatively connected with the motion-sensing accessory, characterised in that the method comprises:
the motion-sensing accessory identifying a human action in a current frame and obtaining a simulated action corresponding to the human action;
the motion-sensing accessory comparing the obtained simulated action with a preset action template, and determining a difference between the simulated action and the action template;
the motion-sensing accessory generating correction prompt information according to the determined difference, and broadcasting the correction prompt information through the television.
2. The method for correcting human actions through a television and a motion-sensing accessory according to claim 1, characterised in that the simulated action comprises a human skeleton corresponding to the human action;
the identifying of the human action in the current frame and obtaining of the simulated action corresponding to the human action comprises:
using a preset human-part classifier to identify a predetermined number of target sites corresponding to the human action in the current frame;
clustering the pixels in the identified target sites of the predetermined number according to a preset clustering algorithm to obtain a skeleton point corresponding to each target site;
assembling the obtained skeleton points into the simulated action corresponding to the human action.
3. The method for correcting human actions through a television and a motion-sensing accessory according to claim 2, characterised in that the using of the preset human-part classifier to identify the predetermined number of target sites corresponding to the human action in the current frame comprises:
obtaining a human-part training set, the training set comprising a predetermined number of human-part sample images;
extracting feature vectors of the human-part sample images in the training set;
calculating classification conditions for the human-part sample images in the training set based on the extracted feature vectors;
identifying the predetermined number of target sites corresponding to the human action in the current frame based on the classification conditions.
4. The method for correcting human actions through a television and a motion-sensing accessory according to claim 1, characterised in that the comparing of the obtained simulated action with the preset action template and determining of the difference between the simulated action and the action template comprises:
making the centre point of the obtained simulated action coincide with the centre point of the preset action template;
determining the difference between the simulated action and the action template at a preset position.
5. The method for correcting human actions through a television and a motion-sensing accessory according to claim 1, characterised in that the correction prompt information broadcast through the television is voice information, text information or image information.
6. The method for correcting human actions through a television and a motion-sensing accessory according to claim 1, characterised in that the method further comprises:
identifying a preset object in the current frame, and calculating the amount of exercise corresponding to the human body in the current frame according to the number of times the preset object moves back and forth within a preset region.
7. A system for correcting human actions through a television and a motion-sensing accessory, characterised in that the system comprises:
a motion-sensing accessory, configured to identify a human action in a current frame and obtain a simulated action corresponding to the human action; compare the obtained simulated action with a preset action template and determine a difference between the simulated action and the action template; and generate correction prompt information according to the determined difference;
a television, communicatively connected with the motion-sensing accessory, configured to display the action template preset in the motion-sensing accessory and play the correction prompt information generated by the motion-sensing accessory.
8. The system for correcting human actions through a television and a motion-sensing accessory according to claim 7, characterised in that the simulated action comprises a human skeleton corresponding to the human action;
the motion-sensing accessory identifies the human action in the current frame and obtains the simulated action corresponding to the human action, specifically as follows:
the motion-sensing accessory uses a preset human-part classifier to identify a predetermined number of target sites corresponding to the human action in the current frame; clusters the pixels in the identified target sites of the predetermined number according to a preset clustering algorithm to obtain a skeleton point corresponding to each target site; and assembles the obtained skeleton points into the simulated action corresponding to the human action.
9. The system for correcting human actions through a television and a motion-sensing accessory according to claim 8, characterised in that the motion-sensing accessory uses the preset human-part classifier to identify the predetermined number of target sites corresponding to the human action in the current frame, specifically as follows:
the motion-sensing accessory obtains a human-part training set, the training set comprising a predetermined number of human-part sample images; extracts feature vectors of the human-part sample images in the training set; calculates classification conditions for the human-part sample images in the training set based on the extracted feature vectors; and identifies the predetermined number of target sites corresponding to the human action in the current frame based on the classification conditions.
10. The system for correcting human actions through a television and a motion-sensing accessory according to claim 7, characterised in that the motion-sensing accessory is further configured to:
identify a preset object in the current frame, and calculate the amount of exercise corresponding to the human body in the current frame according to the number of times the preset object moves back and forth within a preset region.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610173854.9A CN105847987A (en) | 2016-03-24 | 2016-03-24 | Method and system for correcting human body actions through television and body feeling accessory component |
PCT/CN2016/088197 WO2017161734A1 (en) | 2016-03-24 | 2016-07-01 | Correction of human body movements via television and motion-sensing accessory and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610173854.9A CN105847987A (en) | 2016-03-24 | 2016-03-24 | Method and system for correcting human body actions through television and body feeling accessory component |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105847987A true CN105847987A (en) | 2016-08-10 |
Family
ID=56583258
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610173854.9A Pending CN105847987A (en) | 2016-03-24 | 2016-03-24 | Method and system for correcting human body actions through television and body feeling accessory component |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN105847987A (en) |
WO (1) | WO2017161734A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548675A (en) * | 2016-11-08 | 2017-03-29 | 湖南拓视觉信息技术有限公司 | Virtual military training method and device |
CN106599882A (en) * | 2017-01-07 | 2017-04-26 | 武克易 | Body sensing motion identification device |
CN106648112A (en) * | 2017-01-07 | 2017-05-10 | 武克易 | Somatosensory action recognition method |
CN108091380A (en) * | 2017-11-30 | 2018-05-29 | 中科院合肥技术创新工程院 | Teenager's basic exercise ability training system and method based on multi-sensor fusion |
CN109308437A (en) * | 2017-07-28 | 2019-02-05 | 上海形趣信息科技有限公司 | Action recognition error correction method, electronic equipment, storage medium |
CN109815776A (en) * | 2017-11-22 | 2019-05-28 | 腾讯科技(深圳)有限公司 | Action prompt method and apparatus, storage medium and electronic device |
CN110298218A (en) * | 2018-03-23 | 2019-10-01 | 上海形趣信息科技有限公司 | Interactive body-building device and interactive body-building system |
CN111354434A (en) * | 2018-12-21 | 2020-06-30 | 三星电子株式会社 | Electronic device and method for providing information |
CN113058261A (en) * | 2021-04-22 | 2021-07-02 | 杭州当贝网络科技有限公司 | Somatosensory action recognition method and system based on reality scene and game scene |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102324041A (en) * | 2011-09-09 | 2012-01-18 | 深圳泰山在线科技有限公司 | Pixel classification method, joint body gesture recognition method and mouse instruction generating method |
CN103230664A (en) * | 2013-04-17 | 2013-08-07 | 南通大学 | Upper limb movement rehabilitation training system and method based on Kinect sensor |
CN103390174A (en) * | 2012-05-07 | 2013-11-13 | 深圳泰山在线科技有限公司 | Physical education assisting system and method based on human body posture recognition |
CN104157107A (en) * | 2014-07-24 | 2014-11-19 | 燕山大学 | Human body posture correction device based on Kinect sensor |
CN105307017A (en) * | 2015-11-03 | 2016-02-03 | Tcl集团股份有限公司 | Method and device for correcting posture of smart television user |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7970176B2 (en) * | 2007-10-02 | 2011-06-28 | Omek Interactive, Inc. | Method and system for gesture classification |
US9377857B2 (en) * | 2009-05-01 | 2016-06-28 | Microsoft Technology Licensing, Llc | Show body position |
TWI537767B (en) * | 2013-10-04 | 2016-06-11 | 財團法人工業技術研究院 | System and method of multi-user coaching inside a tunable motion-sensing range |
-
2016
- 2016-03-24 CN CN201610173854.9A patent/CN105847987A/en active Pending
- 2016-07-01 WO PCT/CN2016/088197 patent/WO2017161734A1/en active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102324041A (en) * | 2011-09-09 | 2012-01-18 | 深圳泰山在线科技有限公司 | Pixel classification method, joint body gesture recognition method and mouse instruction generating method |
CN103390174A (en) * | 2012-05-07 | 2013-11-13 | 深圳泰山在线科技有限公司 | Physical education assisting system and method based on human body posture recognition |
CN103230664A (en) * | 2013-04-17 | 2013-08-07 | 南通大学 | Upper limb movement rehabilitation training system and method based on Kinect sensor |
CN104157107A (en) * | 2014-07-24 | 2014-11-19 | 燕山大学 | Human body posture correction device based on Kinect sensor |
CN105307017A (en) * | 2015-11-03 | 2016-02-03 | Tcl集团股份有限公司 | Method and device for correcting posture of smart television user |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106548675A (en) * | 2016-11-08 | 2017-03-29 | 湖南拓视觉信息技术有限公司 | Virtual military training method and device |
CN106599882A (en) * | 2017-01-07 | 2017-04-26 | 武克易 | Body sensing motion identification device |
CN106648112A (en) * | 2017-01-07 | 2017-05-10 | 武克易 | Somatosensory action recognition method |
CN109308437B (en) * | 2017-07-28 | 2022-06-24 | 上海史贝斯健身管理有限公司 | Motion recognition error correction method, electronic device, and storage medium |
CN109308437A (en) * | 2017-07-28 | 2019-02-05 | 上海形趣信息科技有限公司 | Action recognition error correction method, electronic equipment, storage medium |
CN109815776A (en) * | 2017-11-22 | 2019-05-28 | 腾讯科技(深圳)有限公司 | Action prompt method and apparatus, storage medium and electronic device |
CN109815776B (en) * | 2017-11-22 | 2023-02-10 | 腾讯科技(深圳)有限公司 | Action prompting method and device, storage medium and electronic device |
CN108091380A (en) * | 2017-11-30 | 2018-05-29 | 中科院合肥技术创新工程院 | Teenager's basic exercise ability training system and method based on multi-sensor fusion |
CN110298218A (en) * | 2018-03-23 | 2019-10-01 | 上海形趣信息科技有限公司 | Interactive body-building device and interactive body-building system |
CN110298218B (en) * | 2018-03-23 | 2022-03-04 | 上海史贝斯健身管理有限公司 | Interactive fitness device and interactive fitness system |
CN111354434A (en) * | 2018-12-21 | 2020-06-30 | 三星电子株式会社 | Electronic device and method for providing information |
CN111354434B (en) * | 2018-12-21 | 2024-01-19 | 三星电子株式会社 | Electronic device and method for providing information thereof |
CN113058261A (en) * | 2021-04-22 | 2021-07-02 | 杭州当贝网络科技有限公司 | Somatosensory action recognition method and system based on reality scene and game scene |
CN113058261B (en) * | 2021-04-22 | 2024-04-19 | 杭州当贝网络科技有限公司 | Somatosensory motion recognition method and system based on reality scene and game scene |
Also Published As
Publication number | Publication date |
---|---|
WO2017161734A1 (en) | 2017-09-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105847987A (en) | Method and system for correcting human body actions through television and body feeling accessory component | |
CN109863535B (en) | Motion recognition device, storage medium, and motion recognition method | |
CN109191588B (en) | Motion teaching method, motion teaching device, storage medium and electronic equipment | |
CN108717531B (en) | Human body posture estimation method based on Faster R-CNN | |
CN103186775B (en) | Based on the human motion identification method of mix description | |
US9183431B2 (en) | Apparatus and method for providing activity recognition based application service | |
CN109176512A (en) | A kind of method, robot and the control device of motion sensing control robot | |
CN107480730A (en) | Power equipment identification model construction method and system, the recognition methods of power equipment | |
US20150117760A1 (en) | Regionlets with Shift Invariant Neural Patterns for Object Detection | |
US20140037191A1 (en) | Learning-based pose estimation from depth maps | |
CN110738101A (en) | Behavior recognition method and device and computer readable storage medium | |
CN110728220A (en) | Gymnastics auxiliary training method based on human body action skeleton information | |
US11295527B2 (en) | Instant technique analysis for sports | |
CN109214366A (en) | Localized target recognition methods, apparatus and system again | |
US20160296795A1 (en) | Apparatus and method for analyzing golf motion | |
US11113571B2 (en) | Target object position prediction and motion tracking | |
CN113989944B (en) | Operation action recognition method, device and storage medium | |
CN107766851A (en) | A kind of face key independent positioning method and positioner | |
CN109740454A (en) | A kind of human body posture recognition methods based on YOLO-V3 | |
CN107967687A (en) | A kind of method and system for obtaining object walking posture | |
CN105069745A (en) | face-changing system based on common image sensor and enhanced augmented reality technology and method | |
CN105976395A (en) | Video target tracking method based on sparse representation | |
CN104680188A (en) | Method for constructing human body posture reference image library | |
CN109766782A (en) | Real-time body action identification method based on SVM | |
CN111797704B (en) | Action recognition method based on related object perception |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20160810 |
WD01 | Invention patent application deemed withdrawn after publication |