CN105809653B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN105809653B
CN105809653B
Authority
CN
China
Prior art keywords
accuracy
source video
frame number
target video
video frame
Prior art date
Legal status
Active
Application number
CN201410836597.3A
Other languages
Chinese (zh)
Other versions
CN105809653A (en)
Inventor
孙茂杰 (Sun Maojie)
Current Assignee
Shenzhen TCL New Technology Co Ltd
Original Assignee
Shenzhen TCL New Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen TCL New Technology Co Ltd filed Critical Shenzhen TCL New Technology Co Ltd
Priority to CN201410836597.3A priority Critical patent/CN105809653B/en
Priority to PCT/CN2015/090279 priority patent/WO2016107226A1/en
Publication of CN105809653A publication Critical patent/CN105809653A/en
Application granted granted Critical
Publication of CN105809653B publication Critical patent/CN105809653B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof

Abstract

The invention discloses an image processing method and device. The method includes: obtaining a source video image and a target video image; based on the source video image and the target video image, using a preset accuracy matching algorithm to respectively obtain an accuracy matching pair of the current action and an accuracy matching pair of the previous action, and obtaining the source video frame number and target video frame number of each pair; and, according to these frame numbers, calculating the continuity result of the current action using a preset continuity algorithm. The overall coordination of the user's imitated movements can thereby be obtained in real time from the continuity of the actions, providing an evaluation reference for the user's level of imitation.

Description

Image processing method and device
Technical field
The present invention relates to the technical field of image processing, and more particularly to an image processing method and device.
Background technique
Currently, there are many dance applications on the market that let a user learn by following the movements of a dance program; for example, a user can dance to music by following the arrows on the television image and the direction arrows on a dance mat. However, existing dance software can only indicate whether the user's current action is correct and give a final score for the dance; it cannot show the user in real time whether the current action flows coherently from the previous one, nor does the user's final score include a coherence component.
Summary of the invention
The present invention provides an image processing method and device, whose main purpose is to solve the technical problem of how to obtain, in real time, the continuity of the user's current action.
To achieve the above object, the present invention provides an image processing method, comprising:
obtaining a source video image and a target video image;
based on the source video image and the target video image, using a preset accuracy matching algorithm, respectively obtaining an accuracy matching pair of the current action and an accuracy matching pair of the previous action, and obtaining the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action;
according to the source video frame number and target video frame number of the accuracy matching pair of the current action, and the source video frame number and target video frame number of the accuracy matching pair of the previous action, calculating the continuity result of the current action using a preset continuity algorithm.
Preferably, the step of obtaining the accuracy matching pair of the current action and its source video frame number and target video frame number, based on the source video image and the target video image and using the preset accuracy matching algorithm, includes:
obtaining a source video frame sequence of the current action from the source video image, and obtaining a target video frame sequence of the current action from the target video image;
extracting key point information from the acquired frame sequences to obtain a key point group sequence of the source video and a key point group sequence of the target video;
matching corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video to obtain the accuracy matching pair of the current action, and obtaining the source video frame number and target video frame number of the accuracy matching pair of the current action.
Preferably, the key points include the head, hands, abdomen, and feet of the human body, and the step of matching corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video to obtain the accuracy matching pair of the current action includes:
obtaining, for each frame in multiple frames of the key point group sequence of the source video and of the key point group sequence of the target video, four angles based on the key points, the four angles being the angle formed by the head and the left hand, the angle formed by the head and the right hand, the angle formed by the abdomen and the left foot, and the angle formed by the abdomen and the right foot;
separately calculating the difference of each corresponding angle between each frame of the key point group sequence of the source video and of the key point group sequence of the target video, and obtaining the total difference of the four corresponding angles of each frame;
obtaining the minimum total difference among the total differences of the multiple frames;
obtaining the accuracy matching pair and the accuracy matching score of the current action according to the minimum total difference.
Preferably, the step of calculating the continuity result of the current action using the preset continuity algorithm, according to the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action, includes:
obtaining the difference between the source video frame number of the accuracy matching pair of the current action and the source video frame number of the accuracy matching pair of the previous action, to obtain a source video key frame interval;
obtaining the difference between the target video frame number of the accuracy matching pair of the current action and the target video frame number of the accuracy matching pair of the previous action, to obtain a target video key frame interval;
calculating the continuity score of the current action according to the source video key frame interval, the target video key frame interval, and a preset continuity formula; the specific calculation formulas are as follows:
Key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst;
the coherence degree l_c of the current action is then calculated from n_v and n_c by the coherence degree formula;
current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c), where l_c is the coherence degree; s_c is the current continuity score; i_v is the source video frame number of the current best match pair; i_c is the target video frame number of the current best match pair; i_vLst is the source video frame number of the previous best match pair; i_cLst is the target video frame number of the previous best match pair; n_v is the number of frames between the current source video key frame and the previous key frame; n_c is the number of frames between the current target video key frame and the previous key frame; l_cmin is the lower-threshold ratio of the continuity score.
Preferably, the method further includes:
weighting the accuracy matching score and the continuity score to obtain a comprehensive score of the current action.
Preferably, the method further includes:
when the source video image has finished playing, separately summing and averaging the accuracy matching scores, continuity scores, and comprehensive scores calculated each time, to obtain the average scores of the whole process.
An embodiment of the present invention also proposes an image processing apparatus, comprising:
an image acquisition module, configured to obtain a source video image and a target video image;
a matching calculation module, configured to, based on the source video image and the target video image and using a preset accuracy matching algorithm, respectively obtain an accuracy matching pair of the current action and an accuracy matching pair of the previous action, and obtain the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action;
a continuity calculation module, configured to calculate the continuity result of the current action using a preset continuity algorithm, according to the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action.
Preferably, the matching calculation module is further configured to obtain a source video frame sequence of the current action from the source video image and a target video frame sequence of the current action from the target video image; extract key point information from the acquired frame sequences to obtain a key point group sequence of the source video and a key point group sequence of the target video; match corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video to obtain the accuracy matching pair of the current action; and obtain the source video frame number and target video frame number of the accuracy matching pair of the current action.
Preferably, the key points include the head, hands, abdomen, and feet of the human body;
the matching calculation module is further configured to obtain, for each frame in multiple frames of the key point group sequence of the source video and of the key point group sequence of the target video, four angles based on the key points, the four angles being the angle formed by the head and the left hand, the angle formed by the head and the right hand, the angle formed by the abdomen and the left foot, and the angle formed by the abdomen and the right foot; separately calculate the difference of each corresponding angle between each frame of the key point group sequence of the source video and of the key point group sequence of the target video, and obtain the total difference of the four corresponding angles of each frame; obtain the minimum total difference among the total differences of the multiple frames; and obtain the accuracy matching pair and the accuracy matching score of the current action according to the minimum total difference.
Preferably, the continuity calculation module is further configured to obtain the difference between the source video frame number of the accuracy matching pair of the current action and the source video frame number of the accuracy matching pair of the previous action, to obtain the source video key frame interval; obtain the difference between the target video frame number of the accuracy matching pair of the current action and the target video frame number of the accuracy matching pair of the previous action, to obtain the target video key frame interval; and calculate the continuity score of the current action according to the source video key frame interval, the target video key frame interval, and a preset continuity formula. The specific calculation formulas are as follows:
Key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst;
the coherence degree l_c is then calculated from n_v and n_c by the coherence degree formula;
current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c), where l_c is the coherence degree; s_c is the current continuity score; i_v is the source video frame number of the current best match pair; i_c is the target video frame number of the current best match pair; i_vLst is the source video frame number of the previous best match pair; i_cLst is the target video frame number of the previous best match pair; n_v is the number of frames between the current source video key frame and the previous key frame; n_c is the number of frames between the current target video key frame and the previous key frame; l_cmin is the lower-threshold ratio of the continuity score.
Preferably, the apparatus further includes:
a comprehensive calculation module, configured to weight the accuracy matching score and the continuity score to obtain a comprehensive score of the current action, and, when the source video image has finished playing, to separately sum and average the accuracy matching scores, continuity scores, and comprehensive scores calculated each time, to obtain the average scores of the whole process.
With the image processing method and device provided by the invention, a source video image and a target video image are obtained; based on the source video image and the target video image, a preset accuracy matching algorithm respectively obtains an accuracy matching pair of the current action and an accuracy matching pair of the previous action, together with the source video frame number and target video frame number of each pair; and, according to these frame numbers, a preset continuity algorithm calculates the continuity result of the current action. The overall coordination of the user's imitated movements can thereby be obtained in real time from the continuity of the actions, providing an evaluation reference for the user's level of imitation.
Brief description of the drawings
Fig. 1 is a flow diagram of a first embodiment of the image processing method of the present invention;
Fig. 2 is a flow diagram of a second embodiment of the image processing method of the present invention;
Fig. 3 is a functional block diagram of a first embodiment of the image processing apparatus of the present invention;
Fig. 4 is a functional block diagram of a second embodiment of the image processing apparatus of the present invention.
The realization of the object, the functional characteristics, and the advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
As shown in Fig. 1, a first embodiment of the invention proposes an image processing method, comprising:
Step S101: obtain a source video image and a target video image.
In this embodiment, the source video image refers to the reference video image played to the user, and the target video image is the video image, captured by a camera or other photographing module, of the user's actions as the user imitates the movements in the source video image.
In this embodiment, the source video and the video image collected by the camera can be displayed in real time, continuity calculation can be performed on the user's imitated actions, and a corresponding score can be given.
The score for the user's imitation consists of two parts, action matching accuracy and action continuity; the continuity algorithm depends on the action matching accuracy algorithm.
Step S102: based on the source video image and the target video image, using a preset accuracy matching algorithm, respectively obtain an accuracy matching pair of the current action and an accuracy matching pair of the previous action, and obtain the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action.
The quantitative analysis of continuity does not rely solely on the video frame sequence of the user captured at the camera end; it must also be judged against the source video frame sequence. If an action is completed quickly in the source video, then that action should also be completed quickly at the camera end.
Action continuity is an important indicator for measuring a dancer's level of learning; by calculating the dancer's action continuity, a comprehensive dance score can be given.
Specifically, after the source video image and the target video image are obtained, a source video frame sequence of the current action is obtained from the source video image, and a target video frame sequence of the current action is obtained from the target video image.
Then, key point information is extracted from the acquired frame sequences to obtain the key point group sequence of the source video and the key point group sequence of the target video.
Corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video are matched to obtain the accuracy matching pair of the current action, and the source video frame number and target video frame number of the accuracy matching pair of the current action are obtained.
The key points may include the head, hands, abdomen, feet, etc. of the human body.
Matching the corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video may use the following scheme:
obtaining, for each frame in multiple frames of the key point group sequence of the source video and of the key point group sequence of the target video, four angles based on the key points, the four angles being the angle formed by the head and the left hand, the angle formed by the head and the right hand, the angle formed by the abdomen and the left foot, and the angle formed by the abdomen and the right foot;
separately calculating the difference of each corresponding angle between each frame of the key point group sequence of the source video and of the key point group sequence of the target video, and obtaining the total difference of the four corresponding angles of each frame;
obtaining the minimum total difference among the total differences of the multiple frames, and obtaining the accuracy matching pair and accuracy matching score of the current action according to the minimum total difference, which in turn gives the source video frame number and target video frame number of the accuracy matching pair of the current action.
Specifically, the four angles of the first frame of the current action are obtained at the camera end and at the source video end, the difference of each corresponding angle between the camera end and the source video end is calculated, and the differences of the four angles are summed to obtain the total difference of the first frame.
On the same principle, the total difference of each of the multiple frames of the current action is obtained, and the minimum among the multiple total differences is found; the first optimal accuracy matching pair of the current action is obtained from that minimum total difference. Assuming the frame number of the current action corresponding to the minimum total difference is 90, the first optimal accuracy matching pair consists of the frame with frame number 90 at the camera end and the frame with frame number 90 at the source video end.
During the above key point extraction, frame preprocessing must be performed on each acquired frame of the camera-end and source-video-end images. Frame preprocessing comprises background frame acquisition, image grayscale binarization and target extraction, and key point identification, as follows:
Background frame acquisition:
Initially, one background picture is obtained from the camera end and one from the source video end. The background at the camera end can be obtained by interacting with the user, notifying the user to click a button and shoot a picture of the environment background. The background at the source video end is extracted from the first frame of the video stream. The two backgrounds are modeled for later real-time updating during the matching process.
Image grayscale binarization and target extraction:
The processing is identical for the camera end and the source video end; the following is done for each frame:
convert both the current frame and the background frame to grayscale, that is, convert RGB colors to gray values;
subtract the grayscale results of the current frame and the background frame from each other and perform noise elimination; where the difference is less than a set threshold the pixel is considered background area, otherwise it is target area, i.e., the human body region;
establish a corresponding matrix in which points in the background area have value 0 and points in the target area have value 1.
Once the above process is complete, the resulting matrix is used as input for key point identification and subsequent processing.
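As an illustration of the preprocessing just described, here is a minimal NumPy sketch; the 0.299/0.587/0.114 grayscale weights and the threshold of 30 are conventional, illustrative choices rather than values fixed by the patent.

```python
import numpy as np

def to_gray(rgb):
    # rgb: H x W x 3 array; convert RGB colors to gray values
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(frame_rgb, background_rgb, threshold=30.0):
    """Return an H x W 0/1 matrix: 1 = target (human body) area, 0 = background."""
    diff = np.abs(to_gray(frame_rgb) - to_gray(background_rgb))
    return (diff >= threshold).astype(np.uint8)
```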
Key point identification:
The key points comprise six points in total: the head, the left and right hands, the abdomen (center), and the left and right feet.
The matrix obtained above is processed further to obtain the smallest rectangle containing the target area, which may be called the target rectangle. The target rectangle is divided into an upper region and a lower region; the left and right feet are identified in the lower region, and the other points are identified in the upper region.
Left and right feet: in the lower region, the matrix is scanned row by row from top to bottom on the left side and on the right side; the first rows in which a 1 (target area) appears are taken as the rows of the left foot and the right foot respectively, from which the two foot key points are obtained.
Left and right hands: in the upper region, the matrix is scanned column by column, from left to right and from right to left respectively; the first columns in which a 1 (target area) appears are taken as the columns of the left hand and the right hand respectively, from which the two hand key points are obtained.
Abdomen (center): the boundary row between the upper and lower regions is taken as the row of the abdomen (center) key point; that row is scanned, and the midpoint of the longest run of consecutive 1s is taken as the abdomen key point.
Head: a rectangular region composed of a specified number of columns to the left and right of the abdomen key point's column is scanned; the highest point in this region is the head key point.
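Under the scanning rules just described, a key point finder might look like the following sketch; the 50/50 split between the upper and lower regions and the head-search half-width are illustrative assumptions, since the patent leaves these parameters unspecified. The scans return rows for the feet and head and columns for the hands, as in the text; pairing them into full (row, column) coordinates is left to the caller.

```python
import numpy as np

def longest_run_midpoint(row):
    """Column of the midpoint of the longest run of consecutive 1s in a 0/1 row."""
    best_len, best_mid, start = 0, None, None
    for j, v in enumerate(np.append(row, 0)):      # sentinel 0 closes the last run
        if v and start is None:
            start = j
        elif not v and start is not None:
            if j - start > best_len:
                best_len, best_mid = j - start, (start + j - 1) // 2
            start = None
    return best_mid

def find_keypoints(mask, head_halfwidth=10):
    ys, xs = np.nonzero(mask)
    top, bottom, left, right = ys.min(), ys.max(), xs.min(), xs.max()
    mid = (top + bottom) // 2                      # upper/lower boundary row
    cx = (left + right) // 2

    # feet: first row (top to bottom) containing a 1 in each half of the lower region
    left_foot_row = mid + np.nonzero(mask[mid:, left:cx + 1].any(axis=1))[0][0]
    right_foot_row = mid + np.nonzero(mask[mid:, cx:right + 1].any(axis=1))[0][0]

    # hands: first column containing a 1, scanning the upper region from each side
    hand_cols = np.nonzero(mask[top:mid, :].any(axis=0))[0]
    left_hand_col, right_hand_col = hand_cols[0], hand_cols[-1]

    # abdomen: midpoint of the longest run of 1s on the boundary row
    abdomen = (mid, longest_run_midpoint(mask[mid, :]))

    # head: highest 1 within +/- head_halfwidth columns of the abdomen column
    band = mask[:, max(0, abdomen[1] - head_halfwidth):abdomen[1] + head_halfwidth + 1]
    head_row = np.nonzero(band.any(axis=1))[0][0]

    return {"head_row": head_row, "left_hand_col": left_hand_col,
            "right_hand_col": right_hand_col, "abdomen": abdomen,
            "left_foot_row": left_foot_row, "right_foot_row": right_foot_row}
```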
The matching criterion for accuracy is the angle, i.e., the angles of the head with the left and right hands and of the abdomen (center) with the left and right feet; once the key points of a frame are identified, these four angles can be calculated.
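The patent does not fix an angle convention, so the sketch below takes each angle as the atan2 direction of the vector from the reference point (head or abdomen) to the limb point, with points given as (row, column) coordinates assembled from the scans above; any consistent convention applied identically at both video ends would serve.

```python
import math

def limb_angle(ref, pt):
    # direction (radians) of the vector ref -> pt, with points given as (row, col)
    return math.atan2(pt[0] - ref[0], pt[1] - ref[1])

def frame_angles(head, left_hand, right_hand, abdomen, left_foot, right_foot):
    """The four matching angles of one frame, as consumed by best_match above."""
    return (limb_angle(head, left_hand), limb_angle(head, right_hand),
            limb_angle(abdomen, left_foot), limb_angle(abdomen, right_foot))
```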
Based on the above matching principle, the accuracy matching pair of the previous action and its source video frame number and target video frame number can be obtained in the same way.
Step S103: according to the source video frame number and target video frame number of the accuracy matching pair of the current action, and the source video frame number and target video frame number of the accuracy matching pair of the previous action, calculate the continuity result of the current action using a preset continuity algorithm.
Specifically, the difference between the source video frame number of the accuracy matching pair of the current action and the source video frame number of the accuracy matching pair of the previous action is obtained, giving the source video key frame interval;
the difference between the target video frame number of the accuracy matching pair of the current action and the target video frame number of the accuracy matching pair of the previous action is obtained, giving the target video key frame interval;
the continuity score of the current action is calculated according to the source video key frame interval, the target video key frame interval, and a preset continuity formula.
The specific calculation formulas can be as follows:
The key frame intervals are calculated using the following equation:
Key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst; (1)
The coherence degree l_c is then calculated from n_v and n_c by the coherence degree formula (2);
Current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c); (3)
where l_c is the coherence degree, with values in [0, 1]; s_c is the current continuity score, with values in [0, 100]; i_v is the source video frame number of the current best match pair; i_c is the camera (target video) frame number of the current best match pair; i_vLst is the source video frame number of the previous best match pair; i_cLst is the camera (target video) frame number of the previous best match pair; n_v is the number of frames between the current source video key frame and the previous key frame; n_c is the number of frames between the current camera (target video) key frame and the previous key frame; l_cmin is the lower-threshold ratio of the continuity score, in [0, 1); and k_c is the continuity scoring weight, in (0, 1).
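The coherence degree formula (2) is not reproduced in the text, so the sketch below assumes l_c = min(n_v, n_c) / max(n_v, n_c), which lies in [0, 1] and equals 1 when the source and camera pacing coincide, consistent with the stated value range; the default l_cmin of 0.3 is likewise illustrative.

```python
def continuity_score(i_v, i_c, i_v_lst, i_c_lst, l_c_min=0.3):
    n_v = i_v - i_v_lst                      # source video key frame interval  (1)
    n_c = i_c - i_c_lst                      # target video key frame interval  (1)
    if max(n_v, n_c) <= 0:                   # degenerate intervals: treat as coherent
        l_c = 1.0
    else:
        l_c = min(n_v, n_c) / max(n_v, n_c)  # assumed coherence degree         (2)
    return 100.0 * (l_c_min + (1.0 - l_c_min) * l_c)  # continuity score        (3)
```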
Through the above scheme, this embodiment obtains a source video image and a target video image; based on them, a preset accuracy matching algorithm respectively obtains the accuracy matching pair of the current action and the accuracy matching pair of the previous action, together with the source video frame number and target video frame number of each pair; and from these frame numbers a preset continuity algorithm calculates the continuity result of the current action. The overall coordination of the user's dance movements can thereby be obtained in real time from the continuity of the actions.
As shown in Fig. 2, a second embodiment of the invention proposes an image processing method which, on the basis of the above embodiment, further includes:
Step S104: weight the accuracy matching score and the continuity score to obtain a comprehensive score of the current action.
Step S105: when the source video image has finished playing, separately sum and average the accuracy matching scores, continuity scores, and comprehensive scores calculated each time, to obtain the average scores of the whole process.
Specifically, once the accuracy and continuity matching of the current video frame sequence have been calculated, the comprehensive score of the current sequence is obtained by weighted summation. The sequence state is retained and the above two algorithms are repeated until playback finishes; the accuracy scores, continuity scores, and comprehensive scores of every round are then separately summed and averaged to obtain the average scores of the whole process.
The specific algorithm is as follows:
The parameters involved are shown in Table 1 below (reconstructed here from the definitions given in the text):
Table 1
Q_pv, Q_pc: key point group sequences of the source video and of the camera (target video)
n_match++: number of frames accumulated before each matching calculation (settable)
i_v, i_c: source video and target video frame numbers of the current best match pair
i_vLst, i_cLst: source video and target video frame numbers of the previous best match pair
n_v, n_c: key frame intervals of the source video and of the target video
l_c: coherence degree, in [0, 1]
l_cmin: lower-threshold ratio of the continuity score, in [0, 1)
k_c: continuity scoring weight, in (0, 1)
s_m, s_c, s: current accuracy matching score, continuity score, and comprehensive score
s_mAv, s_cAv, s_av: average accuracy matching score, average continuity score, and average comprehensive score
Each time the action matching algorithm has been called, the continuity algorithm is called immediately afterwards.
In the action accuracy matching algorithm, frame sequences are continuously acquired from the source video and the camera (key points are extracted as soon as each frame is acquired), giving the key point group sequences Q_pv and Q_pc. The two sequences advance synchronously; when their length reaches n_match++ (settable), an action matching calculation is performed, the result being the best match pair and its matching score, with the frame numbers of the matched pair being i_v and i_c respectively.
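Putting the pieces together, the following sketch of this acquisition-and-matching cycle reuses the best_match and continuity_score helpers sketched earlier; the frame streams, the extract callback, and the default n_match of 15 are illustrative stand-ins rather than elements specified by the patent.

```python
def run_matching(source_stream, camera_stream, extract, n_match=15):
    """source_stream / camera_stream yield (native frame number, frame) pairs;
    extract(frame) returns the frame's four key-point angles. Every n_match
    frames a matching calculation runs and (match diff, continuity) is yielded."""
    q_pv, q_pc = [], []                        # key point group sequences Q_pv, Q_pc
    i_v_lst = i_c_lst = 0
    for (num_v, f_v), (num_c, f_c) in zip(source_stream, camera_stream):
        q_pv.append((num_v, extract(f_v)))     # extract key points per acquired frame
        q_pc.append((num_c, extract(f_c)))
        if len(q_pv) == n_match:               # n_match++ reached: run matching
            k, s_m = best_match([a for _, a in q_pv], [a for _, a in q_pc])
            i_v, i_c = q_pv[k][0], q_pc[k][0]  # frame numbers of the best match pair
            s_c = continuity_score(i_v, i_c, i_v_lst, i_c_lst)
            yield s_m, s_c
            i_v_lst, i_c_lst = i_v, i_c
            q_pv.clear(); q_pc.clear()         # empty Q_pv and Q_pc, then repeat
```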
Then the key frame intervals are calculated using the following equation:
Key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst; (1)
The coherence degree l_c is then calculated from n_v and n_c by the coherence degree formula (2);
Current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c); (3)
Finally, the current comprehensive score and the evaluation scores can be calculated:
Current comprehensive score: s = s_m·(1 - k_c) + s_c·k_c; (4)
Average continuity score: s_cAv, the average of the continuity scores obtained so far; (5)
Average comprehensive score: s_av = s_mAv·(1 - k_c) + s_cAv·k_c; (6)
At this point the continuity calculation of the current sequence is finished and the comprehensive score is obtained; Q_pv and Q_pc are emptied, and the above process is repeated until playback finishes.
The accuracy scores, continuity scores, and comprehensive scores of every round are separately summed and averaged to obtain the average scores of the whole process.
For the parameters involved in the above formulas, refer to Table 1 above.
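A minimal sketch of formulas (4) through (6) follows; the continuity weight k_c of 0.4 is an illustrative value within the stated (0, 1) range, not one fixed by the patent.

```python
def comprehensive_score(s_m, s_c, k_c=0.4):
    # current comprehensive score, formula (4)
    return s_m * (1.0 - k_c) + s_c * k_c

def process_averages(match_scores, continuity_scores, k_c=0.4):
    """Whole-process averages: s_mAv, s_cAv (formula (5)), and s_av (formula (6))."""
    s_m_av = sum(match_scores) / len(match_scores)
    s_c_av = sum(continuity_scores) / len(continuity_scores)
    s_av = s_m_av * (1.0 - k_c) + s_c_av * k_c
    return s_m_av, s_c_av, s_av
```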
Through the above scheme, this embodiment obtains a source video image and a target video image; based on them, a preset accuracy matching algorithm respectively obtains the accuracy matching pairs of the current and previous actions together with their source video and target video frame numbers; and from these frame numbers a preset continuity algorithm calculates the continuity result of the current action. The overall coordination of the user's dance movements can thereby be obtained in real time from the continuity of the actions; in addition, the current comprehensive score, the average scores, and the overall evaluation scores can be calculated, providing an evaluation reference for the effect of the user's actions.
Accordingly, an image processing apparatus of the invention is proposed.
As shown in Fig. 3, a first embodiment of the invention proposes an image processing apparatus, comprising: an image acquisition module 201, a matching calculation module 202, and a continuity calculation module 203, in which:
the image acquisition module 201 is configured to obtain a source video image and a target video image;
the matching calculation module 202 is configured to, based on the source video image and the target video image and using a preset accuracy matching algorithm, respectively obtain an accuracy matching pair of the current action and an accuracy matching pair of the previous action, and obtain the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action;
the continuity calculation module 203 is configured to calculate the continuity result of the current action using a preset continuity algorithm, according to the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action.
In this embodiment, the source video image refers to the reference video image played to the user, and the target video image is the video image, captured by a camera or other photographing module, of the user's actions as the user imitates the movements in the source video image.
In this embodiment, the source video and the video image collected by the camera can be displayed in real time, continuity calculation can be performed on the user's imitated actions, and a corresponding score can be given.
The score for the user's imitation consists of two parts, action matching accuracy and action continuity; the continuity algorithm depends on the action matching accuracy algorithm.
The quantitative analysis of continuity does not rely solely on the video frame sequence of the user captured at the camera end; it must also be judged against the source video frame sequence. If an action is completed quickly in the source video, then that action should also be completed quickly at the camera end.
Action continuity is an important indicator for measuring a dancer's level of learning; by calculating the dancer's action continuity, a comprehensive dance score can be given.
Specifically, after the source video image and the target video image are obtained, a source video frame sequence of the current action is obtained from the source video image, and a target video frame sequence of the current action is obtained from the target video image.
Then, key point information is extracted from the acquired frame sequences to obtain the key point group sequence of the source video and the key point group sequence of the target video.
Corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video are matched to obtain the accuracy matching pair of the current action, and the source video frame number and target video frame number of the accuracy matching pair of the current action are obtained.
The key points may include the head, hands, abdomen, feet, etc. of the human body.
Matching the corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video may use the following scheme:
obtaining, for each frame in multiple frames of the key point group sequence of the source video and of the key point group sequence of the target video, four angles based on the key points, the four angles being the angle formed by the head and the left hand, the angle formed by the head and the right hand, the angle formed by the abdomen and the left foot, and the angle formed by the abdomen and the right foot;
separately calculating the difference of each corresponding angle between each frame of the key point group sequence of the source video and of the key point group sequence of the target video, and obtaining the total difference of the four corresponding angles of each frame;
obtaining the minimum total difference among the total differences of the multiple frames, and obtaining the accuracy matching pair and accuracy matching score of the current action according to the minimum total difference, which in turn gives the source video frame number and target video frame number of the accuracy matching pair of the current action.
Specifically, the four angles of the first frame of the current action are obtained at the camera end and at the source video end, the difference of each corresponding angle between the camera end and the source video end is calculated, and the differences of the four angles are summed to obtain the total difference of the first frame.
On the same principle, the total difference of each of the multiple frames of the current action is obtained, and the minimum among the multiple total differences is found; the first optimal accuracy matching pair of the current action is obtained from that minimum total difference. Assuming the frame number of the current action corresponding to the minimum total difference is 90, the first optimal accuracy matching pair consists of the frame with frame number 90 at the camera end and the frame with frame number 90 at the source video end.
During the above key point extraction, frame preprocessing must be performed on each acquired frame of the camera-end and source-video-end images. Frame preprocessing comprises background frame acquisition, image grayscale binarization and target extraction, and key point identification, as follows:
Background frame acquisition:
Initially, one background picture is obtained from the camera end and one from the source video end. The background at the camera end can be obtained by interacting with the user, notifying the user to click a button and shoot a picture of the environment background. The background at the source video end is extracted from the first frame of the video stream. The two backgrounds are modeled for later real-time updating during the matching process.
Image grayscale binarization and target extraction:
The processing is identical for the camera end and the source video end; the following is done for each frame:
convert both the current frame and the background frame to grayscale, that is, convert RGB colors to gray values;
subtract the grayscale results of the current frame and the background frame from each other and perform noise elimination; where the difference is less than a set threshold the pixel is considered background area, otherwise it is target area, i.e., the human body region;
establish a corresponding matrix in which points in the background area have value 0 and points in the target area have value 1.
Once the above process is complete, the resulting matrix is used as input for key point identification and subsequent processing.
Key point identification:
The key points comprise six points in total: the head, the left and right hands, the abdomen (center), and the left and right feet.
The matrix obtained above is processed further to obtain the smallest rectangle containing the target area, which may be called the target rectangle. The target rectangle is divided into an upper region and a lower region; the left and right feet are identified in the lower region, and the other points are identified in the upper region.
Left and right feet: in the lower region, the matrix is scanned row by row from top to bottom on the left side and on the right side; the first rows in which a 1 (target area) appears are taken as the rows of the left foot and the right foot respectively, from which the two foot key points are obtained.
Left and right hands: in the upper region, the matrix is scanned column by column, from left to right and from right to left respectively; the first columns in which a 1 (target area) appears are taken as the columns of the left hand and the right hand respectively, from which the two hand key points are obtained.
Abdomen (center): the boundary row between the upper and lower regions is taken as the row of the abdomen (center) key point; that row is scanned, and the midpoint of the longest run of consecutive 1s is taken as the abdomen key point.
Head: a rectangular region composed of a specified number of columns to the left and right of the abdomen key point's column is scanned; the highest point in this region is the head key point.
The matching criterion for accuracy is the angle, i.e., the angles of the head with the left and right hands and of the abdomen (center) with the left and right feet; once the key points of a frame are identified, these four angles can be calculated.
Based on the above matching principle, the accuracy matching pair of the previous action and its source video frame number and target video frame number can be obtained in the same way.
After the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action are obtained, the continuity result of the current action is calculated using a preset continuity algorithm according to these frame numbers.
Specifically, the difference between the source video frame number of the accuracy matching pair of the current action and the source video frame number of the accuracy matching pair of the previous action is obtained, giving the source video key frame interval;
the difference between the target video frame number of the accuracy matching pair of the current action and the target video frame number of the accuracy matching pair of the previous action is obtained, giving the target video key frame interval;
the continuity score of the current action is calculated according to the source video key frame interval, the target video key frame interval, and a preset continuity formula.
The specific calculation formulas can be as follows:
The key frame intervals are calculated using the following equation:
Key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst; (1)
The coherence degree l_c is then calculated from n_v and n_c by the coherence degree formula (2);
Current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c); (3)
where l_c is the coherence degree, with values in [0, 1]; s_c is the current continuity score, with values in [0, 100]; i_v is the source video frame number of the current best match pair; i_c is the camera (target video) frame number of the current best match pair; i_vLst is the source video frame number of the previous best match pair; i_cLst is the camera (target video) frame number of the previous best match pair; n_v is the number of frames between the current source video key frame and the previous key frame; n_c is the number of frames between the current camera (target video) key frame and the previous key frame; l_cmin is the lower-threshold ratio of the continuity score, in [0, 1); and k_c is the continuity scoring weight, in (0, 1).
Through the above scheme, this embodiment obtains a source video image and a target video image; based on them, a preset accuracy matching algorithm respectively obtains the accuracy matching pair of the current action and the accuracy matching pair of the previous action, together with the source video frame number and target video frame number of each pair; and from these frame numbers a preset continuity algorithm calculates the continuity result of the current action. The overall coordination of the user's dance movements can thereby be obtained in real time from the continuity of the actions.
As shown in Fig. 4, a second embodiment of the invention proposes an image processing apparatus which, on the basis of the above embodiment, further includes:
a comprehensive calculation module 204, configured to weight the accuracy matching score and the continuity score to obtain a comprehensive score of the current action, and, when the source video image has finished playing, to separately sum and average the accuracy matching scores, continuity scores, and comprehensive scores calculated each time, to obtain the average scores of the whole process.
Specifically, once the accuracy and continuity matching of the current video frame sequence have been calculated, the comprehensive score of the current sequence is obtained by weighted summation. The sequence state is retained and the above two algorithms are repeated until playback finishes; the accuracy scores, continuity scores, and comprehensive scores of every round are then separately summed and averaged to obtain the average scores of the whole process.
The specific algorithm is as follows:
The parameters involved are as listed in Table 1 above.
Each time the action matching algorithm has been called, the continuity algorithm is called immediately afterwards.
In the action accuracy matching algorithm, frame sequences are continuously acquired from the source video and the camera (key points are extracted as soon as each frame is acquired), giving the key point group sequences Q_pv and Q_pc. The two sequences advance synchronously; when their length reaches n_match++ (settable), an action matching calculation is performed, the result being the best match pair and its matching score, with the frame numbers of the matched pair being i_v and i_c respectively.
Then the key frame intervals are calculated using the following equation:
Key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst; (1)
The coherence degree l_c is then calculated from n_v and n_c by the coherence degree formula (2);
Current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c); (3)
Finally, the current comprehensive score and the evaluation scores can be calculated:
Current comprehensive score: s = s_m·(1 - k_c) + s_c·k_c; (4)
Average continuity score: s_cAv, the average of the continuity scores obtained so far; (5)
Average comprehensive score: s_av = s_mAv·(1 - k_c) + s_cAv·k_c; (6)
At this point the continuity calculation of the current sequence is finished and the comprehensive score is obtained; Q_pv and Q_pc are emptied, and the above process is repeated until playback finishes.
The accuracy scores, continuity scores, and comprehensive scores of every round are separately summed and averaged to obtain the average scores of the whole process.
For the parameters involved in the above formulas, refer to Table 1 above.
Through the above scheme, this embodiment obtains a source video image and a target video image; based on them, a preset accuracy matching algorithm respectively obtains the accuracy matching pairs of the current and previous actions together with their source video and target video frame numbers; and from these frame numbers a preset continuity algorithm calculates the continuity result of the current action. The overall coordination of the user's dance movements can thereby be obtained in real time from the continuity of the actions; in addition, the current comprehensive score, the average scores, and the overall evaluation scores can be calculated, providing an evaluation reference for the effect of the user's actions.
The application fields of the embodiments of the present invention may involve game scoring in consumer electronics such as game consoles, PCs, and televisions, the learning of action classes such as dancing, and the field of action behavior recognition.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the invention; all equivalent structural or process transformations made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, are likewise included within the scope of patent protection of the present invention.

Claims (9)

1. An image processing method, characterized by comprising:
obtaining a source video image and a target video image;
based on the source video image and the target video image, using a preset accuracy matching algorithm, respectively obtaining an accuracy matching pair of the current action and an accuracy matching pair of the previous action, and obtaining the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action;
according to the source video frame number and target video frame number of the accuracy matching pair of the current action, and the source video frame number and target video frame number of the accuracy matching pair of the previous action, calculating the continuity result of the current action using a preset continuity algorithm, which specifically includes:
obtaining the difference between the source video frame number of the accuracy matching pair of the current action and the source video frame number of the accuracy matching pair of the previous action, to obtain a source video key frame interval;
obtaining the difference between the target video frame number of the accuracy matching pair of the current action and the target video frame number of the accuracy matching pair of the previous action, to obtain a target video key frame interval;
calculating the continuity score of the current action according to the source video key frame interval, the target video key frame interval, and a preset continuity formula; the specific calculation formulas are as follows:
key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst;
the coherence degree l_c is then calculated from n_v and n_c by the coherence degree formula;
current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c), where l_c is the coherence degree; s_c is the current continuity score; i_v is the source video frame number of the current best match pair; i_c is the target video frame number of the current best match pair; i_vLst is the source video frame number of the previous best match pair; i_cLst is the target video frame number of the previous best match pair; n_v is the number of frames between the current source video key frame and the previous key frame; n_c is the number of frames between the current target video key frame and the previous key frame; l_cmin is the lower-threshold ratio of the continuity score.
2. The method according to claim 1, wherein the step of obtaining the accuracy matching pair of the current action and the source video frame number and target video frame number of that matching pair, based on the source video image and the target video image and using the preset accuracy matching algorithm, comprises:
obtaining a source video frame sequence of the current action from the source video image, and obtaining a target video frame sequence of the current action from the target video image;
extracting key point information from the acquired frame sequences to obtain a key point group sequence of the source video and a key point group sequence of the target video;
matching corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video to obtain the accuracy matching pair of the current action, and obtaining the source video frame number and target video frame number of the accuracy matching pair of the current action.
3. The method according to claim 2, wherein the key points include the head, hands, abdomen, and feet of the human body, and the step of matching corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video to obtain the accuracy matching pair of the current action comprises:
obtaining, for each frame in multiple frames of the key point group sequence of the source video and of the key point group sequence of the target video, four angles based on the key points, the four angles being the angle formed by the head and the left hand, the angle formed by the head and the right hand, the angle formed by the abdomen and the left foot, and the angle formed by the abdomen and the right foot;
separately calculating the difference of each corresponding angle between each frame of the key point group sequence of the source video and of the key point group sequence of the target video, and obtaining the total difference of the four corresponding angles of each frame;
obtaining the minimum total difference among the total differences of the multiple frames;
obtaining the accuracy matching pair and the accuracy matching score of the current action according to the minimum total difference.
4. The method according to claim 3, further comprising:
weighting the accuracy matching score and the continuity score to obtain a comprehensive score of the current action.
5. The method according to claim 4, further comprising:
when the source video image has finished playing, separately summing and averaging the accuracy matching scores, continuity scores, and comprehensive scores calculated each time, to obtain the average scores of the whole process.
6. An image processing apparatus, characterized by comprising:
an image acquisition module, configured to obtain a source video image and a target video image;
a matching calculation module, configured to, based on the source video image and the target video image and using a preset accuracy matching algorithm, respectively obtain an accuracy matching pair of the current action and an accuracy matching pair of the previous action, and obtain the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action;
a continuity calculation module, configured to calculate the continuity result of the current action using a preset continuity algorithm, according to the source video frame number and target video frame number of the accuracy matching pair of the current action and the source video frame number and target video frame number of the accuracy matching pair of the previous action, which specifically includes: obtaining the difference between the source video frame number of the accuracy matching pair of the current action and the source video frame number of the accuracy matching pair of the previous action, to obtain a source video key frame interval; obtaining the difference between the target video frame number of the accuracy matching pair of the current action and the target video frame number of the accuracy matching pair of the previous action, to obtain a target video key frame interval; and calculating the continuity score of the current action according to the source video key frame interval, the target video key frame interval, and a preset continuity formula; the specific calculation formulas are as follows:
key frame intervals: n_v = i_v - i_vLst, n_c = i_c - i_cLst;
the coherence degree l_c is then calculated from n_v and n_c by the coherence degree formula;
current continuity score formula: s_c = 100·(l_cmin + (1 - l_cmin)·l_c), where l_c is the coherence degree; s_c is the current continuity score; i_v is the source video frame number of the current best match pair; i_c is the target video frame number of the current best match pair; i_vLst is the source video frame number of the previous best match pair; i_cLst is the target video frame number of the previous best match pair; n_v is the number of frames between the current source video key frame and the previous key frame; n_c is the number of frames between the current target video key frame and the previous key frame; l_cmin is the lower-threshold ratio of the continuity score.
7. The apparatus according to claim 6, wherein:
the matching calculation module is further configured to obtain a source video frame sequence of the current action from the source video image and a target video frame sequence of the current action from the target video image; extract key point information from the acquired frame sequences to obtain a key point group sequence of the source video and a key point group sequence of the target video; match corresponding key points in the key point group sequence of the source video and the key point group sequence of the target video to obtain the accuracy matching pair of the current action; and obtain the source video frame number and target video frame number of the accuracy matching pair of the current action.
8. The apparatus according to claim 7, wherein the key points include the head, hands, abdomen, and feet of the human body; and
the matching calculation module is further configured to obtain, for each frame in multiple frames of the key point group sequence of the source video and of the key point group sequence of the target video, four angles based on the key points, the four angles being the angle formed by the head and the left hand, the angle formed by the head and the right hand, the angle formed by the abdomen and the left foot, and the angle formed by the abdomen and the right foot; separately calculate the difference of each corresponding angle between each frame of the key point group sequence of the source video and of the key point group sequence of the target video, and obtain the total difference of the four corresponding angles of each frame; obtain the minimum total difference among the total differences of the multiple frames; and obtain the accuracy matching pair and the accuracy matching score of the current action according to the minimum total difference.
9. The apparatus according to claim 8, further comprising:
a comprehensive calculation module, configured to weight the accuracy matching score and the continuity score to obtain a comprehensive score of the current action, and, when the source video image has finished playing, to separately sum and average the accuracy matching scores, continuity scores, and comprehensive scores calculated each time, to obtain the average scores of the whole process.
CN201410836597.3A 2014-12-29 2014-12-29 Image processing method and device Active CN105809653B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201410836597.3A CN105809653B (en) 2014-12-29 2014-12-29 Image processing method and device
PCT/CN2015/090279 WO2016107226A1 (en) 2014-12-29 2015-09-22 Image processing method and apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410836597.3A CN105809653B (en) 2014-12-29 2014-12-29 Image processing method and device

Publications (2)

Publication Number Publication Date
CN105809653A CN105809653A (en) 2016-07-27
CN105809653B 2019-01-01

Family

ID=56284142

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410836597.3A Active CN105809653B (en) 2014-12-29 2014-12-29 Image processing method and device

Country Status (2)

Country Link
CN (1) CN105809653B (en)
WO (1) WO2016107226A1 (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108040289A (en) * 2017-12-12 2018-05-15 天脉聚源(北京)传媒科技有限公司 Method and device for video playing
CN112950951B (en) * 2021-01-29 2023-05-02 浙江大华技术股份有限公司 Intelligent information display method, electronic device and storage medium
CN113705536A (en) * 2021-09-18 2021-11-26 深圳市领存技术有限公司 Continuous action scoring method, device and storage medium
CN114827730A (en) * 2022-04-19 2022-07-29 咪咕文化科技有限公司 Video cover selecting method, device, equipment and storage medium


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1731316A (en) * 2005-08-19 2006-02-08 北京航空航天大学 Human-computer interaction method for a virtual ape game
US20110159938A1 (en) * 2008-07-08 2011-06-30 Konami Digital Entertainment Co., Ltd. Game device, computer program therefor, and recording medium therefor
CN101615302A (en) * 2009-07-30 2009-12-30 浙江大学 Machine learning-based dance movement generation method driven by music data
CN103327356A (en) * 2013-06-28 2013-09-25 Tcl集团股份有限公司 Video matching method and device

Also Published As

Publication number Publication date
WO2016107226A1 (en) 2016-07-07
CN105809653A (en) 2016-07-27

Similar Documents

Publication Publication Date Title
CN111263953B (en) Action state evaluation system, device, server, method and storage medium thereof
CN105809653B (en) Image processing method and device
CN107578403B (en) The stereo image quality evaluation method for instructing binocular view to merge based on gradient information
CN104167016B (en) A kind of three-dimensional motion method for reconstructing based on RGB color and depth image
CN110163110A (en) A kind of pedestrian's recognition methods again merged based on transfer learning and depth characteristic
CN109840478B (en) Action evaluation method and device, mobile terminal and readable storage medium
CN110448870B (en) Human body posture training method
CN109584290A (en) A kind of three-dimensional image matching method based on convolutional neural networks
CN110298231A (en) A kind of method and system determined for the goal of Basketball Match video
CN109191428A (en) Full-reference image quality evaluating method based on masking textural characteristics
CN109621331A (en) Fitness-assisting method, apparatus and storage medium, server
CN110991266A (en) Binocular face living body detection method and device
CN106023151A (en) Traditional Chinese medicine tongue manifestation object detection method in open environment
CN106874830A (en) A kind of visually impaired people's householder method based on RGB D cameras and recognition of face
CN109165555A (en) Man-machine finger-guessing game method, apparatus and storage medium based on image recognition
CN104182970A (en) Souvenir photo portrait position recommendation method based on photography composition rule
CN108074286A (en) A kind of VR scenario buildings method and system
CN109117753A (en) Position recognition methods, device, terminal and storage medium
CN102567734A (en) Specific value based retina thin blood vessel segmentation method
CN109199334B (en) Tongue picture constitution identification method and device based on deep neural network
CN104050676B (en) A kind of backlight image detecting method and device based on Logistic regression models
CN108053418A (en) A kind of animal background modeling method and device
CN109993090B (en) Iris center positioning method based on cascade regression forest and image gray scale features
CN108564564A (en) Based on the medical image cutting method for improving fuzzy connectedness and more seed points
CN107295214B (en) Interpolated frame localization method and device

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant