CN110197721A - Tendon condition evaluation method, apparatus and storage medium based on deep learning - Google Patents
- Publication number
- CN110197721A CN110197721A CN201910370527.6A CN201910370527A CN110197721A CN 110197721 A CN110197721 A CN 110197721A CN 201910370527 A CN201910370527 A CN 201910370527A CN 110197721 A CN110197721 A CN 110197721A
- Authority
- CN
- China
- Prior art keywords
- tendon
- image
- target user
- target
- action video
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H50/00—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
- G16H50/20—ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Abstract
The present invention relates to the field of intelligent decision technology and discloses a tendon condition evaluation method based on deep learning, comprising: obtaining a first action video of a target user; extracting a first image from the video; extracting a second image based on the first image; inputting the first image and the second image into a convolutional neural network model; if the model's recognition result is that the tendon of the target user's target site is damaged, obtaining adjacent-frame images from the first action video; calculating a first similarity of the adjacent-frame images, and obtaining a preset second similarity of adjacent frames of an undamaged tendon movement posture; determining the target user's degree of tendon damage according to the difference between the first similarity and the second similarity, and recommending the rehabilitation training information corresponding to that degree of damage as the target user's rehabilitation programme. The present invention also proposes a tendon condition evaluation device based on deep learning and a storage medium. The present invention can accurately assess whether a user's tendon is damaged and provide more accurate rehabilitation training advice after the tendon is damaged.
Description
Technical field
The present invention relates to the field of intelligent decision technology, and more particularly to a deep-learning-based tendon condition evaluation method, apparatus and computer-readable storage medium.
Background technique
Currently, when a user may have a tendon injury, a doctor is generally required to perform an examination, and after tendon damage is confirmed, rehabilitation training is needed to restore the user to normal. Which kind of rehabilitation training suits the user, and the duration of that training, are also judged by the doctor. If the doctor's judgment is inaccurate, the user's recovery is affected; moreover, a doctor's judgment is subject to errors caused by insufficient experience, and to errors of human visual assessment caused by background changes, illumination variation and the like.
Summary of the invention
The present invention provides a tendon condition evaluation method, apparatus and computer-readable storage medium based on deep learning, whose main purpose is to accurately assess whether a user's tendon is damaged and to provide more accurate rehabilitation training advice after the tendon is damaged.
To achieve the above object, the present invention provides a tendon condition evaluation method based on deep learning, the method comprising:
Obtaining a first action video of a target user;
Extracting, from the first action video, a first image of the tendon movement posture of a target site of the target user, wherein the first image is a single-frame image;
Extracting, from the first action video and based on the first image, a second image of the tendon movement posture of the target site of the target user, wherein the second image is a continuous multi-frame image;
Inputting the first image and the second image into a trained convolutional neural network model, and obtaining the recognition result output by the convolutional neural network model as to whether the tendon of the target site of the target user is damaged;
If the recognition result is that the tendon of the target site of the target user is damaged, obtaining multiple groups of adjacent-frame images from the first action video;
Calculating a first similarity of any adjacent-frame images in the multiple groups of adjacent-frame images, and obtaining a preset second similarity of adjacent frames of an undamaged tendon movement posture;
Determining the degree of tendon damage of the target user according to the difference between the first similarity and the second similarity;
Determining the recommended rehabilitation training information corresponding to the degree of tendon damage as the rehabilitation programme of the target user.
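The similarity comparison and damage-grading steps above can be sketched as follows. The patent names no concrete similarity metric or grade boundaries, so the cosine measure and the thresholds below are purely illustrative assumptions:

```python
import numpy as np

def frame_similarity(a, b):
    """Cosine similarity between two grayscale frames.

    The patent does not specify a similarity measure; this metric is an
    illustrative assumption, not the patented method.
    """
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def damage_degree(first_similarity, second_similarity):
    """Map the gap between the measured (first) similarity and the preset
    undamaged (second) similarity to a coarse grade; thresholds are made up."""
    gap = abs(first_similarity - second_similarity)
    if gap < 0.05:
        return "mild"
    if gap < 0.15:
        return "moderate"
    return "severe"
```

A larger gap between the user's adjacent-frame similarity and the preset undamaged similarity maps to a more severe grade, which would then index into the recommended rehabilitation training information.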
Optionally, the recommended rehabilitation training information includes a rehabilitation training time, and after obtaining the recommended rehabilitation training information corresponding to the degree of damage, the method further comprises:
Obtaining a second action video of the target user taken after a first preset time, and a third action video of the target user taken after a second preset time;
Obtaining adjacent-frame images in the second action video, and calculating a third similarity of the adjacent-frame images in the second action video;
Obtaining adjacent-frame images in the third action video, and calculating a fourth similarity of the adjacent-frame images in the third action video;
Adjusting or keeping the rehabilitation training time according to the changes among the first similarity, the third similarity and the fourth similarity.
Optionally, the method further comprises:
Obtaining a target convolutional neural network model composed of a first convolutional neural network model and a second convolutional neural network model, wherein the output value of the target convolutional neural network model is obtained by averaging the first output value of the first convolutional neural network model and the output value of the second convolutional neural network model;
Obtaining training samples, the training samples including positive samples of damaged tendon images and negative samples of undamaged tendon images;
Training the target convolutional neural network model with the training samples to obtain the trained convolutional neural network model.
Optionally, extracting from the first action video the first image of the tendon movement posture of the target site of the target user comprises:
Obtaining a first posture image and a second posture image of the target site in the first action video, wherein the first posture image and the second posture image are adjacent images;
Calculating the absolute value of the difference between the pixel values of the first posture image and the second posture image;
Judging whether the absolute value is greater than a preset threshold;
If the absolute value is greater than the preset threshold, determining the difference image of the pixel-value difference between the first posture image and the second posture image as the first image of the tendon movement posture of the target site of the target user.
Optionally, extracting from the first action video, based on the first image, the second image of the tendon movement posture of the target site of the target user comprises:
Tracking, based on the first image and by means of an optical flow algorithm, the tendon movement posture of the target site of the target user in the first action video;
Extracting the tracked multiple frames that continuously contain the tendon movement posture of the target site of the target user as the second image.
In addition, to achieve the above object, the present invention also provides a tendon condition evaluation device based on deep learning. The device includes a memory and a processor; the memory stores a deep-learning-based tendon condition evaluation program runnable on the processor, and when the program is executed by the processor, the following steps are implemented:
Obtaining a first action video of a target user;
Extracting, from the first action video, a first image of the tendon movement posture of a target site of the target user, wherein the first image is a single-frame image;
Extracting, from the first action video and based on the first image, a second image of the tendon movement posture of the target site of the target user, wherein the second image is a continuous multi-frame image;
Inputting the first image and the second image into a trained convolutional neural network model, and obtaining the recognition result output by the convolutional neural network model as to whether the tendon of the target site of the target user is damaged;
If the recognition result is that the tendon of the target site of the target user is damaged, obtaining multiple groups of adjacent-frame images from the first action video;
Calculating a first similarity of any adjacent-frame images in the multiple groups of adjacent-frame images, and obtaining a preset second similarity of adjacent frames of an undamaged tendon movement posture;
Determining the degree of tendon damage of the target user according to the difference between the first similarity and the second similarity;
Determining the recommended rehabilitation training information corresponding to the degree of tendon damage as the rehabilitation programme of the target user.
Optionally, the recommended rehabilitation training information includes a rehabilitation training time, and when the deep-learning-based tendon condition evaluation program is executed by the processor, the following steps are also implemented:
After obtaining the recommended rehabilitation training information corresponding to the degree of damage, obtaining a second action video of the target user taken after a first preset time, and a third action video of the target user taken after a second preset time;
Obtaining adjacent-frame images in the second action video, and calculating a third similarity of the adjacent-frame images in the second action video;
Obtaining adjacent-frame images in the third action video, and calculating a fourth similarity of the adjacent-frame images in the third action video;
Adjusting or keeping the rehabilitation training time according to the changes among the first similarity, the third similarity and the fourth similarity.
Optionally, when the deep-learning-based tendon condition evaluation program is executed by the processor, the following steps are also implemented:
Obtaining a target convolutional neural network model composed of a first convolutional neural network model and a second convolutional neural network model, wherein the output value of the target convolutional neural network model is obtained by averaging the first output value of the first convolutional neural network model and the output value of the second convolutional neural network model;
Obtaining training samples, the training samples including positive samples of damaged tendon images and negative samples of undamaged tendon images;
Training the target convolutional neural network model with the training samples to obtain the trained convolutional neural network model.
Optionally, extracting from the first action video the first image of the tendon movement posture of the target site of the target user comprises:
Obtaining a first posture image and a second posture image of the target site in the first action video, wherein the first posture image and the second posture image are adjacent images;
Calculating the absolute value of the difference between the pixel values of the first posture image and the second posture image;
Judging whether the absolute value is greater than a preset threshold;
If the absolute value is greater than the preset threshold, determining the difference image of the pixel-value difference between the first posture image and the second posture image as the first image of the tendon movement posture of the target site of the target user.
Optionally, extracting from the first action video, based on the first image, the second image of the tendon movement posture of the target site of the target user comprises:
Tracking, based on the first image and by means of an optical flow algorithm, the tendon movement posture of the target site of the target user in the first action video;
Extracting the tracked multiple frames that continuously contain the tendon movement posture of the target site of the target user as the second image.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium on which a deep-learning-based tendon condition evaluation program is stored; the program can be executed by one or more processors to implement the steps of the deep-learning-based tendon condition evaluation method described above.
The tendon condition evaluation method, apparatus and computer-readable storage medium based on deep learning proposed by the present invention obtain a first action video of a target user; extract from it a first image of the tendon movement posture of the target user's target site, the first image being a single-frame image; extract, based on the first image, a second image of that tendon movement posture, the second image being a continuous multi-frame image; and input the first image and the second image into a trained convolutional neural network model to obtain the recognition result output by the model as to whether the tendon of the target site is damaged. If the recognition result is that the tendon is damaged, multiple groups of adjacent-frame images are obtained from the first action video, a first similarity of any adjacent-frame images in those groups is calculated, a preset second similarity of adjacent frames of an undamaged tendon movement posture is obtained, the target user's degree of tendon damage is determined from the difference between the first similarity and the second similarity, and the recommended rehabilitation training information corresponding to that degree is determined as the target user's rehabilitation programme. Since the multi-layer network structure of a convolutional neural network extracts deep features of the input data, the recognition accuracy can be improved, achieving the purpose of accurately assessing whether a user's tendon is damaged; meanwhile, once the user's tendon is determined to be damaged, the degree of damage is determined from the similarity of adjacent tendon posture movements during the user's motion, and the corresponding recommended rehabilitation training information is then obtained, thereby achieving the purpose of providing more accurate rehabilitation training advice after the tendon is damaged.
Detailed description of the invention
Fig. 1 is a flow diagram of the deep-learning-based tendon condition evaluation method provided by an embodiment of the present invention;
Fig. 2 is a schematic diagram of the internal structure of the deep-learning-based tendon condition evaluation device provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of the modules of the deep-learning-based tendon condition evaluation program in the deep-learning-based tendon condition evaluation device provided by an embodiment of the present invention.
The realization of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit it.
The present invention provides a tendon condition evaluation method based on deep learning. Referring to Fig. 1, a flow diagram of the deep-learning-based tendon condition evaluation method provided by an embodiment of the present invention, the method may be executed by a device, and the device may be implemented by software and/or hardware.
In the present embodiment, the tendon condition evaluation method based on deep learning includes:
Step S101: obtain the first action video of the target user.
In this embodiment, the target user is the user whose tendon condition is assessed by the method of the present invention. The first action video is an action video containing one or more body parts, and the one or more body parts in the first action video perform continuous movement. For example, the first action video contains the target user's arm constantly repeating a certain movement, or contains the target user repeating, in sequence, the actions of standing, squatting, standing up and walking.
In an alternative embodiment, the first action video of the target user is shot by a camera device, and the first action video of the target user taken by the camera device is obtained, wherein the camera device is one or more camera devices that shoot the target user from multiple angles.
Step S201: extract, from the first action video, the first image of the tendon movement posture of the target site of the target user, wherein the first image is a single-frame image.
In this embodiment, the first action video is an action video containing the target site of the target user. The target site is preset; for example, the target site is the left arm, or the target site is the right leg.
In this embodiment, extracting from the first action video the first image of the tendon movement posture of the target site of the target user includes: obtaining the video clip from a first time period to a second time period in the first action video, and taking any one frame image in the video clip as the first image, wherein the span from the first time period to the second time period is an intermediate period of the first action video.
Optionally, in an alternative embodiment of the invention, extracting from the first action video the first image of the tendon movement posture of the target site of the target user comprises:
Obtaining a first posture image and a second posture image of the target site in the first action video, wherein the first posture image and the second posture image are adjacent images;
Calculating the absolute value of the difference between the pixel values of the first posture image and the second posture image;
Judging whether the absolute value is greater than a preset threshold;
If the absolute value is greater than the preset threshold, determining the difference image of the pixel-value difference between the first posture image and the second posture image as the first image of the tendon movement posture of the target site of the target user.
In this embodiment, the first posture image and the second posture image are two adjacent single-frame images at any time in the first action video. The preset threshold is set in advance.
For example, if I_{k-1}(x, y) denotes the first posture image, I_k(x, y) denotes the second posture image (the image immediately following the first posture image), and T is the preset threshold, then:

D_k(x, y) = 1, if |I_k(x, y) − I_{k-1}(x, y)| > T; D_k(x, y) = 0, otherwise

where D_k(x, y) denotes the difference image of the pixel-value difference between the first posture image and the second posture image, i.e. the image that D_k(x, y) represents is the first image of the tendon movement posture of the target site of the target user.
The difference image includes foreground points and background points: points where the absolute difference of the pixel values of the first and second posture images is less than or equal to the preset threshold are determined to be background points, and points where it is greater than the preset threshold are determined to be foreground points. The pixel value of background points is set to 0 and that of foreground points to 1, yielding the binary image of the difference image.
In the consecutive frames of a movement, if the background changes little and no moving target appears, the pixel difference between adjacent frames is small; if the pixel difference is large, a moving target is determined to have appeared. Therefore, in this embodiment, the moving target, i.e. the first image of the tendon movement posture of the target site of the target user, is obtained from adjacent video frames.
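The adjacent-frame differencing and thresholding described above can be sketched in a few lines of NumPy:

```python
import numpy as np

def difference_image(prev_frame, curr_frame, threshold):
    """Binary difference image: pixel = 1 (foreground) where
    |I_k(x, y) - I_{k-1}(x, y)| > T, else 0 (background)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```

The foreground pixels (value 1) mark the moving target, i.e. the candidate first image of the tendon movement posture.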
In an alternative embodiment of the invention, before the first image is extracted, the frames contained in the first action video are pre-processed. For example, each frame contained in the first action video is pre-processed to improve the accuracy of image extraction and image recognition.
Specifically, pre-processing the frames contained in the first action video includes performing one or more of the following on those frames: grayscale processing, binarization, noise reduction, dimensionality reduction and dimension normalization.
In this embodiment, grayscale processing of the frames includes: taking the three pixel components R, G and B of each frame and calculating the gray value of the frame through a preset color conversion formula (e.g. 0.3*R + 0.59*G + 0.11*B) to obtain the gray value of each frame.
Binarization sets the pixels of the image to 0 or 1, so that the whole image presents a black-and-white effect.
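The grayscale conversion and binarization just described can be sketched as follows; the R, G, B channel order and the binarization threshold are assumptions for illustration:

```python
import numpy as np

def to_gray(rgb):
    """Grayscale via the conversion formula from the text:
    0.3*R + 0.59*G + 0.11*B (channels assumed in R, G, B order)."""
    return 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

def binarize(gray, threshold=128):
    """Set each pixel to 0 or 1 for a black-and-white effect."""
    return (gray >= threshold).astype(np.uint8)
```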
In an alternative embodiment, the binarized image is subjected to noise reduction by an adaptive image-denoising filter to filter out salt-and-pepper noise (white or black dots that appear randomly in the image) while preserving the details of the image as much as possible.
Specifically, if the image before noise reduction is f(x, y), then under the action of a degradation function H and the influence of noise η(x, y), a degraded image g(x, y) is finally obtained. This gives the image degradation formula g(x, y) = f(x, y) + η(x, y), and noise reduction is performed with the adaptive filter method, namely:

f̂(x, y) = g(x, y) − (σ_η² / σ_L²) · (g(x, y) − m_L)

where σ_η² is the noise variance of the whole image, m_L is the mean of the pixel gray values in a window around the point (x, y), and σ_L² is the variance of the pixel gray values in that window.
In this embodiment, dimensionality reduction can be performed by principal component analysis (PCA). Principal component analysis is a method that, through an orthogonal transformation, converts a set of possibly correlated variables into a set of linearly uncorrelated variables.
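A compact realization of this orthogonal transform via SVD of the centered data, shown as one standard way to implement the PCA step:

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto the top-k principal components,
    turning possibly correlated variables into linearly uncorrelated ones."""
    Xc = X - X.mean(axis=0)
    # Right singular vectors of the centered data are the principal directions
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```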
In this embodiment, in order to eliminate the influence of the video resolution on the dimensions of the human figure, dimension normalization can be applied to the human posture point coordinates in the frames. Since the relative positional relationships of the posture sequence in the time and space dimensions need to be preserved during normalization, the translation and scaling of the figure's posture in the image are kept consistent, and the scaling of the image's coordinate components is also kept consistent, so as to avoid damaging the figure's body proportions.
Specifically, assuming the original tendon coordinates of any frame are (x, y), the normalized coordinates are (x0, y0), namely:

x0 = (2x − w) / d,  y0 = (2y − h) / d

where d = max{w, h}, and w and h are respectively the width and height of the video; after normalization, x0, y0 ∈ (−1, 1).
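One coordinate normalization consistent with d = max{w, h} and the (−1, 1) range stated above; the exact formula is a reconstruction, since the original equation did not survive extraction:

```python
def normalize_point(x, y, w, h):
    """Normalize a tendon/posture coordinate to (-1, 1) with the common
    scale d = max(w, h), so both axes are scaled identically and the
    figure's body proportions are preserved."""
    d = max(w, h)
    return (2 * x - w) / d, (2 * y - h) / d
```

Because a single scale d is used for both axes, a 16:9 frame maps x into the full (−1, 1) range while y occupies a proportionally narrower band, keeping the figure undistorted.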
Step S301: extract, from the first action video and based on the first image, the second image of the tendon movement posture of the target site of the target user, wherein the second image is a continuous multi-frame image.
In this embodiment, the tendon movement posture of the target user's target site is tracked based on the first image according to a target tracking algorithm (for example, the mean-shift algorithm, Kalman-filter-based tracking, or particle-filter-based tracking), and the continuous multi-frame images are thereby obtained.
Optionally, in an alternative embodiment of the invention, extracting from the first action video, based on the first image, the second image of the tendon movement posture of the target site of the target user comprises:
Tracking, based on the first image and by means of an optical flow algorithm, the tendon movement posture of the target site of the target user in the first action video;
Extracting the tracked multiple frames that continuously contain the tendon movement posture of the target site of the target user as the second image.
Optical flow is a motion pattern; it expresses the change of the image, and since it contains information about the target's movement, it can be used by an observer to determine the motion of the target.
In this embodiment, the deformation between two adjacent tendon frames can be assessed by the optical flow method, and the tendon movement posture is thereby tracked.
The basic assumption of the optical flow method is conservation of image pixels, and the movement of each pixel position between times T and T+t is calculated from the two frames.
Based on a Taylor-series expansion of the image using partial derivatives with respect to the space and time coordinates, the image constraint equation is obtained; transformed according to pixel conservation, it becomes:

I_x·u + I_y·v + I_t = 0

where (x, y) are pixel coordinates, t is time, (u, v) is the optical flow, and I_x, I_y, I_t are the partial derivatives of the image with respect to x, y and t.
This is solved with the Horn-Schunck optical flow algorithm (the Horn-Schunck algorithm introduces a smoothness constraint on the flow and combines this condition with the basic optical flow constraint equation, thereby solving the aperture problem of the basic constraint equation). The algorithm accounts for the optical flow error, turns the solution of the flow into an extremum problem, and solves it with an iterative method; the iterative equations are as follows:

u = ū − I_x (I_x ū + I_y v̄ + I_t) / (λ + I_x² + I_y²)
v = v̄ − I_y (I_x ū + I_y v̄ + I_t) / (λ + I_x² + I_y²)

where ū and v̄ respectively denote the means in the neighborhoods of u and v.
Here λ is the smoothness control factor, whose value is influenced by the noise present in the image: if the noise is strong, the confidence of the image data itself is low and the optical flow constraint must be relied on more, so λ can take a larger value; conversely, λ can take a smaller value. In this way the tendon movement posture of the target site of the target user in the first action video is tracked.
Step S401: input the first image and the second image into the trained convolutional neural network model, and obtain the recognition result output by the convolutional neural network model as to whether the target user's tendon is damaged.
A convolutional neural network (Convolutional Neural Network, CNN) is a feed-forward neural network whose artificial neurons respond to the surrounding units within part of the coverage area. Its basic structure includes two layers. The first is the feature extraction layer: the input of each neuron is connected to the local receptive field of the preceding layer, and the local feature is extracted; once the local feature has been extracted, its positional relationship with the other features is also determined. The second is the feature mapping layer: each computational layer of the network is composed of multiple feature maps, each feature map is a plane, and the weights of all neurons in the plane are equal. Meanwhile, the feature mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, so that the feature maps have shift invariance. Furthermore, since the neurons on one mapping plane share weights, the number of free parameters of the network is reduced.
Each of convolutional neural networks convolutional layer all followed by one is used to ask the meter of local average and second extraction
Layer is calculated, this distinctive structure of feature extraction twice reduces feature resolution.
Specifically, a convolutional neural network may include an input layer, convolutional layers, down-sampling layers, a fully connected layer, and an output layer.
The input layer is the sole data entry point of the entire convolutional neural network and defines the different types of data input.
A convolutional layer performs convolution operations on the data input to it and outputs the resulting feature maps.
A down-sampling (pooling) layer down-samples the incoming data in the spatial dimensions, so that the length and width of the input feature map become half of the original.
In the fully connected layer, each neuron is connected to all neurons of the input, and the result is then computed through an activation function.
The output layer, also called the classification layer, computes a classification score for each class at the final output, i.e., a value or probability indicating whether the tendon is damaged.
In this embodiment, the trained convolutional neural network model is a model trained in advance to identify whether a user's tendon is damaged.
Obtaining the recognition result of whether the target user's tendon is damaged specifically means obtaining the recognition result of whether the tendon of the target user's target site is damaged.
In this embodiment, the first image and the second image are identified by the convolutional neural network to determine whether the target user has tendon damage. The first image is a single-frame image of the tendon movement posture of the target user's target site, and the second image is a continuous multi-frame image of that posture. Because the determination of whether the tendon movement posture of the target site is damaged is not made from a single-frame image alone, this embodiment can determine more accurately whether the target user's tendon is damaged.
Optionally, in another embodiment of the invention, the trained convolutional neural network is obtained by the following steps:
obtaining a target convolutional neural network model, where the target convolutional neural network model is composed of a first convolutional neural network model and a second convolutional neural network model, and the output value of the target model is obtained by averaging the first output value of the first model and the output value of the second model;
obtaining training samples, where the training samples include positive samples of damaged tendon images and negative samples of undamaged tendon images;
training the target convolutional neural network model with the training samples to obtain the trained convolutional neural network model.
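The averaging of the two sub-models' outputs can be sketched as follows; the two model functions are stand-ins returning fixed class probabilities, since the actual trained networks are not specified here:

```python
import numpy as np

def model_single_frame(frame):
    """Stand-in for the first CNN (single-frame input)."""
    return np.array([0.3, 0.7])

def model_multi_frame(frames):
    """Stand-in for the second CNN (multi-frame input)."""
    return np.array([0.5, 0.5])

def target_model_output(frame, frames):
    """Output of the target model: the mean of the two sub-model outputs,
    as described in the text."""
    return (model_single_frame(frame) + model_multi_frame(frames)) / 2.0
```

With the placeholder outputs above, the fused result is the element-wise mean [0.4, 0.6].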
This embodiment uses two convolutional neural network architectures. Specifically, the target convolutional neural network model includes a first convolutional neural network model and a second convolutional neural network model; the input to the first model is a single-frame image from the training samples, and the input to the second model is a multi-frame image sequence from the training samples.
In this embodiment, the training samples include positive samples of damaged tendon images and negative samples of undamaged tendon images. Specifically, the positive samples include several single-frame damaged tendon images and several groups of continuous damaged tendon images, and the negative samples include several single-frame undamaged tendon images and several groups of continuous undamaged tendon images.
The first convolutional neural network model is trained with the positive and negative samples so that it can identify whether tendon damage is present in a single-frame image; the second convolutional neural network model is trained with the positive and negative samples so that it can identify whether tendon damage is present in a continuous multi-frame image sequence.
In this embodiment, the convolutional layers of the first and second convolutional neural network models may successively include a 7×7 convolutional layer, a 3×3 max-pooling layer, and 4 convolution modules, where each convolution module begins with a structure block using linear projection, followed by a varying number of structure blocks using identity mapping. Meanwhile, the first output value of the first convolutional neural network model and the second output value of the second convolutional neural network model are fused by averaging to obtain the final judgment result of whether tendon damage exists.
In this embodiment, after the training samples are input into the target convolutional neural network, a convolution operation is performed on them. Specifically, the convolution operation is the inner product (element-wise multiplication followed by summation) of the image with a filter matrix, i.e., a set of fixed weights: because the multiple weights of each neuron are fixed, the filter can be regarded as a constant filter.
In this embodiment, before the convolution operation, the training samples are padded at the boundary to increase the matrix size. Specifically, a set of filters {filter0, filter1, …} can be provided in the convolutional layer and applied on the image color channels to generate a group of features on the image category channels. The scale of each filter is d×f, where d is the dimension of the image and f is the window size. If p pixels are extended in each direction, the size of the original n×n image after padding is (n+2p)×(n+2p); if the filter size remains unchanged, the output image size is (n+2p−f+1)×(n+2p−f+1).
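The output-size formula above can be checked with a one-line helper (the stride parameter s is an added generalization; with s = 1 it reduces to the formula in the text):

```python
def conv_output_size(n, f, p=0, s=1):
    """Output side length of a convolution over an n x n input with
    filter size f, padding p, and stride s: (n + 2p - f)//s + 1."""
    return (n + 2 * p - f) // s + 1

# 28x28 input, 3x3 filter, padding 1 keeps the size: 28
# 32x32 input, 7x7 filter, no padding: 26
```

For example, `conv_output_size(28, 3, p=1)` gives 28, matching the "same"-padding case, and `conv_output_size(32, 7)` gives 26.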
In this embodiment, a max-pooling operation is performed in the pooling layer to solve the problem of the feature quantity being variable: a pooling operation is applied to the feature c_α = (c_{α,0}, c_{α,1}, …, c_{α,i}), and the maximum value in c_α is chosen as the output, i.e., c_{α,max} = max c_α.
In the convolutional neural network model, the loss function evaluates the difference between the predicted value Ŷ output by the model and the true value Y; the smaller the loss value, the better the model's performance. According to forward propagation, the input and output of each output-layer unit are I_t = Σ_j w_{jt}·b_j − y and C_t = f(I_t) (t = 1, 2, …, 8), where I_t is the input value of the output-layer unit, C_t is the output value of each output-layer unit, w_{jt} is the connection weight from the middle layer to the output layer, y is the threshold of the output-layer unit, and b_j is the input vector of the output layer. To alleviate the vanishing-gradient problem, the ReLU function relu(x) = max(0, x) is used as the activation function. This function satisfies the sparsity found in bionics: the neuron node is activated only when the input exceeds a certain amount; when the input is below 0 the node is suppressed, and when the input rises above the threshold, the dependent variable is linear in the independent variable.
In this embodiment, the loss function is solved with a gradient descent algorithm, the most common optimization algorithm for training neural network models. To find the minimum of the loss function L, the variables must be updated in the direction opposite to the gradient vector; this decreases the gradient fastest until the loss converges to its minimum value. The parameter update formula is as follows: y = y − α·dL/dy, where α denotes the learning rate.
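The update rule can be demonstrated on a toy one-dimensional loss; the loss L(y) = (y − 3)² and the learning rate below are illustrative choices, not values from the patent:

```python
def gradient_descent(grad, y0, lr=0.1, steps=100):
    """Repeatedly step opposite to the gradient: y <- y - lr * dL/dy."""
    y = y0
    for _ in range(steps):
        y = y - lr * grad(y)
    return y

# Minimize L(y) = (y - 3)^2, whose gradient is dL/dy = 2*(y - 3).
y_min = gradient_descent(lambda y: 2 * (y - 3), y0=0.0)
```

Starting from y = 0, the iterates contract toward the minimizer y = 3, illustrating convergence of the update formula.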
Step S501: if the recognition result is that the tendon of the target user's target site is damaged, obtain multiple groups of consecutive-frame images in the first action video.
Each group of consecutive-frame images includes two or three adjacent single-frame images.
In this embodiment, the acquired groups of consecutive-frame images are images containing the tendon movement posture of the target user's target site.
In other embodiments of the invention, if the recognition result is that the tendon of the target user's target site is undamaged, a prompt that the tendon is undamaged is sent.
Step S601: calculate the first similarity of any adjacent frame images in the groups of consecutive-frame images, and obtain the preset second similarity of adjacent frames of an undamaged tendon movement posture.
In an alternative embodiment, the first similarity of adjacent frame images is determined by Euclidean distance. In other embodiments, the first similarity of adjacent frame images can also be determined by cosine distance.
In this embodiment, the preset second similarity of adjacent frames of an undamaged tendon movement posture can be pre-stored. Similarly, the second similarity can be obtained from adjacent undamaged tendon images.
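The two distance-based similarity measures mentioned above can be sketched as follows; mapping Euclidean distance into (0, 1] via 1/(1 + d) is one common convention and is an assumption here:

```python
import numpy as np

def euclidean_similarity(a, b):
    """Map Euclidean distance to a similarity in (0, 1]; 1 means identical."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return 1.0 / (1.0 + np.linalg.norm(a - b))

def cosine_similarity(a, b):
    """Cosine similarity: 1 for parallel vectors, 0 for orthogonal ones."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Either function can be applied to flattened adjacent frame images to obtain the first similarity.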
Step S701: determine the tendon damage degree of the target user according to the difference between the first similarity and the second similarity.
In this embodiment, determining the target user's tendon damage degree according to the difference between the first similarity and the second similarity includes: obtaining the difference between the first similarity and the second similarity, obtaining the correspondence between difference values and damage degrees, and determining the target user's tendon damage degree from the difference and that correspondence.
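The correspondence between difference values and damage degrees can be sketched as a simple lookup table; the thresholds and level names below are hypothetical placeholders, since the patent does not specify them:

```python
def damage_degree(diff,
                  table=((0.1, "mild"),
                         (0.3, "moderate"),
                         (float("inf"), "severe"))):
    """Look up the damage degree for a similarity difference.

    `table` maps upper bounds of |diff| to degree labels; both the bounds
    and the labels are hypothetical examples.
    """
    for threshold, level in table:
        if abs(diff) <= threshold:
            return level
```

A small difference between the measured first similarity and the reference second similarity then maps to a mild degree, and a large difference to a severe one.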
Step S801: determine the recommended rehabilitation training information corresponding to the tendon damage degree as the rehabilitation programme of the target user.
In this embodiment, recommended rehabilitation training information corresponding to different tendon damage degrees is stored in advance; after the target user's tendon damage degree is obtained, the recommended rehabilitation training information corresponding to that damage degree is obtained as the target user's rehabilitation programme.
The tendon condition evaluation method based on deep learning proposed in this embodiment obtains the first action video of the target user; extracts from the first action video a first image containing the tendon movement posture of the target user's target site, where the first image is a single-frame image; extracts from the first action video, based on the first image, a second image containing the tendon movement posture of the target user's target site, where the second image is a continuous multi-frame image; inputs the first image and the second image into a trained convolutional neural network model and obtains the recognition result, output by the model, of whether the tendon of the target user's target site is damaged; if the recognition result is that the tendon of the target user's target site is damaged, obtains multiple groups of consecutive-frame images in the first action video; calculates the first similarity of any adjacent frame images in these groups and obtains the preset second similarity of adjacent frames of an undamaged tendon movement posture; determines the target user's tendon damage degree according to the difference between the first similarity and the second similarity; and determines the recommended rehabilitation training information corresponding to the tendon damage degree as the target user's rehabilitation programme. Because the multi-layer network structure of the convolutional neural network extracts deep features of the input data, the recognition accuracy can be improved, achieving an accurate evaluation of whether the user's tendon is damaged. At the same time, after determining that the user's tendon is damaged, the damage degree is determined from the similarity of adjacent tendon posture movements during the user's motion, and the recommended rehabilitation training information corresponding to that damage degree is then obtained, achieving the purpose of providing more accurate rehabilitation training advice after tendon damage.
Optionally, in another embodiment of the invention, the recommended rehabilitation training information includes a rehabilitation training time. After the recommended rehabilitation training information corresponding to the damage degree is obtained, the method further includes the following steps:
obtaining a second action video of the target user taken after a first preset time, and a third action video of the target user taken after a second preset time;
obtaining adjacent frame images in the second action video and calculating the third similarity of adjacent frame images in the second action video;
obtaining adjacent frame images in the third action video and calculating the fourth similarity of adjacent frame images in the third action video;
adjusting or keeping the rehabilitation training time according to the change among the first similarity, the third similarity, and the fourth similarity.
In this embodiment, the first preset time is a period, after the target user's tendon damage was determined, during which the target user has carried out rehabilitation training; the second preset time is a period after the first preset time, preferably a period during which the target user continues rehabilitation training after the first preset time.
In this embodiment, obtaining the adjacent frame images in the second action video, calculating the third similarity, obtaining the adjacent frame images in the third action video, and calculating the fourth similarity may follow the image acquisition and similarity calculation described above.
In an alternative embodiment, adjusting or keeping the rehabilitation training time according to the change among the first similarity, third similarity, and fourth similarity includes: if the similarity becomes larger, extending the rehabilitation training time; if the similarity becomes smaller, shortening the rehabilitation training time; and if the similarity remains unchanged, keeping the rehabilitation training time.
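The adjustment rule can be sketched as a small decision function; the monotone-trend test over the three similarities and the fixed step of 10 minutes are assumptions introduced for illustration:

```python
def adjust_training_time(s1, s3, s4, minutes, step=10):
    """Adjust the rehabilitation training time from the similarity trend.

    s1, s3, s4: first, third, and fourth similarity (in time order).
    Hypothetical rule following the text: rising similarity -> extend,
    falling -> shorten, otherwise keep.
    """
    if s1 < s3 < s4:          # similarity increasing over time
        return minutes + step
    if s1 > s3 > s4:          # similarity decreasing over time
        return minutes - step
    return minutes            # no clear change: keep the current time
```

For example, a rising trend (0.5, 0.6, 0.7) extends a 30-minute plan to 40 minutes, while a falling trend shortens it.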
In this embodiment, the change among the first similarity, the third similarity, and the fourth similarity can reflect the user's recovery through rehabilitation training; therefore, the rehabilitation training time can be adjusted according to the change of the similarity, so that an accurate and timely adjustment of the rehabilitation training time can be given during the user's recovery.
The invention also provides a tendon condition evaluation device based on deep learning. FIG. 2 is a schematic diagram of the internal structure of the tendon condition evaluation device based on deep learning provided by an embodiment of the invention.
In this embodiment, the tendon condition evaluation device 1 based on deep learning can be a PC (Personal Computer), or a terminal device such as a smart phone, tablet computer, or portable computer. The tendon condition evaluation device 1 based on deep learning includes at least a memory 11, a processor 12, a communication bus 13, and a network interface 14.
The memory 11 includes at least one type of readable storage medium, including flash memory, hard disk, multimedia card, card-type memory (e.g., SD or DX memory), magnetic memory, magnetic disk, optical disk, etc. In some embodiments, the memory 11 can be an internal storage unit of the tendon condition evaluation device 1 based on deep learning, such as its hard disk. In other embodiments, the memory 11 can also be an external storage device mounted on the device 1, such as a plug-in hard disk, smart media card (Smart Media Card, SMC), secure digital (Secure Digital, SD) card, or flash card (Flash Card). Further, the memory 11 can include both the internal storage unit of the device 1 and an external storage device. The memory 11 can be used not only to store the application software and various data installed on the device 1, such as the code of the tendon condition evaluation program 01 based on deep learning, but also to temporarily store data that has been output or will be output.
In some embodiments, the processor 12 can be a central processing unit (Central Processing Unit, CPU), controller, microcontroller, microprocessor, or other data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the tendon condition evaluation program 01 based on deep learning.
The communication bus 13 is used to realize connection and communication between these components.
The network interface 14 can optionally include a standard wired interface and a wireless interface (such as a WI-FI interface), and is typically used to establish a communication connection between the device 1 and other electronic equipment.
Optionally, the device 1 can also include a user interface, which may include a display (Display) and an input unit such as a keyboard (Keyboard); the optional user interface may also include a standard wired interface and a wireless interface. Optionally, in some embodiments, the display can be an LED display, a liquid crystal display, a touch liquid crystal display, an organic light-emitting diode (Organic Light-Emitting Diode, OLED) touch device, etc. The display, which may also appropriately be called a display screen or display unit, is used to display the information processed in the tendon condition evaluation device 1 based on deep learning and to show a visual user interface.
FIG. 2 only shows the tendon condition evaluation device 1 based on deep learning with the components 11-14 and the tendon condition evaluation program 01 based on deep learning. Those skilled in the art will appreciate that the structure shown in FIG. 2 does not limit the device 1, which may include fewer or more components than shown, or combine certain components, or adopt a different component arrangement.
In the device embodiment shown in FIG. 2, the memory 11 stores the tendon condition evaluation program 01 based on deep learning; the processor 12 implements the following steps when executing the tendon condition evaluation program 01 based on deep learning stored in the memory 11:
Obtain the first action video of the target user.
In this embodiment, the target user is the user whose tendon condition is assessed by the method of the invention.
The first action video is an action video containing one or more body parts, and the one or more parts perform continuous movement in the first action video. For example, the first action video contains the target user's arm constantly repeating a certain movement, or the first action video contains the target user repeating, in sequence, the actions of standing, squatting, standing up, and walking.
In an alternative embodiment, the first action video of the target user is shot by a camera device, and the first action video of the target user taken by the camera device is obtained, where the camera device is one or more camera devices shooting the target user from multiple angles.
Extract from the first action video a first image of the tendon movement posture containing the target user's target site, where the first image is a single-frame image.
In this embodiment, the first action video is an action video containing the target site of the target user. The target site is preset; for example, the target site is the left arm, or the target site is the right leg.
In this embodiment, extracting the first image of the tendon movement posture of the target site containing the target user from the first action video includes: obtaining the video clip from a first time to a second time of the first action video, and taking any frame image in the video clip as the first image, where the period from the first time to the second time is an intermediate period of the first action video.
Optionally, in another embodiment of the invention, extracting from the first action video the first image of the tendon movement posture of the target site containing the target user includes:
obtaining a first posture image and a second posture image of the target site in the first action video, where the first posture image and the second posture image are adjacent images;
calculating the absolute value of the difference of the pixel values of the first posture image and the second posture image;
judging whether the absolute value is greater than a preset threshold;
if the absolute value is greater than the preset threshold, determining the difference image of the pixel-value difference of the first posture image and the second posture image as the first image of the tendon movement posture containing the target user's target site.
In this embodiment, the first posture image and the second posture image are two adjacent single-frame images at any time in the first action video.
In this embodiment, the preset threshold is set in advance.
For example, let I_{k-1}(x, y) denote the first posture image and I_k(x, y) denote the second posture image, the second posture image being the image following the first posture image, and let T be the preset threshold. Then:

D_k(x, y) = 1, if |I_k(x, y) − I_{k-1}(x, y)| > T; D_k(x, y) = 0, otherwise.

Where D_k(x, y) denotes the difference image of the pixel-value difference of the first posture image and the second posture image; the image represented by D_k(x, y) is the first image of the tendon movement posture containing the target user's target site.
The difference image contains foreground points and background points: points where the absolute value of the pixel-value difference between the first and second posture images is less than or equal to the preset threshold are determined as background points, and points where it is greater than the preset threshold are determined as foreground points. Setting the pixel value of the background points to 0 and the pixel value of the foreground points to 1 then yields the binary image of the difference image.
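The frame-differencing rule above can be sketched directly in NumPy; the threshold value used in the example is an arbitrary placeholder:

```python
import numpy as np

def frame_difference(prev_frame, cur_frame, T=25):
    """Binary difference image D_k: 1 marks foreground (motion), 0 background.

    prev_frame, cur_frame: grayscale frames of identical shape.
    T: preset threshold on the absolute pixel-value difference.
    """
    diff = np.abs(cur_frame.astype(np.int32) - prev_frame.astype(np.int32))
    return (diff > T).astype(np.uint8)
```

Pixels whose change exceeds the threshold are set to 1 (foreground, i.e., the moving target); all others are set to 0 (background), yielding the binary image described above.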
In a continuous multi-frame image sequence of a movement, if the background changes little and no moving target appears, the pixel difference between adjacent frames is small; if the pixel difference is large, it is determined that a moving target has appeared. Therefore, in this embodiment, the moving target, i.e., the first image of the tendon movement posture containing the target user's target site, is obtained from adjacent video frames.
In another embodiment of the invention, before the first image is extracted, the frame images contained in the first action video are preprocessed. For example, each frame image contained in the first action video is preprocessed to improve the accuracy of image extraction and image recognition.
Specifically, preprocessing the frame images contained in the first action video includes performing one or more of the following on the frames contained in the first action video: grayscale processing, binarization, noise reduction, dimensionality reduction, and dimension normalization.
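Three of the preprocessing steps listed above can be sketched in NumPy; the luminance weights are the standard grayscale convention, while the binarization threshold and the nearest-neighbour resize are illustrative simplifications:

```python
import numpy as np

def to_gray(rgb):
    """Grayscale processing via the standard luminance weights."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def binarize(gray, T=128):
    """Binarization against a fixed threshold (placeholder value)."""
    return (gray > T).astype(np.uint8)

def normalize_size(img, out_h, out_w):
    """Dimension normalization via nearest-neighbour resampling."""
    ys = (np.arange(out_h) * img.shape[0] // out_h).astype(int)
    xs = (np.arange(out_w) * img.shape[1] // out_w).astype(int)
    return img[np.ix_(ys, xs)]
```

Chaining these functions over every frame gives each frame a consistent, simplified representation before extraction and recognition.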
Extract from the first action video, based on the first image, a second image of the tendon movement posture containing the target user's target site, where the second image is a continuous multi-frame image.
In this embodiment, the tendon movement posture of the target user's target site is tracked based on the first image according to a target tracking algorithm (for example, the mean-shift algorithm, target tracking based on a Kalman filter, or target tracking based on a particle filter), and the continuous multi-frame image is thereby obtained.
Optionally, in another embodiment of the invention, extracting from the first action video, based on the first image, the second image of the tendon movement posture of the target site containing the target user includes:
tracking, based on the first image, the tendon movement posture of the target user's target site in the first action video by an optical flow algorithm;
extracting the tracked continuous multi-frame pictures of the tendon movement posture of the target user's target site as the second picture.
Optical flow is a motion pattern; it expresses the change of an image and, since it contains information about the target's movement, can be used by an observer to determine the motion of the target.
In this embodiment, the deformation between two adjacent tendon frame images can be assessed by the optical flow method, and the tendon movement posture is thereby tracked.
Input the first image and the second image into the trained convolutional neural network model, and obtain the recognition result, output by the convolutional neural network model, of whether the tendon of the target user is damaged.
A convolutional neural network (Convolutional Neural Network, CNN) is a feedforward neural network whose artificial neurons respond to surrounding units within a local coverage area. Its basic structure includes two kinds of layers, the first being the feature extraction layer: the input of each neuron is connected to the local receptive field of the preceding layer, and the local feature is extracted. Each convolutional layer in the convolutional neural network is followed by a computation layer that computes a local average and performs a second extraction; this distinctive two-stage feature extraction structure reduces the feature resolution.
In this embodiment, the trained convolutional neural network model is a model trained in advance to identify whether a user's tendon is damaged.
Obtaining the recognition result of whether the target user's tendon is damaged specifically means obtaining the recognition result of whether the tendon of the target user's target site is damaged.
In this embodiment, the first image and the second image are identified by the convolutional neural network to determine whether the target user has tendon damage. The first image is a single-frame image of the tendon movement posture of the target user's target site, and the second image is a continuous multi-frame image of that posture. Because the determination of whether the tendon movement posture of the target site is damaged is not made from a single-frame image alone, this embodiment can determine more accurately whether the target user's tendon is damaged.
Optionally, in another embodiment of the invention, the trained convolutional neural network is obtained by the following steps:
obtaining a target convolutional neural network model, where the target convolutional neural network model is composed of a first convolutional neural network model and a second convolutional neural network model, and the output value of the target model is obtained by averaging the first output value of the first model and the output value of the second model;
obtaining training samples, where the training samples include positive samples of damaged tendon images and negative samples of undamaged tendon images;
training the target convolutional neural network model with the training samples to obtain the trained convolutional neural network model.
This embodiment uses two convolutional neural network architectures. Specifically, the target convolutional neural network model includes a first convolutional neural network model and a second convolutional neural network model; the input to the first model is a single-frame image from the training samples, and the input to the second model is a multi-frame image sequence from the training samples.
In this embodiment, the training samples include positive samples of damaged tendon images and negative samples of undamaged tendon images. Specifically, the positive samples include several single-frame damaged tendon images and several groups of continuous damaged tendon images, and the negative samples include several single-frame undamaged tendon images and several groups of continuous undamaged tendon images.
The first convolutional neural network model is trained with the positive and negative samples so that it can identify whether tendon damage is present in a single-frame image; the second convolutional neural network model is trained with the positive and negative samples so that it can identify whether tendon damage is present in a continuous multi-frame image sequence.
In this embodiment, the convolutional layers of the first and second convolutional neural network models may successively include a 7×7 convolutional layer, a 3×3 max-pooling layer, and 4 convolution modules, where each convolution module begins with a structure block using linear projection, followed by a varying number of structure blocks using identity mapping. Meanwhile, the first output value of the first convolutional neural network model and the second output value of the second convolutional neural network model are fused by averaging to obtain the final judgment result of whether tendon damage exists.
In the present embodiment, after training sample is input to target convolutional neural networks, convolution is carried out to training sample
Operation.
In a kind of optional example, before carrying out convolution operation, training sample is filled on boundary, to increase matrix
Size.
In the present embodiment, a max pooling operation is performed at the pooling layer to address the problem that the number of features is not fixed: a pooling operation is applied to the feature vector cα = (cα,0, cα,1, …, cα,i), and the maximum value in cα is chosen as the output, i.e., cα,max = max cα.
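The max pooling rule above can be sketched as follows; the helper name `max_pool` is illustrative:

```python
import numpy as np

def max_pool(c_alpha):
    """Return the maximum of a feature vector of arbitrary length,
    yielding a fixed-size output regardless of how many features
    were extracted (c_alpha,max = max c_alpha)."""
    return float(np.max(c_alpha))

pooled = max_pool([0.1, 0.7, 0.3])   # 0.7
```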
In a convolutional neural network model, the loss function evaluates the difference between the predicted value output by the model and the true value Y; the smaller the loss value, the better the model's performance. In the present embodiment, the loss function is minimised with a gradient descent algorithm.
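A minimal sketch of gradient descent on a loss function, here a one-parameter squared-error loss rather than the patent's full network loss (the learning rate and step count are assumptions):

```python
# Minimise L(w) = (w*x - y)^2 for one sample by repeatedly stepping
# against the gradient; the patent's models apply the same idea over
# all network weights.
def gradient_descent(x, y, w=0.0, lr=0.1, steps=100):
    for _ in range(steps):
        grad = 2 * (w * x - y) * x   # dL/dw for the squared-error loss
        w -= lr * grad               # step opposite the gradient
    return w

w = gradient_descent(x=1.0, y=2.0)   # converges toward w = 2
```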
If the recognition result is that the tendon of the target site of the target user is damaged, multiple groups of consecutive-frame images are obtained from the first action video.
Each group of consecutive-frame images in the multiple groups comprises two or three adjacent single-frame images.
In the present embodiment, the obtained groups of consecutive-frame images are images containing the tendon movement posture of the target site of the target user.
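The grouping described above (each group holding two or three adjacent single frames) can be sketched with a sliding window; the helper name is illustrative:

```python
def consecutive_groups(frames, group_size=3):
    """Slide a window over the frame sequence, yielding every group of
    group_size adjacent frames (two or three, per the embodiment)."""
    if group_size not in (2, 3):
        raise ValueError("each group holds two or three adjacent frames")
    return [frames[i:i + group_size]
            for i in range(len(frames) - group_size + 1)]

groups = consecutive_groups(["f0", "f1", "f2", "f3"], group_size=3)
# groups == [["f0", "f1", "f2"], ["f1", "f2", "f3"]]
```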
In other embodiments of the present invention, if the recognition result is that the tendon of the target site of the target user is undamaged, a prompt indicating that the tendon is undamaged is sent.
The first similarity of any adjacent frame images in the multiple groups of consecutive-frame images is calculated, and the second similarity of adjacent frames of a preset undamaged tendon movement posture is obtained.
In an alternative embodiment, the first similarity of adjacent frame images is determined by the Euclidean distance. In other embodiments, the first similarity of adjacent frame images can also be determined by the cosine distance.
In the present embodiment, the second similarity, for the movement postures of adjacent frames of the preset undamaged tendon movement posture, may be pre-stored. Similarly, the second similarity can be obtained from adjacent undamaged tendon images.
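The two similarity options can be sketched as below; converting the Euclidean distance d to a similarity via 1/(1+d) is an assumption for illustration, as the exact conversion is not given above:

```python
import numpy as np

def euclidean_similarity(a, b):
    """Higher when frames are closer: inverse of the Euclidean distance
    between the flattened frames (1/(1+d), an illustrative choice)."""
    return 1.0 / (1.0 + np.linalg.norm(np.ravel(a).astype(float)
                                       - np.ravel(b).astype(float)))

def cosine_similarity(a, b):
    """Cosine of the angle between the flattened frames."""
    a, b = np.ravel(a).astype(float), np.ravel(b).astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

frame1 = np.array([[1, 2], [3, 4]])
frame2 = np.array([[1, 2], [3, 5]])   # one pixel differs
```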
The tendon damage degree of the target user is determined according to the difference between the first similarity and the second similarity.
In the present embodiment, determining the tendon damage degree of the target user according to the difference between the first similarity and the second similarity comprises: obtaining the difference between the first similarity and the second similarity, obtaining the correspondence between differences and damage degrees, and determining the tendon damage degree of the target user according to that difference and the correspondence.
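A hypothetical sketch of the stored correspondence between the similarity difference and the damage degree; the band boundaries and degree labels below are invented for illustration and are not taken from the patent:

```python
# Hypothetical correspondence table: (upper bound of difference, degree).
# A larger gap between the user's similarity and the undamaged baseline
# maps to a more severe degree.
DAMAGE_BANDS = [
    (0.05, "mild"),
    (0.15, "moderate"),
    (float("inf"), "severe"),
]

def damage_degree(first_similarity, second_similarity):
    """Look up the damage degree for the similarity difference."""
    diff = abs(second_similarity - first_similarity)
    for upper, degree in DAMAGE_BANDS:
        if diff <= upper:
            return degree
```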
The recommended rehabilitation training information corresponding to the tendon damage degree is determined as the rehabilitation programme of the target user.
In the present embodiment, the recommended rehabilitation training information corresponding to each tendon damage degree is stored in advance; after the tendon damage degree of the target user is obtained, the recommended rehabilitation training information corresponding to that degree is obtained as the rehabilitation programme of the target user.
The tendon condition evaluation device based on deep learning proposed in this embodiment obtains a first action video of the target user; extracts from the first action video a first image containing the tendon movement posture of the target site of the target user, wherein the first image is a single-frame image; extracts from the first action video, based on the first image, a second image containing the tendon movement posture of the target site of the target user, wherein the second image is a consecutive multi-frame image; inputs the first image and the second image into a trained convolutional neural network model to obtain the recognition result, output by the model, of whether the tendon of the target site of the target user is damaged; if the recognition result is that the tendon is damaged, obtains multiple groups of consecutive-frame images from the first action video; calculates the first similarity of any adjacent frame images in those groups and obtains the second similarity of adjacent frames of a preset undamaged tendon movement posture; determines the tendon damage degree of the target user according to the difference between the first similarity and the second similarity; and determines the recommended rehabilitation training information corresponding to the tendon damage degree as the rehabilitation programme of the target user. Because the multi-layer network structure of the convolutional neural network extracts deep features of the input data, the recognition accuracy can be improved, achieving accurate evaluation of whether the user's tendon is damaged. Meanwhile, once the user's tendon is determined to be damaged, the tendon damage degree is determined from the similarity of adjacent tendon posture movements during the user's motion, and the recommended rehabilitation training information corresponding to that degree is then obtained, thereby providing more accurate rehabilitation training advice after a tendon injury.
Optionally, in another embodiment of the invention, the recommended rehabilitation training information includes a rehabilitation training time, and the following steps are also realised when the tendon condition evaluation program based on deep learning is executed by the processor:
after the recommended rehabilitation training information corresponding to the damage degree is obtained, obtaining a second action video of the target user taken after a first preset time and a third action video of the target user taken after a second preset time;
obtaining the adjacent frame images in the second action video, and calculating a third similarity of the adjacent frame images in the second action video;
obtaining the adjacent frame images in the third action video, and calculating a fourth similarity of the adjacent frame images in the third action video;
adjusting or keeping the rehabilitation training time according to the change among the first similarity, the third similarity and the fourth similarity.
In the present embodiment, the first preset time is a period during which the target user has performed rehabilitation training after the target user's tendon was determined to be damaged; the second preset time is a period after the first preset time, preferably a period during which the target user continues rehabilitation training after the first preset time.
In the present embodiment, obtaining the adjacent frame images in the second action video and calculating their third similarity, and obtaining the adjacent frame images in the third action video and calculating their fourth similarity, may follow the image acquisition and similarity calculation described above.
In an alternative embodiment, adjusting or keeping the rehabilitation training time according to the change among the first similarity, the third similarity and the fourth similarity comprises: if the similarity becomes larger, extending the rehabilitation training time; if the similarity becomes smaller, shortening the rehabilitation training time; and if the similarity remains unchanged, keeping the rehabilitation training time.
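The adjust-or-keep rule above can be sketched as follows; the 10-minute step and the epsilon tolerance for "unchanged" are assumptions for illustration:

```python
def adjust_training_time(first, third, fourth, minutes, eps=1e-6):
    """Extend, shorten, or keep the rehabilitation training time based on
    how the similarity evolved across the follow-up measurements."""
    change = fourth - first   # overall similarity trend across the checks
    if change > eps:
        return minutes + 10   # similarity grew: extend the training time
    if change < -eps:
        return max(0, minutes - 10)   # similarity shrank: shorten it
    return minutes            # unchanged: keep the current training time

new_time = adjust_training_time(0.80, 0.85, 0.90, minutes=30)   # 40
```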
In the present embodiment, the change among the first similarity, the third similarity and the fourth similarity indicates the user's recovery through rehabilitation training, so the rehabilitation training time can be adjusted according to that change, giving the user an accurate and timely adjustment of the rehabilitation training time during recovery.
Optionally, in other embodiments, the tendon condition evaluation program based on deep learning may also be divided into one or more modules, the one or more modules being stored in the memory 11 and executed by one or more processors (processor 12 in this embodiment) to implement the present invention. A module as referred to in the present invention is a series of computer program instruction segments that accomplish a specific function, used to describe the execution process of the tendon condition evaluation program based on deep learning in the tendon condition evaluation device based on deep learning.
For example, referring to Fig. 3, which is a program module schematic diagram of the tendon condition evaluation program based on deep learning in an embodiment of the tendon condition evaluation device based on deep learning of the present invention, in this embodiment the tendon condition evaluation program based on deep learning may be divided into a first acquisition module 10, a first image extraction module 20, a second image extraction module 30, a recognition module 40, a second acquisition module 50, a computing module 60, a first determining module 70 and a second determining module 80. Illustratively:
The first acquisition module 10 is used for: obtaining a first action video of the target user;
The first image extraction module 20 is used for: extracting, from the first action video, a first image of the tendon movement posture of the target site of the target user, wherein the first image is a single-frame image;
The second image extraction module 30 is used for: extracting, from the first action video and based on the first image, a second image of the tendon movement posture of the target site of the target user, wherein the second image is a consecutive multi-frame image;
The recognition module 40 is used for: inputting the first image and the second image into the trained convolutional neural network model and obtaining the recognition result, output by the convolutional neural network model, of whether the tendon of the target site of the target user is damaged;
The second acquisition module 50 is used for: if the recognition result is that the tendon of the target site of the target user is damaged, obtaining the multiple groups of consecutive-frame images in the first action video;
The computing module 60 is used for: calculating the first similarity of any adjacent frame images in the multiple groups of consecutive-frame images, and obtaining the second similarity of adjacent frames of the preset undamaged tendon movement posture;
The first determining module 70 is used for: determining the tendon damage degree of the target user according to the difference between the first similarity and the second similarity;
The second determining module 80 is used for: determining the recommended rehabilitation training information corresponding to the tendon damage degree as the rehabilitation programme of the target user.
The functions or operation steps realised when the above program modules, namely the first acquisition module 10, the first image extraction module 20, the second image extraction module 30, the recognition module 40, the second acquisition module 50, the computing module 60, the first determining module 70 and the second determining module 80, are executed are substantially the same as those of the above embodiments, and will not be described in detail here.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium on which a tendon condition evaluation program based on deep learning is stored, the program being executable by one or more processors to realise the following operations:
obtaining a first action video of the target user;
extracting, from the first action video, a first image of the tendon movement posture of the target site of the target user, wherein the first image is a single-frame image;
extracting, from the first action video and based on the first image, a second image of the tendon movement posture of the target site of the target user, wherein the second image is a consecutive multi-frame image;
inputting the first image and the second image into a trained convolutional neural network model to obtain the recognition result, output by the convolutional neural network model, of whether the tendon of the target site of the target user is damaged;
if the recognition result is that the tendon of the target site of the target user is damaged, obtaining multiple groups of consecutive-frame images in the first action video;
calculating the first similarity of any adjacent frame images in the multiple groups of consecutive-frame images, and obtaining the second similarity of adjacent frames of a preset undamaged tendon movement posture;
determining the tendon damage degree of the target user according to the difference between the first similarity and the second similarity;
determining the recommended rehabilitation training information corresponding to the tendon damage degree as the rehabilitation programme of the target user.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as the embodiments of the above tendon condition evaluation device and method based on deep learning, and will not be repeated here.
It should be noted that the serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments. The terms "include", "comprise" and any other variants thereof herein are intended to cover non-exclusive inclusion, so that a process, device, article or method including a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, device, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, device, article or method that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realised by software plus a necessary general hardware platform, and of course also by hardware, though in many cases the former is the preferable implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product stored in a storage medium as described above (such as ROM/RAM, magnetic disk or optical disc), including several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to execute the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the present invention. Any equivalent structure or equivalent process transformation made using the contents of the description and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A tendon condition evaluation method based on deep learning, characterized in that the method comprises:
obtaining a first action video of a target user;
extracting, from the first action video, a first image of the tendon movement posture of a target site of the target user, wherein the first image is a single-frame image;
extracting, from the first action video and based on the first image, a second image of the tendon movement posture of the target site of the target user, wherein the second image is a consecutive multi-frame image;
inputting the first image and the second image into a trained convolutional neural network model to obtain a recognition result, output by the convolutional neural network model, of whether the tendon of the target site of the target user is damaged;
if the recognition result is that the tendon of the target site of the target user is damaged, obtaining multiple groups of consecutive-frame images in the first action video;
calculating a first similarity of any adjacent frame images in the multiple groups of consecutive-frame images, and obtaining a second similarity of adjacent frames of a preset undamaged tendon movement posture;
determining a tendon damage degree of the target user according to the difference between the first similarity and the second similarity;
determining recommended rehabilitation training information corresponding to the tendon damage degree as a rehabilitation programme of the target user.
2. The tendon condition evaluation method based on deep learning according to claim 1, characterized in that the recommended rehabilitation training information includes a rehabilitation training time, and after the recommended rehabilitation training information corresponding to the damage degree is obtained, the method further comprises:
obtaining a second action video of the target user taken after a first preset time and a third action video of the target user taken after a second preset time;
obtaining adjacent frame images in the second action video, and calculating a third similarity of the adjacent frame images in the second action video;
obtaining adjacent frame images in the third action video, and calculating a fourth similarity of the adjacent frame images in the third action video;
adjusting or keeping the rehabilitation training time according to the change among the first similarity, the third similarity and the fourth similarity.
3. The tendon condition evaluation method based on deep learning according to claim 1, characterized in that the method further comprises:
obtaining a target convolutional neural network model, the target convolutional neural network model being composed of a first convolutional neural network model and a second convolutional neural network model, wherein the output value of the target convolutional neural network model is obtained by averaging the first output value of the first convolutional neural network model and the output value of the second convolutional neural network model;
obtaining training samples, the training samples including positive samples of damaged tendon images and negative samples of undamaged tendon images;
training the target convolutional neural network model with the training samples to obtain the trained convolutional neural network model.
4. The tendon condition evaluation method based on deep learning according to any one of claims 1 to 3, characterized in that extracting, from the first action video, the first image of the tendon movement posture of the target site of the target user comprises:
obtaining a first posture image and a second posture image of the target site in the first action video, wherein the first posture image and the second posture image are adjacent images;
calculating the absolute value of the difference between the pixel values of the first posture image and the second posture image;
judging whether the absolute value is greater than a preset threshold;
if the absolute value is greater than the preset threshold, determining that the difference image of the pixel-value differences between the first posture image and the second posture image is the first image of the tendon movement posture of the target site of the target user.
5. The tendon condition evaluation method based on deep learning according to any one of claims 1 to 3, characterized in that extracting, from the first action video and based on the first image, the second image of the tendon movement posture of the target site of the target user comprises:
tracking, based on the first image and by means of an optical flow algorithm, the tendon movement posture of the target site of the target user in the first action video;
extracting the tracked consecutive multi-frame pictures containing the tendon movement posture of the target site of the target user as the second image.
6. A tendon condition evaluation device based on deep learning, characterized in that the device comprises a memory and a processor, the memory storing a tendon condition evaluation program based on deep learning that can run on the processor, and the following steps being realised when the tendon condition evaluation program based on deep learning is executed by the processor:
obtaining a first action video of a target user;
extracting, from the first action video, a first image of the tendon movement posture of a target site of the target user, wherein the first image is a single-frame image;
extracting, from the first action video and based on the first image, a second image of the tendon movement posture of the target site of the target user, wherein the second image is a consecutive multi-frame image;
inputting the first image and the second image into a trained convolutional neural network model to obtain a recognition result, output by the convolutional neural network model, of whether the tendon of the target site of the target user is damaged;
if the recognition result is that the tendon of the target site of the target user is damaged, obtaining multiple groups of consecutive-frame images in the first action video;
calculating a first similarity of any adjacent frame images in the multiple groups of consecutive-frame images, and obtaining a second similarity of adjacent frames of a preset undamaged tendon movement posture;
determining a tendon damage degree of the target user according to the difference between the first similarity and the second similarity;
determining recommended rehabilitation training information corresponding to the tendon damage degree as a rehabilitation programme of the target user.
7. The tendon condition evaluation device based on deep learning according to claim 6, characterized in that the recommended rehabilitation training information includes a rehabilitation training time, and the following steps are also realised when the tendon condition evaluation program based on deep learning is executed by the processor:
after the recommended rehabilitation training information corresponding to the damage degree is obtained, obtaining a second action video of the target user taken after a first preset time and a third action video of the target user taken after a second preset time;
obtaining adjacent frame images in the second action video, and calculating a third similarity of the adjacent frame images in the second action video;
obtaining adjacent frame images in the third action video, and calculating a fourth similarity of the adjacent frame images in the third action video;
adjusting or keeping the rehabilitation training time according to the change among the first similarity, the third similarity and the fourth similarity.
8. The tendon condition evaluation device based on deep learning according to claim 6, characterized in that the following steps are also realised when the tendon condition evaluation program based on deep learning is executed by the processor:
obtaining a target convolutional neural network model, the target convolutional neural network model being composed of a first convolutional neural network model and a second convolutional neural network model, wherein the output value of the target convolutional neural network model is obtained by averaging the first output value of the first convolutional neural network model and the output value of the second convolutional neural network model;
obtaining training samples, the training samples including positive samples of damaged tendon images and negative samples of undamaged tendon images;
training the target convolutional neural network model with the training samples to obtain the trained convolutional neural network model.
9. The tendon condition evaluation device based on deep learning according to any one of claims 6 to 8, characterized in that extracting, from the first action video, the first image of the tendon movement posture of the target site of the target user comprises:
obtaining a first posture image and a second posture image of the target site in the first action video, wherein the first posture image and the second posture image are adjacent images;
calculating the absolute value of the difference between the pixel values of the first posture image and the second posture image;
judging whether the absolute value is greater than a preset threshold;
if the absolute value is greater than the preset threshold, determining that the difference image of the pixel-value differences between the first posture image and the second posture image is the first image of the tendon movement posture of the target site of the target user.
10. A computer-readable storage medium, characterized in that a tendon condition evaluation program based on deep learning is stored on the computer-readable storage medium, the tendon condition evaluation program based on deep learning being executable by one or more processors to realise the steps of the tendon condition evaluation method based on deep learning according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910370527.6A CN110197721B (en) | 2019-05-06 | 2019-05-06 | Tendon condition assessment method, device and storage medium based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110197721A (en) | 2019-09-03
CN110197721B (en) | 2023-06-06
Family
ID=67752427
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910370527.6A Active CN110197721B (en) | 2019-05-06 | 2019-05-06 | Tendon condition assessment method, device and storage medium based on deep learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110197721B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150005637A1 (en) * | 2013-06-28 | 2015-01-01 | Uvic Industry Partnerships Inc. | Tissue displacement estimation by ultrasound speckle tracking |
US20180182094A1 (en) * | 2016-12-26 | 2018-06-28 | Intel Corporation | Proprioception training method and apparatus |
CN108446307A (en) * | 2018-02-05 | 2018-08-24 | 中国科学院信息工程研究所 | A kind of the binary set generation method and image, semantic similarity search method of multi-tag image |
CN108510475A (en) * | 2018-03-09 | 2018-09-07 | 南京索聚医疗科技有限公司 | The measurement method and system of muscle tendon knot in a kind of muscle continuous ultrasound image |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110782967A (en) * | 2019-11-01 | 2020-02-11 | 成都乐动信息技术有限公司 | Fitness action standard degree evaluation method and device |
CN110782967B (en) * | 2019-11-01 | 2023-04-21 | 成都乐动信息技术有限公司 | Body-building action standard degree assessment method and device |
CN112309578A (en) * | 2020-11-03 | 2021-02-02 | 南通市第一人民医院 | Method and system for improving recovery efficiency of osteoporotic vertebral fracture patient |
CN113158818A (en) * | 2021-03-29 | 2021-07-23 | 青岛海尔科技有限公司 | Method, device and equipment for identifying fake video |
CN113158818B (en) * | 2021-03-29 | 2023-04-07 | 青岛海尔科技有限公司 | Method, device and equipment for identifying fake video |
Also Published As
Publication number | Publication date |
---|---|
CN110197721B (en) | 2023-06-06 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||