CN109190607A - Motion image processing method, device and terminal - Google Patents

Motion image processing method, device and terminal

Info

Publication number
CN109190607A
CN109190607A
Authority
CN
China
Prior art keywords
video image
video
frame
key-point frame
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811279509.9A
Other languages
Chinese (zh)
Inventor
曹新英
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd
Priority to CN201811279509.9A
Publication of CN109190607A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20: Movements or behaviour, e.g. gesture recognition
    • G06V40/23: Recognition of whole body movements, e.g. for sport training

Abstract

The present invention provides a motion image processing method and a mobile terminal, and relates to the technical field of image processing. The method comprises: determining a first key-point frame of a first human object in a first video image; converting a second key-point frame of a second human object in a second video image into a reference key-point frame whose proportions are identical to those of the first key-point frame; and displaying action guidance information according to the difference between the first key-point frame and the reference key-point frame. In the present invention, the terminal can convert the body frame of the action instructor in a video according to the body proportions of a specific action learner, and can then display action guidance information for individual body parts according to the differences between the learner's body frame and the instructor's converted body frame, so that the learner's movements can be corrected in a targeted way. This improves both the accuracy and the efficiency of action learning.

Description

Motion image processing method, device and terminal
Technical field
The present invention relates to the technical field of image processing, and in particular to a motion image processing method, device and terminal.
Background
In work and daily life, whether for professional needs or out of personal interest, people often learn coherent sets of body movements, such as dancing, gymnastics, fitness routines and martial arts. With the widespread use of network media, more and more professionals upload instructional action videos, and learners can study movements through these online videos.
In practice, however, learners imitating the movements in an instructional video usually lack professional guidance. When an imitated movement is nonstandard, professional correction is hard to obtain, so the accuracy of action learning is low, which in turn reduces its efficiency.
Summary of the invention
The present invention provides a motion image processing method, device and terminal, to solve the problem of low accuracy and low efficiency when an action learner studies through an instructional action video.
To solve the above technical problem, the present invention is implemented as follows. A motion image processing method, applied to a terminal, comprises:
determining a first key-point frame of a first human object in a first video image;
converting a second key-point frame of a second human object in a second video image into a reference key-point frame whose proportions are identical to those of the first key-point frame, wherein the second video image and the first video image belong to different videos, and the second video image corresponds in attribute to the first video image; and
displaying action guidance information according to the difference between the first key-point frame and the reference key-point frame.
In a first aspect, an embodiment of the present invention further provides a motion image processing device, comprising:
a first determining module, configured to determine a first key-point frame of a first human object in a first video image;
a conversion module, configured to convert a second key-point frame of a second human object in a second video image into a reference key-point frame whose proportions are identical to those of the first key-point frame, wherein the second video image and the first video image belong to different videos, and the first video image corresponds in attribute to the second video image; and
a display module, configured to display action guidance information according to the difference between the first key-point frame and the reference key-point frame.
In a second aspect, an embodiment of the present invention further provides a terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the motion image processing method of the present invention.
In a third aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the motion image processing method of the present invention.
In the embodiments of the present invention, the terminal can determine the first key-point frame of the first human object in the first video image, that is, the body frame of the action learner. The terminal can then convert the second key-point frame of the second human object in the second video image, that is, the body frame of the action instructor, into a reference key-point frame whose proportions are identical to those of the first key-point frame; in other words, it can rescale the instructor's body frame to the body proportions of the specific learner. The terminal can then display action guidance information according to the difference between the first key-point frame and the reference key-point frame. Because the guidance targets the differences between the learner's body frame and the instructor's converted body frame at individual body parts, the learner's movements can be corrected in a targeted way, which improves both the accuracy and the efficiency of action learning.
Brief description of the drawings
Fig. 1 shows a flowchart of a motion image processing method in Embodiment 1 of the present invention;
Fig. 2 shows a flowchart of a motion image processing method in Embodiment 2 of the present invention;
Fig. 3 shows a schematic diagram of a second key-point frame in Embodiment 2 of the present invention;
Fig. 4 shows a schematic diagram of a first key-point frame in Embodiment 2 of the present invention;
Fig. 5 shows a schematic diagram of a reference key-point frame in Embodiment 2 of the present invention;
Fig. 6 shows a structural block diagram of a motion image processing device in Embodiment 3 of the present invention;
Fig. 7 shows a structural block diagram of another motion image processing device in Embodiment 3 of the present invention;
Fig. 8 shows a structural block diagram of a first determining module in Embodiment 3 of the present invention;
Fig. 9 shows a hardware structural diagram of a mobile terminal in the embodiments of the present invention.
Detailed description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Embodiment 1
Referring to Fig. 1, a flowchart of the motion image processing method of Embodiment 1 of the present invention is shown. The method may specifically comprise the following steps.
Step 101: determine a first key-point frame of a first human object in a first video image.
In the embodiments of the present invention, before determining the first key-point frame, the terminal may obtain or determine a second key-point frame of a second human object in a second video image. The second video image may be a video image in a second video, and the second video may be an instructional action video, such as a dance teaching video or a fitness teaching video; the second human object is the action instructor in the second video. The instructor can demonstrate movements, record the demonstration on video to obtain the second, instructional video, and upload the second video to the network. The learner can then download the second video from the network to a terminal and watch it to learn the movements.
In one implementation, the second key-point frame of the second human object may be determined by the terminal used by the instructor and then uploaded to the network by the instructor, so that the learner downloads it together with the second video to the learner's own terminal. In another implementation, the second key-point frame may instead be determined by the learner's terminal while the second video is played for the first time after being downloaded, and stored on the learner's terminal so that it can be called directly during subsequent learning. The embodiments of the present invention do not specifically limit this.
When the terminal plays the second video, it can first perform human key-point detection on the second video image in the second video, thereby detecting the positions of the joints of the second human object that can naturally bend or rotate, such as the head key point, elbow key points and knee key points. The second video image may be any frame of video image in the second video, or any key image in the second video that represents a key movement. The terminal can then connect the human key points according to the structure of the human body: every two interconnected key points form a vector representing one body part, and these vectors together constitute the second key-point frame of the second human object.
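The construction just described, connecting each pair of adjacent joints into a vector that represents one body part, can be sketched as follows. The joint names, coordinates and the connection table are illustrative assumptions, not taken from the patent; the output of any pose detector could be substituted.

```python
# Sketch: build a "key-point frame" from detected joints.
# Joint names and the skeleton connection table are assumptions.

# Detected key points: name -> (x, y) pixel coordinates.
keypoints = {
    "head": (100, 40),
    "left_shoulder": (80, 80),
    "left_elbow": (60, 120),
    "left_wrist": (50, 160),
}

# Pairs of connected joints; each pair yields one limb vector.
SKELETON = [
    ("head", "left_shoulder"),
    ("left_shoulder", "left_elbow"),
    ("left_elbow", "left_wrist"),
]

def build_frame(kps):
    """Return the key-point frame: one 2D vector per connected joint pair."""
    frame = {}
    for a, b in SKELETON:
        (xa, ya), (xb, yb) = kps[a], kps[b]
        frame[(a, b)] = (xb - xa, yb - ya)
    return frame

frame = build_frame(keypoints)
print(frame[("left_elbow", "left_wrist")])  # (-10, 40)
```

Each vector encodes both the direction and the length of a limb, which is exactly what the later comparison and rescaling steps operate on.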
In addition, in the embodiments of the present invention, the first video image may be a video image in a first video. The first video may be an action-learning video recorded by the learner while watching the second video, or recorded after studying it; the first human object is the action learner in the first video.
While recording the first video, or after the recording is finished, the terminal can first determine the first video image in the first video that corresponds in attribute to the second video image, and then perform human key-point detection on the first video image. The terminal can then connect the detected key points according to the structure of the human body; every two interconnected key points form a vector representing one body part, and these vectors together constitute the first key-point frame of the first human object.
Step 102: convert the second key-point frame of the second human object in the second video image into a reference key-point frame whose proportions are identical to those of the first key-point frame; the second video image and the first video image belong to different videos, and the first video image corresponds in attribute to the second video image.
In the embodiments of the present invention, because body parameters such as height and limb length differ between the instructor and the learner, the terminal can convert the second key-point frame into a reference key-point frame whose proportions are identical to those of the first key-point frame; in other words, it converts the instructor's key-point frame into one that matches the current learner's figure, which improves the accuracy of the guidance. When the instructor's figure is larger and the learner's figure is smaller, the terminal can shrink the second key-point frame into the reference key-point frame; when the instructor's figure is smaller and the learner's figure is larger, the terminal can enlarge the second key-point frame into the reference key-point frame.
Here, the first video image corresponding in attribute to the second video image means that the two images are chosen according to the same rule. Specifically, it may mean that the duration between the first video image and a start image in the first video is identical to the duration between the second video image and a start image in the second video; or that the ordinal number of the key movement represented by the first video image in the first video is identical to that of the key movement represented by the second video image in the second video; or that some other attribute corresponds. The present invention does not specifically limit this.
Step 103: display action guidance information according to the difference between the first key-point frame and the reference key-point frame.
In the embodiments of the present invention, the terminal can compare the directions of each pair of corresponding vectors, one from the first key-point frame and one from the reference key-point frame, that represent the same body part. When the directions of two corresponding vectors differ, the terminal can determine that the learner's movement at that body part is nonstandard, and can then display action guidance information for correcting that body part, so that the learner can adjust the movement of that part in a targeted way.
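The direction comparison above can be sketched as follows: each learner limb vector is compared with the corresponding reference vector, and a body part is flagged when the angle between the two exceeds a tolerance. The 15-degree threshold, the part names and the `guidance` helper are assumptions for illustration; the patent only specifies that a direction difference triggers guidance.

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 2D limb vectors."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos = dot / (math.hypot(*u) * math.hypot(*v))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

def guidance(learner_frame, reference_frame, threshold_deg=15.0):
    """Return the body parts whose limb direction deviates beyond the threshold."""
    off = []
    for part, ref_vec in reference_frame.items():
        if angle_between(learner_frame[part], ref_vec) > threshold_deg:
            off.append(part)
    return off

learner = {"left_forearm": (0, 1), "right_forearm": (1, 0)}
reference = {"left_forearm": (0, 1), "right_forearm": (1, 1)}
print(guidance(learner, reference))  # ['right_forearm']
```

Comparing angles rather than raw coordinates makes the check independent of where the person stands in the frame.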
In the embodiments of the present invention, the terminal can determine the first key-point frame of the first human object in the first video image, that is, the learner's body frame, and can convert the second key-point frame of the second human object in the second video image, that is, the instructor's body frame, into a reference key-point frame whose proportions are identical to those of the first key-point frame; in other words, the instructor's body frame is rescaled to the body proportions of the specific learner. The terminal can then display action guidance information for individual body parts according to the difference between the first key-point frame and the reference key-point frame, so that the learner's movements can be corrected in a targeted way, which improves both the accuracy and the efficiency of action learning.
Embodiment 2
Referring to Fig. 2, a flowchart of the motion image processing method of Embodiment 2 of the present invention is shown. The method may specifically comprise the following steps.
Step 201: determine a second key-point frame of a second human object in a second video image.
In embodiments of the present invention, it since action director person is when recording acts instructional video, is usually opened first Field introduces, such as carry out movement introduction etc., it can just start to carry out action demonstration later, finally usually can also record conclusion, Therefore, in practical applications, when terminal gets the first video and plays out, video figure that can only to action demonstration part Detection as carrying out human body key point.In addition, in practical applications, judging whether the movement of certain set standardizes and often only needing to detect it In key operations, and for the transitional movement between two key operations, importance is lower, and therefore, terminal can be with The only detection to the video image progress human body key point frame for showing each key operations in action demonstration part.
Specifically, this step can be realized by either of the following two implementations.
First implementation: starting from a first preset start image in the second video, whenever it is detected that a movement of the second human object in the second video is held for longer than a preset duration, any frame during the held movement is determined as a first key image, until all first key images in the second video have been determined; human key-point detection is then performed on the second video image among the first key images, to obtain the second key-point frame of the second human object in the second video image.
Here, the first preset start image is the first frame that shows the start of the demonstration. In practice, it can be determined by a specific playing time, by a specific gesture of the instructor, or in other ways. For example, before the instructor records the video, a prompt box or similar means can inform the instructor to start the demonstration at the 90th second of the second video; accordingly, the terminal can determine the video image corresponding to the 90th second of playback as the first preset start image. As another example, the instructor can trigger the terminal to determine the first preset start image by a specific gesture such as nodding or a V sign; accordingly, when the terminal detects such a gesture in the second video, it can determine the video image at that moment as the first preset start image. The embodiments of the present invention do not specifically limit how the terminal determines the first preset start image.
In the first implementation, the instructor can pause and hold each key movement for a certain duration while demonstrating it, and keep moving during the transitional movements between key movements. Accordingly, starting from the first preset start image in the second video, whenever the terminal detects that a movement of the second human object is held for longer than the preset duration, it can determine any frame during the held movement as a first key image, until all first key images in the second video, that is, all key images showing the key movements of the whole routine, have been determined. The terminal can then perform human key-point detection on any of these key images, namely the second video image, to obtain the second key-point frame of the second human object in the second video image.
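A minimal sketch of this first implementation, picking one frame from every run in which the pose barely changes for longer than the preset duration, might look like the following; the motion metric, frame rate and thresholds are illustrative assumptions.

```python
def pick_key_frames(frames, fps, hold_seconds=3.0, motion_eps=5.0):
    """Pick the middle index of each run of near-identical consecutive
    poses that lasts at least hold_seconds. `frames` is a list of poses,
    each a list of (x, y) key points; pose change is the largest joint shift."""
    def moved(p, q):
        return max(abs(xa - xb) + abs(ya - yb)
                   for (xa, ya), (xb, yb) in zip(p, q)) > motion_eps

    min_run = int(hold_seconds * fps)
    keys, start = [], 0
    for i in range(1, len(frames) + 1):
        if i == len(frames) or moved(frames[i - 1], frames[i]):
            if i - start >= min_run:
                keys.append((start + i - 1) // 2)  # middle frame of the run
            start = i
    return keys

# A pose held for 8 frames, 3 frames of transition, then held for 7 frames.
frames = [[(0, 0)]] * 8 + [[(100, 100)]] * 3 + [[(0, 0)]] * 7
print(pick_key_frames(frames, fps=2))  # [3, 14]
```

The short transitional run is skipped because it is shorter than the required hold, which matches the patent's intent of ignoring transitional movements.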
Second implementation: starting from the first preset start image in the second video, one first key image is selected every preset duration, until all first key images in the second video have been determined; human key-point detection is then performed on the second video image among the first key images, to obtain the second key-point frame of the second human object in the second video image.
In the second implementation, since movements such as gymnastics and dancing usually follow a certain rhythm, the terminal can also select one first key image every preset duration, starting from the first preset start image in the second video, until all first key images showing the key movements in the second video have been determined. The preset duration is the rhythm interval of the movements and can be set by the instructor according to the actual rhythm. The terminal can then perform human key-point detection on any of these key images, namely the second video image, to obtain the second key-point frame of the second human object in the second video image.
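The second implementation reduces to sampling frame indices at a fixed rhythm interval after the start image; a trivial sketch follows, with the frame rate, video length and interval assumed for illustration.

```python
def key_frames_by_interval(start_index, total_frames, fps, interval_seconds=3.0):
    """Indices of the first key images: one every interval_seconds,
    starting at the first preset start image."""
    step = int(interval_seconds * fps)
    return list(range(start_index, total_frames, step))

# A demonstration starting at the 90th second of a 2-minute, 30 fps video,
# sampled at a 3-second rhythm interval.
indices = key_frames_by_interval(90 * 30, 120 * 30, fps=30)
print(indices[:3])  # [2700, 2790, 2880]
```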
Of course, in practice, the first key images showing the key movements may also be chosen manually from the second video by the instructor; the present invention does not specifically limit this.
In addition, in practice, after determining each first key image, or after determining all first key images, the terminal can number the first key images or record their corresponding playing times, so that a specific first key image can later be looked up for movement comparison.
It should further be noted that the instructor and the learner usually use different terminals. In practice, an action-video recording function and an action-video learning function can therefore be provided in the same application, so that the second video recorded by the instructor through the application can be processed for motion images on other terminals on which the application is installed.
By choosing the second video image that shows a key movement, only key movements need to be compared later, and human key-point detection does not need to be performed on every frame of the second video; this reduces the amount of computation and improves the efficiency of the guidance. In addition, since the transitional movements between key movements are not compared, the learner is prevented from being overly entangled in correcting transitional movements, which further improves the efficiency of action learning.
For example, when the learner's terminal obtains and plays the second video, it can determine the first preset start image in the second video, and then, starting from that image, whenever it detects that a movement of the second human object in the second video is held for longer than a preset duration of 3 seconds, determine the middle frame of the held movement as a first key image, until all first key images in the second video have been determined, numbering each first key image. The terminal can then perform human key-point detection on the second video image among the first key images, to obtain the second key-point frame of the second human object in the second video image, as shown in Fig. 3.
Step 202: determine a first key-point frame of a first human object in a first video image; the second video image and the first video image belong to different videos, and the first video image corresponds in attribute to the second video image.
In the embodiments of the present invention, the learner can record the first, action-learning video while watching the second video, or after studying it. Corresponding to the two implementations of step 201, this step can be realized by either of the following two implementations.
First implementation: determine the target serial number of the second video image among the first key images; starting from a second preset start image in the first video, when it is detected that a movement of the first human object in the first video is held for longer than the preset duration, determine any frame during the held movement as a second key image; determine the serial number of the second key image; when the serial number of the second key image is identical to the target serial number, determine the second key image as the first video image; and perform human key-point detection on the first video image, to obtain the first key-point frame of the first human object in the first video image.
Corresponding to the first implementation of step 201, in the first implementation of this step, since the terminal can number the first key images after determining each of them, or after determining all of them, the terminal can first determine the target serial number of the second video image among the first key images; the target serial number indicates which key movement the second video image shows. The terminal can then determine the second key images showing the key movements in the first video, numbering each second key image after determining it, or after determining all of them. Among the second key images, the one whose serial number is identical to that of the second video image is determined as the first video image; in other words, what is being learned in the first video image is the key movement in the second video image. The terminal can then perform human key-point detection on the first video image to obtain the first key-point frame of the first human object, so that the movement in the first video image can later be compared with the movement in the second video image.
Second implementation: determine the target duration between the second video image and the first preset start image; determine the video image in the first video whose duration from a second preset start image equals the target duration as the first video image, the second preset start image being located before the first video image; and perform human key-point detection on the first video image, to obtain the first key-point frame of the first human object in the first video image.
Corresponding to the second implementation of step 201, in the second implementation of this step, since the terminal can record the playing time of each first key image after determining it, or after determining all first key images, the terminal can first determine the target duration between the second video image and the first preset start image; the target duration indicates the position within the second video of the key movement shown by the second video image. The terminal can then determine the second key images showing the key movements in the first video, recording the playing time of each second key image after determining it, or after determining all of them. Among the second key images, the one whose duration from the second preset start image equals the target duration is determined as the first video image; in other words, what is being learned in the first video image is the movement in the second video image. The terminal can then perform human key-point detection on the first video image to obtain the first key-point frame of the first human object, so that the movement in the first video image can later be compared with the movement in the second video image.
It should be noted that the terminal used by the learner can compare movements in real time while recording the first video; that is, whenever a second key image is determined, it is immediately compared with the first key image corresponding to it in attribute. Of course, in practice, the terminal may also compare movements after the recording of the first video is finished; that is, after all second key images have been determined, each second key image is compared in sequence with its attribute-corresponding first key image. The present invention does not specifically limit this.
For example, when the terminal obtains the first video recorded by the learner after watching and studying the second video, it can determine that the target serial number of the second video image among the first key images is 4, that is, that the second video image shows the fourth key movement. The terminal can then determine the second preset start image in the first video and, starting from it, whenever it detects that a movement of the first human object in the first video is held for longer than the preset duration of 3 seconds, determine the middle frame of the held movement as a second key image. When the serial number of a second key image is determined to be identical to the target serial number, that second key image can be determined as the first video image. The terminal can then perform human key-point detection on the first video image to obtain the first key-point frame of the first human object in the first video image, as shown in Fig. 4.
Step 203: determine the conversion ratio corresponding to each first vector in the second keypoint frame.
In the embodiments of the present invention, because figure parameters such as height and limb length differ between the action coach and the action learner, the terminal may convert the coach's keypoint frame into a keypoint frame matching the current learner's figure, which improves the accuracy of the action guidance. The second keypoint frame is composed of first vectors, each representing a different body part. The terminal may first determine the modulus of each first vector in the second keypoint frame, and then determine the modulus of each third vector in the first keypoint frame. For any first vector, the terminal may divide the modulus of the third vector at the same body part by the modulus of that first vector to obtain the conversion ratio corresponding to the first vector; that is, a scaling factor is obtained for each first vector of the second human object.
For example, for the first vector ab representing the left forearm in Fig. 3, the terminal may divide the modulus of the corresponding left-forearm third vector AB in Fig. 4 by the modulus of the first vector ab to obtain the conversion ratio k1 corresponding to the first vector ab.
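A minimal sketch of step 203, under the assumption that the ratio is the learner's limb length divided by the coach's, so that scaling the coach's vector in the next step yields the learner's limb length. The dictionary-of-vectors representation and all names are illustrative, not from the disclosure.

```python
import math

def conversion_ratios(coach_vectors, learner_vectors):
    """For each body part, k = |learner (third) vector| / |coach (first) vector|."""
    return {part: math.hypot(*learner_vectors[part]) / math.hypot(*v)
            for part, v in coach_vectors.items()}
```

For instance, a learner forearm of length 5 and a coach forearm of length 2 give a ratio of 2.5.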
Step 204: multiply each first vector by its corresponding conversion ratio to obtain each second vector.
In the embodiments of the present invention, the terminal may multiply each first vector by its corresponding conversion ratio to obtain the second vectors. Each second vector keeps the direction of its first vector, but its length is converted to the length of the corresponding body part of the first human object; that is, each second vector is the first vector shortened or lengthened with its direction preserved.
For example, the terminal may multiply the first vector ab by its corresponding conversion ratio k1 to obtain the converted second vector a'b'.
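Step 204 amounts to a scalar multiplication that preserves direction; a one-line sketch with assumed names:

```python
def scale_vector(v, k):
    """Second vector: the first vector v with its direction kept and length scaled by k."""
    return (v[0] * k, v[1] * k)
```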
Step 205: connect the second vectors in the connection order of the first vectors to obtain a reference keypoint frame with the same proportions as the first keypoint frame.
In the embodiments of the present invention, the terminal may first select one of the human-body keypoints of the first human object as a datum point, such as the head keypoint or a hand keypoint. The terminal may then connect the second vectors one by one, in order of increasing distance from the datum point and in the connection order of the first vectors; that is, the second vectors are connected according to the structure of the human body. This yields a reference keypoint frame with the same proportions as the first keypoint frame, in other words a standard action matched to the figure of the specific action learner.
For example, the terminal may connect the second vectors in the connection order of the first vectors, obtaining a reference keypoint frame with the same proportions as the first keypoint frame, as shown in Fig. 5.
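Steps 204 and 205 together can be sketched as walking the skeleton outward from the datum point, chaining each scaled vector in connection order. The chain representation and the joint names are assumptions for illustration:

```python
def build_reference_frame(datum_point, chain, coach_vectors, ratios):
    """Rebuild the reference keypoint frame from scaled coach vectors.

    chain: list of (parent_joint, child_joint, body_part) in connection
    order, starting at the datum joint (an assumed representation).
    """
    if not chain:
        return {}
    points = {chain[0][0]: datum_point}
    for parent, child, part in chain:
        k = ratios[part]
        vx, vy = coach_vectors[part]
        px, py = points[parent]
        # direction of the coach's vector is kept; length is rescaled
        points[child] = (px + vx * k, py + vy * k)
    return points
```

Starting from a shoulder at the origin with an upper arm scaled by 2 and a forearm scaled by 3, the wrist lands where the two scaled vectors chain to.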
Step 206: determine the angle between the two target vectors corresponding to the same body part in the first keypoint frame and the reference keypoint frame.
In the embodiments of the present invention, for the two target vectors corresponding to the same body part, such as the third vector for the left forearm in the first keypoint frame and the second vector for the left forearm in the reference keypoint frame, the terminal may determine the angle between the two vectors and thereby determine the action difference at that body part.
For example, for the left-forearm third vector AB in the first keypoint frame and the left-forearm second vector a'b' in the reference keypoint frame, the terminal may determine that the angle between the two vectors is 15 degrees.
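The per-part angle of step 206 can be computed from the dot product; a small sketch with assumed 2-D vectors:

```python
import math

def angle_between(u, v):
    """Unsigned angle in degrees between 2-D vectors u and v."""
    dot = u[0] * v[0] + u[1] * v[1]
    cos = dot / (math.hypot(*u) * math.hypot(*v))
    # clamp against floating-point drift before acos
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))
```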
Step 207: when the angle is greater than a preset angle, display action guidance information for adjusting the target vector in the first keypoint frame according to the angle.
In the embodiments of the present invention, when the angle is greater than the preset angle, the learner's imitation of the action at that body part is inaccurate. The terminal may then display action guidance information for adjusting, according to the angle, the target vector corresponding to that body part in the first keypoint frame, so that the learner can correct the movement of that body part in a targeted way according to the size and rotation direction of the angle. Such targeted guidance improves both the accuracy and the efficiency of action learning.
For example, the terminal may determine that the angle between the third vector AB and the second vector a'b' is greater than the preset angle of 5 degrees, display a prompt box near the left-forearm position, and show the action guidance information "please adjust the left forearm by 15 degrees" in the prompt box.
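A hedged sketch of step 207's check: one guidance message per body part whose angle to the reference exceeds the preset angle. The message wording and all names are illustrative, not the disclosure's exact UI text.

```python
import math

def guidance(learner_vectors, reference_vectors, preset_angle=5.0):
    """Return a correction message for each body part whose angle exceeds the preset angle."""
    messages = []
    for part, (ux, uy) in learner_vectors.items():
        vx, vy = reference_vectors[part]
        cos = (ux * vx + uy * vy) / (math.hypot(ux, uy) * math.hypot(vx, vy))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos))))
        if angle > preset_angle:
            messages.append(f"please adjust {part} by {angle:.0f} degrees")
    return messages
```

A forearm 15 degrees off the reference produces a message; a part within the preset angle produces none.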
In the embodiments of the present invention, the terminal may determine the second keypoint frame of the second human object in the second video image, determine the first keypoint frame of the first human object in the first video image, and then convert the second keypoint frame into a reference keypoint frame with the same proportions as the first keypoint frame; that is, the coach's body frame is converted according to the figure of the specific action learner. The terminal may then determine the angle between the two target vectors corresponding to the same body part in the first keypoint frame and the reference keypoint frame, and display action guidance information when the angle is greater than the preset angle, that is, when the learner's action differs substantially from the standard action. Because the guidance is displayed per body part according to the difference between the learner's body frame and the converted coach's body frame, the learner's actions can be corrected in a targeted way, which improves both the accuracy and the efficiency of action learning.
Embodiment three
Referring to Fig. 6, a structural block diagram of a motion image processing apparatus 600 according to embodiment three of the present invention is shown. The apparatus may include:
a first determining module 601, configured to determine a first keypoint frame of a first human object in a first video image;
a conversion module 602, configured to convert a second keypoint frame of a second human object in a second video image into a reference keypoint frame with the same proportions as the first keypoint frame, where the second video image and the first video image belong to different videos, and the first video image corresponds to the second video image in attribute; and
a display module 603, configured to display action guidance information according to the difference between the first keypoint frame and the reference keypoint frame.
Optionally, referring to Fig. 7, the apparatus further includes:
a second determining module 604, configured to determine the second keypoint frame of the second human object in the second video image.
Optionally, referring to Fig. 7, the second determining module 604 includes:
a first determining submodule 6041, configured to, starting from a first default start image in the second video, whenever it is detected that an action of the second human object in the second video is held for longer than a preset duration, determine any frame during the held action as a first key image, until all first key images in the second video are determined; and
a first detection submodule 6042, configured to perform human-body keypoint detection on the second video image among the first key images to obtain the second keypoint frame of the second human object in the second video image.
Optionally, referring to Fig. 8, the first determining module 601 includes:
a second determining submodule 6011, configured to determine a target sequence number of the second video image among the first key images;
a third determining submodule 6012, configured to, starting from a second default start image in the first video, determine any frame during a held action as a second key image when it is detected that an action of the first human object in the first video is held for longer than the preset duration;
a fourth determining submodule 6013, configured to determine the sequence number of the second key image;
a fifth determining submodule 6014, configured to determine the second key image as the first video image when the sequence number of the second key image is the same as the target sequence number; and
a second detection submodule 6015, configured to perform human-body keypoint detection on the first video image to obtain the first keypoint frame of the first human object in the first video image.
Optionally, referring to Fig. 7, the second determining module 604 includes:
a selection submodule 6043, configured to, starting from the first default start image in the second video, select one first key image every preset duration, until all first key images in the second video are determined; and
a third detection submodule 6044, configured to perform human-body keypoint detection on the second video image among the first key images to obtain the second keypoint frame of the second human object in the second video image.
Optionally, referring to Fig. 8, the first determining module 601 includes:
a sixth determining submodule 6016, configured to determine a target duration between the second video image and the first default start image;
a seventh determining submodule 6017, configured to determine, as the first video image, the video image in the first video whose duration from the second default start image equals the target duration, where the second default start image precedes the first video image; and
a fourth detection submodule 6018, configured to perform human-body keypoint detection on the first video image to obtain the first keypoint frame of the first human object in the first video image.
Optionally, referring to Fig. 7, the conversion module 602 includes:
an eighth determining submodule 6021, configured to determine the conversion ratio corresponding to each first vector in the second keypoint frame;
an operation submodule 6022, configured to multiply each first vector by its corresponding conversion ratio to obtain each second vector; and
a connection submodule 6023, configured to connect the second vectors in the connection order of the first vectors to obtain the reference keypoint frame with the same proportions as the first keypoint frame.
Optionally, the operation submodule includes:
a first determining unit, configured to determine the modulus of each first vector in the second keypoint frame;
a second determining unit, configured to determine the modulus of each third vector in the first keypoint frame; and
an operation unit, configured to, for any first vector, divide the modulus of the third vector at the same body part by the modulus of the first vector to obtain the conversion ratio corresponding to the first vector.
Optionally, referring to Fig. 7, the display module 603 includes:
a ninth determining submodule 6031, configured to determine the angle between the two target vectors corresponding to the same body part in the first keypoint frame and the reference keypoint frame; and
a display submodule 6032, configured to display, when the angle is greater than a preset angle, action guidance information for adjusting the target vector in the first keypoint frame according to the angle.
The motion image processing apparatus provided in the embodiments of the present invention can implement each process implemented by the terminal in the method embodiments of Fig. 1 and Fig. 2; to avoid repetition, details are not described here again.
In the embodiments of the present invention, the terminal may, through the first determining module, determine the first keypoint frame of the first human object in the first video image, namely the learner's body frame. Through the conversion module, the terminal may convert the second keypoint frame of the second human object in the second video image, namely the coach's body frame, into a reference keypoint frame with the same proportions as the first keypoint frame; that is, the coach's body frame is converted according to the figure of the specific action learner. The terminal may then display, through the display module, action guidance information according to the difference between the first keypoint frame and the reference keypoint frame. Because guidance is displayed per body part according to the difference between the learner's body frame and the converted coach's body frame, the learner's actions can be corrected in a targeted way, which improves both the accuracy and the efficiency of action learning.
Embodiment four
Fig. 9 is a schematic diagram of the hardware structure of a terminal for implementing each embodiment of the present invention.
The terminal 900 includes, but is not limited to: a radio frequency unit 901, a network module 902, an audio output unit 903, an input unit 904, a sensor 905, a display unit 906, a user input unit 907, an interface unit 908, a memory 909, a processor 910, a power supply 911, and other components. Those skilled in the art will understand that the terminal structure shown in Fig. 9 does not limit the terminal; the terminal may include more or fewer components than shown, combine certain components, or arrange components differently. In the embodiments of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop computer, a palmtop computer, a vehicle-mounted terminal, a wearable device, a pedometer, and the like.
The processor 910 is configured to: determine a first keypoint frame of a first human object in a first video image; convert a second keypoint frame of a second human object in a second video image into a reference keypoint frame with the same proportions as the first keypoint frame, where the second video image and the first video image belong to different videos, and the first video image corresponds to the second video image in attribute; and display action guidance information according to the difference between the first keypoint frame and the reference keypoint frame.
In the embodiments of the present invention, the terminal may determine the first keypoint frame of the first human object in the first video image, namely the learner's body frame, and convert the second keypoint frame of the second human object in the second video image, namely the coach's body frame, into a reference keypoint frame with the same proportions as the first keypoint frame; that is, the coach's body frame is converted according to the learner's figure. The terminal may then display action guidance information according to the difference between the first keypoint frame and the reference keypoint frame. Because guidance is displayed per body part, the learner's actions can be corrected in a targeted way, which improves both the accuracy and the efficiency of action learning.
It should be understood that, in the embodiments of the present invention, the radio frequency unit 901 may be configured to receive and send signals during information transmission and reception or during a call; specifically, after receiving downlink data from a base station, it delivers the data to the processor 910 for processing, and it sends uplink data to the base station. Generally, the radio frequency unit 901 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 901 may also communicate with a network and other devices through a wireless communication system.
The terminal provides users with wireless broadband Internet access through the network module 902, for example helping users send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 903 may convert audio data received by the radio frequency unit 901 or the network module 902, or stored in the memory 909, into an audio signal and output it as sound. Moreover, the audio output unit 903 may also provide audio output related to a specific function performed by the terminal 900 (for example, a call signal reception sound or a message reception sound). The audio output unit 903 includes a speaker, a buzzer, a receiver, and the like.
The input unit 904 is configured to receive audio or video signals. The input unit 904 may include a graphics processing unit (GPU) 9041 and a microphone 9042. The graphics processing unit 9041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 906. The image frames processed by the graphics processing unit 9041 may be stored in the memory 909 (or another storage medium) or sent via the radio frequency unit 901 or the network module 902. The microphone 9042 may receive sound and process it into audio data. In a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 901.
The terminal 900 further includes at least one sensor 905, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor. The ambient light sensor may adjust the brightness of the display panel 9061 according to the ambient light, and the proximity sensor may turn off the display panel 9061 and/or the backlight when the terminal 900 is moved to the ear. As a motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary, which may be used to identify the terminal posture (such as landscape/portrait switching, related games, and magnetometer pose calibration) and for vibration-recognition functions (such as a pedometer or tapping). The sensor 905 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like, which are not described in detail here.
The display unit 906 is configured to display information input by the user or information provided to the user. The display unit 906 may include a display panel 9061, which may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like.
The user input unit 907 may be configured to receive input numeric or character information and generate key signal input related to user settings and function control of the terminal. Specifically, the user input unit 907 includes a touch panel 9071 and other input devices 9072. The touch panel 9071, also called a touch screen, collects touch operations by the user on or near it (such as operations by the user on or near the touch panel 9071 with a finger, a stylus, or any other suitable object or accessory). The touch panel 9071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 910, and receives and executes commands sent by the processor 910. The touch panel 9071 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface-acoustic-wave types. Besides the touch panel 9071, the user input unit 907 may also include other input devices 9072, which may include, but are not limited to, a physical keyboard, function keys (such as a volume control key and a switch key), a trackball, a mouse, and a joystick, and are not described in detail here.
Further, the touch panel 9071 may cover the display panel 9061. After detecting a touch operation on or near it, the touch panel 9071 transmits the operation to the processor 910 to determine the type of the touch event, and the processor 910 then provides a corresponding visual output on the display panel 9061 according to the type of the touch event. Although in Fig. 9 the touch panel 9071 and the display panel 9061 are two independent components implementing the input and output functions of the terminal, in some embodiments the touch panel 9071 and the display panel 9061 may be integrated to implement the input and output functions of the terminal, which is not specifically limited here.
The interface unit 908 is an interface for connecting an external apparatus to the terminal 900. For example, the external apparatus may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting an apparatus having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 908 may be configured to receive input (for example, data information or power) from an external apparatus and transmit the received input to one or more elements in the terminal 900, or to transmit data between the terminal 900 and an external apparatus.
The memory 909 may be configured to store software programs and various data. The memory 909 may mainly include a program storage area and a data storage area. The program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 909 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The processor 910 is the control center of the terminal. It connects all parts of the entire terminal through various interfaces and lines, and executes the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 909 and calling data stored in the memory 909, thereby monitoring the terminal as a whole. The processor 910 may include one or more processing units; preferably, the processor 910 may integrate an application processor, which mainly handles the operating system, user interfaces, and application programs, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 910.
The terminal 900 may also include a power supply 911 (such as a battery) supplying power to all components. Preferably, the power supply 911 may be logically connected to the processor 910 through a power management system, thereby implementing functions such as managing charging, discharging, and power consumption through the power management system.
In addition, the terminal 900 includes some functional modules that are not shown, which are not described in detail here.
Preferably, an embodiment of the present invention also provides a terminal, including a processor 910, a memory 909, and a computer program that is stored in the memory 909 and can run on the processor 910. When executed by the processor 910, the computer program implements each process of the above motion image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not described here again.
An embodiment of the present invention also provides a computer-readable storage medium storing a computer program. When executed by a processor, the computer program implements each process of the above motion image processing method embodiments and can achieve the same technical effect; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements not only includes those elements but also includes other elements not expressly listed, or also includes elements inherent to such a process, method, article, or apparatus. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for causing a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to execute the methods described in each embodiment of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the above specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those of ordinary skill in the art can also make many forms without departing from the scope protected by the purpose of the present invention and the claims, all of which fall within the protection of the present invention.

Claims (10)

1. A motion image processing method, applied to a terminal, wherein the method comprises:
determining a first keypoint frame of a first human object in a first video image;
converting a second keypoint frame of a second human object in a second video image into a reference keypoint frame with the same proportions as the first keypoint frame, wherein the second video image and the first video image belong to different videos, and the first video image corresponds to the second video image in attribute; and
displaying action guidance information according to the difference between the first keypoint frame and the reference keypoint frame.
2. The method according to claim 1, wherein before the step of determining the first keypoint frame of the first human object in the first video image, the method comprises:
determining the second keypoint frame of the second human object in the second video image.
3. The method according to claim 2, wherein the step of determining the second keypoint frame of the second human object in the second video image comprises:
starting from a first default start image in the second video, whenever it is detected that an action of the second human object in the second video is held for longer than a preset duration, determining any frame during the held action as a first key image, until all first key images in the second video are determined; and
performing human-body keypoint detection on the second video image among the first key images to obtain the second keypoint frame of the second human object in the second video image.
4. The method according to claim 3, wherein the step of determining the first keypoint frame of the first human object in the first video image comprises:
determining a target sequence number of the second video image among the first key images;
starting from a second default start image in the first video, when it is detected that an action of the first human object in the first video is held for longer than the preset duration, determining any frame during the held action as a second key image;
determining the sequence number of the second key image;
when the sequence number of the second key image is the same as the target sequence number, determining the second key image as the first video image; and
performing human-body keypoint detection on the first video image to obtain the first keypoint frame of the first human object in the first video image.
5. The method according to claim 2, wherein the step of determining the second keypoint frame of the second human object in the second video image comprises:
starting from a first default start image in the second video, selecting one first key image every preset duration, until all first key images in the second video are determined; and
performing human-body keypoint detection on the second video image among the first key images to obtain the second keypoint frame of the second human object in the second video image.
6. The method according to claim 5, wherein the step of determining the first keypoint frame of the first human object in the first video image comprises:
determining a target duration between the second video image and the first default start image;
determining, as the first video image, the video image in the first video whose duration from a second default start image equals the target duration, wherein the second default start image precedes the first video image; and
performing human-body keypoint detection on the first video image to obtain the first keypoint frame of the first human object in the first video image.
7. The method according to claim 1, wherein the step of converting the second keypoint frame of the second human object in the second video image into a reference keypoint frame with the same proportions as the first keypoint frame comprises:
Determining the conversion ratio corresponding to each first vector in the second keypoint frame;
Multiplying each first vector by its corresponding conversion ratio to obtain each second vector;
Connecting the second vectors in the connection order of the first vectors, to obtain the reference keypoint frame with the same proportions as the first keypoint frame.
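The three steps of claim 7 can be sketched as a chain of 2-D bone vectors, each scaled by its ratio and re-connected in the original order. This is a minimal illustration under the assumption that the keypoint frame is stored as an ordered list of bone vectors; the representation and ratios are hypothetical:

```python
def convert_keypoint_frame(coach_vectors, ratios, root=(0.0, 0.0)):
    """Scale each first vector of the coach's keypoint frame by its
    conversion ratio (giving the second vectors), then connect the
    second vectors in the original order to rebuild a reference frame
    with the learner's body proportions. Returns the joint positions."""
    points = [root]
    x, y = root
    for (vx, vy), r in zip(coach_vectors, ratios):
        x, y = x + vx * r, y + vy * r  # second vector = first vector * ratio
        points.append((x, y))
    return points

# Coach skeleton: two unit-length bones; the learner's bones are 20% shorter.
ref = convert_keypoint_frame([(0.0, 1.0), (0.0, 1.0)], [0.8, 0.8])
```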
8. A motion image processing apparatus, wherein the apparatus comprises:
a first determining module, configured to determine the first keypoint frame of a first human object in a first video image;
a conversion module, configured to convert the second keypoint frame of a second human object in a second video image into a reference keypoint frame with the same proportions as the first keypoint frame, wherein the second video image and the first video image belong to different videos, and the first video image corresponds to the second video image in attribute;
a display module, configured to display action-coaching information according to the difference between the first keypoint frame and the reference keypoint frame.
9. A terminal, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the motion image processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the motion image processing method according to any one of claims 1 to 7.
CN201811279509.9A 2018-10-30 2018-10-30 A kind of motion images processing method, device and terminal Pending CN109190607A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811279509.9A CN109190607A (en) 2018-10-30 2018-10-30 A kind of motion images processing method, device and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811279509.9A CN109190607A (en) 2018-10-30 2018-10-30 A kind of motion images processing method, device and terminal

Publications (1)

Publication Number Publication Date
CN109190607A true CN109190607A (en) 2019-01-11

Family

ID=64940881

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811279509.9A Pending CN109190607A (en) 2018-10-30 2018-10-30 A kind of motion images processing method, device and terminal

Country Status (1)

Country Link
CN (1) CN109190607A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110188688A (en) * 2019-05-30 2019-08-30 网易(杭州)网络有限公司 Postural assessment method and device
CN110782482A (en) * 2019-10-21 2020-02-11 深圳市网心科技有限公司 Motion evaluation method and device, computer equipment and storage medium
CN110929641A (en) * 2019-11-21 2020-03-27 三星电子(中国)研发中心 Action demonstration method and system
CN111641861A (en) * 2020-05-27 2020-09-08 维沃移动通信有限公司 Video playing method and electronic equipment
CN113033341A (en) * 2021-03-09 2021-06-25 北京达佳互联信息技术有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113033341B (en) * 2021-03-09 2024-04-19 北京达佳互联信息技术有限公司 Image processing method, device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103390174A (en) * 2012-05-07 2013-11-13 深圳泰山在线科技有限公司 Physical education assisting system and method based on human body posture recognition
CN104077094A (en) * 2013-03-25 2014-10-01 三星电子株式会社 Display device and method to display dance video
CN104598867A (en) * 2013-10-30 2015-05-06 中国艺术科技研究所 Automatic evaluation method of human body action and dance scoring system
US9690981B2 (en) * 2015-02-05 2017-06-27 Electronics And Telecommunications Research Institute System and method for motion evaluation
KR20170106737A (en) * 2016-03-14 2017-09-22 동국대학교 산학협력단 Apparatus and method for evaluating Taekwondo motion using multi-directional recognition
CN107730529A (en) * 2017-10-10 2018-02-23 上海魔迅信息科技有限公司 A kind of video actions methods of marking and system
CN108256433A (en) * 2017-12-22 2018-07-06 银河水滴科技(北京)有限公司 A kind of athletic posture appraisal procedure and system

Similar Documents

Publication Publication Date Title
CN109190607A (en) A kind of motion images processing method, device and terminal
CN107833283A (en) A kind of teaching method and mobile terminal
CN108184050A (en) A kind of photographic method, mobile terminal
CN107613131A (en) A kind of application program disturbance-free method and mobile terminal
CN107707817A (en) A kind of video capture method and mobile terminal
CN107831946A (en) Interactive button location regulation method, mobile terminal and computer-readable recording medium
CN109381165A (en) A kind of skin detecting method and mobile terminal
CN107196363A (en) Adjust method, terminal and the computer-readable recording medium of charging current
CN107908705A (en) A kind of information-pushing method, information push-delivery apparatus and mobile terminal
CN110465080A (en) Control method, apparatus, mobile terminal and the computer readable storage medium of vibration
CN109743504A (en) A kind of auxiliary photo-taking method, mobile terminal and storage medium
CN113365085B (en) Live video generation method and device
CN109461117A (en) A kind of image processing method and mobile terminal
CN107635070A (en) A kind of method of prompting message, terminal and storage medium
CN107767430A (en) One kind shooting processing method, terminal and computer-readable recording medium
CN110533651A (en) A kind of image processing method and device
CN109461124A (en) A kind of image processing method and terminal device
CN107977079A (en) A kind of screen rotation method and mobile terminal
CN108256308A (en) A kind of recognition of face solution lock control method and mobile terminal
CN109215683A (en) A kind of reminding method and terminal
CN108037885A (en) A kind of operation indicating method and mobile terminal
CN108668024A (en) A kind of method of speech processing and terminal
CN111641861B (en) Video playing method and electronic equipment
CN109819167A (en) A kind of image processing method, device and mobile terminal
CN109218527A (en) screen brightness control method, mobile terminal and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190111