CN109659006A - Facial muscle training method, device and electronic equipment - Google Patents
Info
- Publication number
- CN109659006A (application number CN201811506296.9A)
- Authority
- CN
- China
- Prior art keywords
- point group
- feature point
- difference
- frame picture
- completeness
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Biophysics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Physical Education & Sports Medicine (AREA)
- Epidemiology (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Embodiments of the present invention provide a facial muscle training method, a facial muscle training device, and an electronic device, relating to the field of virtual rehabilitation training. The method comprises: obtaining at least one feature point group of a target face, wherein each feature point group includes two feature points; calculating a current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; generating a current action completeness according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in an initial state; and determining that the current training action is completed when the current action completeness is greater than a preset action completeness threshold. The facial muscle training method, device, and electronic device provided by the embodiments of the present invention can give the user feedback on whether the current training action is completed while the user performs facial muscle training, thereby ensuring the quality of the training.
Description
Technical field
The present invention relates to the field of virtual rehabilitation training, and in particular to a facial muscle training method, a facial muscle training device, and an electronic device.
Background technique
Facial paralysis can leave a patient with a crooked mouth and deviated eyes, impairing the normal expression of emotion and even the patient's appearance, which has a strongly negative effect on the patient's mental health and hinders social interaction. Facial paralysis patients in China are numerous, the harm caused by the condition is serious, and the incidence is rising year by year. Patients occur in every age group, and with the increasing social and work pressure on young people, onset is trending younger.
If facial paralysis is discovered early and treated promptly, a full recovery is possible. Facial muscle rehabilitation training is generally active training in which the patient exercises the muscles around the eyes, forehead, mouth, and nose. The patient must persist in a certain amount of facial muscle rehabilitation training every day, performing actions such as raising the eyebrows, frowning, closing the eyes, scrunching the nose, showing the teeth, and pouting.
Summary of the invention
The purpose of the present invention is to provide a facial muscle training method, device, and electronic equipment that can give the user feedback on whether the current training action is completed while the user performs facial muscle training, thereby ensuring the quality of the training.
To achieve the above goals, the technical solutions adopted in the embodiments of the present invention are as follows:
In a first aspect, an embodiment of the invention provides a facial muscle training method. The method comprises: obtaining at least one feature point group of a target face, wherein each feature point group includes two feature points; calculating a current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; generating a current action completeness according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in an initial state; and determining that the current training action is completed when the current action completeness is greater than a preset action completeness threshold.
In a second aspect, an embodiment of the invention provides a facial muscle training device. The device comprises: a feature point group extraction module for obtaining at least one feature point group of a target face, wherein each feature point group includes two feature points; a coordinate difference calculating module for calculating a current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; an action completeness computing module for generating a current action completeness according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in an initial state; and a judging module for judging whether the current action completeness is greater than a preset action completeness threshold, wherein the current training action is determined to be completed when the current action completeness is greater than the preset action completeness threshold.
In a third aspect, an embodiment of the invention provides an electronic device comprising a memory for storing one or more programs, and a processor. When the one or more programs are executed by the processor, the above facial muscle training method is realized.
In a fourth aspect, an embodiment of the invention provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program realizes the above facial muscle training method.
Compared with the prior art, the facial muscle training method, device, and electronic equipment provided by the embodiments of the present invention obtain at least one feature point group of the target face, calculate the current coordinate difference corresponding to that feature point group in the current frame picture, generate the current action completeness from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and then judge from the current action completeness whether the user has completed the current training action. They can therefore give the user feedback on whether the current training action is completed while the user performs facial muscle training, ensuring the quality of the training.
To make the above objects, features, and advantages of the present invention clearer and more comprehensible, preferred embodiments are described in detail below with reference to the appended drawings.
Detailed description of the invention
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only certain embodiments of the present invention and are therefore not to be construed as limiting its scope. Those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative effort.
Fig. 1 shows a schematic diagram of an electronic device provided by an embodiment of the present invention;
Fig. 2 shows a schematic flow chart of a facial muscle training method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a facial feature point set distribution model;
Fig. 4 is a schematic flow chart of the sub-steps of S300 in Fig. 2;
Fig. 5 is a schematic flow chart of the sub-steps of S500 in Fig. 2;
Fig. 6 is a schematic flow chart of the sub-steps of S400 in Fig. 2;
Fig. 7 is a schematic diagram of a polygon formed by a face locating point group;
Fig. 8 is another schematic diagram of a polygon formed by a face locating point group;
Fig. 9 is a schematic flow chart of the sub-steps of S420 in Fig. 6;
Fig. 10 shows a schematic completion flow chart of the facial muscle training method provided by an embodiment of the present invention;
Fig. 11 shows a schematic diagram of a facial muscle training device provided by an embodiment of the present invention;
Fig. 12 shows a schematic diagram of the coordinate difference calculating module of the facial muscle training device provided by an embodiment of the present invention;
Fig. 13 shows a schematic diagram of the action completeness computing module of the facial muscle training device provided by an embodiment of the present invention;
Fig. 14 shows a schematic diagram of the preset action completion value update module of the facial muscle training device provided by an embodiment of the present invention;
Fig. 15 shows a schematic diagram of the action completion value updating unit of the facial muscle training device provided by an embodiment of the present invention.
In the figures: 10 - electronic device; 110 - memory; 120 - processor; 130 - storage controller; 140 - peripheral interface; 150 - radio frequency unit; 160 - communication bus/signal line; 170 - camera unit; 180 - display unit; 200 - facial muscle training device; 210 - feature point group extraction module; 220 - picture resolution adjusting module; 230 - coordinate difference calculating module; 231 - sub-coordinate difference calculating unit; 232 - current coordinate difference calculating unit; 240 - preset action completion value update module; 241 - polygon area calculating unit; 242 - action completion value updating unit; 2421 - quotient calculating subunit; 2422 - completion value updating subunit; 250 - action completeness computing module; 251 - action completion value calculating unit; 252 - action completeness calculating unit; 260 - judging module.
Specific embodiment
To make the objects, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention rather than all of them. The components of the embodiments of the present invention, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present invention provided in the accompanying drawings is not intended to limit the scope of the claimed invention, but merely represents selected embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.
It should also be noted that similar labels and letters indicate similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only to distinguish descriptions and are not to be understood as indicating or implying relative importance.
It should be noted that, in this document, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", and any other variants are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements intrinsic to such a process, method, article, or device. In the absence of further restrictions, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes it.
Some embodiments of the present invention are elaborated below with reference to the accompanying drawings. In the absence of conflict, the features in the following embodiments can be combined with each other.
The current prior art mainly uses traditional mirror therapy for facial muscle rehabilitation training: the patient places a mirror in front of himself and observes the state of his own face through it, including the details of each facial action, so as to obtain training feedback and thereby complete the rehabilitation of facial muscle function.
Facial muscle training can effectively promote the recovery of facial muscle motor function and improve the therapeutic effect of facial paralysis rehabilitation. Although the traditional mirror therapy described above is simple to operate and convenient for patients, the mirror gives the patient no feedback on whether the degree to which a training action is executed meets the requirements of rehabilitation training. Furthermore, since the mirror does not interact with the patient in any way, the training process is monotonous, the patient easily loses interest in rehabilitation training, and the training effect is poor.
Based on the above problems of the prior art, the improvement provided by the embodiments of the present invention is: after obtaining at least one feature point group of the target face and calculating the current coordinate difference corresponding to that feature point group in the current frame picture, the current action completeness is generated from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and whether the user has completed the current training action is then judged from the current action completeness.
Referring to Fig. 1, which shows a schematic structural diagram of an electronic device 10 provided by an embodiment of the present invention. In embodiments of the present invention, the electronic device 10 may be, but is not limited to, a smartphone, a personal computer (PC), a tablet computer, a laptop, a personal digital assistant (PDA), and the like. The electronic device 10 includes a memory 110, a storage controller 130, one or more processors 120 (only one is shown in the figure), a peripheral interface 140, a radio frequency unit 150, a camera unit 170, a display unit 180, and so on. These components communicate with one another through one or more communication buses/signal lines 160.
The memory 110 can be used to store software programs and modules, such as the program instructions/modules corresponding to the facial muscle training device 200 provided by the embodiments of the present invention. By running the software programs and modules stored in the memory 110, the processor 120 executes various functional applications and image processing, such as the facial muscle training method provided by the embodiments of the present invention.
The memory 110 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like.
The processor 120 can be an integrated circuit chip with signal processing capability. The processor 120 can be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a speech processor, a video processor, and the like; it can also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. It can implement or execute each method, step, and logic diagram disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor, or the processor 120 can be any conventional processor.
The peripheral interface 140 couples various input/output devices to the processor 120 and the memory 110. In some embodiments, the peripheral interface 140, the processor 120, and the storage controller 130 can be realized in a single chip. In some other embodiments of the present invention, they can each be realized by an independent chip.
The radio frequency unit 150 is used to receive and transmit electromagnetic waves and to convert between electromagnetic waves and electric signals, thereby communicating with a communication network or other equipment.
The camera unit 170 is used for shooting pictures so that the processor 120 can process the shot photos.
The display unit 180 is used to provide an image output interface for the user and to display image information so that the user can carry out facial muscle training.
It can be appreciated that the structure shown in Fig. 1 is only illustrative; the electronic device 10 may include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1. Each component shown in Fig. 1 can be realized using hardware, software, or a combination thereof.
For example, the units or devices included in the electronic device 10 described above can exist as independent equipment. In some other embodiments of the present invention, the electronic device 10 may not include the camera unit 170; instead, the electronic device 10 establishes communication with a separate camera device, which shoots the pictures, such as photos of the patient, and then sends them to the electronic device 10 through a wired or wireless network for realizing the facial muscle training method provided by the embodiments of the present invention.
Optionally, in some other embodiments of the present invention, the electronic device 10 may not include the display unit 180; instead, the electronic device 10 establishes communication with a separate display device and sends the image information during training to the display device through a wired or wireless network, so that the user completes the facial muscle training with reference to the image information.
Referring to Fig. 2, which shows a schematic flow chart of a facial muscle training method provided by an embodiment of the present invention. In embodiments of the present invention, the facial muscle training method includes the following steps:
S100: obtain at least one feature point group of the target face.
During facial muscle training, the electronic device 10 determines at least one feature point group according to the user's scene selection information, wherein each feature point group includes two feature points. The information on which two feature points each feature point group includes is preset in the electronic device 10, so that when the electronic device 10 determines at least one feature point group from the user's scene selection information, it simultaneously determines all the feature points participating in the facial muscle training.
The user's scene selection information characterizes the training scene corresponding to the at least one feature point group. Multiple training scenes, and the correspondence between each training scene and its feature point groups, are preset in the electronic device 10. When the electronic device 10 receives the training scene selected by the user, it takes that training scene as the user's scene selection information and, according to the selected training scene and the preset correspondence between each training scene and its feature point groups, determines the feature point groups corresponding to the user's scene selection information.
For example, assume that six training scenes are preset in the electronic device 10: raising the eyebrows, frowning, closing the eyes, scrunching the nose, showing the teeth, and pouting, with the following preset correspondences between training scenes and feature point groups: raising the eyebrows corresponds to feature point group 1; frowning corresponds to feature point groups 2 and 3; closing the eyes corresponds to feature point groups 4, 5, and 6; scrunching the nose corresponds to feature point group 7; showing the teeth corresponds to feature point groups 8 and 9; and pouting corresponds to feature point groups 10, 11, and 12. When the scene selection information received by the electronic device 10 is raising the eyebrows, then, combined with the preset correspondence, the at least one feature point group determined is feature point group 1; when the scene selection information received is closing the eyes, then, combined with the preset correspondence, the feature point groups determined are feature point groups 4, 5, and 6.
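The preset correspondence between training scenes and feature point groups described above can be sketched as a simple lookup table. This is an illustrative reconstruction following the example in the description, not the patent's actual implementation; the scene names are hypothetical English labels.

```python
# Hypothetical sketch of the preset correspondence between the six example
# training scenes and their feature point groups, as given in the description.
SCENE_TO_FEATURE_POINT_GROUPS = {
    "raise_eyebrows": [1],
    "frown":          [2, 3],
    "close_eyes":     [4, 5, 6],
    "scrunch_nose":   [7],
    "show_teeth":     [8, 9],
    "pout":           [10, 11, 12],
}

def select_feature_point_groups(scene_selection):
    """Return the feature point groups for the user's scene selection
    information, per the preset correspondence."""
    return SCENE_TO_FEATURE_POINT_GROUPS[scene_selection]
```

With this table, selecting the eye-closing scene yields groups 4, 5, and 6, matching the example in the text.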
S300: calculate the current coordinate difference corresponding to the at least one feature point group.
When conducting facial muscle training with the user, the electronic device 10 processes the captured pictures of the user to judge whether the user has completed the selected training scene. Therefore, after the electronic device 10 obtains the at least one feature point group, it calculates the current coordinate difference corresponding to the at least one feature point group from the current coordinate values, in the current frame picture, of all the feature points included in each feature point group. A coordinate system is established in the current frame picture, and each feature point has corresponding current coordinate values under that coordinate system.
The current coordinate values of each feature point in the current frame picture can be obtained using a feature point set model preset in the electronic device 10. For example, referring to Fig. 3, which is a schematic diagram of a facial feature point set distribution model: the distribution of all the feature points included in the facial feature point set can be obtained through the Dlib open-source library. After obtaining the at least one feature point group of the target face, the electronic device 10 combines the facial feature point set distribution model with the Dlib library to obtain the current coordinate values of each feature point in the current frame picture, from which the current coordinate difference corresponding to the at least one feature point group can be calculated.
For example, take the nose-scrunching action for training the left half of the face. Assume that in the face model shown in Fig. 3, the two feature points included in the feature point group corresponding to scrunching the nose on the left side are feature point 31 and feature point 27, whose current coordinate values in the current frame picture are D31(x31, y31) and D27(x27, y27) respectively. The current coordinate difference obtained at this time can be calculated as Δ27-31 = y27 − y31.
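The vertical coordinate difference for a two-point feature point group can be sketched as below. The example coordinates are hypothetical; in practice the landmark positions would come from a detector such as Dlib's facial landmark predictor, as the description notes.

```python
def coordinate_difference(landmarks, point_a, point_b):
    """Vertical coordinate difference of a two-point feature point group,
    e.g. delta_27_31 = y27 - y31 for the nose-scrunching example.
    `landmarks` maps feature point index -> (x, y)."""
    _, y_a = landmarks[point_a]
    _, y_b = landmarks[point_b]
    return y_a - y_b

# Hypothetical current-frame coordinates for feature points 27 and 31.
current_landmarks = {27: (120, 95), 31: (118, 130)}
delta = coordinate_difference(current_landmarks, 27, 31)  # 95 - 130 = -35
```

Only the y-coordinates enter the difference here, matching the Δ27-31 = y27 − y31 example; other feature point groups in the patent may of course use other axes.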
Optionally, in some application scenes of the embodiments of the present invention, the at least one obtained feature point group includes at least two feature point groups; for instance, in the example above, frowning corresponds to feature point groups 2 and 3, and closing the eyes corresponds to feature point groups 4, 5, and 6. Therefore, as one implementation, referring to Fig. 4, which is a schematic flow chart of the sub-steps of S300 in Fig. 2, in embodiments of the present invention S300 includes the following sub-steps:
S310: calculate separately the coordinate difference corresponding to each feature point group among the at least two feature point groups.
When the electronic device 10 obtains at least two feature point groups of the target face according to the user's scene selection information, it first calculates separately the coordinate difference corresponding to each of the at least two feature point groups. For example, in the example above, when frowning is trained, the determined feature point groups include feature point group 2 and feature point group 3, so the coordinate difference Δ2 corresponding to feature point group 2 and the coordinate difference Δ3 corresponding to feature point group 3 are first calculated separately.
S320: generate the current coordinate difference according to the coordinate differences corresponding to all the feature point groups.
In the above example, when the determined feature point groups include feature point group 2 and feature point group 3, and the coordinate difference Δ2 corresponding to feature point group 2 and the coordinate difference Δ3 corresponding to feature point group 3 have been calculated separately, the electronic device 10 then generates the current coordinate difference from those coordinate differences Δ2 and Δ3.
Optionally, as one implementation, the current coordinate difference can be generated by taking the arithmetic mean of the coordinate differences corresponding to all the feature point groups; in the above example, the current coordinate difference Δ = (Δ2 + Δ3) / 2.
It is worth noting that, in some other embodiments of the present invention, the current coordinate difference can also be generated by taking the geometric mean; in the above example, the current coordinate difference Δ = √(Δ2 · Δ3).
Optionally, when the data volume of the current frame picture obtained by the electronic device 10 is large, the efficiency with which the electronic device 10 calculates the current coordinate difference decreases. Therefore, as one implementation, before S300 is executed, the method further includes:
S200: reduce the resolution of the current frame picture.
Before calculating the current coordinate difference of the at least one feature point group in the current frame picture, the electronic device 10 first reduces the resolution of the current frame picture, thereby reducing its data size, so that the reduced-resolution current frame picture is used in S300 to calculate the current coordinate difference corresponding to the at least one feature point group, improving the operation speed of the electronic device 10.
Optionally, as one implementation, the electronic device 10 can reduce the resolution of the current frame picture by halving its pixel dimensions in the width and height directions.
Also optionally, as one implementation, when conducting facial muscle training with the user, the electronic device 10 can take only one frame out of every two consecutive frames for image processing and discard the other frame, to increase the speed at which the electronic device 10 processes consecutive frames.
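The two speed-ups described above — halving the picture's width and height, and processing only every second frame — can be sketched in a library-agnostic way. The nested-list image is purely for illustration; a real implementation would more likely resize with an image library such as OpenCV.

```python
def halve_resolution(pixels):
    """Halve a picture's resolution by keeping every second pixel in both
    the height and width directions (nearest-neighbour downsampling)."""
    return [row[::2] for row in pixels[::2]]

def frames_to_process(frames):
    """Of every two consecutive frames, keep one for image processing
    and discard the other."""
    return frames[::2]

# A hypothetical 4x4 single-channel picture.
picture = [
    [10, 11, 12, 13],
    [20, 21, 22, 23],
    [30, 31, 32, 33],
    [40, 41, 42, 43],
]
small = halve_resolution(picture)                    # [[10, 12], [30, 32]]
kept = frames_to_process(["f0", "f1", "f2", "f3"])   # ["f0", "f2"]
```

Together the two steps cut the per-frame pixel count to a quarter and the frame rate in half, which is consistent with the efficiency motivation given for S200.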
Based on the above design, the facial muscle training method provided by the embodiments of the present invention reduces the resolution of the current frame picture and uses the reduced-resolution picture to calculate the current coordinate difference corresponding to the at least one feature point group in the current frame picture, thereby reducing the amount of data to be computed per picture during facial muscle training and improving the picture processing speed.
Continuing with Fig. 2: S500, generate the current action completeness according to the current coordinate difference, the initial coordinate difference, and the preset action completion value.
Before carrying out facial muscle training to user, electronic equipment 10 is it needs to be determined that initial coordinate difference, the initial coordinate are poor out
Value characterizes the coordinate difference of at least one above-mentioned feature point group in the initial state, which, which can be understood as user, is not having
There is the face state before carrying out facial muscle training, for example the content for assuming that active user carries out facial muscle training is lift eyebrow, original state
The face state before eyebrow is then lifted for user, generally user is in face state when looking natural.
Optionally, as an implementation, the initial coordinate difference is the coordinate difference of the at least one feature point group in a default frame picture. That is, before the user performs facial muscle training with the electronic device 10, the electronic device 10 captures a default frame picture representing the user's facial state in the initial state, and then calculates the coordinate difference of the at least one feature point group in that default frame picture as the initial coordinate difference.
Also, optionally, as an implementation, each time facial muscle training is performed, the electronic device 10 may obtain a new default frame picture for calculating the initial coordinate difference. For example, in one cycle, when the electronic device 10 trains the user to raise the eyebrows, it uses a first default frame picture to calculate the initial coordinate difference for eyebrow-raising training; in another cycle, when the electronic device 10 trains the user to scrunch the nose, it uses a second default frame picture to calculate the initial coordinate difference for nose-scrunch training.
It is worth noting that in some other embodiments of the present invention, a fixed value predetermined in the electronic device 10 may also be used as the initial coordinate difference. In that case, across all training cycles, the initial coordinate difference is the same for the same training scenario, for example across multiple cycles of nose-scrunch training. For different training scenarios, for example nose-scrunch training versus teeth-baring training, different initial coordinate differences may also be set, depending on the values the user sets for the different training scenarios.
Also, after obtaining the current coordinate difference, the electronic device 10 combines it with the initial coordinate difference and the preset action completion value to calculate and generate the current action completeness, which characterizes the user's degree of completion of the current facial muscle training action.
Optionally, as an implementation, referring to Fig. 5, Fig. 5 is a schematic flowchart of the sub-steps of S500 in Fig. 2. In an embodiment of the present invention, S500 includes the following sub-steps:
S510, generating a current action completion value according to the current coordinate difference and the initial coordinate difference.
Optionally, as an implementation, the difference between the current coordinate difference and the initial coordinate difference is calculated as the current action completion value. That is, the current action completion value is Dt = |Δt − Δ0|, where Dt is the current action completion value, Δt is the current coordinate difference, and Δ0 is the initial coordinate difference.
It is worth noting that in some other embodiments of the present invention, the current action completion value may also be obtained from the current coordinate difference and the initial coordinate difference in other ways, for example by calculating the quotient of the current coordinate difference and the initial coordinate difference as the current action completion value.
S520, generating the current action completeness according to the current action completion value and the preset action completion value.
Optionally, as an implementation, the quotient of the current action completion value and the preset action completion value is calculated as the current action completeness. That is, the current action completeness is Vt = Dt / D0, where Vt is the current action completeness, Dt is the current action completion value, and D0 is the preset action completion value.
It is worth noting that in some other embodiments of the present invention, the current action completeness may also be obtained from the current action completion value and the preset action completion value in other ways, for example by calculating the difference between the current action completion value and the preset action completion value as the current action completeness.
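The sub-steps S510 and S520 can be sketched directly from the formulas above. The Euclidean-distance realization of the coordinate difference is an assumption for illustration; the patent only requires a coordinate difference between the two feature points of a group.

```python
import math

def coordinate_difference(p, q):
    """One assumed realization of a group's coordinate difference:
    the Euclidean distance between its two feature points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def action_completion_value(delta_t, delta_0):
    """S510: Dt = |delta_t - delta_0|."""
    return abs(delta_t - delta_0)

def action_completeness(d_t, d_0):
    """S520: Vt = Dt / D0 (quotient with the preset completion value)."""
    return d_t / d_0
```

For example, if the eyebrow-to-eye distance grows from Δ0 = 4.0 to Δt = 12.0 pixels and the preset action completion value is D0 = 16.0, the current action completeness is 8.0 / 16.0 = 0.5.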
In general, for different users, or for the same user at different moments, the distance between the electronic device 10 and the user's face may change over time, while the preset action completion value stored in the electronic device 10 is a fixed value. When the distance between the electronic device 10 and the user's face changes, the size at which the target face appears may differ between frame pictures, especially between pictures used in different training cycles. As a result, the current action completeness could be affected by the distance between the electronic device 10 and the user's face.
Therefore, as an implementation, before S500 is executed, the facial muscle training method further includes:
S400, updating the preset action completion value according to a face locating point group obtained in the current frame picture.
During facial muscle training, the electronic device 10 also selects a face locating point group, which includes at least two feature points; for example, it may include two, three, four, five, or even more feature points. The preset action completion value is updated according to the position information, in the current frame picture, of all feature points included in the face locating point group, so that the updated action completion value is used to calculate and generate the current action completeness, thereby reducing the influence of the distance between the electronic device 10 and the user's face on the current action completeness.
Optionally, as an implementation, referring to Fig. 6, Fig. 6 is a schematic flowchart of the sub-steps of S400 in Fig. 2. In an embodiment of the present invention, S400 includes the following sub-steps:
S410, calculating the current polygon area formed in the current frame picture by all feature points included in the face locating point group.
When the preset action completion value is updated, all feature points included in the face locating point group form a polygon. Since each feature point has a unique coordinate under the coordinate system established in the current frame picture, the current polygon area corresponding to the polygon formed by all feature points of the face locating point group is calculated according to the respective coordinate values of the feature points.
The feature points included in the face locating point group may be selected as preset feature points. For example, in the schematic diagram shown in Fig. 3, feature point 0 and feature point 8 may be preset as the combination forming the face locating point group; alternatively, feature point 1, feature point 9, and feature point 26 may be preset as the face locating point group. Any combination of at least two feature points that determines the group is acceptable; for example, in Fig. 3, feature point 3, feature point 5, feature point 24, and feature point 15 may also be selected to form the face locating point group, or even more feature points may be included.
The polygon formed by all feature points of the face locating point group may be constructed as shown in Fig. 7. When the face locating point group includes only two feature points, for example feature point 0 and feature point 8 in Fig. 7, suppose the coordinate of feature point 0 in the current frame picture is D0(x0, y0) and the coordinate of feature point 8 is D8(x8, y8). A line X0 parallel to the x-axis and a line Y0 parallel to the y-axis can be drawn through feature point 0, and similarly a line X8 parallel to the x-axis and a line Y8 parallel to the y-axis through feature point 8. The rectangle enclosed by X0, Y0, X8, and Y8 then serves as the polygon formed in the current frame picture by all feature points of the face locating point group.
Of course, in the schematic diagram shown in Fig. 7, the polygon may also be constructed in other ways. For example, connecting feature point 0 and feature point 8 yields a line l0-8, and the triangle enclosed by l0-8, Y0, and X8 serves as the polygon formed in the current frame picture by all feature points of the face locating point group.
When the face locating point group includes more than two feature points, for example three, the polygon may be constructed as shown in Fig. 8. Suppose the face locating point group includes three feature points, namely feature point 0, feature point 8, and feature point 16, whose coordinates in the current frame picture are D0(x0, y0), D8(x8, y8), and D16(x16, y16) respectively. A line X0 parallel to the x-axis and a line Y0 parallel to the y-axis are likewise drawn through feature point 0, a line X8 parallel to the x-axis through feature point 8, and a line Y16 parallel to the y-axis through feature point 16. The rectangle enclosed by X0, Y0, X8, and Y16 then serves as the polygon formed in the current frame picture by all feature points of the face locating point group.
In the schematic diagram shown in Fig. 8, the triangle formed by connecting feature point 0, feature point 8, and feature point 16 in sequence may also serve as the polygon formed in the current frame picture by all feature points of the face locating point group.
It should be understood that the above polygon constructions are merely examples; other schemes may also be used. For example, coordinate averages of several feature points may be taken to obtain two average anchor coordinates, and the rectangle they define serves as the polygon of the face locating point group. Any scheme is acceptable as long as all feature points included in the face locating point group determine a unique polygon.
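The two polygon-area computations described above (the axis-aligned rectangle of Fig. 7/Fig. 8, and the triangle obtained by connecting the points in sequence) can be sketched as follows. The shoelace formula used for the sequentially connected polygon is a standard technique the patent does not name; the function names are illustrative.

```python
def bounding_rectangle_area(points):
    """Area of the rectangle enclosed by axis-parallel lines drawn
    through the feature points, as in Fig. 7 / Fig. 8."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) * (max(ys) - min(ys))

def shoelace_area(points):
    """Area of the polygon obtained by connecting the feature points
    in sequence (the triangle variant of Fig. 8), via the shoelace
    formula."""
    n = len(points)
    s = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```

Either area works for S420; what matters is only that the same construction is used for the current frame picture and for the default frame picture, so the ratio of the two areas reflects the scale change.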
S420, updating the preset action completion value according to the current polygon area and an initial polygon area.
As described above, the current polygon area is the area of the polygon formed in the current frame picture by all feature points of the face locating point group. As with the initial coordinate difference, before facial muscle training begins the electronic device 10 also needs to determine an initial polygon area, which is the area of the polygon formed in the default frame picture by all feature points of the face locating point group. The default frame picture here may be the same picture used to calculate the initial coordinate difference, and the initial polygon is constructed in the same way as the current polygon. For example, if the current polygon is the triangle formed by connecting the three feature points in sequence in the current frame picture, as in Fig. 8, then the initial polygon is the triangle formed by connecting the same three feature points in sequence in the default frame picture.
Accordingly, after the current polygon area is calculated, the preset action completion value is updated according to the obtained current polygon area and the initial polygon area.
Optionally, as an implementation, referring to Fig. 9, Fig. 9 is a schematic flowchart of the sub-steps of S420 in Fig. 6. In an embodiment of the present invention, S420 includes the following sub-steps:
S421, calculating the quotient of the current polygon area and the initial polygon area.
S422, updating the preset action completion value according to the calculated quotient.
As an implementation, in an embodiment of the present invention, when updating the preset action completion value, the square root of the calculated quotient may first be taken, and the preset action completion value is then updated by that square root. That is, the formula for updating the preset action completion value is: D0' = D0 · √(Sn / S0),
where Sn is the current polygon area, S0 is the initial polygon area, D0 is the preset action completion value, and D0' is the updated action completion value.
It should be understood that in some other embodiments of the present invention, the preset action completion value may also be updated in other ways. For example, the product of the preset action completion value and the quotient of the current polygon area and the initial polygon area may directly serve as the updated action completion value; or the quotient of the current polygon area and the initial polygon area may be multiplied by a preset proportionality coefficient to update the preset action completion value.
Based on the above design, the facial muscle training method provided by the embodiment of the present invention updates the preset action completion value according to the position information of the face locating point group in the current frame picture, and then uses the updated action completion value to calculate and generate the current action completeness, thereby reducing the influence of the distance between the electronic device 10 and the user's face on the current action completeness and improving facial muscle training accuracy.
Please continue to refer to Fig. 2. In S600, it is judged whether the current action completeness is greater than a preset action completeness threshold; if yes, it is determined that the current training action is completed; if no, a subsequent frame picture of the current frame picture is taken as the new current frame picture, and S300 is executed again.
The electronic device 10 compares the current action completeness with the preset action completeness threshold to judge whether the current action completeness is greater than the threshold. When it is, the user's current facial muscle training action is deemed completed; the current action can be ended, so that the cycle of the next facial muscle training action is executed, or the training task is ended. Conversely, when the current action completeness is less than or equal to the preset action completeness threshold, the user's current facial muscle training action is not yet completed and training needs to continue; a subsequent frame picture of the current frame picture, such as the frame immediately after the current frame picture or the second frame after it, is then taken as the new current frame picture, and S300 is executed again.
It is worth noting that, in an embodiment of the present invention, when the facial muscle training method includes S200 and it is determined that the current action completeness is less than or equal to the preset action completeness threshold, the subsequent frame picture of the current frame picture is taken as the new current frame picture and execution continues from S200.
Based on the above design, the facial muscle training method provided by the embodiment of the present invention takes at least one feature point group of the target face, calculates the current coordinate difference corresponding to the at least one feature point group in the current frame picture, generates the current action completeness from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and then judges from the current action completeness whether the user has completed the current training action. Compared with the prior art, it can feed back, while the user performs facial muscle training, whether the current training action has been completed, ensuring facial muscle training quality.
Based on the facial muscle training method provided by the above embodiments, a possible complete implementation of the method flow is given below. Referring to Fig. 10, Fig. 10 shows a schematic complete flowchart of a facial muscle training method provided by an embodiment of the present invention, comprising all the steps provided by the above embodiments.
Please refer to Fig. 11. Fig. 11 shows a schematic structural diagram of a facial muscle training device 200 provided by an embodiment of the present invention. In an embodiment of the present invention, the facial muscle training device 200 includes a feature point group extraction module 210, a coordinate difference calculation module 230, an action completeness calculation module 250, and a judgment module 260.
The feature point group extraction module 210 is configured to obtain at least one feature point group of a target face, wherein each feature point group includes two feature points.
The coordinate difference calculation module 230 is configured to calculate the current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture.
Optionally, as an implementation, please refer to Fig. 12, which shows a schematic diagram of the coordinate difference calculation module 230 of a facial muscle training device 200 provided by an embodiment of the present invention. In an embodiment of the present invention, the coordinate difference calculation module 230 includes a sub-coordinate difference calculation unit 231 and a current coordinate difference calculation unit 232.
The sub-coordinate difference calculation unit 231 is configured to separately calculate the coordinate difference corresponding to each feature point group among at least two feature point groups.
The current coordinate difference calculation unit 232 is configured to generate the current coordinate difference according to the coordinate differences corresponding to all the feature point groups.
Please continue to refer to Fig. 11. The action completeness calculation module 250 is configured to generate the current action completeness according to the current coordinate difference, the initial coordinate difference, and the preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in the initial state.
Optionally, as an implementation, please refer to Fig. 13, which shows a schematic diagram of the action completeness calculation module 250 of a facial muscle training device 200 provided by an embodiment of the present invention. In an embodiment of the present invention, the action completeness calculation module 250 includes an action completion value calculation unit 251 and an action completeness calculation unit 252.
The action completion value calculation unit 251 is configured to generate the current action completion value according to the current coordinate difference and the initial coordinate difference.
The action completeness calculation unit 252 is configured to generate the current action completeness according to the current action completion value and the preset action completion value.
Please continue to refer to Fig. 11. The judgment module 260 is configured to judge whether the current action completeness is greater than the preset action completeness threshold, wherein when the current action completeness is greater than the preset action completeness threshold, it is determined that the current training action is completed; when the current action completeness is less than or equal to the preset action completeness threshold, the subsequent frame picture of the current frame picture is taken as the new current frame picture, and the coordinate difference calculation module 230 re-executes the calculation of the current coordinate difference corresponding to the at least one feature point group.
Optionally, as an implementation, please continue to refer to Fig. 11. In an embodiment of the present invention, the facial muscle training device 200 further includes a picture resolution adjustment module 220, configured to reduce the resolution of the current frame picture so that the reduced-resolution current frame picture is used to calculate the current coordinate difference corresponding to the at least one feature point group.
Optionally, as an implementation, please continue to refer to Fig. 11. In an embodiment of the present invention, the facial muscle training device 200 further includes a preset action completion value update module 240, configured to update the preset action completion value according to the face locating point group obtained in the current frame picture, so that the updated action completion value is used to calculate and generate the current action completeness, wherein the face locating point group includes at least two feature points.
Optionally, as an implementation, please refer to Fig. 14, which shows a schematic diagram of the preset action completion value update module 240 of a facial muscle training device 200 provided by an embodiment of the present invention. In an embodiment of the present invention, the preset action completion value update module 240 includes a polygon area calculation unit 241 and an action completion value updating unit 242.
The polygon area calculation unit 241 is configured to calculate the current polygon area formed in the current frame picture by all feature points included in the face locating point group.
The action completion value updating unit 242 is configured to update the preset action completion value according to the current polygon area and the initial polygon area, wherein the initial polygon area is the polygon area formed in the default frame picture by all feature points included in the face locating point group.
Optionally, as an implementation, please refer to Fig. 15, which shows a schematic diagram of the action completion value updating unit 242 of a facial muscle training device 200 provided by an embodiment of the present invention. In an embodiment of the present invention, the action completion value updating unit 242 includes a quotient calculation subunit 2421 and a completion value update subunit 2422.
The quotient calculation subunit 2421 is configured to calculate the quotient of the current polygon area and the initial polygon area.
The completion value update subunit 2422 is configured to update the preset action completion value according to the calculated quotient.
Optionally, the functions of the facial muscle training device 200 in this embodiment may be implemented by the above-mentioned electronic device 10. For example, the relevant data, instructions, and functional modules involved in the above embodiments are stored in the memory 110, and after they are executed by the processor 120, the facial muscle training method of the above embodiments is realized.
In the embodiments provided in this application, it should be understood that the disclosed device and method may also be implemented in other ways. The device embodiments described above are merely exemplary. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architecture, functions, and operation of the device, method, and computer program product according to embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two consecutive boxes may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the existing technology, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method of the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), or a magnetic or optical disk.
In conclusion a kind of facial muscle training method, device provided by the embodiment of the present invention and electronic equipment, by by mesh
At least one feature point group for marking face, it is corresponding current in present frame picture at least one feature point group is calculated
After coordinate difference, current action is generated by changing coordinates difference and initial coordinate difference and preset movement completion value and is completed
Degree, and then judge whether user completes current training action, compared with the prior art, Neng Gou by the current action completeness
When user carries out facial muscle training, feed back whether current training action is completed, it is ensured that facial muscle training quality;Also by reducing present frame
The resolution ratio of picture, so as to reduce the present frame picture after resolution ratio for calculating at least one feature point group in present frame picture
In corresponding changing coordinates difference, thus to the data calculation amount of present frame picture when reducing facial muscle training, and then promoted
To the processing speed of picture when facial muscle training;Location information also by foundation in face locating point group present frame picture,
To update preset movement completion value, and then current action completion is generated for calculating using the updated movement completion value
Degree can reduce the influence of the distance between electronic equipment 10 and user face to current action completeness, promote facial muscle training
Accuracy.
The foregoing is merely a preferred embodiment of the present invention and is not intended to limit the present invention; for those skilled in the art, various modifications and variations of the present invention are possible. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and the present invention may be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, from whichever point of view, the present embodiments are to be considered illustrative and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is intended that all variations falling within the meaning and scope of the equivalent elements of the claims be included within the present invention. Any reference signs in the claims shall not be construed as limiting the claims involved.
Claims (13)
1. A facial muscle training method, characterized in that the method comprises:
obtaining at least one feature point group of a target face, wherein each feature point group includes two feature points;
calculating a current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in a current frame picture;
generating a current action completeness according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in an initial state; and
when the current action completeness is greater than a preset action completeness threshold, determining that a current training action is completed.
2. The method according to claim 1, characterized in that the at least one feature point group includes at least two feature point groups, and the step of calculating the current coordinate difference corresponding to the at least one feature point group comprises:
separately calculating a coordinate difference corresponding to each feature point group among the at least two feature point groups; and
generating the current coordinate difference according to the coordinate differences corresponding to all the feature point groups.
3. The method according to claim 1 or 2, characterized in that the at least one feature point group is determined according to scene selection information of a user, wherein the scene selection information of the user characterizes the training scene corresponding to the at least one feature point group.
4. The method according to claim 1, characterized in that before the step of calculating the current coordinate difference corresponding to the at least one feature point group, the method further comprises:
reducing the resolution of the current frame picture, so that the current frame picture with reduced resolution is used for calculating the current coordinate difference corresponding to the at least one feature point group.
5. The method according to claim 4, characterized in that the step of reducing the resolution of the current frame picture comprises:
reducing the pixel dimensions of the current frame picture in the width direction and the height direction by half, so as to reduce the resolution of the current frame picture.
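A toy sketch of the downscaling in claims 4-5, using a nested-list "frame" so it stays self-contained. Nearest-neighbour sampling (keeping every second row and column) is an assumption; the claims do not fix the resampling method.

```python
def downscale_half(frame):
    # Claims 4-5 sketch: halve the pixel count in both the width and height
    # directions by keeping every second row and every second column.
    return [row[::2] for row in frame[::2]]

frame = [[0] * 8 for _ in range(6)]   # toy 8x6 single-channel frame
small = downscale_half(frame)
print(len(small[0]), len(small))      # 4 3
```

In practice this would be done with an image library's resize routine; the point of the claim is only that landmark extraction then runs on a quarter as many pixels.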
6. The method according to claim 1, characterized in that the step of generating the current action completeness according to the current coordinate difference, the initial coordinate difference, and the preset action completion value comprises:
generating a current action completion value according to the current coordinate difference and the initial coordinate difference; and
generating the current action completeness according to the current action completion value and the preset action completion value.
7. The method according to claim 1, characterized in that before the step of generating the current action completeness according to the current coordinate difference, the initial coordinate difference, and the preset action completion value, the method further comprises:
updating the preset action completion value according to a face positioning point group obtained from the current frame picture, so that the updated action completion value is used for calculating the current action completeness, wherein the face positioning point group comprises at least two feature points.
8. the method for claim 7, which is characterized in that the face locating that the foundation is obtained in the present frame picture
The step of point group, the update preset movement completion value, comprising:
Calculate the current polygon that all characteristic points that the face locating point group includes are constituted in the present frame picture
Area;
According to the current polygon area and initial area of a polygon, the preset movement completion value is updated, wherein institute
All characteristic points that initial area of a polygon includes by the face locating point group are stated to constitute in the default frame picture
Area of a polygon.
9. The method according to claim 8, characterized in that the step of updating the preset action completion value according to the current polygon area and the initial polygon area comprises:
calculating the quotient of the current polygon area and the initial polygon area; and
updating the preset action completion value according to the calculated quotient.
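A sketch of the scale normalization in claims 8-9: the polygon spanned by the face positioning points grows as the face approaches the camera, and the area quotient rescales the preset completion value accordingly. Multiplying the preset value by the raw quotient is an assumption; the claim only says the update uses the quotient. The point coordinates are hypothetical.

```python
def polygon_area(points):
    # Shoelace formula for the area of the polygon spanned by the face
    # positioning points of claim 8 (vertices given in order).
    total = 0.0
    for i, (x1, y1) in enumerate(points):
        x2, y2 = points[(i + 1) % len(points)]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def update_completion(preset_completion, current_pts, initial_pts):
    # Claim 9 sketch: scale the preset action completion value by the quotient
    # of the current and initial polygon areas, compensating for the face
    # moving nearer to or farther from the camera.
    return preset_completion * (polygon_area(current_pts) / polygon_area(initial_pts))

initial_pts = [(0, 0), (10, 0), (10, 10), (0, 10)]   # area 100
current_pts = [(0, 0), (20, 0), (20, 20), (0, 20)]   # area 400: face closer
print(update_completion(25.0, current_pts, initial_pts))   # 100.0
```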
10. The method according to claim 1, characterized in that the method further comprises:
when the current action completeness is less than or equal to the preset action completeness threshold, taking the frame picture following the current frame picture as a new current frame picture, and continuing to execute the step of calculating the current coordinate difference corresponding to the at least one feature point group.
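The per-frame loop of claims 1 and 10 can be sketched end to end as below. Landmark detection itself is outside the sketch; each frame is represented as a dict of hypothetical named landmark coordinates, and the Euclidean-distance and absolute-change readings from the earlier sketches are reused as assumptions.

```python
import math

def run_training(frames, pair, initial_diff, preset_completion, threshold):
    # Claims 1 and 10 sketch: evaluate successive frames; if the action's
    # completeness exceeds the threshold the action is completed, otherwise
    # move on to the next frame picture.
    for index, landmarks in enumerate(frames):
        (ax, ay), (bx, by) = landmarks[pair[0]], landmarks[pair[1]]
        diff = math.hypot(ax - bx, ay - by)
        completeness = abs(diff - initial_diff) / preset_completion
        if completeness > threshold:
            return index        # training action completed on this frame
    return None                 # action never completed in the clip

frames = [
    {"mouth_l": (120, 200), "mouth_r": (180, 200)},   # diff 60: at rest
    {"mouth_l": (115, 200), "mouth_r": (185, 200)},   # diff 70: 0.4 complete
    {"mouth_l": (105, 200), "mouth_r": (195, 200)},   # diff 90: 1.2 complete
]
print(run_training(frames, ("mouth_l", "mouth_r"), 60.0, 25.0, 0.8))   # 2
```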
11. The method according to claim 1, characterized in that the initial coordinate difference is the coordinate difference of the at least one feature point group in a preset frame picture.
12. A facial muscle training apparatus, characterized in that the apparatus comprises:
a feature point group extraction module, configured to obtain at least one feature point group of a target face, wherein each feature point group comprises two feature points;
a coordinate difference calculation module, configured to calculate a current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in a current frame picture;
an action completeness calculation module, configured to generate a current action completeness according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in an initial state; and
a judgment module, configured to judge whether the current action completeness is greater than a preset action completeness threshold, wherein when the current action completeness is greater than the preset action completeness threshold, it is determined that the current training action is completed.
13. An electronic device, characterized by comprising:
a memory, configured to store one or more programs; and
a processor;
wherein when the one or more programs are executed by the processor, the method according to any one of claims 1-11 is implemented.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811506296.9A CN109659006B (en) | 2018-12-10 | 2018-12-10 | Facial muscle training method and device and electronic equipment |
PCT/CN2019/124202 WO2020119665A1 (en) | 2018-12-10 | 2019-12-10 | Facial muscle training method and apparatus, and electronic device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811506296.9A CN109659006B (en) | 2018-12-10 | 2018-12-10 | Facial muscle training method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109659006A true CN109659006A (en) | 2019-04-19 |
CN109659006B CN109659006B (en) | 2021-03-23 |
Family
ID=66113947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811506296.9A Active CN109659006B (en) | 2018-12-10 | 2018-12-10 | Facial muscle training method and device and electronic equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109659006B (en) |
WO (1) | WO2020119665A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---|
WO2020119665A1 (en) * | 2018-12-10 | 2020-06-18 | Shenzhen Institutes of Advanced Technology | Facial muscle training method and apparatus, and electronic device |
CN113327247A (en) * | 2021-07-14 | 2021-08-31 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Facial nerve function evaluation method and device, computer equipment and storage medium |
WO2023284067A1 (en) * | 2021-07-14 | 2023-01-19 | Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences | Facial nerve function evaluation method and apparatus, and computer device and storage medium |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113837016A (en) * | 2021-08-31 | 2021-12-24 | Beijing SoYoung Technology Co., Ltd. | Cosmetic progress detection method, device, equipment and storage medium |
CN113837018A (en) * | 2021-08-31 | 2021-12-24 | Beijing SoYoung Technology Co., Ltd. | Cosmetic progress detection method, device, equipment and storage medium |
CN113837019B (en) * | 2021-08-31 | 2024-05-10 | Beijing SoYoung Technology Co., Ltd. | Cosmetic progress detection method, device, equipment and storage medium |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140023269A1 (en) * | 2012-07-17 | 2014-01-23 | Samsung Electronics Co., Ltd. | Feature descriptor for robust facial expression recognition |
CN103917272A (en) * | 2011-09-15 | 2014-07-09 | Sigma Instruments Holdings, LLC | System and method for treating skin and underlying tissues for improved health, function and/or appearance |
CN104331685A (en) * | 2014-10-20 | 2015-02-04 | Shanghai Dianji University | Non-contact active calling method |
CN108211241A (en) * | 2017-12-27 | 2018-06-29 | Huashan Hospital Affiliated to Fudan University | Facial muscle rehabilitation training system based on mirror visual feedback |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104683692B (en) * | 2015-02-04 | 2017-10-17 | Guangdong OPPO Mobile Telecommunications Corp., Ltd. | Continuous shooting method and device |
CN105678702B (en) * | 2015-12-25 | 2018-10-19 | Beijing Institute of Technology | Face image sequence generation method and device based on feature tracking |
CN107169397B (en) * | 2016-03-07 | 2022-03-01 | Canon Inc. | Feature point detection method and device, image processing system and monitoring system |
CN106980815A (en) * | 2017-02-07 | 2017-07-25 | Wang Jun | Objective facial paralysis evaluation method based on supervised H-B grade scoring |
CN107633206B (en) * | 2017-08-17 | 2018-09-11 | Ping An Technology (Shenzhen) Co., Ltd. | Eyeball motion capture method, device and storage medium |
CN108460345A (en) * | 2018-02-08 | 2018-08-28 | University of Electronic Science and Technology of China | Facial fatigue detection method based on facial key point localization |
CN109659006B (en) * | 2018-12-10 | 2021-03-23 | Shenzhen Institutes of Advanced Technology | Facial muscle training method and device and electronic equipment |
- 2018-12-10: CN application CN201811506296.9A, granted as CN109659006B (Active)
- 2019-12-10: WO application PCT/CN2019/124202, published as WO2020119665A1 (Application Filing)
Non-Patent Citations (2)
Title |
---|
TSE-YU PAN et al.: "A Kinect-based oral rehabilitation system", 2015 International Conference on Orange Technologies (ICOT) *
NIE Zhihui et al.: "Evaluation of the role of facial muscle function training in the functional rehabilitation of paralyzed facial muscles in patients with facial paralysis", Journal of Zunyi Medical University *
Also Published As
Publication number | Publication date |
---|---|
CN109659006B (en) | 2021-03-23 |
WO2020119665A1 (en) | 2020-06-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109659006A (en) | Facial muscle training method, device and electronic equipment | |
US20210074005A1 (en) | Image processing method and apparatus, image device, and storage medium | |
US9195304B2 (en) | Image processing device, image processing method, and program | |
CN103208133B (en) | The method of adjustment that in a kind of image, face is fat or thin | |
CN110096156B (en) | Virtual reloading method based on 2D image | |
CN102567716B (en) | Face synthetic system and implementation method | |
CN107452049B (en) | Three-dimensional head modeling method and device | |
CN109686418A (en) | Facial paralysis degree evaluation method, apparatus, electronic equipment and storage medium | |
KR20150114138A (en) | Method and apparatus for virtual molding SNS service | |
WO2020147796A1 (en) | Image processing method and apparatus, image device, and storage medium | |
TWI780919B (en) | Method and apparatus for processing face image, electronic device and storage medium | |
CN106709886A (en) | Automatic image retouching method and device | |
CN109949390A (en) | Image generating method, dynamic expression image generating method and device | |
Li | Wearable computer vision systems for a cortical visual prosthesis | |
CN112837427A (en) | Processing method, device and system of variable human body model and storage medium | |
CN109190562B (en) | Intelligent sitting posture monitoring method and device, intelligent lifting table and storage medium | |
CN111861822B (en) | Patient model construction method, equipment and medical education system | |
CN106773050A (en) | A kind of intelligent AR glasses virtually integrated based on two dimensional image | |
WO2020147797A1 (en) | Image processing method and apparatus, image device, and storage medium | |
KR102429627B1 (en) | The System that Generates Avatars in Virtual Reality and Provides Multiple Contents | |
WO2020147794A1 (en) | Image processing method and apparatus, image device and storage medium | |
Harari et al. | A computer-based method for the assessment of body-image distortions in anorexia-nervosa patients | |
CN109345636A (en) | The method and apparatus for obtaining conjecture face figure | |
CN115006822A (en) | Intelligent fitness mirror control system | |
CN109144452A (en) | A kind of naked eye 3D display system and method based on 3D MIcrosope image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||