WO2020119665A1 - Facial muscle training method, device and electronic equipment - Google Patents
Facial muscle training method, device and electronic equipment
- Publication number
- WO2020119665A1 (PCT/CN2019/124202)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- current
- coordinate difference
- feature point
- point group
- action completion
- Prior art date
Classifications
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Definitions
- This application belongs to the field of virtual rehabilitation training, and particularly relates to a facial muscle training method, device and electronic equipment.
- Facial paralysis can skew the patient's mouth and eyes, impair the expression of normal facial expressions, and even affect the patient's appearance, which has a severely negative impact on the patient's mental health and hinders social interaction.
- There are many facial paralysis patients in China, the harm caused by facial paralysis is very serious, and the incidence rate is rising year by year. Patients span all age groups, and with the increasing work pressure on young people, the onset is trending younger.
- Patients with facial paralysis can recover completely if the condition is discovered early and treated promptly.
- Facial muscle function rehabilitation training is generally active rehabilitation training in which the patient exercises the strength of the eyes, forehead, mouth, nose and other parts of the face. The patient needs to persist in a certain amount of such training every day, performing actions such as raising the eyebrows, frowning, closing the eyes, shrugging the nose, showing the teeth, and pouting.
- the present application provides a facial muscle training method, device and electronic equipment, which can feed back whether the current training action is completed when the user performs facial muscle training to ensure the quality of facial muscle training.
- The present application provides a facial muscle training method, the method including: acquiring at least one feature point group of a target face, wherein each of the feature point groups includes two feature points; calculating the current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; generating a current action completion degree based on the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference represents the coordinate difference of the at least one feature point group in the initial state; and when the current action completion degree is greater than the preset action completion threshold, determining that the current training action is completed.
- The technical solution adopted in the embodiment of the present application further includes: the at least one feature point group includes at least two feature point groups, and the step of calculating the current coordinate difference corresponding to the at least one feature point group includes: separately calculating the coordinate difference corresponding to each of the at least two feature point groups; and generating the current coordinate difference according to the coordinate differences corresponding to all the feature point groups.
- the technical solution adopted by the embodiment of the present application further includes: determining the at least one feature point group according to the user's scene selection information, where the user's scene selection information represents the training scene corresponding to the at least one feature point group.
- The technical solution adopted by the embodiment of the present application further includes: before the step of calculating the current coordinate difference corresponding to the at least one feature point group, the method further includes: reducing the resolution of the current frame picture, so that the reduced-resolution current frame picture is used to calculate the current coordinate difference corresponding to the at least one feature point group.
- The technical solution adopted by the embodiment of the present application further includes: the step of reducing the resolution of the current frame picture includes: halving the pixel size of the current frame picture in both the width direction and the height direction to reduce the resolution of the current frame picture.
- The technical solution adopted by the embodiment of the present application further includes: the step of generating the current action completion degree according to the current coordinate difference, the initial coordinate difference, and the preset action completion value includes: generating a current action completion value according to the current coordinate difference and the initial coordinate difference; and generating the current action completion degree according to the current action completion value and the preset action completion value.
- The technical solution adopted by the embodiment of the present application further includes: before the step of generating the current action completion degree according to the current coordinate difference, the preset initial coordinate difference, and the preset action completion value, the method further includes: updating the preset action completion value according to the face positioning point group acquired in the current frame picture, so that the updated action completion value is used to calculate the current action completion degree, wherein the face positioning point group includes at least two feature points.
- The technical solution adopted by the embodiment of the present application further includes: the step of updating the preset action completion value according to the face positioning point group acquired in the current frame picture includes: calculating the current polygon area formed in the current frame picture by all the feature points included in the face positioning point group; and updating the preset action completion value according to the current polygon area and an initial polygon area, wherein the initial polygon area is the polygon area formed in the preset frame picture by all the feature points included in the face positioning point group.
- The technical solution adopted by the embodiment of the present application further includes: the step of updating the preset action completion value according to the current polygon area and the initial polygon area includes: calculating the quotient of the current polygon area and the initial polygon area; and updating the preset action completion value according to the calculated quotient.
- The technical solution adopted by the embodiment of the present application further includes: the method further includes: when the current action completion degree is less than or equal to the preset action completion threshold, taking a subsequent frame picture of the current frame picture as the new current frame picture and continuing to execute the step of calculating the current coordinate difference corresponding to the at least one feature point group.
- the technical solution adopted in the embodiment of the present application further includes: the initial coordinate difference value is a coordinate difference value of the at least one feature point group in a preset frame picture.
- The present application provides a facial muscle training device.
- The device includes: a feature point group extraction module for acquiring at least one feature point group of a target face, wherein each of the feature point groups includes two feature points; a coordinate difference calculation module for calculating the current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; an action completion degree calculation module, configured to generate a current action completion degree according to the current coordinate difference, the initial coordinate difference, and the preset action completion value, wherein the initial coordinate difference represents the coordinate difference of the at least one feature point group in the initial state; and a judgment module for judging whether the current action completion degree is greater than the preset action completion threshold, wherein, when the current action completion degree is greater than the preset action completion threshold, it is determined that the current training action is completed.
- The present application provides an electronic device.
- The electronic device includes a memory for storing one or more programs, and a processor.
- When the one or more programs are executed by the processor, the above facial muscle training method is implemented.
- The facial muscle training method, device, and electronic equipment provided by the present application calculate, from at least one feature point group of a target face, the current coordinate difference corresponding to that feature point group in the current frame picture, then generate the current action completion degree from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and then use the current action completion degree to determine whether the user has completed the current training action. Compared with the prior art, this can feed back whether the current training action is completed while the user performs facial muscle training, ensuring the quality of the training.
- FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
- FIG. 2 is a schematic flowchart of a facial muscle training method according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of a face feature point set distribution model according to an embodiment of the present application.
- FIG. 4 is a schematic flowchart of sub-steps of S300 in FIG. 2;
- FIG. 5 is a schematic flowchart of the sub-steps of S500 in FIG. 2;
- FIG. 6 is a schematic flowchart of sub-steps of S400 in FIG. 2;
- FIG. 7 is a schematic diagram of a polygon formed by a face positioning point group according to an embodiment of the present application.
- FIG. 8 is another schematic diagram of the face positioning point group constituting a polygon in an embodiment of the present application.
- FIG. 9 is a schematic flowchart of sub-steps of S420 in FIG. 6;
- FIG. 10 is a schematic overall flowchart of a facial muscle training method according to an embodiment of the present application.
- FIG. 11 is a schematic structural diagram of a facial muscle training device according to an embodiment of the present application.
- FIG. 12 is a schematic structural diagram of a coordinate difference calculation module of a facial muscle training device according to an embodiment of the present application.
- FIG. 13 is a schematic structural diagram of an action completion degree calculation module of a facial muscle training device according to an embodiment of the present application.
- FIG. 14 is a schematic structural diagram of a preset action completion value update module of a facial muscle training device according to an embodiment of the present application.
- FIG. 15 is a schematic structural diagram of an action completion value update unit of a facial muscle training device according to an embodiment of the present application.
- In the figures: 10 - electronic device; 110 - memory; 120 - processor; 130 - storage controller; 140 - peripheral interface; 150 - radio frequency unit; 160 - communication bus/signal line; 170 - camera unit; 180 - display unit; 200 - facial muscle training device; 210 - feature point group extraction module; 220 - picture resolution adjustment module; 230 - coordinate difference calculation module; 231 - sub-coordinate difference calculation unit; 232 - current coordinate difference calculation unit; 240 - preset action completion value update module; 241 - polygon area calculation unit; 242 - action completion value update unit; 2421 - quotient value calculation subunit; 2422 - completion value update subunit; 250 - action completion degree calculation module; 251 - action completion value calculation unit; 252 - action completion degree calculation unit; 260 - judgment module.
- The current prior art method for facial muscle function rehabilitation training mainly uses traditional mirror therapy: a mirror is placed in front of the patient, who observes the state of his or her face in the mirror, obtains feedback on the training results from the specific details of the observed facial movements, and thereby completes the rehabilitation training of facial muscle function.
- Facial muscle training can effectively promote the recovery of facial muscle motor function and improve the rehabilitation treatment effect of facial paralysis.
- Although traditional mirror therapy is simple to operate and very convenient for patients to carry out, the mirror gives the patient no feedback on whether the degree of execution of the training actions meets the requirements of rehabilitation training; moreover, with mirror therapy there is no interaction between the mirror and the patient, which makes the training process monotonous and tedious, easily causes the patient to lose interest in rehabilitation training, and leads to poor rehabilitation results.
- Based on the above defects in the prior art, an improvement provided by the embodiments of the present application is: after the current coordinate difference corresponding to at least one feature point group of the target face in the current frame picture is calculated, the current action completion degree is generated from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and the current action completion degree is then used to determine whether the user has completed the current training action.
- FIG. 1 shows a schematic structural diagram of an electronic device 10 provided in an embodiment of the present application.
- The electronic device 10 may be, but is not limited to, a smartphone, a personal computer (PC), a tablet computer, a laptop computer, a personal digital assistant (PDA), and so on.
- The electronic device 10 includes a memory 110, a storage controller 130, one or more processors 120 (only one is shown in the figure), a peripheral interface 140, a radio frequency unit 150, a camera unit 170, a display unit 180, and the like. These components communicate with one another via one or more communication buses/signal lines 160.
- The memory 110 may be used to store software programs and modules, such as the program instructions/modules corresponding to the facial muscle training device 200 provided by the embodiments of the present application.
- The processor 120 runs the software programs and modules stored in the memory 110 to perform various functional applications and image processing, such as the facial muscle training method provided by the embodiments of the present application.
- The memory 110 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like.
- the processor 120 may be an integrated circuit chip with signal processing capabilities.
- The above processor 120 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a voice processor, a video processor, and the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- It can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application.
- the general-purpose processor may be a microprocessor or the processor 120 may also be any conventional processor or the like.
- The peripheral interface 140 couples various input/output devices to the processor 120 and the memory 110.
- In some embodiments, the peripheral interface 140, the processor 120, and the storage controller 130 may be implemented in a single chip; in some other embodiments of this application, they may each be implemented by an independent chip.
- the radio frequency unit 150 is used to receive and transmit electromagnetic waves, realize the mutual conversion of electromagnetic waves and electrical signals, and thus communicate with a communication network or other devices.
- the camera unit 170 is used to take pictures so that the processor 120 can process the taken pictures.
- the display unit 180 is used to provide a graphical output interface for the user and display image information for the user to perform facial muscle training.
- FIG. 1 is merely an illustration, and the electronic device 10 may further include more or fewer components than those shown in FIG. 1 or have a configuration different from that shown in FIG. 1.
- Each component shown in FIG. 1 may be implemented using hardware, software, or a combination thereof.
- For example, in some other implementations of the embodiments of the present application, the electronic device 10 may not include the camera unit 170; instead, the electronic device 10 establishes communication with a camera device for implementation.
- The camera device is used to take pictures, for example photographs of the patient, and then sends the taken pictures to the electronic device 10 through a wired or wireless network to implement the facial muscle training method provided in the embodiments of the present application.
- Optionally, in some other implementations, the electronic device 10 may not include the display unit 180; instead, the electronic device 10 establishes communication with a display device and sends the image information during training to the display device through a wired or wireless network, so that the user can refer to the image information to complete facial muscle training.
- FIG. 2 shows a schematic flowchart of a facial muscle training method provided by an embodiment of the present application.
- In the embodiment of the present application, the facial muscle training method includes the following steps:
- S100, acquire at least one feature point group of the target face.
- During facial muscle training, the electronic device 10 determines at least one feature point group according to the user's scene selection information, where each feature point group contains two feature points, and the information of the two feature points contained in each feature point group is pre-configured in the electronic device 10. When the electronic device 10 determines at least one feature point group according to the user's scene selection information, it thereby also determines all the feature points participating in facial muscle training.
- the user's scene selection information characterizes the training scene corresponding to at least one feature point group.
- In the electronic device 10, a plurality of training scenes are preset, together with the correspondence between each training scene and its feature point group(s).
- When the electronic device 10 receives the training scene selected by the user, that training scene is taken as the user's scene selection information, and the feature point group(s) corresponding to the user's scene selection information are determined according to the selected training scene and the preset correspondence between each training scene and its feature point group(s).
- For example, suppose the electronic device 10 is preset with six training scenes: raising eyebrows, frowning, closing eyes, shrugging the nose, showing teeth, and pouting, and the preset correspondence between each training scene and its feature point group(s) is: raising eyebrows corresponds to feature point group 1; frowning corresponds to feature point groups 2 and 3; closing eyes corresponds to feature point groups 4, 5 and 6; shrugging the nose corresponds to feature point group 7; showing teeth corresponds to feature point groups 8 and 9; and pouting corresponds to feature point groups 10, 11 and 12. When the electronic device 10 receives raising eyebrows as the user's scene selection information, it combines this with the preset correspondence and determines that the at least one feature point group is feature point group 1; when it receives closing eyes as the scene selection information, it determines that the feature point groups are feature point groups 4, 5 and 6.
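As an illustration only, the example correspondence above could be held in a simple lookup table; the scene names and group numbers below are the example's own labels, not identifiers defined by the patent:

```python
# Hypothetical scene-to-feature-point-group table mirroring the example above.
SCENE_GROUPS = {
    "raise_eyebrows": [1],
    "frown": [2, 3],
    "close_eyes": [4, 5, 6],
    "shrug_nose": [7],
    "show_teeth": [8, 9],
    "pout": [10, 11, 12],
}
```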
- S300, calculate the current coordinate difference corresponding to the at least one feature point group.
- During facial muscle training, the electronic device 10 processes the collected user pictures to determine whether the user has completed the selected training scene. Therefore, after the electronic device 10 acquires the at least one feature point group, it calculates the current coordinate difference corresponding to the at least one feature point group from the current coordinate values, in the current frame picture, of all the feature points contained in each feature point group. A coordinate system is established in the current frame picture, and each feature point has a corresponding current coordinate value in that coordinate system.
- the current coordinate value of each feature point in the current frame picture can be obtained by using the feature point set model preset in the electronic device 10.
- For example, please refer to FIG. 3, which is a schematic diagram of a face feature point set distribution model; the distribution of all the feature points contained in this face model feature point set can be obtained from the Dlib open-source library.
- After the electronic device 10 obtains the at least one feature point group of the target face, it can combine the face feature point set distribution model with the Dlib open-source library to obtain the current coordinate value of each feature point in the current frame picture, and can then calculate the current coordinate difference corresponding to the at least one feature point group.
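A minimal sketch of this step, assuming Dlib's standard 68-point pretrained predictor file (shape_predictor_68_face_landmarks.dat); the patent only names the Dlib library, so the helper below is our own illustration:

```python
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def landmark_coords(frame):
    """Return {index: (x, y)} for all 68 feature points in this frame,
    or None when no face is detected."""
    faces = detector(frame, 0)
    if not faces:
        return None
    shape = predictor(frame, faces[0])  # take the first detected face
    return {i: (shape.part(i).x, shape.part(i).y) for i in range(68)}
```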
- Optionally, in some application scenarios of the embodiments of the present application, the acquired at least one feature point group includes at least two feature point groups; for example, in the above example, frowning corresponds to feature point groups 2 and 3, and closing eyes corresponds to feature point groups 4, 5 and 6. Therefore, as an implementation, please refer to FIG. 4, which is a schematic flowchart of the sub-steps of S300 in FIG. 2.
- S300 includes the following sub-steps:
- S310, separately calculate the coordinate difference corresponding to each of the at least two feature point groups.
- When the electronic device 10 obtains at least two feature point groups of the target face according to the user's scene selection information, it first calculates the coordinate difference corresponding to each of the at least two feature point groups. For example, in the above example, when training frowning, the determined feature point groups include feature point group 2 and feature point group 3; at this time, the coordinate difference Δ2 corresponding to feature point group 2 and the coordinate difference Δ3 corresponding to feature point group 3 are calculated first.
- S320, generate the current coordinate difference according to the coordinate differences corresponding to all feature point groups.
- As in the above example, when the determined feature point groups include feature point group 2 and feature point group 3, and the coordinate difference Δ2 corresponding to feature point group 2 and the coordinate difference Δ3 corresponding to feature point group 3 have been calculated respectively, the electronic device 10 then generates the current coordinate difference according to the coordinate differences corresponding to feature point group 2 and feature point group 3, namely Δ2 and Δ3.
- Optionally, the current coordinate difference may be generated as the arithmetic mean of the coordinate differences corresponding to all the feature point groups; in the above example, this would be Δt = (Δ2 + Δ3) / 2. The current coordinate difference can also be generated by calculating the geometric mean of the coordinate differences corresponding to all the feature point groups.
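A sketch of combining the per-group differences, under the arithmetic-mean reading above, with the geometric mean as the mentioned alternative; function and variable names are ours:

```python
import math

def current_difference(group_diffs, use_geometric=False):
    """Combine per-group coordinate differences into the current
    coordinate difference."""
    if use_geometric:
        # geometric mean over magnitudes, since differences may be negative
        return math.prod(abs(d) for d in group_diffs) ** (1.0 / len(group_diffs))
    return sum(group_diffs) / len(group_diffs)  # arithmetic mean
```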
- Optionally, when the data size of the current frame picture obtained by the electronic device 10 is large, the efficiency with which the electronic device 10 calculates the current coordinate difference decreases. Therefore, as an implementation, before performing S300, the facial muscle training method further includes: S200, reduce the resolution of the current frame picture.
- Before calculating the current coordinate difference of the at least one feature point group in the current frame picture, the electronic device 10 first reduces the resolution of the current frame picture, thereby reducing its data size, so that the reduced-resolution current frame picture is used in S300 to calculate the current coordinate difference corresponding to the at least one feature point group, improving the computation speed of the electronic device 10.
- Optionally, as an implementation, the electronic device 10 may reduce the resolution of the current frame picture by halving its pixel size in both the width direction and the height direction.
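A sketch of the half-resolution downscale, assuming frames are OpenCV-style numpy arrays; the patent does not name a specific image library:

```python
import cv2

def downscale_half(frame):
    """Halve the frame's pixel size in both width and height (S200)."""
    h, w = frame.shape[:2]
    return cv2.resize(frame, (w // 2, h // 2), interpolation=cv2.INTER_AREA)
```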
- Also, optionally, as an implementation, when training the user, the electronic device 10 may process only one frame out of every two consecutive frames and skip the other, thereby increasing the speed at which the electronic device 10 processes consecutive multi-frame pictures.
- Based on the above design, the facial muscle training method provided by the embodiment of the present application reduces the resolution of the current frame picture so that the reduced-resolution current frame picture is used to calculate the current coordinate difference corresponding to the at least one feature point group, which reduces the amount of picture data computed during facial muscle training and thus increases the picture processing speed during training.
- Continuing with FIG. 2: S500, generate the current action completion degree according to the current coordinate difference, the initial coordinate difference, and the preset action completion value.
- Before performing facial muscle training with the user, the electronic device 10 needs to determine the initial coordinate difference, which represents the coordinate difference of the at least one feature point group in the initial state. The initial state can be understood as the user's face state before facial muscle training; for example, if the current training content is raising the eyebrows, the initial state is the user's face state before raising the eyebrows, generally the face state with a natural expression.
- Optionally, as an implementation, the initial coordinate difference is the coordinate difference of the at least one feature point group in a preset frame picture. That is, before the user uses the electronic device 10 for facial muscle training, the electronic device 10 acquires a preset frame picture as the user's face state in the initial state, and then calculates the coordinate difference of the at least one feature point group in that preset frame picture as the initial coordinate difference.
- Also, optionally, each time facial muscle training is performed, the electronic device 10 may re-acquire a new preset frame picture for calculating the initial coordinate difference. For example, in one cycle, when the electronic device 10 trains the user to raise the eyebrows, it uses a first preset frame picture to calculate the initial coordinate difference for eyebrow-raising training; in another cycle, when the electronic device 10 trains the user to shrug the nose, it uses a second preset frame picture to calculate the initial coordinate difference for nose-shrugging training.
- It is worth noting that, in some other implementations, a fixed value preset in the electronic device 10 may also be used as the initial coordinate difference; in that case, in all training cycles the initial coordinate difference is the same for the same training scene, for example across multiple cycles of nose-shrugging training. For different training scenes, for example nose-shrugging versus teeth-showing, the initial coordinate difference may also be set differently, depending on the value the user sets for each training scene.
- After obtaining the above current coordinate difference, the electronic device 10 combines it with the initial coordinate difference and the preset action completion value to calculate the current action completion degree, which represents the degree to which the user has completed the current facial muscle training action.
- As an implementation, please refer to FIG. 5, which is a schematic flowchart of the sub-steps of S500 in FIG. 2. In the embodiment of the present application, S500 includes the following sub-steps:
- S510, generate the current action completion value according to the current coordinate difference and the initial coordinate difference. Optionally, the difference between the current coordinate difference and the initial coordinate difference is used as the current action completion value, that is, Dt = |Δt − Δ0|, where Dt is the current action completion value, Δt is the current coordinate difference, and Δ0 is the initial coordinate difference.
- S520, generate the current action completion degree according to the current action completion value and the preset action completion value. Optionally, as an implementation, the quotient of the current action completion value and the preset action completion value is used as the current action completion degree, that is, Vt = Dt / D0, where Vt is the current action completion degree, Dt is the current action completion value, and D0 is the preset action completion value.
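Putting S510 and S520 together as the formulas sketched above (Dt = |Δt − Δ0| and Vt = Dt / D0); the function name is ours:

```python
def action_completion_degree(delta_t, delta_0, d_preset):
    """Return Vt from the current and initial coordinate differences."""
    d_t = abs(delta_t - delta_0)   # current action completion value Dt
    return d_t / d_preset          # current action completion degree Vt
```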
- Generally speaking, for different users, or for the same user at different times, the distance between the electronic device 10 and the user's face may change at any time, while the above preset action completion value stored in the electronic device 10 is a fixed value. When the distance between the electronic device 10 and the user's face changes, the size at which the target face is rendered may differ between frame pictures, especially between pictures used in different training cycles; as a result, the current action completion degree would be affected by the distance between the electronic device 10 and the user's face.
- Therefore, as an implementation, before performing S500, the facial muscle training method further includes:
- S400, update the preset action completion value according to the face positioning point group acquired in the current frame picture.
- During facial muscle training, the electronic device 10 also selects a face positioning point group, which includes at least two feature points; for example, it may contain two, three, four, five, or even more feature points.
- Using the position information in the current frame picture of all the feature points contained in the face positioning point group, the preset action completion value is updated, so that the updated action completion value is used to calculate the current action completion degree, thereby reducing the effect of the distance between the electronic device 10 and the user's face on the current action completion degree.
- As an implementation, please refer to FIG. 6, which is a schematic flowchart of the sub-steps of S400 in FIG. 2. In the embodiment of the present application, S400 includes the following sub-steps:
- S410, calculate the current polygon area formed in the current frame picture by all the feature points included in the face positioning point group.
- When updating the preset action completion value, all the feature points included in the face positioning point group form a polygon. Since each feature point has a unique coordinate in the coordinate system established in the current frame picture, the current polygon area corresponding to the polygon formed by all the feature points of the face positioning point group is calculated from the coordinate values of the individual feature points.
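A sketch of the area computation using the shoelace formula for a general polygon; the patent's own constructions are rectangles and triangles, which are special cases of it:

```python
def polygon_area(points):
    """Area of the polygon whose vertices (x, y) are given in drawing order."""
    n = len(points)
    area = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        area += x1 * y2 - x2 * y1   # shoelace cross terms
    return abs(area) / 2.0
```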
- The feature points included in the face positioning point group can be selected as preset feature points. For example, in the schematic diagram shown in FIG. 3, feature point 0 and feature point 8 may be preset as the face positioning point group, or feature points 1, 9 and 26 may be preset as the group; it suffices that at least two feature points can be determined to form a face positioning point group. For example, in FIG. 3, feature points 3, 5, 24 and 15 may also be selected to form the face positioning point group, or even more feature points may be included.
- The way all the feature points contained in the face positioning point group form a polygon can be as shown in FIG. 7. When the face positioning point group contains only two feature points, such as feature point 0 and feature point 8 in FIG. 7, assume the coordinate of feature point 0 in the current frame picture is D0(x0, y0) and the coordinate of feature point 8 is D8(x8, y8). A line X0 parallel to the x-axis and a line Y0 parallel to the y-axis can then be drawn through feature point 0, and likewise a line X8 parallel to the x-axis and a line Y8 parallel to the y-axis through feature point 8; the rectangle enclosed by X0, Y0, X8 and Y8 serves as the polygon formed in the current frame picture by all the feature points of the face positioning point group. Polygons can also be formed in other ways, for example by connecting feature point 0 and feature point 8 to obtain the line l0-8; the triangle enclosed by the line l0-8, the line Y0 and the line X8 then serves as the polygon formed in the current frame picture by all the feature points of the face positioning point group.
- When the face positioning point group contains more than two feature points, for example the three feature points 0, 8 and 16, with the coordinate of feature point 0 in the current frame picture being D0(x0, y0), that of feature point 8 being D8(x8, y8), and that of feature point 16 being D16(x16, y16), again draw the line X0 parallel to the x-axis and the line Y0 parallel to the y-axis through feature point 0, the line X8 parallel to the x-axis through feature point 8, and the line Y16 parallel to the y-axis through feature point 16; the rectangle enclosed by X0, Y0, X8 and Y16 then serves as the polygon. Alternatively, the triangle formed by connecting feature point 0, feature point 8 and feature point 16 in sequence may also be used as the polygon formed in the current frame picture by all the feature points of the face positioning point group.
- The above ways of forming the polygon are merely illustrative, and other ways may also be adopted; for example, the coordinates of multiple feature points may be averaged to obtain two averaged positioning coordinates that define a rectangle as the polygon formed by the face positioning point group. Any method may be used as long as all the feature points included in the face positioning point group can form a definite polygon.
- The current polygon area is the area of the polygon formed in the current frame picture by all the feature points contained in the face positioning point group. Like the initial coordinate difference, the initial polygon area also needs to be determined by the electronic device 10 before the user performs facial muscle training: the initial polygon area is the area of the polygon formed by all the feature points of the face positioning point group in a preset frame picture, where the preset frame picture may be the same picture as the one used to calculate the initial coordinate difference, and the initial polygon is constructed in exactly the same way as the current polygon. For example, if the current polygon is the triangle obtained by sequentially connecting the three feature points in the current frame picture as shown in FIG. 8, then the initial polygon is the triangle obtained by sequentially connecting the same three feature points in the preset frame picture.
- S420, update the preset action completion value according to the obtained current polygon area and the initial polygon area.
- As an implementation, please refer to FIG. 9, which is a schematic flowchart of the sub-steps of S420 in FIG. 6. In the embodiment of the present application, S420 includes the following sub-steps: first, calculate the quotient of the current polygon area and the initial polygon area; then update the preset action completion value according to the calculated quotient.
- Optionally, when updating the preset action completion value, the square root of the calculated quotient may be taken first, and the result of the square root then used to update the preset action completion value; that is, the formula for updating the preset action completion value is D0′ = sqrt(Sn / S0) × D0, where Sn is the current polygon area, S0 is the initial polygon area, D0 is the preset action completion value, and D0′ is the updated action completion value.
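A sketch of the square-root scale correction, assuming the formula D0′ = sqrt(Sn / S0) × D0 reconstructed above; the function name is ours:

```python
import math

def update_preset_completion(d_preset, area_now, area_init):
    """Rescale the preset action completion value by the polygon-area ratio,
    so that changes in face distance cancel out of Vt."""
    return math.sqrt(area_now / area_init) * d_preset
```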
- It is worth noting that the preset action completion value may also be updated in other ways: for example, the product of the quotient of the current polygon area and the initial polygon area with the preset action completion value may be used as the updated action completion value, or the preset action completion value may be updated by the product of that quotient and a preset scale factor.
- Based on the above design, the facial muscle training method updates the preset action completion value based on the position information of the face positioning point group in the current frame picture, and then uses the updated action completion value to calculate the current action completion degree, which can reduce the influence of the distance between the electronic device 10 and the user's face on the current action completion degree and improve facial muscle training accuracy.
- S600, determine whether the current action completion degree is greater than the preset action completion threshold; if yes, determine that the current training action is completed; if no, take a subsequent frame picture of the current frame picture as the new current frame picture and continue to execute S300.
- After obtaining the above current action completion degree, the electronic device 10 compares it with the preset action completion threshold. When the current action completion degree is greater than the threshold, the user's current facial muscle training action is completed and can be ended, so that the next cycle of training can be executed or the training task ended. Otherwise, when the current action completion degree is less than or equal to the threshold, the user's current facial muscle training action has not been completed and training needs to continue; at this time, a subsequent frame picture of the current frame picture, such as the next frame or the frame after next, is taken as the new current frame picture, and execution continues from S300.
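An end-to-end sketch of the per-frame loop (S200 through S600), reusing the helper sketches above; `groups`, `delta_0`, `d_preset` and `threshold` are assumed inputs, and, following the Δ27-31 example in the original description, each group difference is taken on the y coordinates:

```python
def train_action(frames, groups, delta_0, d_preset, threshold):
    """frames: iterable of camera frames; groups: list of (a, b) landmark
    index pairs. Returns True once the current training action completes."""
    for frame in frames:
        small = downscale_half(frame)                    # S200
        coords = landmark_coords(small)
        if coords is None:                               # no face in frame
            continue
        diffs = [coords[a][1] - coords[b][1] for a, b in groups]
        delta_t = current_difference(diffs)              # S300/S310/S320
        v_t = action_completion_degree(delta_t, delta_0, d_preset)  # S500
        if v_t > threshold:                              # S600
            return True                                  # action completed
    return False                                         # frames exhausted
```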
- It should be noted that, when the facial muscle training method includes S200, the subsequent frame picture of the current frame picture is taken as the new current frame picture and execution continues from S200.
- The facial muscle training method provided by the embodiment of the present application calculates, from at least one feature point group of the target face, the current coordinate difference corresponding to the at least one feature point group in the current frame picture, then generates the current action completion degree from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and then uses the current action completion degree to determine whether the user has completed the current training action. Compared with the prior art, it can feed back whether the current training action is completed while the user performs facial muscle training, ensuring the quality of the training.
- Please refer to FIG. 10, which shows a schematic overall flowchart of a facial muscle training method provided by an embodiment of the present application.
- FIG. 11 shows a schematic structural diagram of a facial muscle training device 200 provided in an embodiment of the present application.
- The facial muscle training device 200 includes a feature point group extraction module 210, a coordinate difference calculation module 230, an action completion degree calculation module 250, and a judgment module 260.
- the feature point group extraction module 210 is used to obtain at least one feature point group of the target face, wherein each feature point group includes two feature points.
- the coordinate difference calculation module 230 is used to calculate the current coordinate difference corresponding to the at least one feature point group, where the current coordinate difference value is the coordinate difference value of the at least one feature point group in the current frame picture.
- FIG. 12 shows a schematic structural diagram of a coordinate difference calculation module 230 of a facial muscle training device 200 provided by an embodiment of the present application.
- the coordinate difference calculation module 230 includes a sub-coordinate difference calculation unit 231 and a current coordinate difference calculation unit 232.
- the sub-coordinate difference calculation unit 231 is used to calculate the coordinate difference corresponding to each of the at least two feature point groups.
- the current coordinate difference calculation unit 232 is used to generate the current coordinate difference according to the coordinate difference corresponding to each of the feature point groups.
- The action completion degree calculation module 250 is used to generate the current action completion degree according to the current coordinate difference, the initial coordinate difference, and the preset action completion value, wherein the initial coordinate difference represents the coordinate difference of the at least one feature point group in the initial state.
- FIG. 13 illustrates a schematic structural diagram of an action completion degree calculation module 250 of a facial muscle training device 200 provided by an embodiment of the present application.
- the action completion degree calculation module 250 includes an action completion value calculation unit 251 and an action completion degree calculation unit 252.
- the action completion value calculation unit 251 is used to generate a current action completion value according to the current coordinate difference and the initial coordinate difference.
- the action completion degree calculation unit 252 is configured to generate the current action completion degree according to the current action completion value and the preset action completion value.
- The judgment module 260 is used to judge whether the current action completion degree is greater than the preset action completion threshold; when the current action completion degree is greater than the preset action completion threshold, it is determined that the current training action is completed; when the current action completion degree is less than or equal to the preset action completion threshold, a subsequent frame picture of the current frame picture is taken as the new current frame picture, and the coordinate difference calculation module 230 recalculates the current coordinate difference corresponding to the at least one feature point group.
- Optionally, the facial muscle training device 200 further includes a picture resolution adjustment module 220, which is used to reduce the resolution of the current frame picture so that the reduced-resolution current frame picture is used to calculate the current coordinate difference corresponding to the at least one feature point group.
- the facial muscle training device 200 further includes a preset action completion value update module 240.
- The preset action completion value update module 240 is used to update the preset action completion value based on the face positioning point group acquired in the current frame picture, so that the updated action completion value is used to calculate the current action completion degree, wherein the face positioning point group includes at least two feature points.
- FIG. 14 illustrates a schematic structural diagram of a preset action completion value update module 240 of a facial muscle training device 200 provided by an embodiment of the present application.
- the preset action completion value update module 240 includes a polygon area calculation unit 241 and an action completion value update unit 242.
- the polygon area calculation unit 241 is used to calculate the current polygon area formed by all the feature points included in the face positioning point group in the current frame picture.
- The action completion value update unit 242 is used to update the preset action completion value according to the current polygon area and the initial polygon area, where the initial polygon area is the area of the polygon formed in the preset frame picture by all the feature points included in the face positioning point group.
- FIG. 15 shows a schematic structural diagram of an action completion value update unit 242 of a facial muscle training device 200 provided by an embodiment of the present application.
- the action completion value update unit 242 includes a quotient value calculation subunit 2421 and a completion value update subunit 2422.
- the quotient value calculation subunit 2421 is used to calculate the quotient value of the current polygon area and the initial polygon area.
- the completion value updating subunit 2422 is configured to update the preset action completion value according to the calculated quotient value.
- the function of the facial muscle training device 200 involved in this embodiment may be implemented by the electronic device 10 described above.
- the relevant data, instructions and functional modules involved in the above embodiments are stored in the memory 110 and then executed by the processor 120 to implement the facial muscle training method in the above embodiments.
- Each block in the flowchart or block diagram may represent a module, program segment, or part of code that contains one or more executable instructions for implementing the specified logical function.
- the functions noted in the block may occur out of the order noted in the figures. For example, two consecutive blocks can actually be executed substantially in parallel, and sometimes they can also be executed in reverse order, depending on the functions involved.
- Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented with a dedicated hardware-based system that performs the specified functions or actions, or with a combination of dedicated hardware and computer instructions.
- the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist alone, or two or more modules may be integrated to form an independent part.
- If the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium.
- Based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the existing technology, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for enabling a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods described in the embodiments of the present application.
- The aforementioned storage media include: USB flash drives, mobile hard disks, read-only memory (ROM), random access memory (RAM), magnetic disks, optical disks, and other media that can store program code.
- In summary, the facial muscle training method, device and electronic equipment calculate, from at least one feature point group of the target face, the current coordinate difference corresponding to the at least one feature point group in the current frame picture, generate the current action completion degree from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and then use the current action completion degree to determine whether the user has completed the current training action. Compared with the prior art, this feeds back to the user whether the current training action is completed during facial muscle training, ensuring the quality of the training. The method also reduces the resolution of the current frame picture so that the reduced-resolution picture is used to calculate the current coordinate difference corresponding to the at least one feature point group, which reduces the amount of data computed per frame during facial muscle training and thus increases picture processing speed. In addition, the preset action completion value is updated from the position information of the face positioning point group in the current frame picture, and the updated action completion value is then used to calculate the current action completion degree, reducing the influence of the distance between the electronic device and the user's face on the current action completion degree and improving facial muscle training accuracy.
Landscapes
- Health & Medical Sciences (AREA)
- Engineering & Computer Science (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Biophysics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Theoretical Computer Science (AREA)
- Life Sciences & Earth Sciences (AREA)
- Human Computer Interaction (AREA)
- Physical Education & Sports Medicine (AREA)
- Epidemiology (AREA)
- Medical Informatics (AREA)
- Primary Health Care (AREA)
- Public Health (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
A facial muscle training method, device, and electronic equipment. The facial muscle training method includes: acquiring at least one feature point group of a target face (S100), wherein each feature point group contains two feature points; calculating the current coordinate difference corresponding to the at least one feature point group (S300), wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; generating a current action completion degree (S500) according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference represents the coordinate difference of the at least one feature point group in the initial state; and when the current action completion degree is greater than a preset action completion threshold, determining that the current training action is completed. The method can feed back, while the user performs facial muscle training, whether the current training action is completed, ensuring the quality of facial muscle training.
Description
This application belongs to the field of virtual rehabilitation training, and particularly relates to a facial muscle training method, device and electronic equipment.

Facial paralysis skews the patient's mouth and eyes, impairs the expression of normal facial expressions, and can even affect the patient's appearance, which has a severely negative impact on the patient's mental health and hinders social interaction. There are many facial paralysis patients in China, the harm caused by facial paralysis is very serious, the incidence is rising year by year, and patients span all age groups; with the increasing work pressure on young people, the onset is trending younger.

If facial paralysis is discovered early and treated promptly, patients can recover completely. Facial muscle function rehabilitation training is generally active rehabilitation training in which the patient exercises the strength of the eyes, forehead, mouth, nose and other parts of the face; the patient needs to persist in a certain amount of such training every day, performing actions such as raising the eyebrows, frowning, closing the eyes, shrugging the nose, showing the teeth, and pouting.
Summary of the Invention
The present application provides a facial muscle training method, device and electronic equipment that can feed back, while the user performs facial muscle training, whether the current training action is completed, ensuring the quality of facial muscle training.

To achieve the above purpose, the present application provides the following technical solutions:

In a first aspect, the present application provides a facial muscle training method, the method including: acquiring at least one feature point group of a target face, wherein each feature point group contains two feature points; calculating the current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; generating a current action completion degree according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference represents the coordinate difference of the at least one feature point group in the initial state; and when the current action completion degree is greater than a preset action completion threshold, determining that the current training action is completed.
The technical solution adopted in the embodiments of the present application further includes: the at least one feature point group includes at least two feature point groups, and the step of calculating the current coordinate difference corresponding to the at least one feature point group includes: separately calculating the coordinate difference corresponding to each of the at least two feature point groups; and generating the current coordinate difference according to the coordinate differences corresponding to all the feature point groups.
The technical solution adopted in the embodiments of the present application further includes: determining the at least one feature point group according to the user's scene selection information, wherein the user's scene selection information represents the training scene corresponding to the at least one feature point group.
The technical solution adopted in the embodiments of the present application further includes: before the step of calculating the current coordinate difference corresponding to the at least one feature point group, the method further includes: reducing the resolution of the current frame picture, so that the reduced-resolution current frame picture is used to calculate the current coordinate difference corresponding to the at least one feature point group.

The technical solution adopted in the embodiments of the present application further includes: the step of reducing the resolution of the current frame picture includes: halving the pixel size of the current frame picture in both the width direction and the height direction to reduce the resolution of the current frame picture.
The technical solution adopted in the embodiments of the present application further includes: the step of generating the current action completion degree according to the current coordinate difference, the initial coordinate difference, and the preset action completion value includes: generating a current action completion value according to the current coordinate difference and the initial coordinate difference; and generating the current action completion degree according to the current action completion value and the preset action completion value.
The technical solution adopted in the embodiments of the present application further includes: before the step of generating the current action completion degree according to the current coordinate difference, the preset initial coordinate difference, and the preset action completion value, the method further includes: updating the preset action completion value according to the face positioning point group acquired in the current frame picture, so that the updated action completion value is used to calculate the current action completion degree, wherein the face positioning point group includes at least two feature points.
The technical solution adopted in the embodiments of the present application further includes: the step of updating the preset action completion value according to the face positioning point group acquired in the current frame picture includes: calculating the current polygon area formed in the current frame picture by all the feature points included in the face positioning point group; and updating the preset action completion value according to the current polygon area and an initial polygon area, wherein the initial polygon area is the polygon area formed in the preset frame picture by all the feature points included in the face positioning point group.
The technical solution adopted in the embodiments of the present application further includes: the step of updating the preset action completion value according to the current polygon area and the initial polygon area includes: calculating the quotient of the current polygon area and the initial polygon area; and updating the preset action completion value according to the calculated quotient.
The technical solution adopted in the embodiments of the present application further includes: the method further includes: when the current action completion degree is less than or equal to the preset action completion threshold, taking a subsequent frame picture of the current frame picture as the new current frame picture and continuing to execute the step of calculating the current coordinate difference corresponding to the at least one feature point group.
The technical solution adopted in the embodiments of the present application further includes: the initial coordinate difference is the coordinate difference of the at least one feature point group in a preset frame picture.
In a second aspect, the present application provides a facial muscle training device, the device including: a feature point group extraction module for acquiring at least one feature point group of a target face, wherein each feature point group contains two feature points; a coordinate difference calculation module for calculating the current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture; an action completion degree calculation module for generating a current action completion degree according to the current coordinate difference, an initial coordinate difference, and a preset action completion value, wherein the initial coordinate difference represents the coordinate difference of the at least one feature point group in the initial state; and a judgment module for judging whether the current action completion degree is greater than a preset action completion threshold, wherein, when the current action completion degree is greater than the preset action completion threshold, it is determined that the current training action is completed.
In a third aspect, the present application provides an electronic device, the electronic device including a memory for storing one or more programs, and a processor. When the one or more programs are executed by the processor, the above facial muscle training method is implemented.
Compared with the prior art, the beneficial effect produced by the embodiments of the present application is: with the facial muscle training method, device and electronic equipment provided by the present application, after the current coordinate difference corresponding to at least one feature point group of the target face in the current frame picture is calculated, the current action completion degree is generated from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and the current action completion degree is then used to determine whether the user has completed the current training action; compared with the prior art, this can feed back whether the current training action is completed while the user performs facial muscle training, ensuring the quality of facial muscle training.
FIG. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 2 is a schematic flowchart of a facial muscle training method according to an embodiment of the present application;
FIG. 3 is a schematic diagram of a face feature point set distribution model according to an embodiment of the present application;
FIG. 4 is a schematic flowchart of the sub-steps of S300 in FIG. 2;
FIG. 5 is a schematic flowchart of the sub-steps of S500 in FIG. 2;
FIG. 6 is a schematic flowchart of the sub-steps of S400 in FIG. 2;
FIG. 7 is a schematic diagram of a polygon formed by a face positioning point group according to an embodiment of the present application;
FIG. 8 is another schematic diagram of a polygon formed by a face positioning point group according to an embodiment of the present application;
FIG. 9 is a schematic flowchart of the sub-steps of S420 in FIG. 6;
FIG. 10 is a schematic overall flowchart of a facial muscle training method according to an embodiment of the present application;
FIG. 11 is a schematic structural diagram of a facial muscle training device according to an embodiment of the present application;
FIG. 12 is a schematic structural diagram of a coordinate difference calculation module of a facial muscle training device according to an embodiment of the present application;
FIG. 13 is a schematic structural diagram of an action completion degree calculation module of a facial muscle training device according to an embodiment of the present application;
FIG. 14 is a schematic structural diagram of a preset action completion value update module of a facial muscle training device according to an embodiment of the present application;
FIG. 15 is a schematic structural diagram of an action completion value update unit of a facial muscle training device according to an embodiment of the present application.
In the figures: 10 - electronic device; 110 - memory; 120 - processor; 130 - storage controller; 140 - peripheral interface; 150 - radio frequency unit; 160 - communication bus/signal line; 170 - camera unit; 180 - display unit; 200 - facial muscle training device; 210 - feature point group extraction module; 220 - picture resolution adjustment module; 230 - coordinate difference calculation module; 231 - sub-coordinate difference calculation unit; 232 - current coordinate difference calculation unit; 240 - preset action completion value update module; 241 - polygon area calculation unit; 242 - action completion value update unit; 2421 - quotient value calculation subunit; 2422 - completion value update subunit; 250 - action completion degree calculation module; 251 - action completion value calculation unit; 252 - action completion degree calculation unit; 260 - judgment module.
To make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the drawings herein, can be arranged and designed in a variety of different configurations.

Therefore, the following detailed description of the embodiments of the present application provided in the drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the protection scope of the present application.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it need not be further defined or explained in subsequent drawings. Meanwhile, in the description of the present application, the terms "first", "second" and the like are used only to distinguish one description from another and cannot be understood as indicating or implying relative importance.

It should also be noted that, herein, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of additional identical elements in the process, method, article or device that includes the element.

Some embodiments of the present application are described in detail below with reference to the accompanying drawings. The following embodiments and the features in the embodiments may be combined with one another when no conflict arises.
The current prior art method for facial muscle function rehabilitation training mainly uses traditional mirror therapy: a mirror is placed in front of the patient, who observes the state of his or her face in the mirror, obtains feedback on the training results from the specific details of the observed facial movements, and thereby completes the rehabilitation training of facial muscle function.

Facial muscle training can effectively promote the recovery of facial muscle motor function and improve the rehabilitation effect for facial paralysis. Although the traditional mirror therapy described above is simple to operate and very convenient for patients to carry out, the mirror gives the patient no feedback on whether the degree of execution of the training actions meets the requirements of rehabilitation training; moreover, with mirror therapy there is no interaction between the mirror and the patient, which makes the training process monotonous and tedious, easily causes the patient to lose interest in rehabilitation training, and leads to poor rehabilitation results.

Based on the above defects in the prior art, an improvement provided by the embodiments of the present application is: after the current coordinate difference corresponding to at least one feature point group of the target face in the current frame picture is calculated, the current action completion degree is generated from the current coordinate difference, the initial coordinate difference, and the preset action completion value, and the current action completion degree is then used to determine whether the user has completed the current training action.
Please refer to FIG. 1, which shows a schematic structural diagram of an electronic device 10 provided by an embodiment of the present application. In this embodiment, the electronic device 10 may be, but is not limited to, a smartphone, a personal computer (PC), a tablet computer, a laptop computer, a personal digital assistant (PDA), and so on. The electronic device 10 includes a memory 110, a storage controller 130, one or more processors 120 (only one is shown in the figure), a peripheral interface 140, a radio frequency unit 150, a camera unit 170, a display unit 180, and the like. These components communicate with one another via one or more communication buses/signal lines 160.
The memory 110 may be used to store software programs and modules, such as the program instructions/modules corresponding to the facial muscle training device 200 provided by the embodiments of the present application. The processor 120 runs the software programs and modules stored in the memory 110 to perform various functional applications and image processing, such as the facial muscle training method provided by the embodiments of the present application.

The memory 110 may be, but is not limited to, random access memory (RAM), read-only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and the like.

The processor 120 may be an integrated circuit chip with signal processing capability. The above processor 120 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a voice processor, a video processor, and the like; it may also be a digital signal processor, an application-specific integrated circuit, a field-programmable gate array or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor 120 may be any conventional processor, and so on.

The peripheral interface 140 couples various input/output devices to the processor 120 and the memory 110. In some embodiments, the peripheral interface 140, the processor 120 and the storage controller 130 may be implemented in a single chip; in some other embodiments of the present application, they may each be implemented by an independent chip.

The radio frequency unit 150 is used to receive and send electromagnetic waves and to convert between electromagnetic waves and electrical signals, thereby communicating with a communication network or other devices.

The camera unit 170 is used to take pictures so that the processor 120 can process the taken pictures.

The display unit 180 is used to provide the user with a graphical output interface and display image information for the user to perform facial muscle training.
It can be understood that the structure shown in FIG. 1 is merely illustrative; the electronic device 10 may include more or fewer components than shown in FIG. 1, or have a configuration different from that shown in FIG. 1. The components shown in FIG. 1 may be implemented in hardware, software, or a combination thereof.

For example, some of the units or devices included in the above electronic device 10 may exist as independent devices. For instance, in some other implementations of the embodiments of the present application, the electronic device 10 may not include the camera unit 170 and may instead establish communication with a camera device; the camera device takes pictures, for example photographs of the patient, and then sends the taken pictures to the electronic device 10 through a wired or wireless network for implementing the facial muscle training method provided by the embodiments of the present application.

Optionally, in some other implementations of the embodiments of the present application, the electronic device 10 may not include the display unit 180 and may instead establish communication with a display device; the electronic device 10 sends the image information during training to the display device through a wired or wireless network, so that the user can refer to the image information to complete the facial muscle training.
Please refer to FIG. 2, which shows a schematic flowchart of a facial muscle training method provided by an embodiment of the present application. In this embodiment, the facial muscle training method includes the following steps:

S100, acquire at least one feature point group of the target face.

During facial muscle training, the electronic device 10 determines at least one feature point group according to the user's scene selection information, wherein each feature point group contains two feature points, and the information of the two feature points contained in each feature point group is pre-configured in the electronic device 10. When the electronic device 10 determines at least one feature point group according to the user's scene selection information, it thereby also determines all the feature points participating in the facial muscle training.

The user's scene selection information represents the training scene corresponding to the at least one feature point group. A plurality of training scenes, together with the correspondence between each training scene and its feature point group(s), are preset in the electronic device 10. When the electronic device 10 receives the training scene selected by the user, that training scene is taken as the user's scene selection information, and the feature point group(s) corresponding to the user's scene selection information are determined according to the selected training scene and the preset correspondence between each training scene and its feature point group(s).

For example, suppose six training scenes are preset in the electronic device 10: raising eyebrows, frowning, closing eyes, shrugging the nose, showing teeth, and pouting, and the preset correspondence between each training scene and its feature point group(s) is: raising eyebrows corresponds to feature point group 1; frowning corresponds to feature point groups 2 and 3; closing eyes corresponds to feature point groups 4, 5 and 6; shrugging the nose corresponds to feature point group 7; showing teeth corresponds to feature point groups 8 and 9; and pouting corresponds to feature point groups 10, 11 and 12. When the electronic device 10 receives raising eyebrows as the user's scene selection information, it combines this with the preset correspondence and determines that the at least one feature point group is feature point group 1; when it receives closing eyes as the scene selection information, it combines this with the preset correspondence and determines that the feature point groups are feature point groups 4, 5 and 6.
S300, calculate the current coordinate difference corresponding to the at least one feature point group.

When training the user, the electronic device 10 processes the collected user pictures to judge whether the user has completed the selected training scene. Therefore, after acquiring the at least one feature point group, the electronic device 10 calculates, from the current coordinate values in the current frame picture of all the feature points contained in each feature point group, the current coordinate difference corresponding to the at least one feature point group. A coordinate system is established in the current frame picture, and each feature point has a corresponding current coordinate value in that coordinate system.

The current coordinate value of each feature point in the current frame picture can be obtained using a feature point set model preset in the electronic device 10. For example, please refer to FIG. 3, a schematic diagram of a face feature point set distribution model; the distribution of all the feature points contained in this face model feature point set can be obtained from the Dlib open-source library. After obtaining the at least one feature point group of the target face, the electronic device 10 combines the face feature point set distribution model with the Dlib open-source library to obtain the current coordinate value of each feature point in the current frame picture, and can then calculate the current coordinate difference corresponding to the at least one feature point group.
For example, taking the nose-shrugging action trained on the left side of the face: assume that in the face model shown in FIG. 3, a feature point group corresponding to left-side nose shrugging contains feature point 31 and feature point 27, the current coordinate value of feature point 31 in the current frame picture is D31(x31, y31), and the current coordinate value of feature point 27 in the current frame picture is D27(x27, y27); the current coordinate difference obtained at this time can then be calculated as Δ27-31 = y27 − y31.
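A minimal sketch of this example, assuming a {index: (x, y)} landmark mapping such as the one produced by a 68-point Dlib predictor; the function name is ours:

```python
def nose_shrug_difference(coords):
    """Current coordinate difference for the example group (27, 31):
    Δ27-31 = y27 - y31."""
    return coords[27][1] - coords[31][1]
```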
Optionally, in some application scenarios of the embodiments of the present application, the acquired at least one feature point group includes at least two feature point groups; for example, in the above example, frowning corresponds to feature point groups 2 and 3, and closing eyes corresponds to feature point groups 4, 5 and 6. Therefore, as an implementation, please refer to FIG. 4, a schematic flowchart of the sub-steps of S300 in FIG. 2. In this embodiment, S300 includes the following sub-steps:
S310, separately calculate the coordinate difference corresponding to each of the at least two feature point groups.

When the electronic device 10 obtains at least two feature point groups of the target face according to the user's scene selection information, it first calculates the coordinate difference corresponding to each of the at least two feature point groups. For example, in the above example, when training frowning, the determined feature point groups include feature point group 2 and feature point group 3; at this time, the coordinate difference Δ2 corresponding to feature point group 2 and the coordinate difference Δ3 corresponding to feature point group 3 are calculated first.
S320: generate the current coordinate difference from the coordinate differences corresponding to all the feature point groups.
As in the example above, when the determined feature point groups are feature point groups 2 and 3, and the coordinate difference Δ2 corresponding to feature point group 2 and the coordinate difference Δ3 corresponding to feature point group 3 have been calculated, the electronic device 10 then generates the current coordinate difference from the coordinate differences Δ2 and Δ3 corresponding to feature point groups 2 and 3 respectively.
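The embodiment leaves the concrete combination rule open; one plausible sketch, under the assumption that the per-group differences are simply averaged, is:

```python
def current_coordinate_difference(coords, groups):
    """Combine the per-group coordinate differences (S310) into the
    current coordinate difference (S320). Averaging is an assumption;
    the embodiment only requires a value generated from all groups."""
    diffs = [group_coordinate_difference(coords, g) for g in groups]
    return sum(diffs) / len(diffs)
```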
Optionally, when the data volume of the current frame picture obtained by the electronic device 10 is large, the efficiency with which the electronic device 10 calculates the current coordinate difference decreases. Therefore, as one implementation, before S300 is executed, the facial muscle training method further includes:
S200: reduce the resolution of the current frame picture.
Before calculating the current coordinate difference of the at least one feature point group in the current frame picture, the electronic device 10 first reduces the resolution of the current frame picture, thereby reducing its data size, so that the reduced-resolution current frame picture is used in S300 to calculate the current coordinate difference corresponding to the at least one feature point group. This improves the computation speed of the electronic device 10.
Optionally, as one implementation, the electronic device 10 may reduce the resolution of the current frame picture by halving its pixel size in both the width direction and the height direction.
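With OpenCV, this halving can be sketched as a one-liner; the choice of cv2.INTER_AREA interpolation is an assumption made here for downscaling quality, not something the embodiment prescribes:

```python
import cv2

def downscale_frame(frame):
    """Halve the current frame picture's width and height (S200) to
    shrink its data size before landmark extraction."""
    return cv2.resize(frame, None, fx=0.5, fy=0.5,
                      interpolation=cv2.INTER_AREA)
```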
Furthermore, optionally, as one implementation, while performing facial muscle training for the user, the electronic device 10 may process only one of every two consecutive frame pictures and discard the other, so as to increase the speed at which the electronic device 10 processes consecutive frame pictures.
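The every-other-frame strategy can likewise be sketched as a simple counter over a capture loop; frames_to_process is a hypothetical helper name:

```python
import cv2

def frames_to_process(capture: "cv2.VideoCapture"):
    """Yield one of every two consecutive frames and discard the other,
    to speed up processing of a continuous frame stream."""
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % 2 == 0:
            yield frame
        index += 1
```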
Based on the above design, the facial muscle training method provided by the embodiments of the present application reduces the resolution of the current frame picture so that the reduced-resolution picture is used to calculate the current coordinate difference corresponding to the at least one feature point group in the current frame picture. This reduces the amount of data computation performed on the current frame picture during facial muscle training and thus increases the picture processing speed during facial muscle training.
Continuing to refer to FIG. 2, S500: generate a current action completion degree from the current coordinate difference, an initial coordinate difference and a preset action completion value.
Before performing facial muscle training for the user, the electronic device 10 needs to determine the initial coordinate difference, which characterizes the coordinate difference of the at least one feature point group in the initial state. The initial state can be understood as the state of the user's face before facial muscle training is performed; for example, if the current training content is raising the eyebrows, the initial state is the state of the user's face before the eyebrows are raised, generally the user's face with a natural, relaxed expression.
Optionally, as one implementation, the initial coordinate difference is the coordinate difference of the at least one feature point group in a preset frame picture. That is, before the user performs facial muscle training with the electronic device 10, the electronic device 10 acquires a preset frame picture as the state of the user's face in the initial state, and then calculates the coordinate difference of the at least one feature point group in that preset frame picture as the initial coordinate difference.
Furthermore, optionally, as one implementation, the electronic device 10 may acquire a new preset frame picture for calculating the initial coordinate difference each time facial muscle training is performed. For example, in one cycle, when the electronic device 10 trains the user in raising the eyebrows, it uses a first preset frame picture to calculate the initial coordinate difference for the eyebrow-raising training; in another cycle, when the electronic device 10 trains the user in shrugging the nose, it uses a second preset frame picture to calculate the initial coordinate difference for the nose-shrugging training.
It is worth noting that, in some other implementations of the embodiments of the present application, a fixed value preset in the electronic device 10 may also be used as the initial coordinate difference. In that case, for the same training scene across all training cycles, for example when the nose-shrugging training is cycled several times, all the initial coordinate differences are identical. Moreover, for different training scenes, for example nose-shrugging training versus teeth-showing training, the initial coordinate differences may be set to different values, depending on the initial coordinate difference the user sets for each training scene.
After obtaining the current coordinate difference described above, the electronic device 10 combines it with the initial coordinate difference and the preset action completion value to calculate and generate the current action completion degree, which characterizes the degree to which the user has completed the current facial muscle training action.
Optionally, as one implementation, referring to FIG. 5, FIG. 5 is a schematic flowchart of the sub-steps of S500 in FIG. 2. In this embodiment, S500 includes the following sub-steps:
S510: generate a current action completion value from the current coordinate difference and the initial coordinate difference.
Optionally, as one implementation, the difference between the current coordinate difference and the initial coordinate difference is taken as the current action completion value. That is, the current action completion value Dt = |Δt − Δ0|, where Dt is the current action completion value, Δt is the current coordinate difference, and Δ0 is the initial coordinate difference.
It is worth noting that, in some other implementations of the embodiments of the present application, the current action completion value may also be obtained from the current coordinate difference and the initial coordinate difference in other ways, for example by taking the quotient of the current coordinate difference and the initial coordinate difference as the current action completion value.
S520: generate the current action completion degree from the current action completion value and the preset action completion value.
Optionally, as one implementation, the quotient of the current action completion value and the preset action completion value is taken as the current action completion degree. That is, the current action completion degree Vt = Dt / D0, where Vt is the current action completion degree, Dt is the current action completion value, and D0 is the preset action completion value.
It is worth noting that, in some other implementations of the embodiments of the present application, the current action completion degree may also be obtained from the current action completion value and the preset action completion value in other ways, for example by taking the difference between the current action completion value and the preset action completion value as the current action completion degree.
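Putting S510 and S520 together, the default difference-then-quotient variant can be sketched as follows; the alternative variants mentioned above would simply swap in the corresponding operators:

```python
def action_completion_degree(delta_t, delta_0, d_0):
    """Compute Vt = |delta_t - delta_0| / D0, where delta_t is the
    current coordinate difference, delta_0 the initial coordinate
    difference and d_0 the (possibly updated) action completion value."""
    d_t = abs(delta_t - delta_0)   # S510: current action completion value
    return d_t / d_0               # S520: completion degree as a quotient
```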
Generally, for different users, or for the same user at different moments, the distance between the electronic device 10 and the user's face may change at any time. In the electronic device 10, the preset action completion value is a value of fixed size, yet when the distance between the electronic device 10 and the user's face changes, the size at which the target face appears in different frame pictures may differ, especially between pictures used in different training cycles. As a result, the current action completion degree is affected by the distance between the electronic device 10 and the user's face.
Therefore, as one implementation, before S500 is executed, the facial muscle training method further includes:
S400: update the preset action completion value according to a facial locating point group acquired in the current frame picture.
During facial muscle training, the electronic device 10 also selects a facial locating point group, which includes at least two feature points; it may contain, for example, two, three, four or five feature points, or even more. From the position information, in the current frame picture, of all the feature points contained in the facial locating point group, the preset action completion value is updated, so that the updated action completion value is used to calculate and generate the current action completion degree, thereby reducing the influence of the distance between the electronic device 10 and the user's face on the current action completion degree.
Optionally, as one implementation, referring to FIG. 6, FIG. 6 is a schematic flowchart of the sub-steps of S400 in FIG. 2. In this embodiment, S400 includes the following sub-steps:
S410: calculate the current polygon area formed in the current frame picture by all the feature points contained in the facial locating point group.
When updating the preset action completion value, all the feature points contained in the facial locating point group form a polygon area. Since each feature point has its own unique coordinate in the coordinate system established in the current frame picture, the current polygon area of the polygon formed by all the feature points contained in the facial locating point group is calculated from the coordinate values of the individual feature points.
The feature points contained in the facial locating point group may be selected as preset feature points. For example, in the schematic diagram of FIG. 3, feature points 0 and 8 may be preset as the facial locating point group, or feature points 1, 9 and 26 may serve as the facial locating point group; any combination of at least two feature points that forms a facial locating point group will do. For instance, in the schematic diagram of FIG. 3, feature points 3, 5, 24 and 15 may also be selected to form the facial locating point group, or still more feature points may be included.
The manner in which all the feature points contained in the facial locating point group form a polygon may be as shown in FIG. 7. When the facial locating point group contains only two feature points, for example feature points 0 and 8 in FIG. 7, assume the coordinate of feature point 0 in the current frame picture is D0(x0, y0) and the coordinate of feature point 8 in the current frame picture is D8(x8, y8). A line X0 parallel to the x-axis and a line Y0 parallel to the y-axis are drawn through feature point 0; likewise, a line X8 parallel to the x-axis and a line Y8 parallel to the y-axis are drawn through feature point 8. The rectangle enclosed by X0, Y0, X8 and Y8 serves as the polygon formed in the current frame picture by all the feature points contained in the facial locating point group.
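A sketch of this two-point variant, which reduces to the area of the axis-aligned rectangle spanned by the two locating points:

```python
def rectangle_area(p_a, p_b):
    """Area of the axis-aligned rectangle spanned by two locating points,
    e.g. feature points 0 and 8 of FIG. 7."""
    (xa, ya), (xb, yb) = p_a, p_b
    return abs(xb - xa) * abs(yb - ya)
```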
Of course, in the schematic diagram of FIG. 7 the polygon may also be formed in other ways. For example, feature points 0 and 8 may be connected to obtain the line l0-8, and the triangle enclosed by the line l0-8, the line Y0 and the line X8 serves as the polygon formed in the current frame picture by all the feature points contained in the facial locating point group.
When the facial locating point group contains more than two feature points, for example three feature points, the manner in which all its feature points form a polygon may be as shown in FIG. 8. Assume the facial locating point group contains the three feature points 0, 8 and 16, with the coordinate of feature point 0 in the current frame picture being D0(x0, y0), that of feature point 8 being D8(x8, y8), and that of feature point 16 being D16(x16, y16). As before, a line X0 parallel to the x-axis and a line Y0 parallel to the y-axis are drawn through feature point 0, a line X8 parallel to the x-axis is drawn through feature point 8, and a line Y16 parallel to the y-axis is drawn through feature point 16; the rectangle enclosed by X0, Y0, X8 and Y16 can then serve as the polygon formed in the current frame picture by all the feature points contained in the facial locating point group.
In the schematic diagram of FIG. 8, the triangle formed by connecting feature points 0, 8 and 16 in sequence may also serve as the polygon formed in the current frame picture by all the feature points contained in the facial locating point group.
It can be understood that the above ways of forming a polygon are merely illustrative; other schemes may also be used, for example a rectangle constructed from two average locating coordinates obtained by averaging the coordinates of several feature points, as long as all the feature points contained in the facial locating point group can form a determinate polygon.
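For the variants that connect three or more locating points in sequence, the enclosed area follows from the standard shoelace formula; a sketch:

```python
def polygon_area(points):
    """Shoelace formula: area of the polygon obtained by connecting the
    locating points in sequence, e.g. feature points 0, 8 and 16 of FIG. 8.
    Assumes three or more points; two collinear points give area zero."""
    n = len(points)
    twice_area = sum(points[i][0] * points[(i + 1) % n][1]
                     - points[(i + 1) % n][0] * points[i][1]
                     for i in range(n))
    return abs(twice_area) / 2.0
```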
S420: update the preset action completion value according to the current polygon area and an initial polygon area.
As stated above, the current polygon area is the area of the polygon formed in the current frame picture by all the feature points contained in the facial locating point group. As with the initial coordinate difference, before performing facial muscle training for the user, the electronic device 10 also needs to determine the initial polygon area, which is the area of the polygon formed in the preset frame picture by all the feature points contained in the facial locating point group, where the preset frame picture may be the same picture as the one used to calculate the initial coordinate difference. Moreover, the initial polygon is constructed in exactly the same way as the current polygon: for example, if the current polygon is the triangle obtained by connecting the three feature points in the current frame picture in sequence as in FIG. 8, then the initial polygon is the triangle obtained by connecting the three feature points in the preset frame picture in sequence.
Thus, after the current polygon area is calculated, the preset action completion value is updated according to the obtained current polygon area and the initial polygon area.
Optionally, as one implementation, referring to FIG. 9, FIG. 9 is a schematic flowchart of the sub-steps of S420 in FIG. 6. In this embodiment, S420 includes the following sub-steps:
S421: calculate the quotient of the current polygon area and the initial polygon area.
S422: update the preset action completion value according to the calculated quotient.
As one implementation, in this embodiment, when updating the preset action completion value, the square root of the calculated quotient may be taken first, and the preset action completion value is then updated by the square-rooted result. That is, the formula for updating the preset action completion value is D0′ = √(Sn/S0) × D0, where Sn is the current polygon area, S0 is the initial polygon area, D0 is the preset action completion value, and D0′ is the updated action completion value.
It can be understood that, in some other implementations of the embodiments of the present application, the preset action completion value may also be updated in other ways. For example, the product of the preset action completion value and the quotient of the current polygon area and the initial polygon area may be taken directly as the updated action completion value, or the quotient of the current polygon area and the initial polygon area may be multiplied by a preset proportional coefficient to update the preset action completion value.
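A sketch of the square-root update of S421 and S422; the alternatives of the preceding paragraph follow by dropping the square root or inserting a proportional coefficient:

```python
import math

def updated_completion_value(s_n, s_0, d_0):
    """S421-S422: scale the preset action completion value by the square
    root of the current-to-initial polygon area ratio, so that D0' tracks
    the apparent size of the face in the current frame picture."""
    return math.sqrt(s_n / s_0) * d_0
```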
Based on the above design, the facial muscle training method provided by the embodiments of the present application updates the preset action completion value according to the position information of the facial locating point group in the current frame picture and then uses the updated action completion value to calculate and generate the current action completion degree, which reduces the influence of the distance between the electronic device 10 and the user's face on the current action completion degree and improves the accuracy of facial muscle training.
Continuing to refer to FIG. 2, S600: judge whether the current action completion degree is greater than a preset action completion degree threshold; if yes, determine that the current training action is completed; if no, take a subsequent frame picture of the current frame picture as the new current frame picture and continue to execute S300.
The electronic device 10 compares the current action completion degree with the preset action completion degree threshold to judge whether the former is greater than the latter. When the current action completion degree is greater than the preset action completion degree threshold, it indicates that the user's current facial muscle training action is completed; the current training action can be ended, and the cycle for the next facial muscle training action is executed, or the training task is finished. Conversely, when the current action completion degree is less than or equal to the preset action completion degree threshold, it indicates that the user has not yet completed the current facial muscle training action and needs to continue training; in that case, a subsequent frame picture of the current frame picture, for example the frame immediately after the current frame picture or the second frame after it, is taken as the new current frame picture, and S300 continues to be executed.
It is worth noting that, in the embodiments of the present application, when the facial muscle training method includes S200 and the current action completion degree is judged to be less than or equal to the preset action completion degree threshold, a subsequent frame picture of the current frame picture is taken as the new current frame picture and S200 continues to be executed.
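Tying the sketches above together, one possible shape of the per-frame training loop is given below; the helper names refer to the earlier sketches and are therefore equally hypothetical:

```python
def run_training_action(capture, groups, locating_points,
                        delta_0, s_0, d_0, threshold):
    """Process frames until the current action completion degree exceeds
    the preset threshold (S600); otherwise loop back via S200/S300."""
    for frame in frames_to_process(capture):          # frame skipping
        frame = downscale_frame(frame)                # S200
        coords = landmark_coordinates(frame)
        if coords is None:
            continue                                  # no face in this frame
        delta_t = current_coordinate_difference(coords, groups)       # S300
        s_n = polygon_area([coords[i] for i in locating_points])
        d_0_scaled = updated_completion_value(s_n, s_0, d_0)          # S400
        v_t = action_completion_degree(delta_t, delta_0, d_0_scaled)  # S500
        if v_t > threshold:                           # S600
            return True                               # training action completed
    return False                                      # stream ended first
```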
Based on the above design, the facial muscle training method provided by the embodiments of the present application, from at least one feature point group of a target face, calculates the current coordinate difference corresponding to the at least one feature point group in the current frame picture, generates the current action completion degree from the current coordinate difference, the initial coordinate difference and the preset action completion value, and then judges from the current action completion degree whether the user has completed the current training action. Compared with the prior art, it can feed back to the user, during facial muscle training, whether the current training action is completed, thereby ensuring the quality of facial muscle training.
Based on the facial muscle training method provided by the above embodiments, a possible implementation of the complete method flow is given below. Referring to FIG. 10, FIG. 10 is a schematic complete flowchart of a facial muscle training method provided by an embodiment of the present application, which contains all the steps provided by the above embodiments.
Referring to FIG. 11, FIG. 11 is a schematic structural diagram of a facial muscle training apparatus 200 provided by an embodiment of the present application. In this embodiment, the facial muscle training apparatus 200 includes a feature point group extraction module 210, a coordinate difference calculation module 230, an action completion degree calculation module 250 and a judgment module 260.
The feature point group extraction module 210 is configured to acquire at least one feature point group of a target face, wherein each feature point group contains two feature points.
The coordinate difference calculation module 230 is configured to calculate the current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in the current frame picture.
Optionally, as one implementation, referring to FIG. 12, FIG. 12 is a schematic structural diagram of the coordinate difference calculation module 230 of a facial muscle training apparatus 200 provided by an embodiment of the present application. In this embodiment, the coordinate difference calculation module 230 includes a sub-coordinate-difference calculation unit 231 and a current coordinate difference calculation unit 232.
The sub-coordinate-difference calculation unit 231 is configured to calculate the coordinate difference corresponding to each of the at least two feature point groups respectively.
The current coordinate difference calculation unit 232 is configured to generate the current coordinate difference from the coordinate differences corresponding to all the feature point groups.
Continuing to refer to FIG. 11, the action completion degree calculation module 250 is configured to generate the current action completion degree from the current coordinate difference, the initial coordinate difference and the preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in the initial state.
Optionally, as one implementation, referring to FIG. 13, FIG. 13 is a schematic structural diagram of the action completion degree calculation module 250 of a facial muscle training apparatus 200 provided by an embodiment of the present application. In this embodiment, the action completion degree calculation module 250 includes an action completion value calculation unit 251 and an action completion degree calculation unit 252.
The action completion value calculation unit 251 is configured to generate the current action completion value from the current coordinate difference and the initial coordinate difference.
The action completion degree calculation unit 252 is configured to generate the current action completion degree from the current action completion value and the preset action completion value.
Continuing to refer to FIG. 11, the judgment module 260 is configured to judge whether the current action completion degree is greater than the preset action completion degree threshold, wherein, when the current action completion degree is greater than the preset action completion degree threshold, it is determined that the current training action is completed; when the current action completion degree is less than or equal to the preset action completion degree threshold, a subsequent frame picture of the current frame picture is taken as the new current frame picture, and the coordinate difference calculation module 230 re-executes the calculation of the current coordinate difference corresponding to the at least one feature point group.
Optionally, as one implementation, continuing to refer to FIG. 11, in this embodiment the facial muscle training apparatus 200 further includes a picture resolution adjustment module 220, configured to reduce the resolution of the current frame picture so that the reduced-resolution current frame picture is used to calculate the current coordinate difference corresponding to the at least one feature point group.
Optionally, as one implementation, continuing to refer to FIG. 11, in this embodiment the facial muscle training apparatus 200 further includes a preset action completion value update module 240, configured to update the preset action completion value according to the facial locating point group acquired in the current frame picture, so that the updated action completion value is used to calculate and generate the current action completion degree, wherein the facial locating point group includes at least two feature points.
Optionally, as one implementation, referring to FIG. 14, FIG. 14 is a schematic structural diagram of the preset action completion value update module 240 of a facial muscle training apparatus 200 provided by an embodiment of the present application. In this embodiment, the preset action completion value update module 240 includes a polygon area calculation unit 241 and an action completion value update unit 242.
The polygon area calculation unit 241 is configured to calculate the current polygon area formed in the current frame picture by all the feature points contained in the facial locating point group.
The action completion value update unit 242 is configured to update the preset action completion value according to the current polygon area and the initial polygon area, wherein the initial polygon area is the area of the polygon formed in the preset frame picture by all the feature points contained in the facial locating point group.
Optionally, as one implementation, referring to FIG. 15, FIG. 15 is a schematic structural diagram of the action completion value update unit 242 of a facial muscle training apparatus 200 provided by an embodiment of the present application. In this embodiment, the action completion value update unit 242 includes a quotient calculation subunit 2421 and a completion value update subunit 2422.
The quotient calculation subunit 2421 is configured to calculate the quotient of the current polygon area and the initial polygon area.
The completion value update subunit 2422 is configured to update the preset action completion value according to the calculated quotient.
Optionally, the functions of the facial muscle training apparatus 200 involved in this embodiment may be implemented by the electronic device 10 described above. For example, the relevant data, instructions and functional modules involved in the above embodiments are stored in the memory 110 and then executed by the processor 120, thereby implementing the facial muscle training method of the above embodiments.
In the embodiments provided by the present application, it should be understood that the disclosed apparatus and method may also be implemented in other ways. The apparatus embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions and operations of the apparatuses, methods and computer program products according to the embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment or a part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two consecutive blocks may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or some of the steps of the method described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
In summary, the facial muscle training method, apparatus and electronic device provided by the embodiments of the present application, from at least one feature point group of a target face, calculate the current coordinate difference corresponding to the at least one feature point group in the current frame picture, generate the current action completion degree from the current coordinate difference, the initial coordinate difference and the preset action completion value, and then judge from the current action completion degree whether the user has completed the current training action; compared with the prior art, this can feed back to the user, during facial muscle training, whether the current training action is completed, ensuring the quality of facial muscle training. They further reduce the resolution of the current frame picture so that the reduced-resolution picture is used to calculate the current coordinate difference corresponding to the at least one feature point group in the current frame picture, which reduces the amount of data computation performed on the current frame picture and increases the picture processing speed during facial muscle training. They also update the preset action completion value according to the position information of the facial locating point group in the current frame picture and use the updated action completion value to calculate and generate the current action completion degree, which reduces the influence of the distance between the electronic device 10 and the user's face on the current action completion degree and improves the accuracy of facial muscle training.
The above are merely preferred embodiments of the present application and are not intended to limit the present application; for those skilled in the art, various modifications and variations of the present application are possible. Any modification, equivalent replacement, improvement and the like made within the spirit and principles of the present application shall be included in the protection scope of the present application.
It is apparent to those skilled in the art that the present application is not limited to the details of the above exemplary embodiments, and that the present application can be implemented in other specific forms without departing from the spirit or essential characteristics of the present application. Therefore, the embodiments should in all respects be regarded as exemplary and non-limiting, and the scope of the present application is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of equivalents of the claims be embraced in the present application. No reference sign in the claims shall be construed as limiting the claim concerned.
Claims (13)
- A facial muscle training method, characterized in that the method comprises: acquiring at least one feature point group of a target face, wherein each of the feature point groups contains two feature points; calculating a current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in a current frame picture; generating a current action completion degree according to the current coordinate difference, an initial coordinate difference and a preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in an initial state; and, when the current action completion degree is greater than a preset action completion degree threshold, determining that the current training action is completed.
- The method according to claim 1, characterized in that the at least one feature point group comprises at least two feature point groups, and the step of calculating the current coordinate difference corresponding to the at least one feature point group comprises: calculating the coordinate difference corresponding to each of the at least two feature point groups respectively; and generating the current coordinate difference according to the coordinate differences corresponding to all the feature point groups.
- The method according to claim 1 or 2, characterized in that the at least one feature point group is determined according to scene selection information of a user, wherein the user's scene selection information characterizes the training scene corresponding to the at least one feature point group.
- The method according to claim 1, characterized in that, before the step of calculating the current coordinate difference corresponding to the at least one feature point group, the method further comprises: reducing the resolution of the current frame picture, so that the reduced-resolution current frame picture is used to calculate the current coordinate difference corresponding to the at least one feature point group.
- The method according to claim 4, characterized in that the step of reducing the resolution of the current frame picture comprises: halving the pixel size of the current frame picture in both the width direction and the height direction, so as to reduce the resolution of the current frame picture.
- The method according to claim 1, characterized in that the step of generating the current action completion degree according to the current coordinate difference, the initial coordinate difference and the preset action completion value comprises: generating a current action completion value according to the current coordinate difference and the initial coordinate difference; and generating the current action completion degree according to the current action completion value and the preset action completion value.
- The method according to claim 1, characterized in that, before the step of generating the current action completion degree according to the current coordinate difference, the preset initial coordinate difference and the preset action completion value, the method further comprises: updating the preset action completion value according to a facial locating point group acquired in the current frame picture, so that the updated action completion value is used to calculate and generate the current action completion degree, wherein the facial locating point group includes at least two feature points.
- The method according to claim 7, characterized in that the step of updating the preset action completion value according to the facial locating point group acquired in the current frame picture comprises: calculating a current polygon area formed in the current frame picture by all the feature points contained in the facial locating point group; and updating the preset action completion value according to the current polygon area and an initial polygon area, wherein the initial polygon area is the polygon area formed in the preset frame picture by all the feature points contained in the facial locating point group.
- The method according to claim 8, characterized in that the step of updating the preset action completion value according to the current polygon area and the initial polygon area comprises: calculating the quotient of the current polygon area and the initial polygon area; and updating the preset action completion value according to the calculated quotient.
- The method according to claim 1, characterized in that the method further comprises: when the current action completion degree is less than or equal to the preset action completion degree threshold, taking a subsequent frame picture of the current frame picture as a new current frame picture, and continuing to execute the step of calculating the current coordinate difference corresponding to the at least one feature point group.
- The method according to claim 1, characterized in that the initial coordinate difference is the coordinate difference of the at least one feature point group in a preset frame picture.
- A facial muscle training apparatus, characterized in that the apparatus comprises: a feature point group extraction module, configured to acquire at least one feature point group of a target face, wherein each of the feature point groups contains two feature points; a coordinate difference calculation module, configured to calculate a current coordinate difference corresponding to the at least one feature point group, wherein the current coordinate difference is the coordinate difference of the at least one feature point group in a current frame picture; an action completion degree calculation module, configured to generate a current action completion degree according to the current coordinate difference, an initial coordinate difference and a preset action completion value, wherein the initial coordinate difference characterizes the coordinate difference of the at least one feature point group in an initial state; and a judgment module, configured to judge whether the current action completion degree is greater than a preset action completion degree threshold, wherein, when the current action completion degree is greater than the preset action completion degree threshold, it is determined that the current training action is completed.
- An electronic device, characterized by comprising: a memory for storing one or more programs; and a processor; wherein, when the one or more programs are executed by the processor, the method according to any one of claims 1 to 11 is implemented.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811506296.9 | 2018-12-10 | ||
CN201811506296.9A CN109659006B (zh) | 2018-12-10 | 2018-12-10 | Facial muscle training method, device and electronic equipment
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020119665A1 true WO2020119665A1 (zh) | 2020-06-18 |
Family
ID=66113947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/124202 WO2020119665A1 (zh) | Facial muscle training method, device and electronic equipment | 2018-12-10 | 2019-12-10 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN109659006B (zh) |
WO (1) | WO2020119665A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109659006B (zh) * | 2018-12-10 | 2021-03-23 | 深圳先进技术研究院 | Facial muscle training method, device and electronic equipment |
CN113327247B (zh) * | 2021-07-14 | 2024-06-18 | 中国科学院深圳先进技术研究院 | Facial nerve function assessment method and apparatus, computer device and storage medium |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2755721A4 (en) * | 2011-09-15 | 2015-05-06 | Sigma Instr Holdings Llc | A SYSTEM AND METHOD FOR THE TREATMENT OF THE SKIN AND ANIMAL WEAVE FOR IMPROVED HEALTH, FUNCTION AND / OR IMPROVED APPEARANCE |
KR102094723B1 (ko) * | 2012-07-17 | 2020-04-14 | 삼성전자주식회사 | Feature descriptor for robust facial expression recognition |
CN104331685A (zh) * | 2014-10-20 | 2015-02-04 | 上海电机学院 | Non-contact active calling method |
CN108211241A (zh) * | 2017-12-27 | 2018-06-29 | 复旦大学附属华山医院 | Facial muscle rehabilitation training system based on mirror visual feedback |
- 2018-12-10: CN application CN201811506296.9A filed; granted as patent CN109659006B (active)
- 2019-12-10: WO application PCT/CN2019/124202 filed; published as WO2020119665A1 (application filing)
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107483834A (zh) * | 2015-02-04 | 2017-12-15 | 广东欧珀移动通信有限公司 | Image processing method, continuous shooting method and apparatus, and related media product |
CN105678702A (zh) * | 2015-12-25 | 2016-06-15 | 北京理工大学 | Face image sequence generation method and apparatus based on feature tracking |
WO2017154581A1 (en) * | 2016-03-07 | 2017-09-14 | Canon Kabushiki Kaisha | Feature point detection method and apparatus, image processing system, and monitoring system |
CN106980815A (zh) * | 2017-02-07 | 2017-07-25 | 王俊 | Objective facial paralysis assessment method supervised by H-B grading scores |
CN107633206A (zh) * | 2017-08-17 | 2018-01-26 | 平安科技(深圳)有限公司 | Eyeball motion capture method, apparatus and storage medium |
CN108460345A (zh) * | 2018-02-08 | 2018-08-28 | 电子科技大学 | Facial fatigue detection method based on facial key point localization |
CN109659006A (zh) * | 2018-12-10 | 2019-04-19 | 深圳先进技术研究院 | Facial muscle training method, device and electronic equipment |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113837016A (zh) * | 2021-08-31 | 2021-12-24 | 北京新氧科技有限公司 | Makeup progress detection method, apparatus, device and storage medium |
CN113837019A (zh) * | 2021-08-31 | 2021-12-24 | 北京新氧科技有限公司 | Makeup progress detection method, apparatus, device and storage medium |
CN113837018A (zh) * | 2021-08-31 | 2021-12-24 | 北京新氧科技有限公司 | Makeup progress detection method, apparatus, device and storage medium |
CN113837019B (zh) * | 2021-08-31 | 2024-05-10 | 北京新氧科技有限公司 | Makeup progress detection method, apparatus, device and storage medium |
CN113837018B (zh) * | 2021-08-31 | 2024-06-14 | 北京新氧科技有限公司 | Makeup progress detection method, apparatus, device and storage medium |
CN113837016B (zh) * | 2021-08-31 | 2024-07-02 | 北京新氧科技有限公司 | Makeup progress detection method, apparatus, device and storage medium |
CN114550873A (zh) * | 2022-02-17 | 2022-05-27 | 上海交通大学医学院附属第九人民医院 | Facial paralysis rehabilitation training method and system |
Also Published As
Publication number | Publication date |
---|---|
CN109659006B (zh) | 2021-03-23 |
CN109659006A (zh) | 2019-04-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020119665A1 (zh) | | Facial muscle training method, device and electronic equipment |
US20190384967A1 (en) | | Facial expression detection method, device and system, facial expression driving method, device and system, and storage medium |
EP3992919B1 (en) | | Three-dimensional facial model generation method and apparatus, device, and medium |
US20190138791A1 (en) | | Key point positioning method, terminal, and computer storage medium |
EP3454250A1 (en) | | Facial image processing method and apparatus and storage medium |
WO2016145830A1 (zh) | | Image processing method, terminal and computer storage medium |
CN109961496B (zh) | | Expression driving method and expression driving apparatus |
WO2020119584A1 (zh) | | Facial paralysis degree evaluation method and apparatus, electronic device and storage medium |
CN105096353B (zh) | | Image processing method and apparatus |
US10395096B2 (en) | | Display method for recommending eyebrow style and electronic apparatus thereof |
WO2020244074A1 (zh) | | Expression interaction method and apparatus, computer device and readable storage medium |
KR20210113948A (ko) | | Method and apparatus for generating a virtual avatar |
TWI780919B (zh) | | Face image processing method and apparatus, electronic device and storage medium |
WO2020151156A1 (zh) | | Video stream playing method and system, computer apparatus and readable storage medium |
WO2020244160A1 (zh) | | Terminal device control method and apparatus, computer device and readable storage medium |
WO2023010796A1 (zh) | | Image processing method and related apparatus |
WO2023143126A1 (zh) | | Image processing method and apparatus, electronic device and storage medium |
WO2023132790A2 (zh) | | Expression driving method and apparatus, and training method and apparatus for an expression driving model |
CN112232128A (zh) | | Gaze-tracking-based method for recognizing the care needs of elderly and disabled persons |
WO2022016996A1 (zh) | | Image processing method and apparatus, electronic device and computer-readable storage medium |
CN107886568B (zh) | | Method and system for reconstructing facial expressions using a 3D Avatar |
JP4659722B2 (ja) | | Human body specific region extraction/determination apparatus, method and program |
WO2021197230A1 (zh) | | Method, apparatus and system for constructing a three-dimensional head model, and storage medium |
CN105931204A (zh) | | Picture restoration method and system |
CN113421333A (zh) | | Method, system, device and computer storage medium for determining a tooth local coordinate system |
Legal Events
Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19895227; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 19895227; Country of ref document: EP; Kind code of ref document: A1
| 32PN | Ep: public notification in the ep bulletin as address of the addressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 02/11/2021)