CN110008911B - Image processing method, image processing device, electronic equipment and computer readable storage medium


Info

Publication number
CN110008911B
Authority
CN
China
Prior art keywords: feature, facial, facial feature, feature points, processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910285762.3A
Other languages
Chinese (zh)
Other versions
CN110008911A (en)
Inventor
廖声洋
唐文斌
吴文昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd
Priority to CN201910285762.3A
Publication of CN110008911A
Application granted
Publication of CN110008911B
Legal status: Active


Classifications

    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The application relates to the field of image processing, and discloses an image processing method, an image processing device, an electronic device and a computer-readable storage medium. The image processing method includes: determining an image frame to be processed in video information, and detecting whether the image frame to be processed includes facial features; if one or more facial features are included, acquiring a plurality of basic feature points corresponding to each facial feature, and determining a fitting polynomial coefficient corresponding to each facial feature according to those basic feature points; and, based on acquired densification processing parameters, performing densification processing on the basic feature points of each facial feature according to the corresponding fitting polynomial coefficients. The method greatly increases the number of feature points, which substantially improves both the accuracy of specific processing of the facial features and the naturalness of the facial features after such processing, thereby improving the user experience.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
With the development of science and technology, the industrialization of technical applications and advances in intelligent terminal technology, intelligent terminals have become increasingly powerful and their hardware configurations increasingly complete. As the intelligent terminal market grows more competitive, simple hardware upgrades no longer attract consumers of electronic products. Most intelligent terminal manufacturers therefore pursue differentiated functional planning and design of their products, for example technical applications that process image information, such as face unlocking, face remodeling, 3D beauty and 3D polishing.
However, in the specific implementation process, the inventor of the present application found that when the existing image information processing technology is applied to face-related processing, the number of acquired feature points is often small, so that specific processing cannot be performed accurately on the face and the user experience is poor.
Disclosure of Invention
The purpose of the present application is to solve at least one of the above technical drawbacks, and to provide the following solutions:
in a first aspect, an image processing method is provided, including:
determining an image frame to be processed in the video information, and detecting whether the image frame to be processed comprises facial features;
if one or more facial features are included, acquiring a plurality of basic feature points corresponding to each facial feature, and determining a fitting polynomial coefficient corresponding to each facial feature according to the plurality of basic feature points corresponding to that facial feature;
and based on the acquired densification processing parameters, performing densification processing on a plurality of basic feature points corresponding to each facial feature according to the fitting polynomial coefficient corresponding to each facial feature.
Specifically, determining a fitting polynomial coefficient of any facial feature according to a plurality of basic feature points of the facial feature comprises the following steps:
determining the abscissa and the ordinate which respectively correspond to each basic feature point of any facial feature to obtain each corresponding coordinate point;
and calculating the sum of squared deviations of the coordinate points from the fitting curve, and calculating partial derivatives with respect to each polynomial coefficient based on the sum of squared deviations according to the basic feature points, to determine the fitting polynomial coefficient of the facial feature.
Further, the densification processing parameters include an interpolation step length, and performing densification processing on the plurality of basic feature points corresponding to each facial feature according to the fitting polynomial coefficient corresponding to each facial feature, based on the acquired densification processing parameters, includes:
and based on the fitting polynomial coefficient corresponding to any facial feature, inserting corresponding feature points into a plurality of basic feature points corresponding to any facial feature according to the interpolation step length to obtain the dense feature points after the densification of any facial feature.
Further, the densification processing parameter includes a distance threshold value, and after the insertion of the corresponding feature points is performed on the multiple basic feature points corresponding to any facial feature, the method further includes:
calculating the distance between the current dense feature point and the previous dense feature point;
detecting whether the distance is greater than a distance threshold value;
and if so, determining the current dense feature points as distortion points, and rejecting the distortion points.
Further, after the inserting of the corresponding feature points is performed on a plurality of basic feature points corresponding to any facial feature, the method further includes:
and outputting the dense feature points of the facial feature after the densification processing, from which the distortion points have been removed.
In a second aspect, there is provided an image processing apparatus comprising:
the first processing module is used for determining an image frame to be processed in the video information and detecting whether the image frame to be processed comprises facial features;
the second processing module is used for acquiring a plurality of basic feature points corresponding to each facial feature when the image frame to be processed includes one or more facial features, and determining a fitting polynomial coefficient corresponding to each facial feature according to the plurality of basic feature points corresponding to that facial feature;
and the third processing module is used for carrying out densification processing on a plurality of basic feature points corresponding to each facial feature according to the fitting polynomial coefficient corresponding to each facial feature based on the acquired densification processing parameters.
Specifically, the second processing module comprises a determining submodule and a calculating submodule;
the determining submodule is used for determining the abscissa and the ordinate which respectively correspond to each basic feature point of any facial feature to obtain each corresponding coordinate point;
and the calculating submodule is used for calculating the sum of squared deviations of the coordinate points from the fitting curve, and calculating partial derivatives with respect to each polynomial coefficient based on the sum of squared deviations according to the basic feature points, to determine the fitting polynomial coefficient of the facial feature.
Further, the densification processing parameters include an interpolation step length, and the third processing module is specifically configured to insert corresponding feature points among the plurality of basic feature points of a facial feature according to the interpolation step length, based on the fitting polynomial coefficient corresponding to that facial feature, to obtain the dense feature points of the facial feature after densification processing.
Furthermore, the densification processing parameters include a distance threshold, and the device also includes a distortion point eliminating module;
the distortion point eliminating module is used for calculating the distance between the current dense feature point and the previous dense feature point; and detecting whether the distance is greater than a distance threshold value, determining the current dense feature points as distortion points when the distance is greater than the distance threshold value, and removing the distortion points.
Further, the device also comprises an output module;
and the output module is used for outputting the dense feature points of the facial feature after the densification processing, from which the distortion points have been removed.
In a third aspect, an electronic device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method when executing the program.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the image processing method described above.
According to the image processing method provided by the embodiments of the application, densification processing is performed on the plurality of basic feature points corresponding to each facial feature, based on the acquired densification processing parameters and according to the fitting polynomial coefficients corresponding to the facial features. The number of feature points can thus be greatly increased on the basis of the acquired basic feature points, which improves to a great extent the accuracy of specific processing of the facial features and the naturalness of the facial features after such processing, greatly improves the user experience, and makes processing based on facial feature points easy to popularize in more application scenarios, widening its range of application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of acquiring a plurality of basic feature points of facial features according to an embodiment of the present application;
FIG. 3 is a schematic view of a plurality of base feature points of a mouth according to an embodiment of the present application;
FIG. 4 is a schematic view of dense feature points of a mouth after densification processing according to an embodiment of the present application;
FIG. 5 is a schematic diagram of the process of performing densification processing on a plurality of basic feature points of each facial feature according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a basic structure of an image processing apparatus according to an embodiment of the present application;
FIG. 7 is a detailed structural diagram of an image processing apparatus according to an embodiment of the present application;
FIG. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary only for the purpose of explaining the present application and are not to be construed as limiting the present application.
As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element or intervening elements may also be present. Further, "connected" or "coupled" as used herein may include wirelessly connected or wirelessly coupled. As used herein, the term "and/or" includes all or any element and all combinations of one or more of the associated listed items.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
The following describes the technical solutions of the present application and how to solve the above technical problems with specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present application will be described below with reference to the accompanying drawings.
Example one
An embodiment of the present application provides an image processing method, as shown in fig. 1, including:
step S110, determining an image frame to be processed in the video information, and detecting whether a facial feature is included in the image frame to be processed.
Specifically, the terminal device or the application program applying this image processing method determines the image frame to be processed from the acquired video information. The video information may be acquired in real time by a multimedia acquisition device, or may be acquired from a locally pre-stored video library.
Further, after the image frame to be processed in the video information is determined, the image to be processed is detected to determine whether facial features, such as eyebrows, eyes, nose, mouth, face contour, and the like, are included in the image to be processed. Wherein, if one or more facial features are included in the image to be processed, step S120 is executed.
Step S120, if one or more facial features are included, obtaining a plurality of basic feature points corresponding to each facial feature, and determining a fitting polynomial coefficient corresponding to each facial feature according to the plurality of basic feature points corresponding to each facial feature.
Specifically, when one or more facial features are included in the image to be processed, a plurality of basic feature points corresponding to the respective facial features are acquired. If the image to be processed comprises the facial features of the eyes, a plurality of basic feature points corresponding to the eyes are obtained, and if the image to be processed comprises the two facial features of the mouth and the nose, a plurality of basic feature points corresponding to the mouth and a plurality of basic feature points corresponding to the nose are obtained respectively. As shown in fig. 2, the image on the left side of fig. 2 is a current image to be processed, and includes a plurality of facial features such as eyebrows, eyes, nose, mouth, and face contour, and the image on the right side of fig. 2 is a schematic diagram of a plurality of basic feature points corresponding to each facial feature.
Further, after a plurality of basic feature points corresponding to each facial feature are obtained, fitting polynomial coefficients corresponding to each facial feature are determined according to the plurality of basic feature points corresponding to each facial feature, and a necessary foundation is laid for performing densification processing on the basic feature points.
And step S130, based on the acquired densification processing parameters, performing densification processing on a plurality of basic feature points corresponding to each facial feature according to the fitting polynomial coefficient corresponding to each facial feature.
Specifically, after the fitting polynomial coefficients corresponding to each facial feature are determined from the plurality of basic feature points corresponding to each facial feature, the plurality of basic feature points corresponding to each facial feature may be subjected to densification processing according to those fitting polynomial coefficients, based on the obtained densification processing parameters, so as to obtain the feature points of each facial feature after densification processing. A densification processing parameter may be a default value preset by the terminal device or the application program, or a value manually adjusted by the user as needed, and the densification processing parameters may be acquired synchronously when the terminal device or the application program is started.
Compared with the prior art, the image processing method provided by the embodiments of the application performs densification processing on the plurality of basic feature points corresponding to each facial feature, based on the acquired densification processing parameters and according to the fitting polynomial coefficients corresponding to the facial features. The number of feature points can be greatly increased on the basis of the acquired basic feature points, which improves to a great extent the accuracy of specific processing of the facial features and the naturalness of the facial features after such processing, greatly improves the user experience, and makes processing based on facial feature points easy to popularize in more application scenarios, widening its range of application.
Another possible implementation manner is provided in the embodiments of the present application, where determining a fitting polynomial coefficient of any facial feature according to a plurality of basic feature points of the facial feature includes:
determining the abscissa and the ordinate which respectively correspond to each basic feature point of any facial feature to obtain each corresponding coordinate point;
and calculating the sum of squared deviations of the coordinate points from the fitting curve, and calculating partial derivatives with respect to each polynomial coefficient based on the sum of squared deviations according to the basic feature points, to determine the fitting polynomial coefficient of the facial feature.
Specifically, the densification processing parameters include an interpolation step length, and performing densification processing on the plurality of basic feature points corresponding to each facial feature according to the fitting polynomial coefficient corresponding to each facial feature, based on the obtained densification processing parameters, includes:
and based on the fitting polynomial coefficient corresponding to any facial feature, inserting corresponding feature points into a plurality of basic feature points corresponding to any facial feature according to the interpolation step length to obtain the dense feature points after the densification of any facial feature.
Specifically, the densification processing parameter includes a distance threshold, and after the insertion of the corresponding feature points into the multiple basic feature points corresponding to any facial feature, the method further includes:
calculating the distance between the current dense feature point and the previous dense feature point;
detecting whether the distance is greater than a distance threshold value;
and if so, determining the current dense feature points as distortion points, and rejecting the distortion points.
Specifically, after the insertion of the corresponding feature points is performed on a plurality of basic feature points corresponding to any facial feature, the method further includes:
and outputting the dense feature points of the facial feature after the densification processing, from which the distortion points have been removed.
Details of the present embodiment are described as follows:
In particular, the process of determining the fitting polynomial coefficients is the same for any facial feature, whether it is an eyebrow, an eye, the nose, the mouth or the face contour. However, because the basic feature points of each facial feature differ, different fitting polynomial coefficients are determined from the fitting polynomial function: each facial feature determines its own fitting polynomial coefficients from its corresponding plurality of basic feature points, so the fitting polynomial coefficients of every facial feature in the image to be processed need to be determined.
In other words, the fitting polynomial coefficients of each facial feature are determined by its corresponding plurality of basic feature points. In general, the basic feature points of a facial feature photographed from the front differ from those of the same facial feature photographed from the side, so the fitting polynomial coefficients differ as well. For this reason, every time an image to be processed is determined, the fitting polynomial coefficients of each facial feature need to be determined anew from the plurality of basic feature points corresponding to each facial feature in that image.
Since the process of determining the fitting polynomial coefficients is the same for every facial feature, the process will be described by taking one facial feature (for example, the mouth) as an example, as follows:
if the form Y is adoptedn(x)=anxn+an-1xn-1+…+a1x+a0Wherein n is a set fitting order, a0、a1……anIs the fitting polynomial coefficient to be determined. Firstly, separating the abscissa (namely X coordinate) and the ordinate (namely Y coordinate) of each basic feature point of any facial feature to obtain corresponding coordinates (X, Y), then calculating the deviation square sum of each coordinate point to a fitting curve, respectively calculating the partial derivative of each polynomial coefficient based on the deviation square sum according to each basic feature point, and determining the fitting polynomial coefficient of any facial feature. And the fitting curve is obtained by fitting according to the fitting polynomial function.
In practical applications, the above process can be described as steps a to f:

Step a: determine the fitting polynomial function $Y_n(x) = a_n x^n + a_{n-1} x^{n-1} + \dots + a_1 x + a_0$.
Step b: separate the X coordinates of the basic feature points of the facial feature and store them in the M_X matrix; the corresponding code may be implemented as:

for (int i = 0; i < List.size(); i++)
    M_X[i] = List[i].x;  // store the x coordinate of the i-th basic feature point
step c: separating the coordinate Y of the basic feature point of any facial feature and storing the coordinate Y into an M _ Y matrix, wherein the corresponding code implementation can be as follows:
for(int i=0;i<List.size();i++)
M_Y[i]=List[i].y;
step d: from Yn(x)=anxn+an-1xn-1++a1x+a0Thus, it can be seen that: the sum of the distances (i.e., the sum of squared deviations) of the respective coordinate points (X, Y) to the fitted curve is as follows:
Figure BDA0002023211420000091
step e: for each polynomial coefficient a in the right part of the deviation sum of squares equation in step diCalculating partial derivatives, and obtaining the following after sorting:
Figure BDA0002023211420000092
step f: the above-mentioned basic feature points (List [ i ]].xi,List[i]Substituting yi) into steps d to e to obtain the parameter a0、a1……anI.e. the fitting polynomial coefficients mentioned above.
It should be noted that the process of determining the fitting polynomial coefficients of other facial features is the same as the above process, and is not described herein again. Further, the fitting polynomial coefficient described above may be written in the form of a one-dimensional matrix, and in this case, the fitting polynomial coefficient may be referred to as a coefficient matrix, which may be written as M.
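As a concrete illustration of steps a to f, the following is a minimal C++ sketch, not taken from the patent itself, of least-squares polynomial fitting: it accumulates the normal equations of step e into an augmented matrix and solves them by Gaussian elimination. The Point structure and the name fitPolynomial are hypothetical stand-ins for the List elements used above.

#include <cmath>
#include <utility>
#include <vector>

struct Point { double x, y; };  // hypothetical stand-in for List[i]

// Fit y = a0 + a1*x + ... + an*x^n by least squares and return the
// coefficient matrix M = {a0, a1, ..., an}.
std::vector<double> fitPolynomial(const std::vector<Point>& pts, int n) {
    int dim = n + 1;
    // Augmented matrix of the normal equations from step e.
    std::vector<std::vector<double>> A(dim, std::vector<double>(dim + 1, 0.0));
    for (const Point& p : pts) {
        std::vector<double> pw(2 * n + 1);
        double xp = 1.0;
        for (int k = 0; k <= 2 * n; ++k) { pw[k] = xp; xp *= p.x; }
        for (int r = 0; r < dim; ++r) {
            for (int c = 0; c < dim; ++c) A[r][c] += pw[r + c];  // sum of x^(r+c)
            A[r][dim] += pw[r] * p.y;                            // sum of y*x^r
        }
    }
    // Gauss-Jordan elimination with partial pivoting.
    for (int col = 0; col < dim; ++col) {
        int pivot = col;
        for (int r = col + 1; r < dim; ++r)
            if (std::fabs(A[r][col]) > std::fabs(A[pivot][col])) pivot = r;
        std::swap(A[col], A[pivot]);
        for (int r = 0; r < dim; ++r) {
            if (r == col || A[col][col] == 0.0) continue;
            double f = A[r][col] / A[col][col];
            for (int c = col; c <= dim; ++c) A[r][c] -= f * A[col][c];
        }
    }
    std::vector<double> coeff(dim);
    for (int r = 0; r < dim; ++r) coeff[r] = A[r][dim] / A[r][r];
    return coeff;  // coeff[k] is the fitting polynomial coefficient a_k
}

For the mouth feature of fig. 3, for example, the basic feature points would be passed in together with the set fitting order n.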
Further, the densification processing parameter includes an interpolation step size, which may be a default value or may be set according to a user requirement. After the fitting polynomial coefficient of any facial feature is obtained, feature points are inserted into a plurality of basic feature points corresponding to any facial feature according to interpolation step length based on the fitting polynomial coefficient corresponding to any facial feature, and therefore dense feature points after the densification processing of any facial feature are obtained.
Further, fig. 3 shows a schematic diagram of a plurality of acquired basic feature points of the mouth, and fig. 4 shows a schematic diagram of a plurality of basic feature points corresponding to the mouth after feature point insertion, that is, a schematic diagram of dense feature points after densification processing.
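To make the feature-point insertion concrete, here is a minimal sketch under the assumption, which the patent text does not spell out, that new points are sampled from the fitted curve at a fixed step along the x axis between each pair of adjacent basic feature points. The names evalPolynomial and densify are hypothetical; Point and fitPolynomial come from the sketch above.

// Evaluate the fitted polynomial at x using Horner's scheme.
double evalPolynomial(const std::vector<double>& coeff, double x) {
    double y = 0.0;
    for (int k = (int)coeff.size() - 1; k >= 0; --k) y = y * x + coeff[k];
    return y;
}

// Between each pair of adjacent basic feature points, insert a point
// every `step` along x, taking the y value from the fitted curve.
std::vector<Point> densify(const std::vector<Point>& base,
                           const std::vector<double>& coeff, double step) {
    std::vector<Point> dense;
    for (size_t i = 0; i + 1 < base.size(); ++i) {
        dense.push_back(base[i]);
        double dir = (base[i + 1].x >= base[i].x) ? 1.0 : -1.0;
        for (double x = base[i].x + dir * step;
             (base[i + 1].x - x) * dir > 0.0; x += dir * step)
            dense.push_back({x, evalPolynomial(coeff, x)});
    }
    if (!base.empty()) dense.push_back(base.back());
    return dense;
}

A smaller step yields a denser result such as the one shown in fig. 4.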
Further, the densification processing parameters include a distance threshold (denoted d0), which may be a default value or may be set according to the user's requirements. After the corresponding feature points have been inserted among the plurality of basic feature points of a facial feature, the distance between the current dense feature point and the previous dense feature point can be calculated and compared with the distance threshold d0; if the distance is greater than d0, the current dense feature point is determined to be a distortion point, and the determined distortion point is then removed.
If the current dense feature point is denoted $(x_n, y_n)$, the previous one is denoted $(x_{n-1}, y_{n-1})$, and the distance between them is denoted $dis$, then:

$$dis = \sqrt{(x_n - x_{n-1})^2 + (y_n - y_{n-1})^2}$$

If $dis$ is greater than $d0$, the current dense feature point $(x_n, y_n)$ is determined to be a distortion point, and the determined distortion point $(x_n, y_n)$ is then removed.
Further, after the distortion points are removed, the dense feature points of the facial feature after densification processing are output, so that the user can intuitively perceive the dense feature points of the facial feature.
Optionally, in another possible implementation provided in the embodiments of the present application, different facial features may correspond to different densification processing parameters, so that appropriate densification processing parameters can be set according to the characteristics of each facial feature, as shown in the sketch below. For example, the interpolation step length and/or the distance threshold may differ among the three facial features of the eyes, the nose and the mouth.
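One way to realize per-feature parameters is a small lookup table keyed by feature name; the structure, the function and the numeric values below are purely illustrative and are not taken from the patent:

#include <map>
#include <string>

// Hypothetical per-feature densification processing parameters.
struct DensifyParams {
    double step;  // interpolation step length
    double d0;    // distance threshold for distortion-point rejection
};

std::map<std::string, DensifyParams> makeDefaultParams() {
    return {
        {"eyes",  {0.5, 4.0}},  // illustrative values only
        {"nose",  {1.0, 6.0}},
        {"mouth", {0.8, 5.0}},
    };
}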
Fig. 5 is a schematic diagram of the process of performing densification processing on the plurality of basic feature points corresponding to each facial feature according to an embodiment of the present application, and the image processing method of the embodiments can be implemented according to the steps shown in fig. 5.
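Chaining the sketches above gives the following per-frame routine, mirroring steps S110 to S130; Image and detectFacialFeatures are placeholders for the decoded video frame and for whatever detector supplies the basic feature points:

struct Image;  // placeholder for a decoded video frame

// Hypothetical detector: returns the basic feature points of each facial
// feature found in the frame (eyebrows, eyes, nose, mouth, contour).
std::vector<std::vector<Point>> detectFacialFeatures(const Image& frame);

// Densify every facial feature of one image frame to be processed.
std::vector<std::vector<Point>> processFrame(const Image& frame, int n,
                                             double step, double d0) {
    std::vector<std::vector<Point>> result;
    for (const auto& base : detectFacialFeatures(frame)) {     // steps S110-S120
        std::vector<double> coeff = fitPolynomial(base, n);
        std::vector<Point> dense = densify(base, coeff, step); // step S130
        result.push_back(removeDistortionPoints(dense, d0));
    }
    return result;
}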
Example two
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure, and as shown in fig. 6, the apparatus 60 may include a first processing module 61, a second processing module 62, and a third processing module 63, where:
the first processing module 61 is configured to determine an image frame to be processed in the video information, and detect whether a facial feature is included in the image frame to be processed;
the second processing module 62 is configured to, when the image frame to be processed includes one or more facial features, obtain a plurality of basic feature points corresponding to each facial feature, and determine a fitting polynomial coefficient corresponding to each facial feature according to the plurality of basic feature points corresponding to each facial feature;
the third processing module 63 is configured to perform densification processing on a plurality of basic feature points corresponding to each facial feature according to the fitting polynomial coefficient corresponding to each facial feature based on the obtained densification processing parameters.
Compared with the prior art, the device provided by the embodiments of the application performs densification processing on the plurality of basic feature points corresponding to each facial feature, based on the acquired densification processing parameters and according to the fitting polynomial coefficients corresponding to the facial features. The number of feature points can be greatly increased on the basis of the acquired basic feature points, which improves to a great extent the accuracy of specific processing of the facial features and the naturalness of the facial features after such processing, greatly improves the user experience, and makes processing based on facial feature points easy to popularize in more application scenarios, widening its range of application.
Specifically, fig. 7 is a detailed structural diagram of the image processing apparatus according to the embodiment of the present disclosure, and the apparatus 70 may include a first processing module 71, a second processing module 72, a third processing module 73, a distortion point eliminating module 74 and an output module 75. The functions implemented by the first processing module 71 in fig. 7 are the same as the first processing module 61 in fig. 6, the functions implemented by the second processing module 72 in fig. 7 are the same as the second processing module 62 in fig. 6, and the functions implemented by the third processing module 73 in fig. 7 are the same as the third processing module 63 in fig. 6, which are not repeated herein. The image processing apparatus shown in fig. 7 is described in detail below:
in particular, the second processing module 72 comprises a determination submodule 721 and a calculation submodule 722, as shown in fig. 7, wherein:
the determining submodule 721 is configured to determine an abscissa and an ordinate corresponding to each basic feature point of any facial feature, so as to obtain corresponding coordinate points;
the calculating submodule 722 is configured to calculate the sum of squared deviations of the coordinate points from the fitting curve, and to calculate partial derivatives with respect to each polynomial coefficient based on the sum of squared deviations according to the basic feature points, to determine the fitting polynomial coefficient of the facial feature.
Further, the densification processing parameters include an interpolation step length, and the third processing module 73 is specifically configured to insert corresponding feature points among the plurality of basic feature points of a facial feature according to the interpolation step length, based on the fitting polynomial coefficient corresponding to that facial feature, to obtain the dense feature points of the facial feature after densification processing.
Further, the densification processing parameter includes a distance threshold, and the apparatus further includes a distortion point elimination module 74, as shown in fig. 7, wherein:
the distorted point eliminating module 74 is configured to calculate a distance between a current dense feature point and a previous dense feature point; and detecting whether the distance is greater than a distance threshold value, determining the current dense feature points as distortion points when the distance is greater than the distance threshold value, and removing the distortion points.
Further, the apparatus further comprises an output module 75, as shown in fig. 7, wherein:
the output module 75 is configured to output the dense feature points of the facial feature after densification processing, from which the distortion points have been removed.
It should be noted that the present embodiment is an apparatus embodiment corresponding to the first embodiment (i.e., the method embodiment), and the present embodiment can be implemented in cooperation with the first embodiment. The related technical details mentioned in the first embodiment are still valid in this embodiment, and are not described herein again in order to reduce repetition. Accordingly, the related art details mentioned in the present embodiment can also be applied to the first embodiment.
EXAMPLE III
An embodiment of the present application provides an electronic device, as shown in fig. 8, an electronic device 800 shown in fig. 8 includes: a processor 801 and a memory 803. Wherein the processor 801 is coupled to a memory 803, such as via a bus 802. Further, the electronic device 800 may also include a transceiver 804. It should be noted that the transceiver 804 is not limited to one in practical applications, and the structure of the electronic device 800 is not limited to the embodiment of the present application.
The processor 801 is applied to the embodiment of the present application, and is used to implement the functions of the first processing module, the second processing module, and the third processing module shown in fig. 6 or fig. 7.
The processor 801 may be a CPU, a GPU, a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. It may implement or perform the various illustrative logical blocks, modules and circuits described in connection with this disclosure. The processor 801 may also be a combination of computing elements, e.g., one or more microprocessors, or a combination of a DSP and a microprocessor.
Bus 802 may include a path that transfers information between the above components. The bus 802 may be a PCI bus, an EISA bus, or the like, and may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in FIG. 8, but this does not mean there is only one bus or one type of bus.
The memory 803 may be, but is not limited to, a ROM or other type of static storage device that can store static information and instructions, a RAM or other type of dynamic storage device that can store information and instructions, an EEPROM, a CD-ROM or other optical disk storage, optical disk storage (including compact disk, laser disk, optical disk, digital versatile disk, blu-ray disk, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The memory 803 is used for storing application program code for performing the present solution and is controlled in execution by the processor 801. The processor 801 is configured to execute application program codes stored in the memory 803 to realize the actions of the image processing apparatus provided by the embodiment shown in fig. 6 or fig. 7.
The electronic device provided by the embodiments of the application includes a memory, a processor and a computer program stored on the memory and executable on the processor. Compared with the prior art, when the processor executes the program it performs densification processing on the plurality of basic feature points corresponding to each facial feature, based on the acquired densification processing parameters and according to the fitting polynomial coefficients corresponding to the facial features. The number of feature points can be greatly increased on the basis of the acquired basic feature points, which improves to a great extent the accuracy of specific processing of the facial features and the naturalness of the facial features after such processing, greatly improves the user experience, and makes processing based on facial feature points easy to popularize in more application scenarios, widening its range of application.
The embodiments of the present application provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the method shown in the first embodiment. Compared with the prior art, densification processing is performed on the plurality of basic feature points corresponding to each facial feature, based on the acquired densification processing parameters and according to the fitting polynomial coefficients corresponding to the facial features. The number of feature points can be greatly increased on the basis of the acquired basic feature points, which improves to a great extent the accuracy of specific processing of the facial features and the naturalness of the facial features after such processing, greatly improves the user experience, and makes processing based on facial feature points easy to popularize in more application scenarios, widening its range of application.
The computer-readable storage medium provided by the embodiment of the application is suitable for any embodiment of the method. And will not be described in detail herein.
The embodiment of the application further provides a computer program, and the computer program can be stored on a cloud or a local storage medium. When being executed by a computer or a processor, the computer program is used for executing the corresponding steps of any embodiment of the method and realizing the corresponding modules in the image processing device of the second embodiment of the application.
It should be understood that, although the steps in the flowcharts of the figures are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the order of execution is not strictly limited, and the steps may be performed in other orders. Moreover, at least some of the steps in the flowcharts may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and not necessarily in sequence; they may be performed in turn or alternately with other steps, or with at least some of the sub-steps or stages of other steps.
The foregoing describes only some embodiments of the present application. It should be noted that those skilled in the art can make several improvements and modifications without departing from the principle of the present application, and these improvements and modifications should also be regarded as falling within the protection scope of the present application.

Claims (10)

1. An image processing method, comprising:
determining an image frame to be processed in video information, and detecting whether a facial feature is included in the image frame to be processed;
if one or more facial features are included, acquiring a plurality of basic feature points corresponding to each facial feature, and determining a fitting polynomial coefficient corresponding to each facial feature according to the plurality of basic feature points corresponding to that facial feature;
and based on the acquired densification processing parameters corresponding to the facial features, performing densification processing on a plurality of basic feature points corresponding to the facial features according to fitting polynomial coefficients corresponding to the facial features respectively.
2. The method of claim 1, wherein determining fitting polynomial coefficients for any facial feature based on a plurality of base feature points for the any facial feature comprises:
determining the abscissa and the ordinate which respectively correspond to each basic feature point of any facial feature to obtain each corresponding coordinate point;
and calculating the sum of squared deviations of the coordinate points from the fitting curve, and calculating partial derivatives with respect to each polynomial coefficient based on the sum of squared deviations according to the basic feature points, to determine the fitting polynomial coefficient of the any facial feature.
3. The method according to claim 1 or 2, wherein the densification processing parameter includes an interpolation step length, and performing densification processing on a plurality of basic feature points corresponding to each facial feature according to a fitting polynomial coefficient corresponding to each facial feature based on the obtained densification processing parameter includes:
and based on the fitting polynomial coefficient corresponding to any facial feature, inserting corresponding feature points into a plurality of basic feature points corresponding to any facial feature according to the interpolation step length to obtain the dense feature points of any facial feature after the densification processing.
4. The method according to claim 3, wherein the densification processing parameter comprises a distance threshold value, and after the inserting of the corresponding feature points into the plurality of base feature points corresponding to the any facial feature, further comprises:
calculating the distance between the current dense feature point and the previous dense feature point;
detecting whether the distance is greater than the distance threshold value;
and if so, determining the current dense feature points as distortion points, and eliminating the distortion points.
5. The method according to claim 4, further comprising, after the inserting of the respective feature points for a plurality of base feature points corresponding to the any of the facial features:
and outputting the dense feature points of the any facial feature after the densification processing, from which the distortion points have been removed.
6. An image processing apparatus characterized by comprising:
the device comprises a first processing module, a second processing module and a third processing module, wherein the first processing module is used for determining an image frame to be processed in video information and detecting whether a facial feature is included in the image frame to be processed;
the second processing module is used for acquiring a plurality of basic feature points corresponding to each facial feature when the image frame to be processed includes one or more facial features, and determining a fitting polynomial coefficient corresponding to each facial feature according to the plurality of basic feature points corresponding to that facial feature;
and the third processing module is used for carrying out densification processing on a plurality of basic feature points corresponding to each facial feature according to the fitting polynomial coefficient corresponding to each facial feature based on the acquired densification processing parameters corresponding to each facial feature.
7. The apparatus of claim 6, wherein the second processing module comprises a determining submodule and a calculating submodule;
the determining submodule is used for determining the abscissa and the ordinate which correspond to each basic feature point of any facial feature respectively to obtain each corresponding coordinate point;
and the calculating submodule is used for calculating the sum of squared deviations of the coordinate points from a fitting curve, and calculating partial derivatives with respect to each polynomial coefficient based on the sum of squared deviations according to the basic feature points, to determine the fitting polynomial coefficient of the any facial feature.
8. The apparatus according to claim 6 or 7, wherein the densification processing parameter includes an interpolation step length, and the third processing module is specifically configured to perform, based on a fitting polynomial coefficient corresponding to any facial feature, insertion of corresponding feature points for a plurality of basic feature points corresponding to the any facial feature according to the interpolation step length, to obtain densified feature points of the any facial feature after densification processing.
9. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the image processing method according to any of claims 1 to 5 when executing the program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a computer program which, when being executed by a processor, implements the image processing method of any one of claims 1 to 5.
CN201910285762.3A 2019-04-10 2019-04-10 Image processing method, image processing device, electronic equipment and computer readable storage medium Active CN110008911B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910285762.3A CN110008911B (en) 2019-04-10 2019-04-10 Image processing method, image processing device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910285762.3A CN110008911B (en) 2019-04-10 2019-04-10 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110008911A (en) 2019-07-12
CN110008911B (en) 2021-08-17

Family

ID=67170854

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910285762.3A Active CN110008911B (en) 2019-04-10 2019-04-10 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN110008911B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111083513B (en) * 2019-12-25 2022-02-22 广州酷狗计算机科技有限公司 Live broadcast picture processing method and device, terminal and computer readable storage medium

Citations (2)

Publication number Priority date Publication date Assignee Title
CN109522869A (en) * 2018-11-30 2019-03-26 深圳市脸萌科技有限公司 Face image processing process, device, terminal device and computer storage medium
CN109544444A (en) * 2018-11-30 2019-03-29 深圳市脸萌科技有限公司 Image processing method, device, electronic equipment and computer storage medium

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
JP4532419B2 (en) * 2006-02-22 2010-08-25 富士フイルム株式会社 Feature point detection method, apparatus, and program
CN101339670B (en) * 2008-08-07 2010-06-09 浙江工业大学 Computer auxiliary three-dimensional craniofacial rejuvenation method
CN103116902A (en) * 2011-11-16 2013-05-22 华为软件技术有限公司 Three-dimensional virtual human head image generation method, and method and device of human head image motion tracking
CN104616347A (en) * 2015-01-05 2015-05-13 掌赢信息科技(上海)有限公司 Expression migration method, electronic equipment and system
CN104778712B (en) * 2015-04-27 2018-05-01 厦门美图之家科技有限公司 A kind of face chart pasting method and system based on affine transformation
CN106127104A (en) * 2016-06-06 2016-11-16 安徽科力信息产业有限责任公司 Prognoses system based on face key point and method thereof under a kind of Android platform
CN108022308A (en) * 2017-11-30 2018-05-11 深圳市唯特视科技有限公司 A kind of facial alignment schemes based on three-dimensional face model fitting
CN109409262A (en) * 2018-10-11 2019-03-01 北京迈格威科技有限公司 Image processing method, image processing apparatus, computer readable storage medium
CN109584180A (en) * 2018-11-30 2019-04-05 深圳市脸萌科技有限公司 Face image processing process, device, electronic equipment and computer storage medium

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN109522869A (en) * 2018-11-30 2019-03-26 深圳市脸萌科技有限公司 Face image processing process, device, terminal device and computer storage medium
CN109544444A (en) * 2018-11-30 2019-03-29 深圳市脸萌科技有限公司 Image processing method, device, electronic equipment and computer storage medium

Non-Patent Citations (1)

Title
Ioannis Sotiropoulos et al., "A fast parallel matrix multiplication reconfigurable unit utilized in face recognition systems," 2009 International Conference on Field Programmable Logic and Applications, 2009-09-29, pp. 276-281. *

Also Published As

Publication number Publication date
CN110008911A (en) 2019-07-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant