CN112686171A - Data processing method, electronic equipment and related product - Google Patents
Data processing method, electronic equipment and related product
- Publication number
- CN112686171A CN112686171A CN202011639072.2A CN202011639072A CN112686171A CN 112686171 A CN112686171 A CN 112686171A CN 202011639072 A CN202011639072 A CN 202011639072A CN 112686171 A CN112686171 A CN 112686171A
- Authority
- CN
- China
- Prior art keywords
- training set
- time
- model
- neural network
- parameter model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a data processing method, an electronic device, and related products, applied to the electronic device, wherein the method includes the following steps: inputting a first training set into a first neural network model for operation to obtain a first parameter model; inputting a second training set into a second neural network model for operation to obtain a second parameter model; operating on the second training set according to the first parameter model to obtain a second reference training set; operating on the first training set according to the second parameter model to obtain a first reference training set; and inputting the first reference training set into the first parameter model for operation, inputting the second reference training set into the second parameter model for operation, and taking the neural network model that converges on the first reference training set and the second reference training set as the trained neural network model. By adopting the method and device, the accuracy of a neural network model can be improved in an unsupervised learning manner.
Description
Technical Field
The present application relates to the field of image processing technologies, and in particular, to a data processing method, an electronic device, and a related product.
Background
Research on face recognition long progressed slowly, because training a face recognition model to an ideal accuracy usually requires large-scale, even hundred-million-level, data. The open-sourcing of many large manually labeled public data sets has undoubtedly promoted the rapid development of face recognition and brought accuracy gains to the field. In recent years, face recognition has been studied more and more widely, and has been applied in various fields such as surveillance scenes, community access control, and mobile phones. However, in practical applications, even a model well trained on a large-scale public data set often suffers a significant drop in accuracy when deployed directly to a new scene, because the scene differences are large. To improve the generalization ability of the model, face data collection, classification, and manual screening usually have to be performed in the new scene; as the data scale grows, the cost of manual screening rises accordingly until manual screening becomes infeasible, yet without it the model accuracy degrades. Therefore, how to improve the accuracy of a neural network model is a problem to be solved urgently.
Disclosure of Invention
The embodiment of the application provides a data processing method, electronic equipment and related products, and can improve the precision of a neural network model.
In a first aspect, an embodiment of the present application provides a data processing method applied to an electronic device, where the method includes:
acquiring an initial training set for a human face;
determining a first training set and a second training set based on the initial training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the first neural network model and the second neural network model have the same network structure but different model parameters;
executing the following steps S1-S4 N times, wherein N is a positive integer:
S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, wherein i is a positive integer;
S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model;
S3, operating on the (i-1)-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the (i-1)-th first neural network model for operation to obtain the i-th first parameter model;
S4, operating on the (i-1)-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the (i-1)-th second neural network model for operation to obtain the i-th second parameter model;
and taking the more convergent of the N-th first parameter model and the N-th second parameter model as the trained neural network model.
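The loop in steps S1-S4 can be sketched as follows. This is an illustrative reading in which each parameter model is maintained as a running (temporal) average of its network's parameters and each network is retrained on the training set relabeled by the other network's parameter model; all helper names and the dict-of-scalars parameter representation are assumptions for illustration, not the patent's API:

```python
# Illustrative sketch of the N-round loop in steps S1-S4.  The helper
# names (ema, pseudo_label, train) and the dict-of-floats parameter
# representation are assumptions, not part of the original disclosure.

def ema(avg_params, cur_params, alpha=0.999):
    """S1/S2: build the i-th parameter model as a running average of the
    (i-1)-th parameter model and the i-th network's parameters."""
    return {k: alpha * avg_params[k] + (1.0 - alpha) * cur_params[k]
            for k in cur_params}

def mutual_training(net1, net2, set1, set2, pseudo_label, train,
                    n_rounds, alpha=0.999):
    """Run steps S1-S4 N times: each network is retrained on the training
    set relabeled by the *other* network's averaged parameter model."""
    avg1, avg2 = dict(net1), dict(net2)
    for _ in range(n_rounds):
        avg1 = ema(avg1, net1, alpha)      # S1: i-th first parameter model
        avg2 = ema(avg2, net2, alpha)      # S2: i-th second parameter model
        ref1 = pseudo_label(avg2, set1)    # S3: relabel set 1 with model 2
        net1 = train(net1, ref1)
        ref2 = pseudo_label(avg1, set2)    # S4: relabel set 2 with model 1
        net2 = train(net2, ref2)
    return avg1, avg2
```

In a real system, `pseudo_label` would cluster the features produced by the averaged model and `train` would run gradient descent; here they are pluggable callables so the round structure of S1-S4 stays visible.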
In a second aspect, an embodiment of the present application provides a data processing apparatus, which is applied to an electronic device, and the apparatus includes: an acquisition unit, a determining unit, an operation unit and an execution unit, wherein,
the acquisition unit is used for acquiring an initial training set for a human face;
the determining unit is used for determining a first training set and a second training set based on the initial training set;
the operation unit is used for inputting the first training set into a first neural network model for operation to obtain a first parameter model;
the operation unit is further configured to input the second training set into a second neural network model for operation to obtain a second parameter model, where the first neural network model and the second neural network model have the same network structure but different model parameters;
the execution unit is configured to execute the following steps S1-S4 N times, where N is a positive integer:
S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, wherein i is a positive integer;
S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model;
S3, operating on the (i-1)-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the (i-1)-th first neural network model for operation to obtain the i-th first parameter model;
S4, operating on the (i-1)-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the (i-1)-th second neural network model for operation to obtain the i-th second parameter model;
and the determining unit is used for taking the more convergent of the N-th first parameter model and the N-th second parameter model as the trained neural network model.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the program includes instructions for executing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that the data processing method, the electronic device, and the related products described in the embodiments of the present application are applied to an electronic device: an initial training set for a human face is obtained; a first training set and a second training set are determined based on the initial training set; the first training set is input into a first neural network model for operation to obtain a first parameter model; the second training set is input into a second neural network model for operation to obtain a second parameter model, where the first neural network model and the second neural network model have the same network structure but different model parameters; the following steps S1-S4 are performed N times, where N is a positive integer: S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, i being a positive integer; S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model; S3, operating on the (i-1)-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the (i-1)-th first neural network model for operation to obtain the i-th first parameter model; S4, operating on the (i-1)-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the (i-1)-th second neural network model for operation to obtain the i-th second parameter model; finally, the more convergent of the N-th first parameter model and the N-th second parameter model is taken as the trained neural network model. Because each parameter model is the accumulated average of its network's model parameters over a past period, the two networks are more strongly decoupled, which improves the stability of cooperative supervision and makes their outputs more independent and complementary. Using the past average model of the complementary network to generate supervision better correlates the pseudo-label predictions of the two networks and better avoids error amplification and overfitting, so a high-accuracy neural network model can be obtained, which is conducive to improving the accuracy of the neural network model.
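A common way to realize such an accumulated average of model parameters (as in mean-teacher-style methods; the smoothing coefficient is illustrative, since this excerpt does not fix the exact coefficients) is the exponential moving average

$$E^{(i)} = \alpha\, E^{(i-1)} + (1-\alpha)\,\theta^{(i)}, \qquad 0 < \alpha < 1,$$

where $\theta^{(i)}$ denotes the parameters of the i-th network and $E^{(i)}$ the i-th parameter model. A large $\alpha$ makes the parameter model change slowly, which is what decouples it from the instantaneous network and stabilizes the supervision it provides.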
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 1B is a schematic flowchart of a data processing method according to an embodiment of the present application;
fig. 1C is a schematic flowchart of a data processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of another data processing method provided in the embodiments of the present application;
fig. 3 is a schematic structural diagram of another electronic device provided in an embodiment of the present application;
fig. 4 is a block diagram of functional units of a data processing apparatus according to an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus in one possible example.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The electronic device according to the embodiment of the present application may be a handheld device, an intelligent robot, a vehicle-mounted device, a wearable device, a computing device or other processing devices connected to a wireless modem, and various forms of User Equipment (UE), a mobile station (mobile station, MS), a terminal device (terminal device), and the like, and the electronic device may also be a server or an intelligent home device.
In the embodiment of the application, the smart home device may be at least one of the following: a refrigerator, a washing machine, a rice cooker, smart curtains, a smart lamp, a smart bed, a smart trash can, a microwave oven, a steamer, an air conditioner, a range hood, a server, a smart door, a smart window, a wardrobe, a smart speaker, smart furniture, a smart chair, a smart clothes hanger, a smart shower, a water dispenser, a water purifier, an air purifier, a doorbell, a monitoring system, a smart garage, a television, a projector, a smart dining table, a smart sofa, a massage chair, a treadmill, and the like; of course, other devices may also be included.
As shown in fig. 1A, fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure. The electronic device includes a processor, a memory, a signal processor, a transceiver, a display screen, a speaker, a microphone, a Random Access Memory (RAM), a camera, a sensor, a network module, and the like. The memory, the signal processor, the speaker, the microphone, the RAM, the camera, the sensor, and the network module are connected to the processor, and the transceiver is connected to the signal processor.
The Processor is a control center of the electronic device, connects various parts of the whole electronic device by using various interfaces and lines, executes various functions and processes data of the electronic device by running or executing software programs and/or modules stored in the memory and calling the data stored in the memory, thereby performing overall monitoring on the electronic device, and may be a Central Processing Unit (CPU), a Graphics Processing Unit (GPU) or a Network Processing Unit (NPU).
Further, the processor may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes various functional applications and data processing of the electronic equipment by operating the software programs and/or modules stored in the memory. The memory mainly comprises a program storage area and a data storage area, wherein the program storage area can store an operating system, a software program required by at least one function and the like; the storage data area may store data created according to use of the electronic device, and the like. Further, the memory may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
Wherein the sensor comprises at least one of: light-sensitive sensors, gyroscopes, infrared proximity sensors, vibration detection sensors, pressure sensors, etc. Among them, the light sensor, also called an ambient light sensor, is used to detect the ambient light brightness. The light sensor may include a light sensitive element and an analog to digital converter. The photosensitive element is used for converting collected optical signals into electric signals, and the analog-to-digital converter is used for converting the electric signals into digital signals. Optionally, the light sensor may further include a signal amplifier, and the signal amplifier may amplify the electrical signal converted by the photosensitive element and output the amplified electrical signal to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The camera may be a visible light camera (general view angle camera, wide angle camera), an infrared camera, or a dual camera (having a distance measurement function), which is not limited herein.
The network module may be at least one of: a bluetooth module, a wireless fidelity (Wi-Fi), etc., which are not limited herein.
Based on the electronic device described in fig. 1A, the following data processing method can be executed, and the specific steps are as follows:
acquiring an initial training set for a human face;
determining a first training set and a second training set based on the initial training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the first neural network model and the second neural network model have the same network structure but different model parameters;
executing the following steps S1-S4 N times, wherein N is a positive integer:
S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, wherein i is a positive integer;
S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model;
S3, operating on the (i-1)-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the (i-1)-th first neural network model for operation to obtain the i-th first parameter model;
S4, operating on the (i-1)-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the (i-1)-th second neural network model for operation to obtain the i-th second parameter model;
and taking the more convergent of the N-th first parameter model and the N-th second parameter model as the trained neural network model.
It can be seen that, in the electronic device described in this embodiment of the present application, an initial training set for a human face is obtained; a first training set and a second training set are determined based on the initial training set; the first training set is input into a first neural network model for operation to obtain a first parameter model; the second training set is input into a second neural network model for operation to obtain a second parameter model, where the first neural network model and the second neural network model have the same network structure but different model parameters; the following steps S1-S4 are performed N times, where N is a positive integer: S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, i being a positive integer; S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model; S3, operating on the (i-1)-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the (i-1)-th first neural network model for operation to obtain the i-th first parameter model; S4, operating on the (i-1)-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the (i-1)-th second neural network model for operation to obtain the i-th second parameter model; finally, the more convergent of the N-th first parameter model and the N-th second parameter model is taken as the trained neural network model. Because each parameter model is the accumulated average of its network's model parameters over a past period, the two networks are more strongly decoupled, which improves the stability of cooperative supervision and makes their outputs more independent and complementary. Using the past average model of the complementary network to generate supervision better correlates the pseudo-label predictions of the two networks and better avoids error amplification and overfitting, so a high-accuracy neural network model can be obtained and the accuracy of the neural network model is improved.
Referring to fig. 1B, fig. 1B is a schematic flowchart of a data processing method according to an embodiment of the present application, and as shown in the drawing, the data processing method is applied to the electronic device shown in fig. 1A, and includes:
101. an initial training set for a face is obtained.
In the embodiment of the present application, the initial training set may include a plurality of face images. The electronic device may obtain the face images to be processed from a cloud server or locally (e.g., from an album). In a specific implementation, the electronic device can collect face data at large scale from a surveillance scene, perform preprocessing such as detection and alignment on the faces, then cluster the collected data using the K-means clustering algorithm and assign a pseudo label to each picture to make a training set.
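The clustering-based pseudo-labeling described above can be sketched with a minimal K-means over face embeddings; the feature extractor, the number of clusters `k`, and the iteration budget are illustrative assumptions, not values fixed by the disclosure:

```python
import numpy as np

def kmeans_pseudo_labels(features, k, n_iter=20, seed=0):
    """Cluster face embeddings with K-means and use each image's cluster
    index as its pseudo label (a sketch; in practice the embeddings come
    from a pretrained face recognition network)."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct embeddings
    centers = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(n_iter):
        # assign every embedding to its nearest center
        dists = np.linalg.norm(features[:, None, :] - centers[None, :, :],
                               axis=2)
        labels = dists.argmin(axis=1)
        # move each center to the mean of its assigned embeddings
        for c in range(k):
            if np.any(labels == c):
                centers[c] = features[labels == c].mean(axis=0)
    return labels
```

In a deployment one would use an optimized implementation (e.g. scikit-learn's `KMeans`) and choose `k` from the expected number of identities in the scene.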
In one possible example, the step 101 of obtaining an initial training set for a human face may include the following steps:
11. acquiring an initial face image set;
12. performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
13. and selecting a facial image quality evaluation value larger than a preset image quality evaluation value from the plurality of facial image quality evaluation values, and taking a facial image corresponding to the facial image quality evaluation value as the initial training set.
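The selection in steps 11-13 amounts to thresholding the quality scores; a minimal sketch (the function name and the example threshold are illustrative):

```python
def filter_by_quality(images, scores, threshold):
    """Steps 11-13: keep only the face images whose quality evaluation
    value is greater than the preset image quality evaluation value."""
    return [img for img, score in zip(images, scores) if score > threshold]
```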
In this embodiment, the preset image quality evaluation value may be pre-stored in the electronic device, and may be set by the user or default by the system.
In a specific implementation, the electronic device may acquire an initial face image set, and may perform image quality evaluation on each face image in the face image set using at least one image quality evaluation index to obtain a plurality of face image quality evaluation values, where the image quality evaluation index may be at least one of the following: face deviation degree, face integrity degree, definition, feature point distribution density, average gradient, information entropy, signal-to-noise ratio, and the like, which are not limited herein. Furthermore, the electronic device may select, from the plurality of face image quality evaluation values, those greater than the preset image quality evaluation value, and use the corresponding face images as the initial training set. The face deviation degree is the degree of deviation between the face angle in the image and the face angle of a frontal face, and the face integrity degree is the ratio of the area of the face in the image to the area of the complete face.
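Two of the indices above, information entropy and average gradient, can be computed directly from a grayscale image. The following is a minimal sketch using standard formulations; the excerpt does not fix the exact definitions, so these are assumptions:

```python
import numpy as np

def information_entropy(gray):
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram;
    higher entropy roughly means richer intensity content."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins
    return float(-(p * np.log2(p)).sum())

def average_gradient(gray):
    """Mean magnitude of the horizontal/vertical intensity gradients,
    a simple sharpness (definition) proxy."""
    gy, gx = np.gradient(gray.astype(float))
    return float(np.hypot(gx, gy).mean())
```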
In one possible example, in step 12, performing image quality evaluation on each facial image in the facial image set to obtain a plurality of facial image quality evaluation values, the method may include the following steps:
121. acquiring a target face deviation degree of a face image i, a target face integrity degree of the face image i, a target feature point distribution density and a target information entropy of the face image i, wherein the face image i is any one face image in the face image set;
122. when the target face deviation degree is greater than a preset deviation degree and the target face integrity degree is greater than a preset integrity degree, determining a target first reference evaluation value corresponding to the target face deviation degree according to a mapping relation between the preset face deviation degree and the first reference evaluation value;
123. determining a target second reference evaluation value corresponding to the target face integrity according to a preset mapping relation between the face integrity and the second reference evaluation value;
124. determining a target weight pair corresponding to the target feature point distribution density according to a preset mapping relation between the feature point distribution density and the weight pair, wherein the target weight pair comprises a target first weight and a target second weight, the target first weight is a weight corresponding to the first reference evaluation value, and the target second weight is a weight corresponding to the second reference evaluation value;
125. performing weighted operation according to the target first weight, the target second weight, the target first reference evaluation value and the target second reference evaluation value to obtain a first reference evaluation value;
126. determining a first image quality evaluation value corresponding to the target feature point distribution density according to a preset mapping relation between the feature point distribution density and the image quality evaluation value;
127. determining a target image quality deviation value corresponding to the target information entropy according to a mapping relation between a preset information entropy and an image quality deviation value;
128. acquiring a first shooting parameter of the face image i;
129. determining a target optimization coefficient corresponding to the first shooting parameter according to a mapping relation between preset shooting parameters and optimization coefficients;
130. adjusting the first image quality evaluation value according to the target optimization coefficient and the target image quality deviation value to obtain a second reference evaluation value;
131. acquiring a target environment parameter corresponding to the face image i;
132. determining a target weight coefficient pair corresponding to the target environment parameter according to a mapping relation between preset environment parameters and the weight coefficient pair, wherein the target weight coefficient pair comprises a target first weight coefficient and a target second weight coefficient, the target first weight coefficient is a weight coefficient corresponding to the first reference evaluation value, and the target second weight coefficient is a weight coefficient corresponding to the second reference evaluation value;
133. and performing weighting operation according to the target first weight coefficient, the target second weight coefficient, the first reference evaluation value and the second reference evaluation value to obtain a face image quality evaluation value of the face image i.
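The weighted operations in steps 125, 130, and 133 can be sketched as follows. The helper names and the additive form of the step-130 adjustment are assumptions, since this excerpt only names the inputs of each operation and states that each weight pair sums to 1:

```python
def combined_quality_score(ref1, ref2, weight_pair):
    """Steps 125/133: weighted combination of two reference evaluation
    values; the weights come from a preset mapping and sum to 1."""
    w1, w2 = weight_pair
    assert abs(w1 + w2 - 1.0) < 1e-9   # weight pairs sum to 1 per the text
    return w1 * ref1 + w2 * ref2

def adjust_quality(base_score, optimization_coeff, deviation):
    """Step 130: adjust the first image quality evaluation value with the
    optimization coefficient and the image quality deviation value (one
    plausible reading; the patent text does not fix the exact formula)."""
    return base_score + optimization_coeff * deviation
```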
In the embodiment of the application, the preset deviation degree and the preset integrity degree can be set by the user or defaulted by the system; a face can be successfully recognized only when its deviation degree and integrity degree are within a certain range. The electronic device may pre-store a mapping relationship between the preset face deviation degree and the first reference evaluation value, a mapping relationship between the preset face integrity degree and the second reference evaluation value, and a mapping relationship between the preset feature point distribution density and the weight pair, where the weight pair may include a first weight and a second weight, the sum of the first weight and the second weight is 1, the first weight is the weight corresponding to the first reference evaluation value, and the second weight is the weight corresponding to the second reference evaluation value. The electronic device may further pre-store a mapping relationship between the preset feature point distribution density and the image quality evaluation value, a mapping relationship between the preset information entropy and the image quality deviation value, a mapping relationship between the preset shooting parameter and the optimization coefficient, and a mapping relationship between the preset environment parameter and the weight coefficient pair. The weight coefficient pair may include a first weight coefficient and a second weight coefficient, the first weight coefficient is the weight coefficient corresponding to the first reference evaluation value, the second weight coefficient is the weight coefficient corresponding to the second reference evaluation value, and the sum of the first weight coefficient and the second weight coefficient is 1.
The value range of the image quality evaluation value can be 0 to 1, or 0 to 100. The image quality deviation value may be a positive real number, for example 0 to 1, or may be greater than 1. The value range of the optimization coefficient can be -1 to 1; for example, the optimization coefficient can be -0.1 to 0.1. In the embodiment of the present application, the shooting parameter may be at least one of the following: exposure time, shooting mode, sensitivity ISO, white balance parameters, focal length, focus, region of interest, etc., without limitation. The environment parameter may be at least one of the following: ambient brightness, ambient temperature, ambient humidity, weather, atmospheric pressure, magnetic field interference strength, etc., without limitation.
In specific implementation, taking a face image i as an example, the face image i is any face image in a face image set, and the electronic device may obtain a target face deviation degree of the face image i, a target face integrity degree of the face image i, a target feature point distribution density of the face image i, and a target information entropy, where the target feature point distribution density may be a ratio between a total number of feature points of the face image i and an area of the face image i.
Furthermore, when the target face deviation degree is greater than the preset deviation degree and the target face integrity degree is greater than the preset integrity degree, the electronic device may determine a target first reference evaluation value corresponding to the target face deviation degree according to the mapping relationship between the preset face deviation degree and the first reference evaluation value, determine a target second reference evaluation value corresponding to the target face integrity degree according to the mapping relationship between the preset face integrity degree and the second reference evaluation value, and determine a target weight pair corresponding to the target feature point distribution density according to the mapping relationship between the preset feature point distribution density and the weight pair, where the target weight pair includes a target first weight and a target second weight, the target first weight being the weight corresponding to the first reference evaluation value and the target second weight being the weight corresponding to the second reference evaluation value. Then, a weighted operation may be performed on the target first weight, the target second weight, the target first reference evaluation value, and the target second reference evaluation value to obtain the first reference evaluation value, where the specific calculation formula is as follows:
first reference evaluation value = target first reference evaluation value × target first weight + target second reference evaluation value × target second weight
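The weighted combination above can be sketched as follows. This is an illustrative sketch only: the mapping table and all numeric values are assumptions for demonstration, not values from the embodiment.

```python
# Illustrative sketch of the weighted first reference evaluation value.
# The mapping table and its values are assumptions for demonstration,
# not values taken from the embodiment.

def first_reference_evaluation(deviation_eval, integrity_eval, weight_pair):
    """eval = first weight * deviation eval + second weight * integrity eval."""
    w1, w2 = weight_pair
    assert abs(w1 + w2 - 1.0) < 1e-9  # the weight pair must sum to 1
    return w1 * deviation_eval + w2 * integrity_eval

# hypothetical mapping: feature point distribution density -> weight pair
density_to_weight_pair = {"low": (0.6, 0.4), "high": (0.4, 0.6)}

value = first_reference_evaluation(0.8, 0.9, density_to_weight_pair["high"])
```

In a real implementation the weight pair would be looked up from the pre-stored mapping between feature point distribution density and weight pairs described above.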
In this way, the image quality can be evaluated in terms of the human face angle and the human face integrity.
Further, the electronic device may determine a first image quality evaluation value corresponding to the target feature point distribution density according to the mapping relationship between the preset feature point distribution density and the image quality evaluation value, and determine a target image quality deviation value corresponding to the target information entropy according to the mapping relationship between the preset information entropy and the image quality deviation value. Because some noise arises from external (weather, light, angle, jitter, etc.) or internal (system, GPU) causes when an image is generated, and this noise affects the image quality, the evaluation can be adjusted to a certain degree to ensure an objective evaluation of the image quality.
Further, the electronic device may obtain a first shooting parameter of the face image i and determine a target optimization coefficient corresponding to the first shooting parameter according to the mapping relationship between preset shooting parameters and optimization coefficients. Because the shooting parameter settings can also influence the image quality evaluation, the influence component of the shooting parameters on the image quality needs to be determined. Finally, the first image quality evaluation value is adjusted according to the target optimization coefficient and the target image quality deviation value to obtain a second reference evaluation value, which may be obtained according to the following formulas.
When the image quality evaluation value is on a percentile (0-100) scale, the specific calculation formula is as follows:
second reference evaluation value = (first image quality evaluation value + target image quality deviation value) × (1 + target optimization coefficient)
When the image quality evaluation value is in the range 0 to 1, the specific calculation formula is as follows:
second reference evaluation value = first image quality evaluation value × (1 + target image quality deviation value) × (1 + target optimization coefficient)
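The two formulas above can be sketched as one function. Which branch applies to which value scale follows the surrounding text, and the sample inputs are arbitrary assumptions:

```python
# Sketch of the two formulas for the second reference evaluation value.
# Which branch applies depends on the scale of the evaluation value;
# the sample inputs are arbitrary.

def second_reference_evaluation(quality_eval, deviation, opt_coeff, percentile=True):
    if percentile:
        # 0-100 scale: additive deviation, then the optimization coefficient
        return (quality_eval + deviation) * (1 + opt_coeff)
    # 0-1 scale: multiplicative deviation, then the optimization coefficient
    return quality_eval * (1 + deviation) * (1 + opt_coeff)

v_percentile = second_reference_evaluation(80.0, 2.0, 0.1)
v_fraction = second_reference_evaluation(0.8, 0.05, 0.1, percentile=False)
```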
Further, the electronic device may acquire a target environment parameter corresponding to the face image i, and determine a target weight coefficient pair corresponding to the target environment parameter according to a mapping relationship between a preset environment parameter and the weight coefficient pair, where the target weight coefficient pair includes a target first weight coefficient and a target second weight coefficient, the target first weight coefficient is a weight coefficient corresponding to the first reference evaluation value, and the target second weight coefficient is a weight coefficient corresponding to the second reference evaluation value, and further, may perform a weighting operation according to the target first weight coefficient, the target second weight coefficient, the first reference evaluation value, and the second reference evaluation value to obtain a face image quality evaluation value of the face image i, where a specific calculation formula is as follows:
face image quality evaluation value of the face image i = first reference evaluation value × target first weight coefficient + second reference evaluation value × target second weight coefficient
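A minimal sketch of this final weighting. In the embodiment the weight coefficient pair would be looked up from the environment-parameter mapping; here it is passed in directly, and all numbers are arbitrary:

```python
# Minimal sketch of the final face image quality weighting. The weight
# coefficient pair is passed in directly instead of being looked up from
# the environment-parameter mapping; the numeric inputs are arbitrary.

def face_image_quality(first_ref_eval, second_ref_eval, weight_coeff_pair):
    c1, c2 = weight_coeff_pair
    assert abs(c1 + c2 - 1.0) < 1e-9  # the coefficients must sum to 1
    return first_ref_eval * c1 + second_ref_eval * c2

score = face_image_quality(0.86, 0.924, (0.5, 0.5))
```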
Therefore, the image quality can be objectively evaluated by combining the influences of internal and external environment factors, shooting setting factors, human face angles, integrity and the like, and the evaluation accuracy of the human face image quality is improved.
102. Based on the initial training set, a first training set and a second training set are determined.
The electronic device may copy the initial training set once to obtain the first training set and the second training set, or may process the initial training set in two different ways to obtain the first training set and the second training set. In a specific implementation, for example, a training set x_n can be made, and then the training set x_n is copied to make a training set x_m.
Optionally, the step 102 of determining the first training set and the second training set based on the initial training set may include the following steps:
21. performing first enhancement processing on the initial training set to obtain a first training set;
22. and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
The electronic device may perform first enhancement processing on the initial training set by using a first enhancement algorithm to obtain a first training set, where the first enhancement algorithm may be at least one of: gray scale stretching, histogram equalization, smoothing, filtering, noise reduction, and the like, without limitation.
In addition, the electronic device may also perform second enhancement processing on the initial training set by using a second enhancement algorithm to obtain a second training set, where the second enhancement algorithm may be at least one of: gray scale stretching, histogram equalization, smoothing, filtering, noise reduction, and the like, without limitation.
Wherein the first enhancement processing and the second enhancement processing have different enhancement effects.
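As an illustrative sketch, the two enhancement passes might look as follows, using gray-scale stretching for the first training set and histogram equalization for the second (two of the options listed above). The tiny 2×2 image is a placeholder, and a non-constant image is assumed for the stretch:

```python
import numpy as np

# Illustrative sketch: two different enhancement passes over the same
# initial set, producing the first and second training sets. The specific
# pairing of algorithms is an assumption, not fixed by the embodiment.

def gray_stretch(img):
    """Linearly stretch pixel values to the full 0-255 range (assumes a
    non-constant image, so max > min)."""
    lo, hi = img.min(), img.max()
    return ((img - lo) * 255.0 / (hi - lo)).astype(np.uint8)

def hist_equalize(img):
    """Classic histogram equalization via the cumulative distribution."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) * 255.0 / (cdf.max() - cdf.min())
    return cdf[img].astype(np.uint8)

initial_set = [np.array([[50, 100], [150, 200]], dtype=np.uint8)]
first_training_set = [gray_stretch(x) for x in initial_set]
second_training_set = [hist_equalize(x) for x in initial_set]
```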
103. And inputting the first training set into a first neural network model for operation to obtain a first parameter model.
Wherein the first neural network model may be at least one of: convolutional neural network models, impulse neural network models, fully-connected neural network models, recurrent neural network models, and the like, without limitation. In specific implementation, the electronic device may input the first training set into the first neural network model for operation to obtain the first parameter model.
104. And inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the first neural network model and the second neural network model have the same network structure but different model parameters.
Wherein the second neural network model may be at least one of: convolutional neural network models, impulse neural network models, fully-connected neural network models, recurrent neural network models, and the like, without limitation. In a specific implementation, the electronic device may input the second training set into the second neural network model for operation to obtain the second parameter model.
In a specific implementation, the first neural network model and the second neural network model have the same network structure but different model parameters. The model parameters may be initialization parameters for each layer of the neural network model, such as initialization parameters for normal distribution random initialization or uniform distribution random initialization.
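A toy sketch of "same network structure, different model parameters": two parameter lists share the layer shapes, but one uses normal-distribution random initialization and the other uniform-distribution random initialization. The layer sizes and seeds are arbitrary assumptions:

```python
import numpy as np

# Toy sketch: identical structure (same layer shapes), different
# initialization parameters. Layer sizes and seeds are arbitrary.

layer_shapes = [(4, 8), (8, 2)]  # the shared network structure

rng1 = np.random.default_rng(1)
rng2 = np.random.default_rng(2)

model1 = [rng1.normal(0.0, 0.1, size=s) for s in layer_shapes]    # normal init
model2 = [rng2.uniform(-0.1, 0.1, size=s) for s in layer_shapes]  # uniform init

same_structure = [a.shape for a in model1] == [b.shape for b in model2]
different_params = any((a != b).any() for a, b in zip(model1, model2))
```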
105. Executing the following steps S1-S4 N times, wherein N is a positive integer: S1, constructing the first parameter model of the i-th time according to the first parameter model of the (i-1)-th time and the model parameters of the first neural network model of the i-th time, wherein i is a positive integer; S2, constructing the second parameter model of the i-th time according to the second parameter model of the (i-1)-th time and the model parameters of the second neural network model of the i-th time; S3, performing operation on the first training set of the (i-1)-th time according to the second parameter model of the i-th time to obtain the first reference training set of the i-th time, and inputting the first reference training set of the i-th time into the first neural network model of the (i-1)-th time for operation to obtain the first parameter model of the i-th time; and S4, performing operation on the second training set of the (i-1)-th time according to the first parameter model of the i-th time to obtain the second reference training set of the i-th time, and inputting the second reference training set of the i-th time into the second neural network model of the (i-1)-th time for operation to obtain the second parameter model of the i-th time.
In a specific implementation, N may be set by the user or defaulted by the system; the larger N is, the higher the model accuracy. The electronic device may configure a coefficient for the model parameters of the first parameter model and for those of the first neural network model, and then combine the two to construct the first parameter model. Similarly, the second parameter model may be constructed; the second training set is then processed using the first parameter model, and the first training set using the second parameter model.
Optionally, in the step S1, constructing the first parameter model of the i-th time according to the first parameter model of the (i-1)-th time and the model parameters of the first neural network model of the i-th time may include the following steps:
s11, obtaining a first weight factor pair, wherein the first weight factor pair comprises a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
s12, performing operation according to the first weight factor, the second weight factor, the first parameter model of the (i-1)-th time, and the model parameters of the first neural network model of the i-th time to obtain the first parameter model of the i-th time.
The first weight factor pair may include a first weight factor and a second weight factor, a sum of the first weight factor and the second weight factor is 1, and both the first weight factor and the second weight factor may be preset or default to the system.
Furthermore, the electronic device may perform operation according to the first weight factor, the second weight factor, the first parameter model of the (i-1)-th time, and the model parameters of the first neural network model of the i-th time to obtain the first parameter model of the i-th time, where the specific calculation formula is as follows:
the model parameters of the first parameter model at the ith time are the first weight factor and the model parameters of the first parameter model at the ith time + the second weight factor and the first parameter model at the ith time.
That is, in a specific implementation, for step S2, the electronic device may configure a coefficient for the model parameters of the second parameter model of the (i-1)-th time and for the model parameters of the second neural network model of the i-th time, and then combine them to construct the second parameter model of the i-th time. The specific implementation is similar to step S1.
The specific calculation formula is as follows:
model parameters of the second parameter model of the i-th time = first weight factor × model parameters of the second parameter model of the (i-1)-th time + second weight factor × model parameters of the second neural network model of the i-th time
In a specific implementation, the electronic device may input the second training set into the first parameter model for operation to obtain the second reference training set.
Optionally, in the step S4, the operation is performed on the second training set of the i-1 th time according to the first parameter model of the i-th time to obtain the second reference training set of the i-th time, which may include the following steps:
s41, determining the sample characteristics of each training sample in the second training set of the (i-1) th time based on the first parameter model of the (i) th time to obtain a plurality of sample characteristics;
s42, determining cosine distances among samples according to the sample characteristics, and clustering based on the cosine distances to obtain the ith second reference training set.
Specifically, the electronic device may determine the sample feature of each training sample in the second training set of the (i-1)-th time based on the first parameter model of the i-th time to obtain a plurality of sample features, determine the cosine distances between samples according to the plurality of sample features, and perform clustering based on the cosine distances to obtain the second reference training set of the i-th time, which can improve the classification accuracy of the samples.
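The cosine-distance computation in steps S41-S42 can be sketched as below. The greedy threshold grouping stands in for the embodiment's clustering step, and the feature vectors and threshold are illustrative assumptions:

```python
import math

# Sketch of steps S41-S42: given sample features (here supplied directly),
# compute pairwise cosine distances and group samples whose distance falls
# below a threshold. The greedy grouping is an illustrative stand-in for
# the embodiment's clustering step.

def cosine_distance(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (na * nb)

def cluster_by_threshold(features, threshold=0.5):
    """Greedy clustering: attach each sample to the first cluster whose
    representative is within the cosine-distance threshold."""
    clusters = []  # list of (representative feature, member indices)
    for idx, f in enumerate(features):
        for rep, members in clusters:
            if cosine_distance(rep, f) < threshold:
                members.append(idx)
                break
        else:
            clusters.append((f, [idx]))
    return [members for _, members in clusters]

features = [(1.0, 0.0), (0.9, 0.1), (0.0, 1.0)]
groups = cluster_by_threshold(features)
```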
In a specific implementation, for step S3, the electronic device may input the first training set of the (i-1)-th time into the second parameter model of the i-th time for operation to obtain the first reference training set of the i-th time. Step S3 can be implemented with reference to step S4.
Specifically, the electronic device may determine the sample feature of each training sample in the first training set based on the second parameter model to obtain a plurality of sample features, determine the cosine distances between samples according to the plurality of sample features, and perform clustering based on the cosine distances to obtain the first reference training set, which can improve the classification accuracy of the samples.
106. And taking the more convergent neural network model of the first parameter model of the N-th time and the second parameter model of the N-th time as the trained neural network model.
In a specific implementation, the electronic device can take the more convergent neural network model of the N-th first parameter model and the N-th second parameter model as the trained neural network model, so that a high-precision neural network model can be obtained and the face recognition efficiency can be improved.
The parameter model is the accumulated average of the model parameters over past periods, so it has stronger decoupling, which not only increases the stability of cooperative supervision but also makes the outputs of the two models more independent and complementary. By using the past average model of the complementary network to generate supervision, the pseudo-label predictions of the two networks can be better correlated, so that error amplification and overfitting are better avoided and a high-precision neural network model can be obtained.
For example, in a specific implementation, as shown in fig. 1C, data may be prepared: large-scale face data is collected from a monitoring scene, and preprocessing such as detection and alignment is performed on the faces. The preprocessed data is clustered using the K-means clustering algorithm: a pseudo label y_i is first assigned to each sample to make a training set x_n, and then the training set x_n is copied to make a training set x_m.
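The pseudo-label assignment can be sketched with a minimal 1-D K-means; the points and initial centers are toy values, and a real implementation would cluster high-dimensional face features instead:

```python
# Sketch of the data-preparation step: assign a pseudo label y_i to each
# sample via K-means over its features, then duplicate the training set.
# A minimal 1-D K-means with fixed initial centers, purely illustrative.

def kmeans_labels(points, centers, iters=10):
    labels = [0] * len(points)
    for _ in range(iters):
        # assignment step: each point goes to its nearest center
        labels = [min(range(len(centers)), key=lambda k: abs(p - centers[k]))
                  for p in points]
        # update step: move each center to the mean of its assigned points
        for k in range(len(centers)):
            members = [p for p, l in zip(points, labels) if l == k]
            if members:
                centers[k] = sum(members) / len(members)
    return labels

points = [0.1, 0.2, 0.9, 1.0]
pseudo_labels = kmeans_labels(points, centers=[0.0, 1.0])
train_n = list(zip(points, pseudo_labels))  # training set x_n
train_m = list(train_n)                     # copied training set x_m
```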
Then, network collaborative training can be performed: two deep convolutional neural networks DCNN1 and DCNN2 are designed, for example by randomly initializing DCNN1 and DCNN2; dropout is applied to the output features of DCNN1 and DCNN2 to increase the difference between the two networks; different data enhancement methods are then applied at random to the two training sets x_n and x_m to increase the difference between them and avoid overfitting; finally, the training set x_n is input into DCNN1 for iterative training, and the training set x_m is input into DCNN2 for iterative training.
Furthermore, the labels are updated using the average parameter model (λ = 0.5):
A. After each iteration training period, the average parameter models f_t(θ_1) and f_t(θ_2) of DCNN1 and DCNN2 in the current period are calculated. The update of f_t(θ_1) and f_t(θ_2) is similar to the Momentum weight update in a neural network: each is the accumulated weighted average of the corresponding network parameters θ_1 and θ_2 with respect to the momentum coefficient λ:
f_t(θ_1) = λ·f_{t-1}(θ_1) + (1 − λ)·θ_1, t ≠ 0 (1)
f_t(θ_2) = λ·f_{t-1}(θ_2) + (1 − λ)·θ_2, t ≠ 0 (2)
where λ ∈ [0, 1), θ_1 is the current network parameter of DCNN1, θ_2 is the current network parameter of DCNN2, and t denotes the t-th iteration period. When t = 0:
f_0(θ_1) = θ_1, (3)
f_0(θ_2) = θ_2. (4)
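Formulas (1)-(4) can be transcribed directly; the per-period parameters here are toy scalars rather than network weight tensors:

```python
# Direct transcription of formulas (1)-(4): the average parameter model is
# a momentum-style accumulated weighted average of the network parameters
# theta over iteration periods t, initialized with f_0 = theta at t = 0.

def average_parameter_model(theta_per_period, lam=0.5):
    f = theta_per_period[0]  # formula (3)/(4): f_0 = theta at t = 0
    history = [f]
    for theta in theta_per_period[1:]:
        f = lam * f + (1 - lam) * theta  # formulas (1)/(2)
        history.append(f)
    return history

hist = average_parameter_model([1.0, 3.0, 5.0], lam=0.5)
```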
B. After iterative training for t periods, the average parameter model f_t(θ_1) is calculated according to formula (1) and used to compute the features of each sample in the training set x_m; the cosine distances between samples are then calculated from these features, and the samples are re-clustered.
C. The average parameter model f_t(θ_2) is calculated according to formula (2) and used to compute the features of each sample in the training set x_n; the cosine distances between samples are then calculated from these features, and the samples are re-clustered.
The average parameter model is the accumulated average of the model parameters over past periods, so it has stronger decoupling, which not only increases the stability of cooperative supervision but also makes the outputs of the two models more independent and complementary. By using past average models of the complementary networks to generate supervision, the pseudo-label predictions of the two networks can be better correlated, thereby better avoiding error amplification and overfitting.
Finally, iteration continues and the labels are repeatedly updated with the average parameter model (λ = 0.5) until the network converges and the Loss stabilizes. The value of λ lies in [0, 1); for example, λ can be 0.9.
Optionally, after the step 105, the following steps may be further included:
a1, determining a first convergence degree of the first parameter model at the Nth time and a second convergence degree of the second parameter model at the Nth time;
a2, determining a first weight corresponding to the first parameter model at the Nth time and a second weight corresponding to the second parameter model at the Nth time according to the first convergence and the second convergence;
a3, performing weighting operation according to the Nth first parameter model, the first weight, the Nth second parameter model and the second weight to obtain a reference neural network model;
a4, fine-tuning the reference neural network model through the Nth first reference training set or the Nth second reference training set to obtain a final reference neural network model, wherein the convergence of the final reference neural network model is greater than a preset convergence.
In a specific implementation, the electronic device may obtain a first convergence of the nth first parameter model and a second convergence of the nth second parameter model, and further determine a first weight corresponding to the nth first parameter model and a second weight corresponding to the nth second parameter model according to the first convergence and the second convergence, where the first weight is equal to the first convergence/(the first convergence + the second convergence), and the second weight is equal to the second convergence/(the first convergence + the second convergence). The preset convergence may be set by the user or default by the system.
Further, the electronic device may perform a weighting operation according to the N-th first parameter model, the first weight, the N-th second parameter model, and the second weight to obtain a reference neural network model, specifically as follows:
reference neural network model = first parameter model of the N-th time × first weight + second parameter model of the N-th time × second weight
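Steps A1-A3 can be sketched as follows, with the convergence degrees and toy per-layer parameters passed in directly; all numeric values are illustrative assumptions:

```python
# Sketch of steps A1-A3: weight the two final parameter models by their
# relative convergence degrees and combine them into a reference model.
# Toy scalars stand in for per-layer weight tensors.

def fuse_models(model1, model2, conv1, conv2):
    w1 = conv1 / (conv1 + conv2)  # first weight
    w2 = conv2 / (conv1 + conv2)  # second weight, w1 + w2 == 1
    return [w1 * a + w2 * b for a, b in zip(model1, model2)]

reference_model = fuse_models([1.0, 2.0], [3.0, 6.0], conv1=1.0, conv2=3.0)
```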
Furthermore, the electronic device can fine-tune the reference neural network model through the N-th first reference training set or the N-th second reference training set to obtain a final reference neural network model whose convergence is greater than the preset convergence. In this way, the weights can be adjusted according to the convergence of the two neural network models, and a new neural network model can be constructed that combines the advantages of both, which helps improve the performance of the model.
It can be seen that the data processing method described in the embodiment of the present application is applied to an electronic device. An initial training set for human faces is obtained, and a first training set and a second training set are determined based on it. The first training set is input into a first neural network model for operation to obtain a first parameter model, and the second training set is input into a second neural network model for operation to obtain a second parameter model, where the two neural network models have the same network structure but different model parameters. The following steps S1-S4 are executed N times, where N is a positive integer: S1, constructing the first parameter model of the i-th time according to the first parameter model of the (i-1)-th time and the model parameters of the first neural network model of the i-th time, where i is a positive integer; S2, constructing the second parameter model of the i-th time according to the second parameter model of the (i-1)-th time and the model parameters of the second neural network model of the i-th time; S3, performing operation on the first training set of the (i-1)-th time according to the second parameter model of the i-th time to obtain the first reference training set of the i-th time, and inputting the first reference training set of the i-th time into the first neural network model of the (i-1)-th time for operation to obtain the first parameter model of the i-th time; S4, performing operation on the second training set of the (i-1)-th time according to the first parameter model of the i-th time to obtain the second reference training set of the i-th time, and inputting the second reference training set of the i-th time into the second neural network model of the (i-1)-th time for operation to obtain the second parameter model of the i-th time. The more convergent neural network model of the N-th first parameter model and the N-th second parameter model is taken as the trained neural network model. Because the parameter model is the accumulated average of the model parameters over past periods, it has stronger decoupling, which not only increases the stability of cooperative supervision but also makes the outputs of the two models more independent and complementary. By using the past average models of the complementary networks to generate supervision, the pseudo-label predictions of the two networks can be better correlated, so that error amplification and overfitting are better avoided, a high-precision neural network model can be obtained, and the precision of the neural network model is improved.
Referring to fig. 2, in accordance with the embodiment shown in fig. 1B, fig. 2 is a schematic flowchart of a data processing method provided in an embodiment of the present application, and the data processing method is applied to the electronic device shown in fig. 1A, and the data processing method includes:
201. an initial set of face images is obtained.
202. And evaluating the image quality of each face image in the face image set to obtain a plurality of face image quality evaluation values.
203. And selecting a facial image quality evaluation value larger than a preset image quality evaluation value from the plurality of facial image quality evaluation values, and taking a facial image corresponding to the facial image quality evaluation value as an initial training set.
204. Based on the initial training set, a first training set and a second training set are determined.
205. And inputting the first training set into a first neural network model for operation to obtain a first parameter model.
206. And inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the first neural network model and the second neural network model have the same network structure but different model parameters.
207. Executing the following steps S1-S4 N times, wherein N is a positive integer: S1, constructing the first parameter model of the i-th time according to the first parameter model of the (i-1)-th time and the model parameters of the first neural network model of the i-th time, wherein i is a positive integer; S2, constructing the second parameter model of the i-th time according to the second parameter model of the (i-1)-th time and the model parameters of the second neural network model of the i-th time; S3, performing operation on the first training set of the (i-1)-th time according to the second parameter model of the i-th time to obtain the first reference training set of the i-th time, and inputting the first reference training set of the i-th time into the first neural network model of the (i-1)-th time for operation to obtain the first parameter model of the i-th time; and S4, performing operation on the second training set of the (i-1)-th time according to the first parameter model of the i-th time to obtain the second reference training set of the i-th time, and inputting the second reference training set of the i-th time into the second neural network model of the (i-1)-th time for operation to obtain the second parameter model of the i-th time.
208. And taking the more convergent neural network model in the first parameter model at the Nth time and the second parameter model at the Nth time as the trained neural network model.
For the detailed description of the steps 201 to 208, reference may be made to corresponding steps of the data processing method described in the foregoing fig. 1B, and details are not repeated here.
It can be seen that the data processing method described in the embodiment of the present application is applied to an electronic device, and since the parameter model is an accumulated average of the past cycles of the model parameters, the parameter model has a stronger decoupling property, which not only can increase the stability of the cooperative supervision, but also can make the outputs of the two more independent and complementary. By using the past average model of the complementary network to generate supervision, the pseudo label predictions of the two networks can be better correlated, so that wrong amplification and overfitting can be better avoided, a high-precision neural network model can be obtained, and the precision of the neural network model is improved.
Referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and as shown in the drawing, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and in an embodiment of the present application, the programs include instructions for performing the following steps:
acquiring an initial training set aiming at a human face;
determining a first training set and a second training set based on the initial training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the first neural network model and the second neural network model have the same network structure but different model parameters;
executing the following steps S1-S4 N times, wherein N is a positive integer:
s1, constructing a first parameter model of the ith time according to the model parameters of the first parameter model of the (i-1) th time and the first neural network model of the ith time, wherein i is a positive integer;
s2, constructing a second parameter model of the ith time according to the second parameter model of the (i-1) th time and the model parameters of the second neural network model of the ith time;
s3, performing operation on the first training set of the (i-1)-th time according to the second parameter model of the i-th time to obtain the first reference training set of the i-th time, and inputting the first reference training set of the i-th time into the first neural network model of the (i-1)-th time for operation to obtain the first parameter model of the i-th time;
s4, performing operation on the second training set of the (i-1)-th time according to the first parameter model of the i-th time to obtain the second reference training set of the i-th time, and inputting the second reference training set of the i-th time into the second neural network model of the (i-1)-th time for operation to obtain the second parameter model of the i-th time;
and taking the more convergent neural network model in the first parameter model at the Nth time and the second parameter model at the Nth time as the trained neural network model.
It can be seen that, in the electronic device described in the embodiment of the present application, since each parameter model is an accumulated average of its network's parameters over past cycles, the two models are more strongly decoupled, which both increases the stability of the cooperative supervision and makes the two outputs more independent and complementary. By generating supervision with the past average model of the complementary network, the pseudo-label predictions of the two networks are better decoupled, so that error amplification and overfitting are better avoided, a high-precision neural network model is obtained, and the precision of the neural network model is improved.
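The N-round alternation of steps S1 to S4 can be sketched as follows. This is a minimal NumPy sketch, not the patented implementation: each network is reduced to a single scalar parameter, `train_step` is a hypothetical stand-in for a real gradient step, and the `ref1`/`ref2` lines stand in for the clustering-based pseudo-labelling described below; only the cross-supervision pattern — each network trained on data relabelled by the *other* network's temporally averaged model — is illustrated.

```python
import numpy as np

ALPHA = 0.999  # EMA decay: the first weight factor; (1 - ALPHA) is the second

def ema(prev_avg, params, alpha=ALPHA):
    # Steps S1/S2: the i-th parameter model accumulates the (i-1)-th
    # parameter model and the current network parameters.
    return alpha * prev_avg + (1 - alpha) * params

def train_step(params, data):
    # Placeholder for a real gradient step: nudge the scalar "network"
    # toward the mean of its (pseudo-labelled) training data.
    return params + 0.1 * (data.mean() - params)

rng = np.random.default_rng(0)
set1 = rng.normal(0.0, 1.0, 100)   # first training set (stand-in)
set2 = rng.normal(0.5, 1.0, 100)   # second training set (stand-in)

theta1, theta2 = np.float64(0.0), np.float64(1.0)  # two networks, same shape
avg1, avg2 = theta1, theta2                        # their parameter models

for i in range(1, 11):             # N = 10 rounds of S1-S4
    avg1 = ema(avg1, theta1)       # S1: i-th first parameter model
    avg2 = ema(avg2, theta2)       # S2: i-th second parameter model
    ref1 = set1 + 0.01 * avg2      # S3: second parameter model relabels set 1
    theta1 = train_step(theta1, ref1)
    ref2 = set2 + 0.01 * avg1      # S4: first parameter model relabels set 2
    theta2 = train_step(theta2, ref2)
# whichever of avg1 / avg2 converged better would be kept as the final model
```

The point of the pattern is that `theta1` never sees labels produced by its own current weights, only by the slowly moving average of the complementary network.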
Optionally, in said determining the first training set and the second training set based on the initial training set, the program comprises instructions for:
performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
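The two differently enhanced training sets might be produced as below. The concrete transforms — a horizontal flip and a brightness shift — are illustrative assumptions, since the text only requires that the two enhancement effects differ:

```python
import numpy as np

def enhance_a(img):
    # First enhancement processing: horizontal flip (illustrative choice).
    return img[:, ::-1].copy()

def enhance_b(img):
    # Second enhancement processing: brightness shift (a different effect).
    return np.clip(img + 0.2, 0.0, 1.0)

# Stand-in for the initial training set of face images, as [0, 1] arrays.
initial_set = [np.random.default_rng(s).random((8, 8)) for s in range(4)]
first_set = [enhance_a(x) for x in initial_set]    # first training set
second_set = [enhance_b(x) for x in initial_set]   # second training set
```

Feeding the same underlying samples through two distinct pipelines gives the two networks different views of the data, which supports the independence argued for above.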
Optionally, in the aspect of constructing the first parameter model of the ith time according to the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time, the program includes instructions for executing the following steps:
obtaining a first weight factor pair, wherein the first weight factor pair comprises a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
and calculating according to the first weight factor, the second weight factor, the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time to obtain the first parameter model of the ith time.
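The weighted construction can be written out directly: with a first weight factor `alpha` and second weight factor `1 - alpha` (so the pair sums to 1), the i-th first parameter model is a convex combination of the previous parameter model and the current network weights. The value 0.9 below is an assumption; the text does not fix the factors:

```python
import numpy as np

def build_parameter_model(prev_param_model, current_weights, alpha=0.9):
    # alpha is the first weight factor, (1 - alpha) the second; they sum to 1.
    return alpha * prev_param_model + (1.0 - alpha) * current_weights

prev = np.array([1.0, 1.0])   # (i-1)-th first parameter model
cur = np.array([0.0, 2.0])    # model parameters of the i-th first network
new = build_parameter_model(prev, cur)   # i-th first parameter model
```

Iterating this update is exactly the "accumulated average of past cycles" that the embodiment credits with stabilising the cooperative supervision.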
Optionally, in the aspect of operating on the second training set of the (i-1)th time according to the first parameter model of the ith time to obtain the second reference training set of the ith time, the program includes instructions for executing the following steps:
determining the sample characteristics of each training sample in the second training set of the (i-1) th time based on the first parameter model of the ith time to obtain a plurality of sample characteristics;
and determining cosine distances among samples according to the sample characteristics, and clustering based on the cosine distances to obtain the ith second reference training set.
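A minimal sketch of this pseudo-labelling step: sample features are extracted by the parameter model (here supplied directly), pairwise cosine distances are computed, and samples within a distance threshold are grouped. The greedy threshold clustering below is an illustrative stand-in, since the text names no specific clustering algorithm:

```python
import numpy as np

def cosine_dist(a, b):
    # Cosine distance between two feature vectors.
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def cluster(features, thresh=0.3):
    # Greedy clustering: assign each sample to the first cluster whose
    # representative lies within `thresh` cosine distance, else start a
    # new cluster (illustrative; thresh is an assumed hyperparameter).
    labels, reps = [], []
    for f in features:
        for k, r in enumerate(reps):
            if cosine_dist(f, r) < thresh:
                labels.append(k)
                break
        else:
            labels.append(len(reps))
            reps.append(f)
    return labels

# Stand-in sample features produced by the i-th first parameter model.
feats = [np.array([1.0, 0.0]), np.array([0.99, 0.05]), np.array([0.0, 1.0])]
pseudo_labels = cluster(feats)
```

The resulting cluster indices serve as the pseudo-labels of the i-th second reference training set.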
Optionally, in the aspect of acquiring the initial training set for the face, the program includes instructions for performing the following steps:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
and selecting, from the plurality of face image quality evaluation values, the face image quality evaluation values larger than a preset image quality evaluation value, and taking the face images corresponding to the selected values as the initial training set.
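The quality-based selection of the initial training set might look as follows. The variance-of-second-differences score is a hypothetical sharpness proxy; the text does not fix a particular image quality evaluation method:

```python
import numpy as np

def quality_score(img):
    # Illustrative quality proxy: variance of vertical second differences
    # (detailed images score high, flat or featureless ones score ~0).
    return float(np.var(np.diff(img, n=2, axis=0)))

def build_initial_training_set(images, min_quality):
    # Keep only the face images whose quality evaluation value exceeds
    # the preset image quality evaluation value.
    return [img for img in images if quality_score(img) > min_quality]

sharp = np.random.default_rng(0).random((16, 16))  # textured stand-in image
flat = np.zeros((16, 16))                          # constant image, score 0
initial_training_set = build_initial_training_set([sharp, flat], 1e-6)
```

Filtering out low-quality faces before training keeps the clustering-based pseudo-labels from being polluted by unusable samples.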
Optionally, the program further comprises instructions for performing the steps of:
determining a first convergence degree of the first parameter model at the Nth time and a second convergence degree of the second parameter model at the Nth time;
determining a first weight corresponding to the first parameter model of the Nth time and a second weight corresponding to the second parameter model of the Nth time according to the first convergence and the second convergence;
performing weighting operation according to the Nth first parameter model, the first weight, the Nth second parameter model and the second weight to obtain a reference neural network model;
and fine-tuning the reference neural network model through the Nth first reference training set or the Nth second reference training set to obtain a final reference neural network model, wherein the convergence of the final reference neural network model is greater than a preset convergence.
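The convergence-weighted fusion of the two Nth parameter models can be sketched as below. Normalising the two convergence degrees so the resulting weights sum to 1 is one plausible reading, not something the text specifies:

```python
import numpy as np

def fuse(params1, conv1, params2, conv2):
    # Weight each Nth parameter model by its relative convergence degree
    # (assumed here: higher convergence -> larger weight, weights sum to 1).
    w1 = conv1 / (conv1 + conv2)
    w2 = conv2 / (conv1 + conv2)
    return w1 * params1 + w2 * params2

# Nth first / second parameter models with their convergence degrees.
ref = fuse(np.array([1.0, 3.0]), 3.0, np.array([3.0, 1.0]), 1.0)
```

The fused `ref` plays the role of the reference neural network model, which would then be fine-tuned on the Nth first or second reference training set until its convergence exceeds the preset convergence.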
The above description has introduced the solution of the embodiment of the present application mainly from the perspective of the method-side implementation process. It is understood that, to implement the above functions, the electronic device includes corresponding hardware structures and/or software modules for performing the respective functions. Those of skill in the art will readily appreciate that the units and algorithm steps described in connection with the embodiments provided herein can be implemented as hardware or as a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends upon the particular application and the design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiment of the present application, the functional units may be divided according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit. It should be noted that the division of the unit in the embodiment of the present application is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
Fig. 4 is a block diagram of functional units of a data processing apparatus 400 according to an embodiment of the present application, where the apparatus 400 is applied to an electronic device, and the apparatus 400 includes: an acquiring unit 401, a determining unit 402, an operation unit 403, and an execution unit 404, wherein,
the acquiring unit 401 is configured to acquire an initial training set for a human face;
the determining unit 402 is configured to determine a first training set and a second training set based on the initial training set;
the operation unit 403 is configured to input the first training set into a first neural network model for operation, so as to obtain a first parameter model;
the operation unit 403 is further configured to input the second training set into a second neural network model for operation, so as to obtain a second parameter model, where the first neural network model and the second neural network model have the same network structure but different model parameters;
the execution unit 404 is configured to execute the following steps S1 to S4 N times, where N is a positive integer:
S1, constructing the first parameter model of the ith time according to the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time, wherein i is a positive integer;
S2, constructing the second parameter model of the ith time according to the second parameter model of the (i-1)th time and the model parameters of the second neural network model of the ith time;
S3, operating on the first training set of the (i-1)th time according to the second parameter model of the ith time to obtain the first reference training set of the ith time, and inputting the first reference training set of the ith time into the first neural network model of the (i-1)th time for operation to obtain the first parameter model of the ith time;
S4, operating on the second training set of the (i-1)th time according to the first parameter model of the ith time to obtain the second reference training set of the ith time, and inputting the second reference training set of the ith time into the second neural network model of the (i-1)th time for operation to obtain the second parameter model of the ith time;
the determining unit 402 is configured to take whichever of the first parameter model of the Nth time and the second parameter model of the Nth time is more convergent as the trained neural network model.
It can be seen that, in the data processing apparatus described in the embodiment of the present application, which is applied to an electronic device, since each parameter model is an accumulated average of its network's parameters over past cycles, the two models are more strongly decoupled, which both increases the stability of the cooperative supervision and makes the two outputs more independent and complementary. By generating supervision with the past average model of the complementary network, the pseudo-label predictions of the two networks are better decoupled, so that error amplification and overfitting are better avoided, a high-precision neural network model is obtained, and the precision of the neural network model is improved.
Optionally, in the aspect of determining the first training set and the second training set based on the initial training set, the determining unit 402 is specifically configured to:
performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
Optionally, in terms of constructing the first parameter model of the ith time according to the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time, the execution unit 404 is specifically configured to:
obtaining a first weight factor pair, wherein the first weight factor pair comprises a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
and calculating according to the first weight factor, the second weight factor, the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time to obtain the first parameter model of the ith time.
Optionally, in terms of operating on the second training set of the (i-1)th time according to the first parameter model of the ith time to obtain the second reference training set of the ith time, the execution unit 404 is specifically configured to:
determining the sample characteristics of each training sample in the second training set of the (i-1) th time based on the first parameter model of the ith time to obtain a plurality of sample characteristics;
and determining cosine distances among samples according to the sample characteristics, and clustering based on the cosine distances to obtain the ith second reference training set.
Optionally, in terms of acquiring the initial training set for the face, the acquiring unit 401 is specifically configured to:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
and selecting, from the plurality of face image quality evaluation values, the face image quality evaluation values larger than a preset image quality evaluation value, and taking the face images corresponding to the selected values as the initial training set.
Optionally, the apparatus 400 is further specifically configured to:
determining a first convergence degree of the first parameter model at the Nth time and a second convergence degree of the second parameter model at the Nth time;
determining a first weight corresponding to the first parameter model of the Nth time and a second weight corresponding to the second parameter model of the Nth time according to the first convergence and the second convergence;
performing weighting operation according to the Nth first parameter model, the first weight, the Nth second parameter model and the second weight to obtain a reference neural network model;
and fine-tuning the reference neural network model through the Nth first reference training set or the Nth second reference training set to obtain a final reference neural network model, wherein the convergence of the final reference neural network model is greater than a preset convergence.
It is to be understood that the functions of each program module of the data processing apparatus in this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application, in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, the software product including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present application. The aforementioned memory includes various media capable of storing program code, such as a USB flash disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, or a magnetic or optical disk.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (10)
1. A data processing method is applied to an electronic device, and the method comprises the following steps:
acquiring an initial training set aiming at a human face;
determining a first training set and a second training set based on the initial training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the first neural network model and the second neural network model have the same network structure but different model parameters;
executing the following steps S1 to S4 N times, wherein N is a positive integer:
S1, constructing the first parameter model of the ith time according to the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time, wherein i is a positive integer;
S2, constructing the second parameter model of the ith time according to the second parameter model of the (i-1)th time and the model parameters of the second neural network model of the ith time;
S3, operating on the first training set of the (i-1)th time according to the second parameter model of the ith time to obtain the first reference training set of the ith time, and inputting the first reference training set of the ith time into the first neural network model of the (i-1)th time for operation to obtain the first parameter model of the ith time;
S4, operating on the second training set of the (i-1)th time according to the first parameter model of the ith time to obtain the second reference training set of the ith time, and inputting the second reference training set of the ith time into the second neural network model of the (i-1)th time for operation to obtain the second parameter model of the ith time;
and taking whichever of the first parameter model of the Nth time and the second parameter model of the Nth time is more convergent as the trained neural network model.
2. The method of claim 1, wherein determining a first training set and a second training set based on the initial training set comprises:
performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
3. The method according to claim 1 or 2, wherein constructing the first parameter model of the ith time according to the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time comprises:
obtaining a first weight factor pair, wherein the first weight factor pair comprises a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
and calculating according to the first weight factor, the second weight factor, the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time to obtain the first parameter model of the ith time.
4. The method according to claim 1 or 2, wherein operating on the second training set of the (i-1)th time according to the first parameter model of the ith time to obtain the second reference training set of the ith time comprises:
determining the sample characteristics of each training sample in the second training set of the (i-1) th time based on the first parameter model of the ith time to obtain a plurality of sample characteristics;
and determining cosine distances among samples according to the sample characteristics, and clustering based on the cosine distances to obtain the ith second reference training set.
5. The method of claim 1 or 2, wherein the obtaining an initial training set for a human face comprises:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
and selecting, from the plurality of face image quality evaluation values, the face image quality evaluation values larger than a preset image quality evaluation value, and taking the face images corresponding to the selected values as the initial training set.
6. The method according to claim 1 or 2, characterized in that the method further comprises:
determining a first convergence degree of the first parameter model at the Nth time and a second convergence degree of the second parameter model at the Nth time;
determining a first weight corresponding to the first parameter model of the Nth time and a second weight corresponding to the second parameter model of the Nth time according to the first convergence and the second convergence;
performing weighting operation according to the Nth first parameter model, the first weight, the Nth second parameter model and the second weight to obtain a reference neural network model;
and fine-tuning the reference neural network model through the Nth first reference training set or the Nth second reference training set to obtain a final reference neural network model, wherein the convergence of the final reference neural network model is greater than a preset convergence.
7. A data processing apparatus, applied to an electronic device, the apparatus comprising: an acquiring unit, a determining unit, an operation unit, and an execution unit, wherein,
the acquisition unit is used for acquiring an initial training set aiming at the human face;
the determining unit is used for determining a first training set and a second training set based on the initial training set;
the operation unit is used for inputting the first training set into a first neural network model for operation to obtain a first parameter model;
the operation unit is further configured to input the second training set into a second neural network model for operation to obtain a second parameter model, where the first neural network model and the second neural network model have the same network structure but different model parameters;
the execution unit is configured to execute the following steps S1 to S4 N times, where N is a positive integer:
S1, constructing the first parameter model of the ith time according to the first parameter model of the (i-1)th time and the model parameters of the first neural network model of the ith time, wherein i is a positive integer;
S2, constructing the second parameter model of the ith time according to the second parameter model of the (i-1)th time and the model parameters of the second neural network model of the ith time;
S3, operating on the first training set of the (i-1)th time according to the second parameter model of the ith time to obtain the first reference training set of the ith time, and inputting the first reference training set of the ith time into the first neural network model of the (i-1)th time for operation to obtain the first parameter model of the ith time;
S4, operating on the second training set of the (i-1)th time according to the first parameter model of the ith time to obtain the second reference training set of the ith time, and inputting the second reference training set of the ith time into the second neural network model of the (i-1)th time for operation to obtain the second parameter model of the ith time;
and the determining unit is configured to take whichever of the first parameter model of the Nth time and the second parameter model of the Nth time is more convergent as the trained neural network model.
8. The apparatus according to claim 7, wherein, in said determining a first training set and a second training set based on the initial training set, the determining unit is specifically configured to:
performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
9. An electronic device, comprising a processor, a memory, and one or more programs stored in the memory and configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any one of claims 1-6.
10. A computer-readable storage medium, storing a computer program for electronic data exchange, wherein the computer program causes a computer to perform the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011639072.2A CN112686171B (en) | 2020-12-31 | 2020-12-31 | Data processing method, electronic equipment and related products |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112686171A true CN112686171A (en) | 2021-04-20 |
CN112686171B CN112686171B (en) | 2023-07-18 |
Family
ID=75456629
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011639072.2A Active CN112686171B (en) | 2020-12-31 | 2020-12-31 | Data processing method, electronic equipment and related products |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112686171B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107330459A (en) * | 2017-06-28 | 2017-11-07 | 联想(北京)有限公司 | A kind of data processing method, device and electronic equipment |
US20180276560A1 (en) * | 2017-03-23 | 2018-09-27 | Futurewei Technologies, Inc. | Review machine learning system |
CN109816042A (en) * | 2019-02-01 | 2019-05-28 | 北京达佳互联信息技术有限公司 | Method, apparatus, electronic equipment and the storage medium of data classification model training |
US20190220697A1 (en) * | 2018-01-12 | 2019-07-18 | Microsoft Technology Licensing, Llc | Automated localized machine learning training |
Non-Patent Citations (1)
Title |
---|
Li Kunlun et al.: "No-reference face image quality evaluation system based on cascaded SVM", Modern Electronics Technique (《现代电子技术》), no. 24, pages 108-110 *
Also Published As
Publication number | Publication date |
---|---|
CN112686171B (en) | 2023-07-18 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||