CN112686171B - Data processing method, electronic equipment and related products
- Publication number: CN112686171B (application CN202011639072.2A)
- Authority
- CN
- China
- Prior art keywords
- training set
- target
- model
- neural network
- evaluation value
- Legal status: Active (the legal status is an assumption and is not a legal conclusion)
Landscapes
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose a data processing method, an electronic device, and related products. Applied to the electronic device, the method includes: inputting a first training set into a first neural network model for operation to obtain a first parameter model; inputting a second training set into a second neural network model for operation to obtain a second parameter model; operating on the second training set according to the first parameter model to obtain a second reference training set; operating on the first training set according to the second parameter model to obtain a first reference training set; inputting the first reference training set into the first parameter model for operation, inputting the second reference training set into the second parameter model for operation, and taking the more converged neural network model as the trained neural network model. By adopting the embodiments of the present application, the accuracy of a neural network model can be improved in an unsupervised learning manner.
Description
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a data processing method, an electronic device, and a related product.
Background
In the past, research on face recognition progressed slowly, because training face recognition to an ideal accuracy typically requires data support on the scale of hundreds of millions of samples. Nowadays, many large-scale manually labeled public data sets have been open-sourced, which has clearly promoted the rapid development of face recognition and improved accuracy across the field. In recent years, face recognition research has grown, and the technology has been widely applied in fields such as surveillance, community access control, and mobile phones. In practical applications, however, even a model trained on large-scale public data sets often suffers a significant drop in accuracy when deployed directly into a new scene, because of the large differences between scenes. To improve the generalization capability of the model, face data collection, classification, and manual screening are often required in the new scene; as the data scale grows larger and larger, the cost of manual screening rises accordingly until manual screening becomes infeasible, which in turn reduces model accuracy. Therefore, how to improve the accuracy of a neural network model remains a problem to be solved.
Disclosure of Invention
The embodiment of the application provides a data processing method, electronic equipment and related products, which can improve the accuracy of a neural network model.
In a first aspect, an embodiment of the present application provides a data processing method, applied to an electronic device, where the method includes:
acquiring an initial training set for a face;
determining a first training set and a second training set based on the initial training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the first neural network model has the same network structure as the second neural network model but different model parameters;
the following steps S1-S4 are executed N times, wherein N is a positive integer:
S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, where i is a positive integer;
S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model;
S3, operating on the i-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the i-th first neural network model for operation to obtain the i-th first parameter model;
S4, operating on the i-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the i-th second neural network model for operation to obtain the i-th second parameter model;
and taking the more converged of the N-th first parameter model and the N-th second parameter model as the trained neural network model.
In a second aspect, an embodiment of the present application provides a data processing apparatus, applied to an electronic device, where the apparatus includes: an acquisition unit, a determining unit, an operation unit and an execution unit, wherein,
the acquisition unit is used for acquiring an initial training set for a face;
the determining unit is used for determining a first training set and a second training set based on the initial training set;
the operation unit is used for inputting the first training set into a first neural network model for operation to obtain a first parameter model;
the operation unit is further configured to input the second training set into a second neural network model for operation, so as to obtain a second parameter model, where the first neural network model has the same network structure as the second neural network model but different model parameters;
The execution unit is configured to execute the following steps S1 to S4 N times, where N is a positive integer:
S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, where i is a positive integer;
S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model;
S3, operating on the i-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the i-th first neural network model for operation to obtain the i-th first parameter model;
S4, operating on the i-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the i-th second neural network model for operation to obtain the i-th second parameter model;
the determining unit is further configured to take the more converged of the N-th first parameter model and the N-th second parameter model as the trained neural network model.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for performing the steps in the first aspect of the embodiment of the present application.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium storing a computer program for electronic data exchange, where the computer program causes a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, wherein the computer program product comprises a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
By implementing the embodiment of the application, the following beneficial effects are achieved:
It can be seen that the data processing method, electronic device and related products described in the embodiments of the present application are applied to an electronic device to obtain an initial training set for a face, determine a first training set and a second training set based on the initial training set, input the first training set into a first neural network model for operation to obtain a first parameter model, and input the second training set into a second neural network model for operation to obtain a second parameter model, where the first neural network model has the same network structure as the second neural network model but different model parameters; the following steps S1 to S4 are then executed N times, where N is a positive integer: S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, where i is a positive integer; S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model; S3, operating on the i-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the i-th first neural network model for operation to obtain the i-th first parameter model; S4, operating on the i-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the i-th second neural network model for operation to obtain the i-th second parameter model; finally, the more converged of the N-th first parameter model and the N-th second parameter model is taken as the trained neural network model. Because each parameter model is a cumulative average of the model parameters over the preceding periods, it is strongly decoupled from the current network, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. By using the past average model of the complementary network to generate supervision, the pseudo-label predictions of the two networks can be better correlated, so that error amplification and overfitting are better avoided, a high-precision neural network model can be obtained, and the accuracy of the neural network model is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained from these drawings by a person skilled in the art without inventive effort.
Fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present application;
FIG. 1B is a schematic flow chart of a data processing method according to an embodiment of the present application;
FIG. 1C is a flowchart of a data processing method according to an embodiment of the present application;
FIG. 2 is a flow chart of another data processing method according to an embodiment of the present application;
fig. 3 is a schematic structural diagram of another electronic device according to an embodiment of the present application;
fig. 4 is a block diagram of functional units of a data processing apparatus according to an embodiment of the present application.
Detailed Description
The terms "first", "second" and the like in the description, the claims and the above-described figures of the present application are used to distinguish different objects and not to describe a particular sequential order. Furthermore, the terms "comprise" and "have," as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to those steps or elements, but may include other steps or elements not listed or inherent to such process, method, system, article, or apparatus.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment may be included in at least one embodiment of the present application. The appearances of such phrases in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Those of skill in the art will explicitly and implicitly appreciate that the embodiments described herein may be combined with other embodiments.
To enable those skilled in the art to better understand the solutions of the present application, the technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments herein without inventive effort shall fall within the scope of the present application.
The electronic device according to the embodiments of the present application may be a handheld device, an intelligent robot, a vehicle-mounted device, a wearable device, a computing device or another processing device connected to a wireless modem, as well as various forms of user equipment (User Equipment, UE), mobile stations (Mobile Station, MS), terminal devices, and so on; the electronic device may also be a server or a smart home device.
In this embodiment of the present application, the smart home device may be at least one of the following: a refrigerator, a washing machine, a rice cooker, a smart curtain, a smart lamp, a smart bed, a smart trash can, a microwave oven, a steamer, an air conditioner, a range hood, a server, a smart door, a smart window, a wardrobe, a smart speaker, smart furniture, a smart chair, a smart clothes hanger, a smart shower, a water dispenser, a water purifier, an air purifier, a doorbell, a monitoring system, a smart garage, a television, a projector, a smart dining table, a smart sofa, a massage chair, a treadmill, and the like; of course, other devices may also be included.
As shown in fig. 1A, fig. 1A is a schematic structural diagram of an electronic device according to an embodiment of the present application. The electronic device includes a processor, a memory, a signal processor, a transceiver, a display, a speaker, a microphone, a random access memory (Random Access Memory, RAM), a camera, a sensor, a network module, and the like. The memory, the signal processor (DSP), the speaker, the microphone, the RAM, the camera, the sensor and the network module are connected to the processor, and the transceiver is connected to the signal processor.
The processor is the control center of the electronic device. It connects the various parts of the whole electronic device using various interfaces and lines, and executes the various functions of the electronic device and processes data by running or executing software programs and/or modules stored in the memory and calling data stored in the memory, thereby monitoring the electronic device as a whole. The processor may be a central processing unit (Central Processing Unit, CPU), a graphics processor (Graphics Processing Unit, GPU) or a neural-network processor (Neural-network Processing Unit, NPU).
Further, the processor may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor.
The memory is used for storing software programs and/or modules, and the processor executes the software programs and/or modules stored in the memory so as to execute the various functional applications of the electronic device and data processing. The memory may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, a software program required for at least one function, and the like, and the data storage area may store data created according to the use of the electronic device, and the like. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
Wherein the sensor comprises at least one of: light-sensitive sensors, gyroscopes, infrared proximity sensors, vibration detection sensors, pressure sensors, etc. Wherein a light sensor, also called ambient light sensor, is used to detect the ambient light level. The light sensor may comprise a photosensitive element and an analog-to-digital converter. The photosensitive element is used for converting the collected optical signals into electric signals, and the analog-to-digital converter is used for converting the electric signals into digital signals. Optionally, the optical sensor may further include a signal amplifier, where the signal amplifier may amplify the electrical signal converted by the photosensitive element and output the amplified electrical signal to the analog-to-digital converter. The photosensitive element may include at least one of a photodiode, a phototransistor, a photoresistor, and a silicon photocell.
The camera may be a visible light camera (a general view camera, a wide angle camera), an infrared camera, or a dual camera (having a distance measuring function), and is not limited herein.
The network module may be at least one of: bluetooth module, wireless fidelity (wireless fidelity, wi-Fi), etc., without limitation.
Based on the electronic device described in fig. 1A, the following data processing method can be executed, which specifically includes the following steps:
acquiring an initial training set for a face;
determining a first training set and a second training set based on the initial training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the network structure of the first neural network model is the same as that of the second neural network model but model parameters are different;
the following steps S1-S4 are executed N times, wherein N is a positive integer:
S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, where i is a positive integer;
S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model;
S3, operating on the i-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the i-th first neural network model for operation to obtain the i-th first parameter model;
S4, operating on the i-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the i-th second neural network model for operation to obtain the i-th second parameter model;
and taking the more converged of the N-th first parameter model and the N-th second parameter model as the trained neural network model.
It can be seen that the electronic device described in the embodiment of the present application obtains an initial training set for a face, determines a first training set and a second training set based on the initial training set, inputs the first training set into a first neural network model for operation to obtain a first parameter model, and inputs the second training set into a second neural network model for operation to obtain a second parameter model, where the first neural network model has the same network structure as the second neural network model but different model parameters; the following steps S1 to S4 are then executed N times, where N is a positive integer: S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, where i is a positive integer; S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model; S3, operating on the i-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the i-th first neural network model for operation to obtain the i-th first parameter model; S4, operating on the i-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the i-th second neural network model for operation to obtain the i-th second parameter model; finally, the more converged of the N-th first parameter model and the N-th second parameter model is taken as the trained neural network model. Because each parameter model is a cumulative average of the model parameters over the preceding periods, it is strongly decoupled from the current network, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. By using the past average model of the complementary network to generate supervision, the pseudo-label predictions of the two networks can be better correlated, so that error amplification and overfitting are better avoided, a high-precision neural network model can be obtained, and the accuracy of the neural network model is improved.
Referring to fig. 1B, fig. 1B is a flow chart of a data processing method according to an embodiment of the present application, as shown in the drawing, applied to an electronic device shown in fig. 1A, where the data processing method includes:
101. An initial training set for a face is obtained.
In this embodiment of the present application, the initial training set may include a plurality of face images. The electronic device may obtain the set of face images to be processed from a cloud server or locally (e.g., from an album). In a specific implementation, the electronic device may collect face data on a large scale from a monitoring scene and perform preprocessing such as detection and alignment on the faces; it may then cluster the collected data using a K-means clustering algorithm, assign a pseudo label to each picture, and construct a training set.
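As a rough illustration of this preparation step, the following sketch clusters pre-extracted face embeddings with K-means and uses each cluster index as a pseudo label. The embedding array, feature dimension, and cluster count are illustrative assumptions rather than values fixed by the embodiment.

```python
# Sketch: assign K-means pseudo labels to aligned-face embeddings.
import numpy as np
from sklearn.cluster import KMeans

def build_pseudo_labeled_set(embeddings: np.ndarray, k: int) -> np.ndarray:
    """embeddings: (num_images, feature_dim) features of detected/aligned faces."""
    kmeans = KMeans(n_clusters=k, n_init=10, random_state=0)
    return kmeans.fit_predict(embeddings)  # one pseudo label per image

# Example: 1000 images with 128-d features grouped into 50 pseudo identities.
pseudo_labels = build_pseudo_labeled_set(np.random.rand(1000, 128), k=50)
```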
In one possible example, the step 101 of obtaining an initial training set for a face may include the following steps:
11. acquiring an initial face image set;
12. performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
13. selecting, from the plurality of face image quality evaluation values, the evaluation values larger than a preset image quality evaluation value, and taking the corresponding face images as the initial training set.
In this embodiment of the present application, the preset image quality evaluation value may be stored in the electronic device in advance, and may be set by the user or default by the system.
In a specific implementation, the electronic device may acquire an initial face image set and perform image quality evaluation on each face image in the face image set using at least one image quality evaluation index to obtain a plurality of face image quality evaluation values, where the image quality evaluation index may be at least one of the following: face deviation degree, face integrity, sharpness, feature point distribution density, average gradient, information entropy, signal-to-noise ratio, and the like, which is not limited herein. Furthermore, the electronic device may select, from the plurality of face image quality evaluation values, the evaluation values larger than the preset image quality evaluation value, and take the corresponding face images as the initial training set. The face deviation degree is the degree of deviation between the face angle in the image and the frontal face angle, and the face integrity is the ratio between the face area in the image and the area of the whole face.
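A minimal sketch of steps 11 to 13 is given below, assuming a scoring function is available; `evaluate_quality` is a hypothetical placeholder for the multi-index evaluation detailed in the following steps.

```python
# Sketch: keep only faces whose quality score exceeds a preset threshold.
def filter_by_quality(images, evaluate_quality, threshold=0.6):
    kept = []
    for img in images:
        score = evaluate_quality(img)  # hypothetical: combines pose, sharpness, entropy, ...
        if score > threshold:          # preset image quality evaluation value
            kept.append(img)
    return kept
```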
In one possible example, the step 12 of performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values may include the following steps:
121. Acquiring a target face deviation degree of a face image i, a target face integrity degree of the face image i, a target feature point distribution density and a target information entropy of the face image i, wherein the face image i is any face image in the face image set;
122. when the target face deviation degree is larger than a preset deviation degree and the target face integrity degree is larger than a preset integrity degree, determining a target first reference evaluation value corresponding to the target face deviation degree according to a mapping relation between the preset face deviation degree and the first reference evaluation value;
123. determining a target second reference evaluation value corresponding to the target face integrity according to a mapping relation between the preset face integrity and the second reference evaluation value;
124. determining a target weight pair corresponding to the target feature point distribution density according to a mapping relation between the preset feature point distribution density and the weight pair, wherein the target weight pair comprises a target first weight and a target second weight, the target first weight is a weight corresponding to the first reference evaluation value, and the target second weight is a weight corresponding to the second reference evaluation value;
125. performing weighted operation according to the target first weight, the target second weight, the target first reference evaluation value and the target second reference evaluation value to obtain a first reference evaluation value;
126. Determining a first image quality evaluation value corresponding to the target feature point distribution density according to a mapping relation between the preset feature point distribution density and the image quality evaluation value;
127. determining a target image quality deviation value corresponding to the target information entropy according to a mapping relation between a preset information entropy and the image quality deviation value;
128. acquiring a first shooting parameter of the face image i;
129. determining a target optimization coefficient corresponding to the first shooting parameter according to a mapping relation between a preset shooting parameter and the optimization coefficient;
130. adjusting the first image quality evaluation value according to the target optimization coefficient and the target image quality deviation value to obtain a second reference evaluation value;
131. acquiring a target environment parameter corresponding to the face image i;
132. determining a target weight coefficient pair corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and a weight coefficient pair, wherein the target weight coefficient pair comprises a target first weight coefficient and a target second weight coefficient, the target first weight coefficient is a weight coefficient corresponding to the first reference evaluation value, and the target second weight coefficient is a weight coefficient corresponding to the second reference evaluation value;
133. And carrying out weighting operation according to the target first weight coefficient, the target second weight coefficient, the first reference evaluation value and the second reference evaluation value to obtain a face image quality evaluation value of the face image i.
In this embodiment of the present application, the preset deviation degree and the preset integrity degree may be set by the user or default by the system, and only if they are within a certain range, they may be successfully identified by face recognition. The mapping relation between the preset face deviation degree and the first reference evaluation value, the mapping relation between the preset face completeness degree and the second reference evaluation value and the mapping relation between the preset feature point distribution density and the weight pair can be stored in the electronic equipment in advance, the weight pair can comprise a first weight and a second weight, the sum of the first weight and the second weight is 1, the first weight is the weight corresponding to the first reference evaluation value, and the second weight is the weight corresponding to the second reference evaluation value. The electronic device may further store a mapping relationship between a preset feature point distribution density and an image quality evaluation value, a mapping relationship between a preset information entropy and an image quality deviation value, a mapping relationship between a preset shooting parameter and an optimization coefficient, and a mapping relationship between a preset environmental parameter and a weight coefficient pair in advance. The weight coefficient pair may include a first weight coefficient and a second weight coefficient, where the first weight coefficient is a weight coefficient corresponding to the first reference evaluation value, the second weight coefficient is a weight coefficient corresponding to the second reference evaluation value, and a sum of the first weight coefficient and the second weight coefficient is 1.
The range of the image quality evaluation value may be 0 to 1, or may be 0 to 100. The image quality deviation value may be a positive real number, for example 0 to 1, or may be greater than 1. The optimization coefficient may range between -1 and 1, for example between -0.1 and 0.1. In this embodiment of the present application, the shooting parameter may be at least one of the following: exposure time, photographing mode, sensitivity ISO, white balance parameter, focal length, focus, region of interest, and the like, which is not limited herein. The environmental parameter may be at least one of the following: ambient brightness, ambient temperature, ambient humidity, weather, barometric pressure, magnetic field disturbance strength, and the like, which is not limited herein.
In a specific implementation, taking a face image i as an example, the face image i is any face image in a face image set, and the electronic device can acquire a target face deviation degree of the face image i, a target face integrity degree of the face image i, a target feature point distribution density of the face image i and a target information entropy, wherein the target feature point distribution density can be a ratio between the total number of feature points of the face image i and the area of the face image i.
Furthermore, when the target face deviation degree is greater than the preset deviation degree and the target face integrity is greater than the preset integrity, the electronic device may determine the target first reference evaluation value corresponding to the target face deviation degree according to the mapping relation between the preset face deviation degree and the first reference evaluation value, determine the target second reference evaluation value corresponding to the target face integrity according to the mapping relation between the preset face integrity and the second reference evaluation value, and determine the target weight pair corresponding to the target feature point distribution density according to the mapping relation between the preset feature point distribution density and the weight pair, where the target weight pair includes a target first weight and a target second weight, the target first weight being the weight corresponding to the first reference evaluation value and the target second weight being the weight corresponding to the second reference evaluation value. A weighted operation may then be performed according to the target first weight, the target second weight, the target first reference evaluation value and the target second reference evaluation value to obtain the first reference evaluation value. The specific calculation formula is as follows:

first reference evaluation value = target first reference evaluation value × target first weight + target second reference evaluation value × target second weight
In this way, the image quality can be evaluated as a whole from the perspectives of face deviation and face integrity.
Further, the electronic device may determine a first image quality evaluation value corresponding to the target feature point distribution density according to the mapping relation between the preset feature point distribution density and the image quality evaluation value, and determine a target image quality deviation value corresponding to the target information entropy according to the mapping relation between the preset information entropy and the image quality deviation value. When an image is generated, some noise is introduced for external reasons (weather, light, angle, jitter, etc.) or internal reasons (system, GPU), and this noise has some influence on the image quality; the evaluation value can therefore be adjusted to a certain extent to ensure an objective evaluation of the image quality.
Further, the electronic device may obtain the first shooting parameter of the face image i and determine a target optimization coefficient corresponding to the first shooting parameter according to the mapping relation between the preset shooting parameter and the optimization coefficient; since the shooting parameter settings also have a certain influence on the image quality evaluation, their influence component needs to be determined. Finally, the first image quality evaluation value is adjusted according to the target optimization coefficient and the target image quality deviation value to obtain the second reference evaluation value, which may be obtained according to the following formulas:
In the case where the image quality evaluation value is on a 0-100 scale, the specific calculation formula is as follows:
second reference evaluation value = (first image quality evaluation value + target image quality deviation value) × (1 + target optimization coefficient)
In the case where the image quality evaluation value is on a 0-1 scale, the specific calculation formula is as follows:
second reference evaluation value = first image quality evaluation value × (1 + target image quality deviation value) × (1 + target optimization coefficient)
Further, the electronic device may acquire a target environmental parameter corresponding to the face image i, determine a target weight coefficient pair corresponding to the target environmental parameter according to a mapping relationship between a preset environmental parameter and a weight coefficient pair, where the target weight coefficient pair includes a target first weight coefficient and a target second weight coefficient, the target first weight coefficient is a weight coefficient corresponding to a first reference evaluation value, and the target second weight coefficient is a weight coefficient corresponding to a second reference evaluation value, and further perform a weighting operation according to the target first weight coefficient, the target second weight coefficient, the first reference evaluation value and the second reference evaluation value to obtain a face image quality evaluation value of the face image i, where a specific calculation formula is as follows:
face image quality evaluation value of face image i = first reference evaluation value × target first weight coefficient + second reference evaluation value × target second weight coefficient
In this way, the image quality can be objectively evaluated by combining the influences of internal and external environmental factors, shooting settings, face angle, face integrity and the like, which improves the accuracy of face image quality evaluation.
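The weighted combination above can be summarized in a short sketch. The lookup functions that realize the preset mapping relations (deviation degree to reference value, feature point density to weight pair, environment to weight coefficients, and so on) are assumed to exist elsewhere; only the three formulas from the text are implemented here, on a 0-1 scale.

```python
# Sketch of the weighted evaluation in steps 121-133 (0-1 scale).
def face_quality_score(dev_eval, integ_eval, w1, w2,
                       density_eval, entropy_bias, opt_coef, c1, c2):
    # first reference evaluation value (deviation/integrity branch), w1 + w2 == 1
    first_ref = dev_eval * w1 + integ_eval * w2
    # second reference evaluation value (density/entropy/shooting branch)
    second_ref = density_eval * (1 + entropy_bias) * (1 + opt_coef)
    # final score with environment-dependent weight coefficients, c1 + c2 == 1
    return first_ref * c1 + second_ref * c2

score = face_quality_score(0.8, 0.9, 0.4, 0.6, 0.7, 0.05, 0.02, 0.5, 0.5)
```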
102. Based on the initial training set, a first training set and a second training set are determined.
The electronic device may duplicate the initial training set to obtain the first training set and the second training set, or may process the initial training set in two different ways to obtain the first training set and the second training set. In a specific implementation, for example, a training set x_n may be made first, and the training set x_n is then duplicated to make a training set x_m.
Optionally, the step 102 of determining the first training set and the second training set based on the initial training set may include the following steps:
21. performing first enhancement processing on the initial training set to obtain a first training set;
22. and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
The electronic device may perform a first enhancement process on the initial training set by using a first enhancement algorithm to obtain a first training set, where the first enhancement algorithm may be at least one of: gray stretching, histogram equalization, smoothing, filtering, noise reduction, and the like, are not limited herein.
In addition, the electronic device may perform a second enhancement process on the initial training set by using a second enhancement algorithm to obtain a second training set, where the second enhancement algorithm may be at least one of the following: gray stretching, histogram equalization, smoothing, filtering, noise reduction, and the like, are not limited herein.
Wherein the enhancement effect of the first enhancement process is different from that of the second enhancement process.
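One possible realization of steps 21 and 22 is sketched below, with histogram equalization as the first enhancement and Gaussian smoothing as the second; the embodiment only requires that the two enhancement effects differ, so the particular algorithms are assumptions.

```python
# Sketch: derive two differently enhanced training sets from the initial set.
import cv2

def first_enhance(img):
    # contrast enhancement: equalize the luma channel of a BGR uint8 image
    ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

def second_enhance(img):
    # noise-reducing smoothing
    return cv2.GaussianBlur(img, (3, 3), 0)

def make_training_sets(initial_set):
    return ([first_enhance(im) for im in initial_set],
            [second_enhance(im) for im in initial_set])
```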
103. And inputting the first training set into a first neural network model for operation to obtain a first parameter model.
The first neural network model may be at least one of the following: a convolutional neural network model, a spiking neural network model, a fully connected neural network model, a recurrent neural network model, and the like, without limitation. In a specific implementation, the electronic device may input the first training set into the first neural network model for operation to obtain the first parameter model.
104. And inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the network structure of the first neural network model is the same as that of the second neural network model but model parameters are different.
The second neural network model may be at least one of the following: a convolutional neural network model, a spiking neural network model, a fully connected neural network model, a recurrent neural network model, and the like, without limitation. In a specific implementation, the electronic device may input the second training set into the second neural network model for operation to obtain the second parameter model.
In a specific implementation, the first neural network model has the same network structure as the second neural network model but different model parameters. The model parameters may be initialization parameters of each layer of the neural network model, such as normal distribution random initialization or uniform distribution random initialization.
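The point that the two networks share one structure but start from different parameters can be sketched as follows; the tiny backbone is a stand-in for the embodiment's DCNN, and the seeds are arbitrary.

```python
# Sketch: two networks with the same structure but different initial parameters.
import torch
import torch.nn as nn

def make_backbone(seed: int) -> nn.Module:
    torch.manual_seed(seed)  # different seed -> different random initialization
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(16, 128),  # 128-d face feature, an illustrative size
    )

net1 = make_backbone(seed=0)  # first neural network model
net2 = make_backbone(seed=1)  # second neural network model, same structure
```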
105. The following steps S1-S4 are executed N times, wherein N is a positive integer: S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, where i is a positive integer; S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model; S3, operating on the i-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the i-th first neural network model for operation to obtain the i-th first parameter model; S4, operating on the i-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the i-th second neural network model for operation to obtain the i-th second parameter model.
In a specific implementation, N may be set by the user or by system default; the greater N is, the higher the model accuracy. The electronic device may configure a coefficient for the (i-1)-th first parameter model and for the model parameters of the i-th first neural network model respectively, and then combine the two to construct the i-th first parameter model. Similarly, the second parameter model may be constructed. In turn, the second training set may be processed using the first parameter model, and the first training set may be processed using the second parameter model.
Optionally, the step S1 may further include the steps of:
S11, acquiring a first weight factor pair, where the first weight factor pair includes a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
S12, performing an operation according to the first weight factor, the second weight factor, the (i-1)-th first parameter model and the model parameters of the i-th first neural network model to obtain the i-th first parameter model.
The first weight factor pair may include a first weight factor and a second weight factor, where the sum of the first weight factor and the second weight factor is 1, and both weight factors may be preset or default.
Furthermore, the electronic device may perform an operation according to the first weight factor, the second weight factor, the (i-1)-th first parameter model and the model parameters of the i-th first neural network model to obtain the i-th first parameter model. The specific calculation formula is as follows:
i-th first parameter model = first weight factor × (i-1)-th first parameter model + second weight factor × model parameters of the i-th first neural network model
That is, in a specific implementation, for step S2, the electronic device may configure a coefficient for the (i-1)-th second parameter model and for the model parameters of the i-th second neural network model respectively, and then combine the two to construct the i-th second parameter model. The specific implementation is similar to step S1.
The specific calculation formula is as follows:
i-th second parameter model = first weight factor × (i-1)-th second parameter model + second weight factor × model parameters of the i-th second neural network model
In a specific implementation, the electronic device may input the second training set into the first parameter model for processing to obtain the second reference training set.
Optionally, in step S4, operating on the (i-1)-th second training set according to the i-th first parameter model to obtain the i-th second reference training set may include the following steps:
S41, determining the sample features of each training sample in the (i-1)-th second training set based on the i-th first parameter model to obtain a plurality of sample features;
S42, determining the cosine distances between samples according to the plurality of sample features, and clustering based on the cosine distances to obtain the i-th second reference training set.
Specifically, the electronic device may determine the sample features of each training sample in the (i-1)-th second training set based on the i-th first parameter model to obtain a plurality of sample features, determine the cosine distances between samples according to the plurality of sample features, and perform clustering based on the cosine distances to obtain the i-th second reference training set, thereby improving the classification accuracy of the samples.
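A sketch of steps S41 and S42 follows, assuming features have been extracted with the averaged parameter model; DBSCAN over a precomputed cosine-distance matrix is one possible clustering choice, since the text only specifies cosine distance plus clustering.

```python
# Sketch: re-cluster samples by cosine distance to refresh pseudo labels.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_distances

def recluster(features: np.ndarray, eps: float = 0.3) -> np.ndarray:
    dist = cosine_distances(features)  # (n, n) pairwise cosine distances
    return DBSCAN(eps=eps, min_samples=4,
                  metric="precomputed").fit_predict(dist)  # new pseudo labels
```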
In a specific implementation, for step S3, the electronic device may input the (i-1)-th first training set into the i-th second parameter model for processing to obtain the i-th first reference training set. The specific implementation of step S3 may refer to step S4.
Specifically, the electronic device may determine the sample features of each training sample in the first training set based on the second parameter model to obtain a plurality of sample features, determine the cosine distances between samples according to the plurality of sample features, and perform clustering based on the cosine distances to obtain the first reference training set, so that the classification accuracy of the samples may be improved.

106. And taking the more converged of the first parameter model of the Nth time and the second parameter model of the Nth time as a trained neural network model.
In a specific implementation, the electronic device may take the more converged of the N-th first parameter model and the N-th second parameter model as the trained neural network model; the trained neural network model can then be used for recognition, so that a high-precision neural network model is obtained and face recognition efficiency is improved.
Because the parameter model is a cumulative average of the model parameters over the preceding periods, it is strongly decoupled from the current network, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. By using the past average model of the complementary network to generate supervision, the pseudo-label predictions of the two networks can be better correlated, thereby better avoiding error amplification and overfitting and enabling a high-precision neural network model.
For example, in a specific implementation, as shown in fig. 1C, the data may be prepared as follows: face data is collected on a large scale from a monitored scene, and preprocessing such as detection and alignment is performed on the faces; the preprocessed data are then clustered with a K-means clustering algorithm, a pseudo label y_i is first assigned to each sample to make a training set x_n, and the training set x_n is then copied to make a training set x_m.
Then, for network co-training, two deep convolutional neural networks DCNN1 and DCNN2 may be designed: DCNN1 and DCNN2 are randomly initialized, dropout is applied to the output features of DCNN1 and DCNN2 to increase the difference between the two networks, and different data enhancement methods are randomly applied to the two training sets x_n and x_m to increase the difference between the two training sets and avoid overfitting; the training set x_n is then input to DCNN1 for iterative training, and the training set x_m is input to DCNN2 for iterative training.
Furthermore, the labels are updated with the average parameter models (λ = 0.5):
A. After each iteration period of training, the average parameter models f_t(θ_1) and f_t(θ_2) of DCNN1 and DCNN2 up to the current period are calculated. Updating f_t(θ_1) and f_t(θ_2) is similar to updating weights with Momentum in a neural network, and corresponds to a cumulative weighted average of the network parameters θ_1 and θ_2 with momentum coefficient λ:

f_t(θ_1) = λ·f_{t-1}(θ_1) + (1 − λ)·θ_1, t ≠ 0 (1)

f_t(θ_2) = λ·f_{t-1}(θ_2) + (1 − λ)·θ_2, t ≠ 0 (2)

where λ ∈ [0, 1), θ_1 is the current network parameter of DCNN1, θ_2 is the current network parameter of DCNN2, and t denotes the t-th iteration period. When t = 0:

f_0(θ_1) = θ_1, (3)

f_0(θ_2) = θ_2. (4)
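A minimal PyTorch sketch of formulas (1) to (4), operating on state dicts, is shown below; it is illustrative only, and the handling of integer buffers noted in the comment is deliberately omitted.

```python
# Sketch: momentum-style cumulative average of network parameters.
import torch

@torch.no_grad()
def average_parameter_model(prev_avg, net, lam=0.5, t=0):
    theta = net.state_dict()
    if t == 0:  # formulas (3)/(4): f_0(theta) = theta
        return {k: v.detach().clone() for k, v in theta.items()}
    # formulas (1)/(2): f_t = lam * f_{t-1} + (1 - lam) * theta
    # (integer buffers such as BatchNorm counters would need special-casing)
    return {k: lam * prev_avg[k] + (1.0 - lam) * v for k, v in theta.items()}
```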
B. After t periods of iterative training, the average parameter model f_t(θ_1) calculated according to formula (1) is used to compute the features of each sample in the training set x_m; the features are then used to calculate the cosine distances between samples, and the samples are re-clustered.

C. The average parameter model f_t(θ_2) calculated according to formula (2) is used to compute the features of each sample in the training set x_n; the features are then used to calculate the cosine distances between samples, and the samples are re-clustered.
The average parameter model is a cumulative average of the model parameters over the preceding periods and is therefore strongly decoupled from the current network, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. By using the past average model of the complementary network to generate supervision, the pseudo-label predictions of the two networks can be better correlated, thereby better avoiding error amplification and overfitting.
Finally, the labels are repeatedly and iteratively updated with the average parameter models (λ = 0.5) until the networks converge and the loss stabilizes. The value of λ ranges over [0, 1); for example, λ = 0.9.
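Putting the pieces together, the overall loop might look like the hedged sketch below. `extract_features` and `train_one_period` are hypothetical helpers (feature extraction with an average model, and one period of supervised training on pseudo labels); `recluster` and `average_parameter_model` follow the earlier sketches.

```python
# Hedged sketch of the co-training loop with average-parameter supervision.
def co_train(dcnn1, dcnn2, x_n, x_m, periods=30, lam=0.5):
    f1 = average_parameter_model(None, dcnn1, lam, t=0)  # f_0(theta1)
    f2 = average_parameter_model(None, dcnn2, lam, t=0)  # f_0(theta2)
    for t in range(1, periods + 1):
        # each training set is re-labeled by the OTHER network's average model
        labels_n = recluster(extract_features(f2, x_n))  # step C
        labels_m = recluster(extract_features(f1, x_m))  # step B
        train_one_period(dcnn1, x_n, labels_n)           # x_n trains DCNN1
        train_one_period(dcnn2, x_m, labels_m)           # x_m trains DCNN2
        f1 = average_parameter_model(f1, dcnn1, lam, t)  # formula (1)
        f2 = average_parameter_model(f2, dcnn2, lam, t)  # formula (2)
    return f1, f2  # the more converged one becomes the trained model
```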
Optionally, after the step 105, the following steps may be further included:
a1, determining a first convergence degree of the first parameter model of the nth time and a second convergence degree of the second parameter model of the nth time;
a2, determining a first weight corresponding to the first parameter model of the nth time and a second weight corresponding to the second parameter model of the nth time according to the first convergence degree and the second convergence degree;
A3, carrying out weighting operation according to the first parameter model of the Nth time, the first weight, the second parameter model of the Nth time and the second weight to obtain a reference neural network model;
a4, fine tuning the reference neural network model through the first reference training set of the Nth time or the second reference training set of the Nth time to obtain a final reference neural network model, wherein the convergence degree of the final reference neural network model is larger than a preset convergence degree.
In a specific implementation, the electronic device may obtain the first convergence degree of the N-th first parameter model and the second convergence degree of the N-th second parameter model, and then determine the first weight corresponding to the N-th first parameter model and the second weight corresponding to the N-th second parameter model according to the first convergence degree and the second convergence degree, where first weight = first convergence degree / (first convergence degree + second convergence degree), and second weight = second convergence degree / (first convergence degree + second convergence degree). The preset convergence degree may be set by the user or by system default.
Further, the electronic device may perform a weighting operation according to the nth first parameter model, the first weight, the nth second parameter model, and the second weight to obtain a reference neural network model, which is specifically as follows:
reference neural network model = N-th first parameter model × first weight + N-th second parameter model × second weight
Furthermore, the electronic device may fine-tune the reference neural network model with the N-th first reference training set or the N-th second reference training set to obtain a final reference neural network model whose convergence degree is greater than the preset convergence degree. By adjusting the weights according to the convergence degrees of the two neural network models, a new neural network model combining the advantages of both can be constructed, which improves model performance.
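Steps A1 to A4 can be sketched as a convergence-weighted average of the two final models; how the convergence degree is measured is an assumption here (for example, the inverse of a validation loss).

```python
# Sketch: fuse the two N-th parameter models by relative convergence degree.
def fuse_models(state1, state2, conv1, conv2):
    w1 = conv1 / (conv1 + conv2)  # first weight
    w2 = conv2 / (conv1 + conv2)  # second weight
    return {k: w1 * state1[k] + w2 * state2[k] for k in state1}
```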
It can be seen that the data processing method described in the embodiments of the present application is applied to an electronic device: an initial training set for a face is obtained, a first training set and a second training set are determined based on the initial training set, the first training set is input into a first neural network model for operation to obtain a first parameter model, and the second training set is input into a second neural network model for operation to obtain a second parameter model, where the first neural network model has the same network structure as the second neural network model but different model parameters; the following steps S1 to S4 are then executed N times, where N is a positive integer: S1, constructing the i-th first parameter model according to the (i-1)-th first parameter model and the model parameters of the i-th first neural network model, where i is a positive integer; S2, constructing the i-th second parameter model according to the (i-1)-th second parameter model and the model parameters of the i-th second neural network model; S3, operating on the i-th first training set according to the i-th second parameter model to obtain the i-th first reference training set, and inputting the i-th first reference training set into the i-th first neural network model for operation to obtain the i-th first parameter model; S4, operating on the i-th second training set according to the i-th first parameter model to obtain the i-th second reference training set, and inputting the i-th second reference training set into the i-th second neural network model for operation to obtain the i-th second parameter model; finally, the more converged of the N-th first parameter model and the N-th second parameter model is taken as the trained neural network model. Because each parameter model is a cumulative average of the model parameters over the preceding periods, it is strongly decoupled from the current network, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. By using the past average model of the complementary network to generate supervision, the pseudo-label predictions of the two networks can be better correlated, so that error amplification and overfitting are better avoided, a high-precision neural network model can be obtained, and the accuracy of the neural network model is improved.
Consistent with the embodiment shown in fig. 1B, please refer to fig. 2. Fig. 2 is a schematic flowchart of a data processing method according to an embodiment of the present application, applied to the electronic device shown in fig. 1A. The data processing method includes:
201. Acquire an initial face image set.
202. Perform image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values.
203. Select, from the plurality of face image quality evaluation values, those greater than a preset image quality evaluation value, and take the corresponding face images as an initial training set.
204. Determine a first training set and a second training set based on the initial training set.
205. Input the first training set into a first neural network model for operation to obtain a first parameter model.
206. Input the second training set into a second neural network model for operation to obtain a second parameter model, where the first neural network model has the same network structure as the second neural network model but different model parameters.
207. Execute the following steps S1 to S4 N times, where N is a positive integer: S1, construct the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model, where i is a positive integer; S2, construct the ith second parameter model according to the i-1th second parameter model and the model parameters of the ith second neural network model; S3, operate on the ith first training set according to the ith second parameter model to obtain the ith first reference training set, and input the ith first reference training set into the ith first neural network model for operation to obtain the ith first parameter model; S4, operate on the ith second training set according to the ith first parameter model to obtain the ith second reference training set, and input the ith second reference training set into the ith second neural network model for operation to obtain the ith second parameter model.
208. Take the more converged of the Nth first parameter model and the Nth second parameter model as the trained neural network model.
For the specific description of steps 201 to 208, reference may be made to the corresponding steps of the data processing method described in fig. 1B, which are not repeated herein.
It can be seen that the data processing method described in the embodiments of the present application is applied to an electronic device. Because each parameter model is the accumulated average of the model parameters over previous periods, it is more strongly decoupled from its peer, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. Generating supervision with the past average model of the complementary network keeps the pseudo-label predictions of the two networks from becoming overly correlated, so error amplification and overfitting are better avoided, a high-precision neural network model can be obtained, and the accuracy of the neural network model is improved.
Consistent with the above embodiments, referring to fig. 3, fig. 3 is a schematic structural diagram of an electronic device provided in an embodiment of the present application. As shown in fig. 3, the electronic device includes a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor. In the embodiment of the present application, the programs include instructions for performing the following steps:
Acquiring an initial training set aiming at a human face;
determining a first training set and a second training set based on the initial training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the network structure of the first neural network model is the same as that of the second neural network model but model parameters are different;
the following steps S1-S4 are executed N times, wherein N is a positive integer:
S1, constructing the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model, where i is a positive integer;
S2, constructing the ith second parameter model according to the i-1th second parameter model and the model parameters of the ith second neural network model;
S3, operating on the ith first training set according to the ith second parameter model to obtain the ith first reference training set, and inputting the ith first reference training set into the ith first neural network model for operation to obtain the ith first parameter model;
S4, operating on the ith second training set according to the ith first parameter model to obtain the ith second reference training set, and inputting the ith second reference training set into the ith second neural network model for operation to obtain the ith second parameter model;
and taking the more converged of the Nth first parameter model and the Nth second parameter model as the trained neural network model.
It can be seen that, for the electronic device described in the embodiments of the present application, because each parameter model is the accumulated average of the model parameters over previous periods, it is more strongly decoupled from its peer, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. Generating supervision with the past average model of the complementary network keeps the pseudo-label predictions of the two networks from becoming overly correlated, so error amplification and overfitting are better avoided, a high-precision neural network model can be obtained, and the accuracy of the neural network model is improved.
Optionally, in said determining a first training set and a second training set based on said initial training set, the program comprises instructions for:
Performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
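For instance, the two enhancement processings might be realized as two augmentation pipelines with different effects (a sketch assuming torchvision; the specific operations below are illustrative, not prescribed by the embodiment):

```python
from PIL import Image
from torchvision import transforms

# Two pipelines whose enhancement effects differ, so the two training
# sets present the same faces under different perturbations.
first_enhance = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
])
second_enhance = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomGrayscale(p=0.1),
])

def build_training_sets(initial_training_set: list[Image.Image]):
    first_set = [first_enhance(img) for img in initial_training_set]
    second_set = [second_enhance(img) for img in initial_training_set]
    return first_set, second_set
```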
Optionally, in the constructing of the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model, the program includes instructions for performing the following steps:
acquiring a first weight factor pair, wherein the first weight factor pair comprises a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
and performing an operation according to the first weight factor, the second weight factor, the i-1th first parameter model, and the model parameters of the ith first neural network model to obtain the ith first parameter model.
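Concretely, this step admits a standard exponential-moving-average reading (a minimal sketch; `alpha` plays the role of the first weight factor and `1 - alpha` the second, so the pair sums to 1):

```python
def build_parameter_model(prev_param_state: dict, net_state: dict,
                          alpha: float) -> dict:
    # ith parameter model = alpha * (i-1)th parameter model
    #                     + (1 - alpha) * ith neural network model,
    # applied elementwise to each tensor in the models' state dicts.
    return {k: alpha * prev_param_state[k] + (1.0 - alpha) * net_state[k]
            for k in prev_param_state}
```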
Optionally, in the operating on the i-1th second training set according to the ith first parameter model to obtain the ith second reference training set, the program includes instructions for performing the following steps:
determining the sample characteristics of each training sample in the i-1th second training set based on the ith first parameter model to obtain a plurality of sample characteristics;
and determining cosine distances among samples according to the plurality of sample characteristics, and clustering based on the cosine distances to obtain the ith second reference training set.
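As an illustrative sketch of this clustering step (scikit-learn assumed; the embodiment names cosine distance and clustering but not a particular algorithm, so DBSCAN and its parameters here are assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics.pairwise import cosine_distances

def make_reference_training_set(features: np.ndarray, samples: list):
    # features: (num_samples, dim) sample characteristics extracted by
    # the ith first parameter model; samples: the matching images.
    distances = cosine_distances(features)
    labels = DBSCAN(eps=0.5, min_samples=4,
                    metric="precomputed").fit_predict(distances)
    # Cluster ids become pseudo labels; label -1 marks noise samples,
    # which are dropped from the reference training set.
    return [(s, l) for s, l in zip(samples, labels) if l != -1]
```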
Optionally, in the acquiring an initial training set for a face, the program includes instructions for:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
and selecting, from the plurality of face image quality evaluation values, the values greater than a preset image quality evaluation value, and taking the corresponding face images as the initial training set.
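A minimal sketch of this selection step (the `evaluate_quality` scorer is a hypothetical stand-in for the weighted evaluation detailed in the claims, and the threshold value is illustrative):

```python
from typing import Callable, List

def build_initial_training_set(face_images: List,
                               evaluate_quality: Callable,
                               preset_value: float = 0.7) -> List:
    # Keep only the face images whose quality evaluation value is
    # greater than the preset image quality evaluation value.
    return [img for img in face_images
            if evaluate_quality(img) > preset_value]
```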
Optionally, the above program further comprises instructions for performing the steps of:
determining a first convergence degree of the Nth first parameter model and a second convergence degree of the Nth second parameter model;
determining, according to the first convergence degree and the second convergence degree, a first weight corresponding to the Nth first parameter model and a second weight corresponding to the Nth second parameter model;
performing a weighted operation according to the Nth first parameter model, the first weight, the Nth second parameter model, and the second weight to obtain a reference neural network model;
and fine-tuning the reference neural network model through the Nth first reference training set or the Nth second reference training set to obtain a final reference neural network model, wherein the convergence degree of the final reference neural network model is greater than a preset convergence degree.
The foregoing description of the embodiments of the present application has been presented primarily in terms of a method-side implementation. It is to be understood that, in order to achieve the above-described functions, they comprise corresponding hardware structures and/or software modules that perform the respective functions. Those of skill in the art will readily appreciate that the elements and algorithm steps described in connection with the embodiments disclosed herein may be embodied as hardware or a combination of hardware and computer software. Whether a function is implemented as hardware or computer software driven hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The embodiment of the present application may perform the division of the functional units according to the above method example, for example, each functional unit may be divided corresponding to each function, or two or more functions may be integrated in one processing unit. The integrated units may be implemented in hardware or in software functional units. It should be noted that, in the embodiment of the present application, the division of the units is schematic, which is merely a logic function division, and other division manners may be implemented in actual practice.
Fig. 4 is a block diagram of functional units of a data processing apparatus 400 according to an embodiment of the present application, the apparatus 400 being applied to an electronic device, the apparatus 400 comprising: an acquisition unit 401, a determination unit 402, an operation unit 403, and an execution unit 404, wherein,
the acquiring unit 401 is configured to acquire an initial training set for a face;
the determining unit 402 is configured to determine a first training set and a second training set based on the initial training set;
the operation unit 403 is configured to input the first training set into a first neural network model for operation, so as to obtain a first parameter model;
The operation unit 403 is further configured to input the second training set into a second neural network model to perform an operation, so as to obtain a second parameter model, where the network structure of the first neural network model is the same as that of the second neural network model, but model parameters are different;
the execution unit 404 is configured to execute the following steps S1 to S4 N times, where N is a positive integer:
S1, constructing the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model, where i is a positive integer;
S2, constructing the ith second parameter model according to the i-1th second parameter model and the model parameters of the ith second neural network model;
S3, operating on the ith first training set according to the ith second parameter model to obtain the ith first reference training set, and inputting the ith first reference training set into the ith first neural network model for operation to obtain the ith first parameter model;
S4, operating on the ith second training set according to the ith first parameter model to obtain the ith second reference training set, and inputting the ith second reference training set into the ith second neural network model for operation to obtain the ith second parameter model;
the determining unit 402 is further configured to take the more converged of the Nth first parameter model and the Nth second parameter model as the trained neural network model.
It can be seen that, for the data processing apparatus described in the embodiments of the present application, applied to an electronic device, because each parameter model is the accumulated average of the model parameters over previous periods, it is more strongly decoupled from its peer, which not only increases the stability of the collaborative supervision but also makes the outputs of the two models more independent and complementary. Generating supervision with the past average model of the complementary network keeps the pseudo-label predictions of the two networks from becoming overly correlated, so error amplification and overfitting are better avoided, a high-precision neural network model can be obtained, and the accuracy of the neural network model is improved.
Optionally, in the aspect of determining the first training set and the second training set based on the initial training set, the determining unit 402 is specifically configured to:
performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
Optionally, in terms of constructing the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model, the execution unit 404 is specifically configured to:
acquiring a first weight factor pair, wherein the first weight factor pair comprises a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
and perform an operation according to the first weight factor, the second weight factor, the i-1th first parameter model, and the model parameters of the ith first neural network model to obtain the ith first parameter model.
Optionally, in terms of operating on the i-1th second training set according to the ith first parameter model to obtain the ith second reference training set, the execution unit 404 is specifically configured to:
determine the sample characteristics of each training sample in the i-1th second training set based on the ith first parameter model to obtain a plurality of sample characteristics;
and determining cosine distances among samples according to the plurality of sample characteristics, and clustering based on the cosine distances to obtain the ith second reference training set.
Optionally, in the aspect of acquiring the initial training set for the face, the acquiring unit 401 is specifically configured to:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
and select, from the plurality of face image quality evaluation values, the values greater than a preset image quality evaluation value, and take the corresponding face images as the initial training set.
Optionally, the apparatus 400 is further specifically configured to:
determine a first convergence degree of the Nth first parameter model and a second convergence degree of the Nth second parameter model;
determine, according to the first convergence degree and the second convergence degree, a first weight corresponding to the Nth first parameter model and a second weight corresponding to the Nth second parameter model;
perform a weighted operation according to the Nth first parameter model, the first weight, the Nth second parameter model, and the second weight to obtain a reference neural network model;
and fine-tune the reference neural network model through the Nth first reference training set or the Nth second reference training set to obtain a final reference neural network model, wherein the convergence degree of the final reference neural network model is greater than a preset convergence degree.
It may be understood that the functions of each program module of the data processing apparatus of the present embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not repeated herein.
The embodiment of the application also provides a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, where the computer program causes a computer to execute part or all of the steps of any one of the methods described in the embodiments of the method, where the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer-readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any one of the methods described in the method embodiments above. The computer program product may be a software installation package, said computer comprising an electronic device.
It should be noted that, for simplicity of description, the foregoing method embodiments are all expressed as a series of action combinations, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will also appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily required in the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to related descriptions of other embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, such as the above-described division of units, merely a division of logic functions, and there may be additional manners of dividing in actual implementation, such as multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, or may be in electrical or other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units described above, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable memory. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned memory includes: a USB flash drive, a read-only memory (ROM), a random access memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing program code.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments may be implemented by a program instructing associated hardware, and the program may be stored in a computer-readable memory, which may include: a flash disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The embodiments of the present application have been described in detail above, and specific examples are used herein to illustrate the principles and implementations of the present application; the above descriptions of the embodiments are only intended to help understand the method of the present application and its core ideas. Meanwhile, those skilled in the art may make modifications to the specific implementations and the application scope in accordance with the ideas of the present application. In view of the above, the content of this specification should not be construed as limiting the present application.
Claims (10)
1. A data processing method, applied to an electronic device, the method comprising:
obtaining an initial training set for a face, specifically: collecting face data on a large scale from a monitoring scene, performing face detection and alignment, clustering the collected data using a K-means clustering algorithm, and assigning a pseudo label to each picture to prepare the initial training set;
determining a first training set and a second training set based on the initial training set, specifically: processing the initial training set in two different processing modes to obtain the first training set and the second training set;
inputting the first training set into a first neural network model for operation to obtain a first parameter model;
inputting the second training set into a second neural network model for operation to obtain a second parameter model, wherein the network structure of the first neural network model is the same as that of the second neural network model but model parameters are different;
the following steps S1-S4 are executed N times, wherein N is a positive integer:
S1, constructing the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model, where i is a positive integer;
S2, constructing the ith second parameter model according to the i-1th second parameter model and the model parameters of the ith second neural network model;
S3, operating on the ith first training set according to the ith second parameter model to obtain the ith first reference training set, and inputting the ith first reference training set into the ith first neural network model for operation to obtain the ith first parameter model;
S4, operating on the ith second training set according to the ith first parameter model to obtain the ith second reference training set, and inputting the ith second reference training set into the ith second neural network model for operation to obtain the ith second parameter model;
taking the more converged of the Nth first parameter model and the Nth second parameter model as a trained neural network model, wherein the neural network model is used for improving face recognition efficiency;
the obtaining the initial training set for the face includes:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
selecting, from the plurality of face image quality evaluation values, the values greater than a preset image quality evaluation value, and taking the corresponding face images as the initial training set;
the step of performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values includes:
acquiring a target face deviation degree of a face image i, a target face integrity degree of the face image i, a target feature point distribution density and a target information entropy of the face image i, wherein the face image i is any face image in the face image set;
When the target face deviation degree is larger than a preset deviation degree and the target face integrity degree is larger than a preset integrity degree, determining a target first reference evaluation value corresponding to the target face deviation degree according to a mapping relation between the preset face deviation degree and the first reference evaluation value;
determining a target second reference evaluation value corresponding to the target face integrity according to a mapping relation between the preset face integrity and the second reference evaluation value;
determining a target weight pair corresponding to the target feature point distribution density according to a mapping relation between the preset feature point distribution density and the weight pair, wherein the target weight pair comprises a target first weight and a target second weight, the target first weight is a weight corresponding to the first reference evaluation value, and the target second weight is a weight corresponding to the second reference evaluation value;
performing weighted operation according to the target first weight, the target second weight, the target first reference evaluation value and the target second reference evaluation value to obtain a first reference evaluation value;
determining a first image quality evaluation value corresponding to the target feature point distribution density according to a mapping relation between the preset feature point distribution density and the image quality evaluation value;
Determining a target image quality deviation value corresponding to the target information entropy according to a mapping relation between a preset information entropy and the image quality deviation value;
acquiring a first shooting parameter of the face image i;
determining a target optimization coefficient corresponding to the first shooting parameter according to a mapping relation between a preset shooting parameter and the optimization coefficient;
adjusting the first image quality evaluation value according to the target optimization coefficient and the target image quality deviation value to obtain a second reference evaluation value;
acquiring a target environment parameter corresponding to the face image i;
determining a target weight coefficient pair corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and a weight coefficient pair, wherein the target weight coefficient pair comprises a target first weight coefficient and a target second weight coefficient, the target first weight coefficient is a weight coefficient corresponding to the first reference evaluation value, and the target second weight coefficient is a weight coefficient corresponding to the second reference evaluation value;
and carrying out weighting operation according to the target first weight coefficient, the target second weight coefficient, the first reference evaluation value and the second reference evaluation value to obtain a face image quality evaluation value of the face image i.
2. The method of claim 1, wherein the determining a first training set and a second training set based on the initial training set comprises:
performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
3. The method according to claim 1 or 2, wherein the constructing the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model comprises:
acquiring a first weight factor pair, wherein the first weight factor pair comprises a first weight factor and a second weight factor, and the sum of the first weight factor and the second weight factor is 1;
and performing an operation according to the first weight factor, the second weight factor, the i-1th first parameter model, and the model parameters of the ith first neural network model to obtain the ith first parameter model.
4. The method according to claim 1 or 2, wherein the operating on the i-1th second training set according to the ith first parameter model to obtain the ith second reference training set comprises:
determining the sample characteristics of each training sample in the i-1th second training set based on the ith first parameter model to obtain a plurality of sample characteristics;
and determining cosine distances among samples according to the plurality of sample characteristics, and clustering based on the cosine distances to obtain the ith second reference training set.
5. The method according to claim 1 or 2, wherein the obtaining an initial training set for a face comprises:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
and selecting, from the plurality of face image quality evaluation values, the values greater than a preset image quality evaluation value, and taking the corresponding face images as the initial training set.
6. The method according to claim 1 or 2, characterized in that the method further comprises:
determining a first convergence degree of the Nth first parameter model and a second convergence degree of the Nth second parameter model;
determining, according to the first convergence degree and the second convergence degree, a first weight corresponding to the Nth first parameter model and a second weight corresponding to the Nth second parameter model;
performing a weighted operation according to the Nth first parameter model, the first weight, the Nth second parameter model, and the second weight to obtain a reference neural network model;
and fine-tuning the reference neural network model through the Nth first reference training set or the Nth second reference training set to obtain a final reference neural network model, wherein the convergence degree of the final reference neural network model is greater than a preset convergence degree.
7. A data processing apparatus for application to an electronic device, the apparatus comprising: an acquisition unit, a determination unit, an operation unit and an execution unit, wherein,
the acquiring unit is configured to acquire an initial training set for a face, specifically: collect face data on a large scale from a monitoring scene, perform face detection and alignment, cluster the collected data using a K-means clustering algorithm, and assign a pseudo label to each picture to prepare the initial training set;
the determining unit is configured to determine a first training set and a second training set based on the initial training set, specifically: process the initial training set in two different processing modes to obtain the first training set and the second training set;
The operation unit is used for inputting the first training set into a first neural network model for operation to obtain a first parameter model;
the operation unit is further configured to input the second training set into a second neural network model for operation, so as to obtain a second parameter model, where the first neural network model has the same network structure as the second neural network model but different model parameters;
the execution unit is configured to execute the following steps S1 to S4 N times, where N is a positive integer:
S1, constructing the ith first parameter model according to the i-1th first parameter model and the model parameters of the ith first neural network model, where i is a positive integer;
S2, constructing the ith second parameter model according to the i-1th second parameter model and the model parameters of the ith second neural network model;
S3, operating on the ith first training set according to the ith second parameter model to obtain the ith first reference training set, and inputting the ith first reference training set into the ith first neural network model for operation to obtain the ith first parameter model;
S4, operating on the ith second training set according to the ith first parameter model to obtain the ith second reference training set, and inputting the ith second reference training set into the ith second neural network model for operation to obtain the ith second parameter model;
the determining unit is further configured to take the more converged of the Nth first parameter model and the Nth second parameter model as a trained neural network model, wherein the neural network model is used for improving face recognition efficiency;
the obtaining the initial training set for the face includes:
acquiring an initial face image set;
performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values;
selecting, from the plurality of face image quality evaluation values, the values greater than a preset image quality evaluation value, and taking the corresponding face images as the initial training set;
the step of performing image quality evaluation on each face image in the face image set to obtain a plurality of face image quality evaluation values includes:
acquiring a target face deviation degree of a face image i, a target face integrity degree of the face image i, a target feature point distribution density and a target information entropy of the face image i, wherein the face image i is any face image in the face image set;
When the target face deviation degree is larger than a preset deviation degree and the target face integrity degree is larger than a preset integrity degree, determining a target first reference evaluation value corresponding to the target face deviation degree according to a mapping relation between the preset face deviation degree and the first reference evaluation value;
determining a target second reference evaluation value corresponding to the target face integrity according to a mapping relation between the preset face integrity and the second reference evaluation value;
determining a target weight pair corresponding to the target feature point distribution density according to a mapping relation between the preset feature point distribution density and the weight pair, wherein the target weight pair comprises a target first weight and a target second weight, the target first weight is a weight corresponding to the first reference evaluation value, and the target second weight is a weight corresponding to the second reference evaluation value;
performing weighted operation according to the target first weight, the target second weight, the target first reference evaluation value and the target second reference evaluation value to obtain a first reference evaluation value;
determining a first image quality evaluation value corresponding to the target feature point distribution density according to a mapping relation between the preset feature point distribution density and the image quality evaluation value;
Determining a target image quality deviation value corresponding to the target information entropy according to a mapping relation between a preset information entropy and the image quality deviation value;
acquiring a first shooting parameter of the face image i;
determining a target optimization coefficient corresponding to the first shooting parameter according to a mapping relation between a preset shooting parameter and the optimization coefficient;
adjusting the first image quality evaluation value according to the target optimization coefficient and the target image quality deviation value to obtain a second reference evaluation value;
acquiring a target environment parameter corresponding to the face image i;
determining a target weight coefficient pair corresponding to the target environment parameter according to a mapping relation between a preset environment parameter and a weight coefficient pair, wherein the target weight coefficient pair comprises a target first weight coefficient and a target second weight coefficient, the target first weight coefficient is a weight coefficient corresponding to the first reference evaluation value, and the target second weight coefficient is a weight coefficient corresponding to the second reference evaluation value;
and carrying out weighting operation according to the target first weight coefficient, the target second weight coefficient, the first reference evaluation value and the second reference evaluation value to obtain a face image quality evaluation value of the face image i.
8. The apparatus according to claim 7, wherein in said determining a first training set and a second training set based on said initial training set, said determining unit is specifically configured to:
performing first enhancement processing on the initial training set to obtain a first training set;
and performing second enhancement processing on the initial training set to obtain a second training set, wherein the enhancement effect of the first enhancement processing is different from that of the second enhancement processing.
9. An electronic device, comprising a processor and a memory, wherein the memory is configured to store one or more programs, the one or more programs are configured to be executed by the processor, and the programs comprise instructions for performing the steps in the method of any one of claims 1-6.
10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for electronic data exchange, and the computer program causes a computer to perform the method according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
CN202011639072.2A | 2020-12-31 | 2020-12-31 | Data processing method, electronic equipment and related products
Publications (2)
Publication Number | Publication Date
---|---
CN112686171A | 2021-04-20
CN112686171B | 2023-07-18
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |