CN114463830B - Genetic relationship determination method, genetic relationship determination device, electronic device, and storage medium - Google Patents
Genetic relationship determination method, genetic relationship determination device, electronic device, and storage medium
- Publication number
- CN114463830B (application CN202210386635.4A)
- Authority
- CN
- China
- Prior art keywords
- face
- family
- image
- feature
- face feature
- Prior art date
- 2022-04-14
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Abstract
The embodiments of the present invention relate to the field of computer image processing, and disclose a genetic relationship determination method and device, an electronic device, and a storage medium. The genetic relationship determination method includes: selecting a plurality of face images from a family image set of a family to be matched as input data for a family face feature extraction model, to obtain a first family face feature of the family to be matched; inputting the first family face feature into an individual face feature prediction model to obtain a first predicted single face feature that has the same face attributes as the face image to be detected in the family to be matched; and comparing the single face feature extracted from the face image to be detected with the first predicted single face feature to determine the genetic relationship between the face image to be detected and the family to be matched. The method determines the genetic relationship between the person to be detected and the family to be matched more accurately and is more widely applicable.
Description
Technical Field
The embodiment of the invention relates to the field of computer image processing, in particular to a genetic relationship determination method, a genetic relationship determination device, electronic equipment and a storage medium.
Background
News reports and literary works about lost children have gradually brought missing children and their families into the public eye; finding a lost child becomes a family's foremost concern and often changes the course of the parents' lives. Searches for lost children mostly rely on childhood photographs and emphasize certain congenital facial features so that sympathetic members of the public can assist. However, childhood and adolescence are the main period of human growth, so the facial features in a childhood photograph and the current facial features of the person being sought are not the same, and the person being sought cannot be confirmed accurately.
Meanwhile, there is an approach that predicts current facial features from image data of the lost child at a young age together with general growth rules, and matches the prediction result against face image data in a database to assist the search. However, a prediction mode that merely combines a personal image with general, non-targeted growth rules cannot effectively predict the lost child's current appearance, and so cannot support searching for and identifying the child. Moreover, in some cases no image data from before the child was lost exists, and prediction from images is impossible when only the verbal descriptions of elders are available. An effective genetic relationship determination method based on face recognition is therefore needed.
Disclosure of Invention
The embodiments of the present invention aim to provide a genetic relationship determination method, a genetic relationship determination device, an electronic device, and a storage medium, so that the genetic relationship between a person to be detected and a family to be matched is determined more accurately and with wider applicability.
To solve the above technical problem, an embodiment of the present invention provides a genetic relationship determination method, including:
selecting a plurality of face images from a family image set of a family to be matched as input data for a family face feature extraction model, to obtain a first family face feature of the family to be matched; inputting the first family face feature into an individual face feature prediction model to obtain a first predicted single face feature that has the same face attributes as the face image to be detected in the family to be matched; and comparing the single face feature extracted from the face image to be detected with the first predicted single face feature to determine the genetic relationship between the face image to be detected and the family to be matched.
An embodiment of the present invention also provides an electronic device, including: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the genetic relationship determination method described above.
Embodiments of the present invention also provide a computer-readable storage medium storing a computer program which, when executed by a processor, implements the genetic relationship determination method described above.
According to the family-face-feature-based genetic relationship determination method above, the face features of the family to be matched (the first family face feature) are obtained, and the predicted face feature of an individual with the same attributes as the face to be detected in that family (the first predicted single face feature) is obtained through the individual face feature prediction model; the predicted face feature is then compared with the single face feature of the face image to be detected to determine whether the face image to be detected belongs to the family to be matched, and thereby the genetic relationship. The present application combines the family characteristics of the family to be matched with the face attributes of the face to be detected to predict the family member closest to those attributes, and then determines whether the person to be detected is a missing member of the family to be matched from the similarity between the predicted single face feature and the face feature of the face to be detected. This determination method takes into account the family characteristics and the growth patterns of family members, judges the genetic relationship between the person to be detected and the family to be matched more accurately, need not be overly concerned with the loss of recognition accuracy caused by changes in the missing person's facial features during the missing period, and has a wider range of application.
Drawings
One or more embodiments are illustrated by way of example in the accompanying drawings; in the drawings, like reference numerals refer to similar elements, and the figures are not drawn to scale unless otherwise specified.
Fig. 1 is a flowchart of a genetic relationship determination method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a family face feature extraction model provided according to an embodiment of the present application;
FIG. 3 is a schematic diagram of an individual face feature prediction model provided according to an embodiment of the present application;
FIG. 4 is a schematic diagram of an electronic device provided in accordance with an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions, and advantages of the embodiments of the present invention more apparent, the embodiments are described in detail below with reference to the accompanying drawings. Those of ordinary skill in the art will appreciate that, although numerous technical details are set forth in the various embodiments to give the reader a better understanding of the present application, the technical solutions claimed in the present application can still be implemented without these technical details and with various changes and modifications based on the following embodiments. The division into embodiments below is for convenience of description only, should not limit the specific implementations of the present invention, and the embodiments may be combined and cross-referenced with one another where no contradiction arises.
The terms "first" and "second" in the embodiments of the present application are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present application, the terms "comprise" and "have", as well as any variations thereof, are intended to cover a non-exclusive inclusion. For example, a system, product or apparatus that comprises a list of elements or components is not limited to only those elements or components but may alternatively include other elements or components not expressly listed or inherent to such product or apparatus. In the description of the present application, "plurality" means at least two, e.g., two, three, etc., unless explicitly specifically limited otherwise.
One embodiment of the present invention relates to a genetic relationship determination method. The specific flow is shown in Fig. 1.
Step 101: selecting a plurality of face images from a family image set of a family to be matched as input data for a family face feature extraction model, to obtain a first family face feature of the family to be matched;
Step 102: inputting the first family face feature into an individual face feature prediction model to obtain a first predicted single face feature that has the same face attributes as the face image to be detected in the family to be matched;
Step 103: comparing the single face feature extracted from the face image to be detected with the first predicted single face feature to determine the genetic relationship between the face image to be detected and the family to be matched.
In this embodiment, the face features of the family to be matched (the first family face feature) are obtained, and the predicted face feature of an individual with the same attributes as the face to be detected in that family (the first predicted single face feature) is obtained through the individual face feature prediction model; the predicted face feature is then compared with the single face feature of the face image to be detected to determine whether the face image to be detected belongs to the family to be matched, and thereby the genetic relationship. The present application combines the family characteristics of the family to be matched with the face attributes of the face to be detected to predict the family member closest to those attributes, and then determines whether the person to be detected is a missing member of the family to be matched from the similarity between the predicted single face feature and the face feature of the face to be detected. Because this method takes into account the family characteristics and the growth patterns of family members, it judges the genetic relationship between the person to be detected and the family to be matched more accurately, need not be overly concerned with the loss of recognition accuracy caused by changes in the missing person's facial features during the missing period, and has a wider range of application.
The implementation details of the genetic relationship determination method of this embodiment are described below; the following details are provided only for ease of understanding and are not necessary for implementing this solution.
In step 101, a plurality of face images are selected from the family image set of the family to be matched and input, as input data, into the family face feature extraction model to obtain the first family face feature of the family to be matched; the family face feature extraction model can be trained in advance.
In one example, the family face feature extraction model comprises a feature extraction layer and a feature fusion layer, and its training process includes: extracting the single face feature of each single image contained in a family image sample through the feature extraction layer; and fusing the single face features through the feature fusion layer to obtain the family face feature corresponding to the family image sample.
Specifically, the feature extraction layer of the family face feature extraction model is used to extract the face feature of each single face image contained in a family face image sample, and may be implemented as a convolutional neural network. The feature fusion layer determines fusion parameters for the fusion types (gender, age, and the like) to which the family belongs by comparing the similarity between family faces in advance, sets a weight for the face feature of each face image in the current sample according to those fusion parameters, and fuses the face features of the different face images to obtain the family face feature. A fusion parameter measures the importance of its fusion type during fusion; its magnitude determines the feature weight given to the corresponding face features when fusing them into the family face feature. As shown in Fig. 2, the family face feature extraction model includes a feature extraction layer and a feature fusion layer. When the model is trained, a classifier can be attached after it for training and removed once training is finished; both family image samples and non-family image samples are input into the model during training. Here X_1 to X_j are the j face images contained in a family image sample (X_j is the j-th face image in the sample); f_j is the j-th single face feature extracted from the j-th face image by the feature extraction layer; and F_family is the family face feature generated after the feature fusion layer fuses all the face features corresponding to the family image sample.
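For illustration only, a minimal PyTorch sketch of this two-layer structure follows; the backbone depth, the feature dimension, and the learned-weight fusion are assumptions of the sketch, not the specific network of this embodiment.

```python
import torch
import torch.nn as nn

class FamilyFaceFeatureExtractor(nn.Module):
    """Sketch: feature extraction layer + feature fusion layer."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # Feature extraction layer: a small CNN standing in for the
        # convolutional network that maps face image X_j to feature f_j.
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Feature fusion layer: one scalar weight per member feature
        # (assumed learned here), combined into the family feature F_family.
        self.weight_head = nn.Linear(feat_dim, 1)

    def forward(self, images: torch.Tensor):
        # images: (j, 3, H, W) -- the j face images X_1..X_j of one sample
        singles = self.extractor(images)                     # f_1..f_j
        w = torch.softmax(self.weight_head(singles), dim=0)  # fusion weights
        family = (w * singles).sum(dim=0)                    # F_family
        return singles, family

# During training, a classifier head would be attached after `family`
# and removed once training is finished, as described above.
```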
In addition, to complete the training of the family face feature extraction model, the following are needed. (1) Family image sets: each family corresponds to one family image set, and each set contains a plurality of face images of the family members. For example, the family-A face image set is denoted A = (a_1, a_2, …, a_n) and the family-B face image set is denoted B = (b_1, b_2, …, b_n). (2) Family image samples: each family image sample contains a plurality of face images belonging to the same family; these may be face images of the same person (at different angles and different ages), or face images of some or all of the family's members. The same family can therefore correspond to multiple family image samples whose face images are not exactly the same, such as family-A image sample 1, family-A image sample 2, family-B image sample 1, family-B image sample 2, and so on; for instance, family-A image sample 1 contains 4 face images, A1 = (a_1, a_2, a_3, a_4), and family-A image sample 2 contains 4 face images, A2 = (a_1, a_2, a_6, a_7). (3) Non-family image samples: each non-family image sample contains at least two face images that do not belong to the same family, i.e., face images from at least two different families appear in each non-family image sample, such as non-family image sample 1 = (a_1, a_2, a_3, c_4) and non-family image sample 2 = (a_1, d_2, a_3, c_4). Optionally, all non-family image samples may be identified by the same label.
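The organization of these three kinds of training data can be illustrated with the following sketch; the image identifiers and the sampling scheme are hypothetical.

```python
import random

# Per-family image sets, following the A/B notation above.
family_sets = {
    "A": ["a1", "a2", "a3", "a4", "a6", "a7"],
    "B": ["b1", "b2", "b3", "b4"],
}

def family_sample(family: str, k: int = 4):
    # k face images from one family; the label is that family's face label.
    return random.sample(family_sets[family], k), family

def non_family_sample(k: int = 4):
    # Face images mixed across two families; every such sample shares
    # the same label ("K" here), marking it as a non-family sample.
    f1, f2 = random.sample(list(family_sets), 2)
    images = random.sample(family_sets[f1], k - 1) + random.sample(family_sets[f2], 1)
    return images, "K"
```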
The loss function for training the family face feature extraction model is constructed from the loss between the single face feature of each single image and the family face feature, and the loss between each family face feature and the corresponding family face label. The loss between each family face feature and its family face label measures the similarity between the family face features of different families produced by the feature fusion layer; controlling this loss keeps that similarity as low as possible, i.e., the inter-class distance as large as possible. For example, given the family face labels of families A and B, when the family face features of A and B are obtained, the model's training parameters are adjusted so that each family face feature moves close to the label of its own family; the similarity between the two families' face features is thereby reduced as far as possible, the distinguishing points between families become clearer, and later predictions based on family face features separate families more clearly. The loss function also includes the loss between the single face feature of each single image and the family face feature, which measures the feature similarity between each single face feature produced by the feature extraction layer and the family face feature produced by the fusion layer; the model's training parameters are adjusted to keep this overall loss within a certain bound. For example, by computing the loss between the family-A face feature and the face feature of each face image in a family-A image sample, the obtained family feature is kept close to the single face features within the same family. When face images are given family labels, family face labels are used for identification and distinction: family image samples composed of face images from the same family each correspond to the face label of that family, while family image samples composed of face images from different families jointly correspond to one shared family face label. That is, if all images of a sample belong to family A, the sample corresponds to family A's face label; if they all belong to family B, it corresponds to family B's; if part of the sample belongs to family A and part to family B, i.e., the sample is composed of face images from different families, the sample corresponds to a designated face label, for example a K label, which indicates that the sample contains face images from more than one family.
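One possible form of this two-part loss is sketched below; the cosine-similarity formulation and the family-label embedding `label_feat` are assumptions, since the embodiment does not fix an exact formula.

```python
import torch
import torch.nn.functional as F

def extraction_loss(singles: torch.Tensor, family_feat: torch.Tensor,
                    label_feat: torch.Tensor, label_weight: float = 1.0):
    # Part 1: pull each single face feature toward the fused family feature
    # (small intra-class distance within a family).
    intra = (1 - F.cosine_similarity(singles, family_feat.expand_as(singles))).mean()
    # Part 2: pull the family feature toward its family face label embedding,
    # keeping the family features of different families far apart.
    label = 1 - F.cosine_similarity(family_feat, label_feat, dim=0)
    return intra + label_weight * label
```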
In one example, fusing the single face features through the feature fusion layer to obtain the family face feature corresponding to a family image sample includes: setting a fusion weight for each single face feature according to the age group of the corresponding face image in the family image sample, and performing weighted fusion of the single face features based on those weights. Specifically, within a family, the face features of children and the elderly generally change considerably with age, while the face features of adults are relatively stable; therefore, when the family face feature is constructed by face-feature fusion, face images from different age groups can be fused with age-dependent weights. For example, adult face features are given a high proportion in the fused family face feature, while the face features of children and the elderly are given a low proportion, so that the obtained family face feature is more stable; the weights can be set through an attention mechanism.
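A sketch of such age-weighted fusion follows; the fixed per-age-group weights stand in for the attention mechanism mentioned above and are illustrative assumptions.

```python
import torch

# Assumed weights: adult features dominate the fused family feature,
# child and elderly features contribute less.
AGE_GROUP_WEIGHT = {"child": 0.5, "adult": 1.0, "elderly": 0.6}

def age_weighted_fusion(singles: torch.Tensor, age_groups: list) -> torch.Tensor:
    # singles: (n, feat_dim); age_groups: one group name per face image
    w = torch.tensor([AGE_GROUP_WEIGHT[g] for g in age_groups]).unsqueeze(1)
    w = w / w.sum()                  # normalize to a convex combination
    return (w * singles).sum(dim=0)  # fused family face feature
```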
In one example, obtaining the family image set of the family to be matched includes: acquiring face images of the members of the family to be matched, and applying at least one of the following augmentation processes to those member face images to form the family image set of the family to be matched: blurring, occlusion, cropping, and random addition of noise. That is, when the number of face images of the family members is small, so that the family face feature cannot be obtained or the obtained feature performs poorly, the face images can be augmented so that usable family face images are obtained; the augmentation includes, but is not limited to, blurring, occluding, cropping, and adding random noise to the member face images, which expands the number of member face images and improves the obtained family face feature.
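One possible augmentation pipeline for the four operations named above, sketched with torchvision transforms (all parameter values are assumptions):

```python
import torch
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomResizedCrop(112, scale=(0.7, 1.0)),          # cropping
    transforms.GaussianBlur(kernel_size=5),                       # blurring
    transforms.RandomErasing(p=0.5),                              # occlusion
    transforms.Lambda(lambda x: x + 0.02 * torch.randn_like(x)),  # random noise
])

def expand_family_set(member_images, copies_per_image: int = 5):
    # member_images: list of (3, H, W) float tensors of the family members;
    # each image yields several augmented variants to enlarge the set.
    return [augment(img) for img in member_images for _ in range(copies_per_image)]
```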
In step 102, the first family face feature is input into the individual face feature prediction model to obtain a first predicted single face feature that has the same face attributes as the face image to be detected in the family to be matched. In this embodiment, from the acquired first family face feature and the face attributes of the face image to be detected, the individual face feature prediction model produces the first predicted single face feature, i.e., the first family face feature adjusted for those face attributes. In other words, the model predicts the features of a member of the family to be matched under the face attributes of the face image to be detected; the predicted face feature is then compared with the feature extracted from the face image to be detected, for example by similarity, to judge the relationship between the face image to be detected and the family to be matched.
In one example, the individual face feature prediction model includes a plurality of prediction branch networks corresponding to a plurality of face attributes, and its training process includes: for each prediction branch network, mapping the family face feature generated by the family face feature extraction model, through that branch network, into a single predicted face feature with the same face attributes within the corresponding family as those of the branch network; and training the branch network with a loss between the predicted single face feature corresponding to a family face feature sample and the single face feature, covered by that sample, whose face attributes match those of the branch network. That is, the individual face feature prediction model contains multiple prediction branch networks corresponding to multiple face attributes, such as adult male, adult female, elderly male, elderly female, and child. For any branch network whose face attribute is Q, the family face feature sample is mapped to a single predicted face feature P_Q with attribute Q; then a face image X_Q satisfying attribute Q is found among the input data used to generate the family face feature sample, and after processing by the feature extraction layer of the family face feature extraction model, the face feature f_Q of the attribute-Q face image X_Q is determined. The difference between the attribute-Q face feature f_Q and the mapped single predicted face feature P_Q is used as a loss to adjust the individual face feature prediction model; when the loss falls below a preset bound, the optimal individual face feature prediction model is obtained. The data interaction between the family face feature extraction model and the individual face feature prediction model is shown in Fig. 3, where P_1 to P_n are the prediction results of the individual face feature prediction model.
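A minimal sketch of such a multi-branch prediction model and its per-branch training loss follows; the MLP branch structure and the mean-squared-error loss are assumptions of the sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IndividualFacePredictor(nn.Module):
    ATTRIBUTES = ["adult_male", "adult_female",
                  "elderly_male", "elderly_female", "child"]

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        # One prediction branch network per face attribute, each mapping
        # a family face feature to a predicted single face feature.
        self.branches = nn.ModuleDict({
            a: nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.ReLU(),
                             nn.Linear(feat_dim, feat_dim))
            for a in self.ATTRIBUTES
        })

    def forward(self, family_feat: torch.Tensor, attribute: str) -> torch.Tensor:
        return self.branches[attribute](family_feat)  # P_Q for attribute Q

def branch_loss(p_q: torch.Tensor, f_q: torch.Tensor) -> torch.Tensor:
    # Difference between the predicted feature P_Q and the feature f_Q of a
    # real attribute-Q member, used to adjust the branch as described above.
    return F.mse_loss(p_q, f_q)
```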
In a specific example, when the face attribute corresponding to a certain prediction branch network is male child, the predicted single face feature obtained for that attribute is P_1. Among the input images {X_1, X_2, …, X_n}, X_2 is a male child, so the feature f_2 corresponding to X_2 is compared with P_1 to measure and adjust the individual face feature prediction model.
In one example, the genetic relationship determination method further includes: replacing at least one of the plurality of face images with the face image to be detected, and inputting the replaced plurality of face images into the family face feature extraction model to obtain a second family face feature; inputting the second family face feature into the individual face feature prediction model to obtain a second predicted single face feature that has the same face attributes as the face image to be detected within the family to be matched after the member replacement; and comparing the single face feature extracted from the face image to be detected with the second predicted single face feature to determine the genetic relationship between the face image to be detected and the family to be matched. Specifically, a face image in part of the family image set is replaced with the face image to be detected, i.e., a second family face feature is acquired for the face image to be detected under the influence of the face images of the family to be matched; replacing all of the plurality of face images with the face image to be detected is not supported. Because the second family face feature is generated under the influence of the features of the family to be matched, and the face attributes referenced in the mapping of the individual face feature prediction model are consistent with the face attributes of the face image to be detected, the theoretical difference between the second predicted single face feature (obtained from the second family face feature via the individual face feature prediction model) and the single face feature extracted from the face image to be detected is mainly produced by the participation of the face image to be detected in forming the second family face feature. This difference can therefore measure the genetic relationship between the face image to be detected and the family to be matched. For example, when the difference between the second predicted single face feature and the single face feature of the face image to be detected is greater than a preset threshold, the family face feature of the family to be matched has a large influence on the single face feature, i.e., it differs greatly from the single face feature of the face image to be detected, and it can be determined that the face image to be detected does not belong to the family to be matched.
Specifically, replacing at least one of the plurality of face images with the face image to be detected includes: replacing, among the plurality of face images, an image whose face attributes are the same as those of the face image to be detected, to obtain the replaced plurality of face images; the face attributes include a gender attribute and/or an age attribute. Optionally, the face image to be detected may replace a face image of the same gender or the same age group among the plurality of face images, preferably an image matching both gender and age. Performing the replacement under consistent face attributes avoids the influence of attribute differences before and after replacement on the acquisition of the second family face feature, and improves the accuracy of the genetic relationship detection.
In some examples, the face images to be detected are a plurality of images of the face to be detected in different age groups; replacing at least one of the plurality of face images with the face image to be detected then includes: using the face images to be detected of the different age groups to correspondingly replace, among the plurality of face images, the images whose face attributes match, to obtain the replaced plurality of face images; the face attributes include a gender attribute and an age attribute. In other words, considering the influence of the age attribute on the prediction result, sample images of the face to be detected in multiple age groups are provided, and the replacement can be performed per age group; this reduces the influence of age on extracting the second family face feature, ensures the correlation between the second family face feature and the single face feature of the face image to be detected in each age group, and improves the accuracy of the genetic relationship determination.
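The replacement-based check of these examples can be sketched as follows, reusing the extractor and predictor sketched earlier; the function name, the exact-attribute matching, and the threshold value are hypothetical.

```python
import torch
import torch.nn.functional as F

def replacement_check(member_images, member_attrs, probe_image, probe_attr,
                      extractor, predictor, threshold: float = 0.6) -> bool:
    # Replace one member image whose gender/age attributes match the probe;
    # replacing *all* member images is not supported.
    idx = member_attrs.index(probe_attr)
    swapped = list(member_images)
    swapped[idx] = probe_image
    # Second family face feature, computed under the probe's influence.
    _, second_family = extractor(torch.stack(swapped))
    # Second predicted single face feature for the probe's attributes.
    p2 = predictor(second_family, probe_attr)
    # Single face feature of the probe, via the reused extraction layer.
    probe_feat = extractor.extractor(probe_image.unsqueeze(0))[0]
    sim = F.cosine_similarity(p2, probe_feat, dim=0)
    return sim.item() >= threshold  # low similarity: not in this family
```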
In step 103, the single face feature extracted from the face image to be detected is compared with the first predicted single face feature to determine the genetic relationship between the face image to be detected and the family to be matched. Specifically, because the single face feature of the face image to be detected and the first predicted single face feature have consistent attributes, the influence of face attributes on the genetic relationship determination can be avoided; that is, the difference (or similarity) between the two features is free of the influence of face attributes such as gender and age, making the determination more accurate.
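In code, the comparison of step 103 reduces to a similarity between two feature vectors; a minimal sketch, with an assumed cosine similarity and an assumed decision threshold:

```python
import torch.nn.functional as F

def kinship_score(single_feat, predicted_feat) -> float:
    # Similarity between the probe's single face feature and the first
    # predicted single face feature; attributes already match, so gender
    # and age do not bias the comparison.
    return F.cosine_similarity(single_feat, predicted_feat, dim=0).item()

# belongs_to_family = kinship_score(f_probe, p_1) >= 0.6  # assumed threshold
```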
In one example, extracting the single face feature from the face image to be detected includes: inputting the face image to be detected into the feature extraction layer of the family face feature extraction model to obtain the single face feature of the face image to be detected. That is, when the single face feature is extracted from the face image to be detected, the family face feature extraction model can be reused: the single face feature is obtained from that model's feature extraction layer, with no additional means required, which reduces operational complexity.
In this embodiment, the first family face feature of the family to be matched is obtained, and the first predicted single face feature under the same face attributes as the face to be detected is predicted, giving the face features of the family to be matched and of the face to be detected under identical face attributes; the first predicted single face feature is then compared with the single face feature of the face to be detected to determine whether the face image to be detected belongs to the family to be matched, and thereby the genetic relationship. The present application combines the family characteristics of the family to be matched with the face attributes of the face to be detected to predict the current state of the family member closest to those attributes, and then determines whether the person to be detected is a missing member of the family to be matched from the similarity between the predicted single face feature and the face feature of the face to be detected. This method takes account of family characteristics and the growth patterns of family members, judges the genetic relationship between the person to be detected and the family to be matched more accurately, need not be overly concerned with the loss of recognition accuracy caused by changes in the missing person's facial features during the missing period, and applies more widely. In addition, the face image to be detected is allowed to replace part of the family face images to obtain a second family face feature; a second predicted single face feature is then predicted and compared with the single face feature of the face image to be detected, and the genetic relationship is judged from the influence of the family images to be matched on the face image to be detected. On the basis of reliable genetic relationship recognition, this lowers the required number of family images, reduces the resources that must be invested in the determination process, and improves the user experience.
The steps of the above methods are divided only for clarity of description; in implementation they may be combined into one step, or a step may be split into multiple steps, and all such divisions fall within the protection scope of this patent as long as the same logical relationship is included. Adding insignificant modifications to an algorithm or flow, or introducing insignificant design changes, without changing the core design of the algorithm or flow, also falls within the protection scope of this patent.
One embodiment of the invention relates to an electronic device, as shown in Fig. 4, comprising at least one processor 201 and a memory 202 communicatively coupled to the at least one processor 201; the memory 202 stores instructions executable by the at least one processor 201, and the instructions are executed by the at least one processor 201 to enable the at least one processor 201 to perform the genetic relationship determination method described above.
Where the memory and processor are connected by a bus, the bus may comprise any number of interconnected buses and bridges, the buses connecting together one or more of the various circuits of the processor and the memory. The bus may also connect various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further herein. A bus interface provides an interface between the bus and the transceiver. The transceiver may be one element or a plurality of elements, such as a plurality of receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. The data processed by the processor is transmitted over a wireless medium through an antenna, which further receives the data and transmits the data to the processor.
The processor is responsible for managing the bus and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory may be used to store data used by the processor in performing operations.
One embodiment of the present invention relates to a computer-readable storage medium storing a computer program. The computer program realizes the above-described method embodiments when executed by a processor.
That is, as those skilled in the art can understand, all or part of the steps of the methods in the above embodiments may be implemented by a program instructing related hardware; the program is stored in a storage medium and includes several instructions to enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
It will be understood by those of ordinary skill in the art that the foregoing embodiments are specific examples for carrying out the invention, and that various changes in form and details may be made therein without departing from the spirit and scope of the invention in practice.
Claims (10)
1. A genetic relationship determination method, comprising:
selecting a plurality of face images from a family image set of a family to be matched as input data to be input into a family face feature extraction model to obtain a first family face feature of the family to be matched;
inputting the first family face feature into an individual face feature prediction model to obtain a first predicted single face feature that has the same face attributes as the face image to be detected in the family to be matched;
comparing the single face feature extracted from the face image to be detected with the first predicted single face feature to determine the genetic relationship between the face image to be detected and the family to be matched;
the individual human face feature prediction model comprises a plurality of prediction branch networks corresponding to a plurality of human face attributes;
the step of inputting the first family face feature into the individual face feature prediction model to obtain the first predicted single face feature that has the same face attributes as the face image to be detected in the family to be matched comprises: obtaining, by the individual face feature prediction model and from the input first family face feature and the face attributes of the face image to be detected, the first predicted single face feature, namely the first family face feature adjusted for the face attributes of the face image to be detected;
the face attributes comprise: adult male, adult female, elderly male, elderly female, and child;
the family face feature extraction model comprises a feature extraction layer and a feature fusion layer, and the training process of the family face feature extraction model comprises: extracting the single face feature of each single image contained in a family image sample through the feature extraction layer; and fusing the single face features through the feature fusion layer to obtain the family face feature corresponding to the family image sample.
2. The genetic relationship determination method according to claim 1, wherein the loss function for training the family face feature extraction model is constructed based on the loss between the single face feature of each single image and the family face feature, and the loss between each family face feature and the corresponding family face label; family image samples formed from face images of the same family each correspond to a family face label according to the family to which they belong, and family image samples formed from face images of different families jointly correspond to one family face label.
3. The genetic relationship determination method according to claim 2, wherein the obtaining of the family face features corresponding to the family image sample by fusing the individual face features through a feature fusion layer comprises:
and setting fusion weights for the corresponding single face features according to the age group of each face image in the family image sample, and performing weighted fusion on each single face feature based on the fusion weights.
4. A genetic relationship determination method according to claim 2 or 3, characterized by further comprising:
replacing at least one image in the plurality of face images with the face image to be detected, and inputting the replaced plurality of face images into the family face feature extraction model to obtain a second family face feature;
inputting the second family face features into the individual face feature prediction model to obtain a second predicted single face feature which has the same face attribute with the face image to be detected in the family to be matched after family members are replaced;
and comparing the single face feature extracted from the face image to be detected with the second predicted single face feature to determine the genetic relationship between the face image to be detected and the family to be matched.
5. The genetic relationship determination method according to claim 4, wherein extracting the single face feature from the face image to be detected comprises:
and inputting the face image to be detected into the feature extraction layer of the family face feature extraction model to obtain a single face feature of the face image to be detected.
6. The genetic relationship determination method according to claim 2, wherein the training process of the individual face feature prediction model includes:
for each prediction branch network, mapping, through the prediction branch network, the family face feature generated by the family face feature extraction model into a single predicted face feature with the same face attributes within the corresponding family as those corresponding to the prediction branch network;
and training the loss function of the prediction branch network based on the loss between the predicted single face feature corresponding to the familial face feature sample and the single face feature which is covered by the familial face feature sample and has the same face attribute as the face attribute corresponding to the prediction branch network.
7. The genetic relationship determination method according to claim 6, wherein obtaining a single face feature covered by the familial face feature sample and having the same face attribute as that corresponding to the prediction branch network comprises:
and inputting the face image with the same face attribute as that corresponding to the prediction branch network in a plurality of face images for generating the family face feature sample into the feature extraction layer of the family face feature extraction model to obtain a single face feature which is covered by the family face feature sample and has the same face attribute as that corresponding to the prediction branch network.
8. The genetic relationship determination method according to claim 1, wherein obtaining the family image set of the family to be matched includes:
acquiring face images of the members of the family to be matched, and performing at least one of the following augmentation processes on the member face images to form the family image set of the family to be matched:
blurring, occlusion, cropping, and random addition of noise.
9. An electronic device, comprising:
at least one processor; and,
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the affinity determination method of any one of claims 1 to 8.
10. A computer-readable storage medium storing a computer program, wherein the computer program is executed by a processor to implement the genetic relationship determination method according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210386635.4A CN114463830B (en) | 2022-04-14 | 2022-04-14 | Genetic relationship determination method, genetic relationship determination device, electronic device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210386635.4A CN114463830B (en) | 2022-04-14 | 2022-04-14 | Genetic relationship determination method, genetic relationship determination device, electronic device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114463830A (en) | 2022-05-10
CN114463830B (en) | 2022-08-26
Family
ID=81418633
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210386635.4A (Active) | CN114463830B (en) | 2022-04-14 | 2022-04-14
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114463830B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117218693A (en) * | 2022-05-31 | 2023-12-12 | 青岛云天励飞科技有限公司 | Face attribute prediction network generation method, face attribute prediction method and device |
CN117409973B (en) * | 2023-12-13 | 2024-05-17 | 成都大熊猫繁育研究基地 | Panda health assessment method and system based on family data |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2630070A1 (en) * | 2005-11-17 | 2007-05-31 | Motif Biosciences, Inc. | Systems and methods for the biometric analysis of index founder populations |
CN104680119B (en) * | 2013-11-29 | 2017-11-28 | 华为技术有限公司 | Image personal identification method and relevant apparatus and identification system |
CN105488463B (en) * | 2015-11-25 | 2019-01-29 | 康佳集团股份有限公司 | Lineal relative's relation recognition method and system based on face biological characteristic |
KR102221118B1 (en) * | 2016-02-16 | 2021-02-26 | 삼성전자주식회사 | Method for extracting feature of image to recognize object |
CN106709482A (en) * | 2017-03-17 | 2017-05-24 | 中国人民解放军国防科学技术大学 | Method for identifying genetic relationship of figures based on self-encoder |
CN110414299B (en) * | 2018-04-28 | 2024-02-06 | 中山大学 | Monkey face affinity analysis method based on computer vision |
CN109740536B (en) * | 2018-06-12 | 2020-10-02 | 北京理工大学 | Relatives identification method based on feature fusion neural network |
CN113158929B (en) * | 2021-04-27 | 2022-09-30 | 河南大学 | Depth discrimination measurement learning relativity verification system based on distance and direction |
- 2022-04-14: application CN202210386635.4A filed in China; granted as patent CN114463830B, status Active
Also Published As
Publication number | Publication date |
---|---|
CN114463830A (en) | 2022-05-10 |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |
| GR01 | Patent grant |
20230821 | TR01 | Transfer of patent right | Patentee after: Anhui Lushenshi Technology Co.,Ltd. (Room 799-4, 7th Floor, Building A3A4, Zhong'an Chuanggu Science and Technology Park, No. 900 Wangjiang West Road, Gaoxin District, Hefei Free Trade Experimental Zone, Anhui Province, 230031); Patentee before: Hefei Lushenshi Technology Co.,Ltd. (Room 611-217, R&D Center Building, China (Hefei) International Intelligent Voice Industrial Park, 3333 Xiyou Road, High-tech Zone, Hefei, Anhui Province, 230091)