CN117315737A - Face information identification method, system, electronic equipment and storage medium - Google Patents

Face information identification method, system, electronic equipment and storage medium

Info

Publication number
CN117315737A
CN117315737A (application CN202311069853.6A)
Authority
CN
China
Prior art keywords
facial
face
feature
fuzzy
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311069853.6A
Other languages
Chinese (zh)
Inventor
张子晨
张雨萌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Telecom Corp Ltd
Original Assignee
China Telecom Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Telecom Corp Ltd filed Critical China Telecom Corp Ltd
Priority to CN202311069853.6A
Publication of CN117315737A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - using classification, e.g. of video objects
    • G06V10/765 - using rules for classification or partitioning the feature space
    • G06V10/766 - using regression, e.g. by projecting features on hyperplanes
    • G06V10/77 - Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806 - Fusion of extracted features
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

The embodiments of the present application provide a facial information identification method, system, electronic device, and storage medium, belonging to the technical field of artificial intelligence. The method determines a plurality of face feature points in a face image by adopting a face feature point positioning technology, extracts a plurality of facial feature vectors from the face image according to the face feature points, forms a feature data set of the face image from the facial feature vectors, and inputs the feature data set into a semantic description model based on axiom fuzzy set theory to obtain a facial fuzzy concept description of the face image; the axiom fuzzy set theory accurately and quantitatively describes uncertain features, and the facial fuzzy concept description is then quantitatively analyzed and mapped to obtain a character classification result. The method and the device thus achieve quantitative description of facial features through axiom fuzzy set theory and, on that basis, describe the probability of the character category to which the image belongs by quantitatively analyzing its facial fuzzy concepts, improving the reliability of facial character information classification.

Description

Face information identification method, system, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of artificial intelligence technologies, and in particular, to a facial information recognition method, a facial information recognition system, an electronic device, and a storage medium.
Background
At present, with the rise of interdisciplinary research, face research has gradually permeated the computer field and become a new hot trend; with the rise of artificial intelligence, many scholars in computer vision have obtained gratifying results in exploring the relationship between facial features and character traits. Apparent personality analysis (APA) is an important issue in personality computing; personality prediction can be further applied to predicting and understanding human behaviors, and is an important basis for predicting a person's future development, such as happiness, satisfaction, quality of life, quality of relationships with peers and family, career choice, and the like. Attempts to reveal the relationship between face and character suffer from the drawback that human evaluators do not rate character traits consistently; considering the variability of individual human scores, more studies now use computer vision techniques and machine learning algorithms to relate face images to character traits. Progress in computer vision supports the advance of personality-computing research: computers can automatically analyze and recognize facial expressions, audio information, language information, and scene environments, and deep regression neural networks can currently be used for character recognition. However, these networks cannot handle uncertainty in facial feature data and cannot mine the semantic links between facial features and character, so the reliability of facial character information classification is poor.
Disclosure of Invention
The embodiment of the application mainly aims to provide a facial information identification method, a facial information identification system, electronic equipment and a storage medium, and aims to improve reliability of facial character information classification.
To achieve the above object, a first aspect of an embodiment of the present application provides a face information recognition method, including the steps of:
acquiring a face image;
determining a plurality of face feature points in the face image by adopting a face feature point positioning technology;
extracting a plurality of facial feature vectors in the facial image according to the plurality of facial feature points, and forming a feature data set of the facial image according to the plurality of facial feature vectors;
inputting the feature data set into a semantic description model based on axiom fuzzy set theory to obtain a facial fuzzy concept description of a facial image;
and determining a character classification result according to the facial fuzzy concept description.
In some embodiments, the determining the plurality of face feature points in the face image using a face feature point localization technique includes the steps of:
determining a plurality of first feature points in the facial image by adopting a face feature point positioning technology based on an active shape model;
determining a plurality of second feature points in the facial image by adopting a face feature point positioning technology based on an ensemble of regression trees;
and integrating the first feature points and the second feature points to obtain the face feature points.
In some embodiments, the extracting a plurality of facial feature vectors in the facial image according to a plurality of the facial feature points includes:
acquiring a plurality of preset characteristic attributes;
selecting a plurality of corresponding face feature points from a plurality of face feature points according to each preset feature attribute;
and analyzing the selected plurality of facial feature points through a geometric form analysis method to obtain facial feature vectors corresponding to the preset feature attributes.
In some embodiments, the inputting the feature dataset into a semantic description model based on axiom fuzzy set theory to obtain a facial fuzzy concept description of the facial image includes the following steps:
determining the membership degree of each fuzzy concept in the fuzzy set of each facial feature vector in the feature data set through the membership function of the fuzzy set;
based on the principle of maximum membership, taking the fuzzy concept with the maximum membership on the facial feature vector as the semantic label of the facial feature vector;
and obtaining the facial fuzzy concept description of the facial image through fuzzy logic operation on the semantic tags of all the facial feature vectors.
In some embodiments, the determining a character classification result from the facial fuzzy concept description comprises the steps of:
determining the similarity between the facial fuzzy concept description and each category semantic description according to the membership of the facial fuzzy concept description and the membership of each category semantic description;
and determining the character classification result according to the character label corresponding to the category semantic description with the maximum similarity.
In some embodiments, the semantic description model is obtained by:
obtaining a training image dataset, wherein a sample tag of the training image data comprises a face description tag;
extracting features of the training image data set to obtain each type of facial feature data set;
determining a feature domain according to the facial feature data set, and constructing a fuzzy set according to the feature domain and preset fine granularity;
carrying out statistical analysis on the facial feature data set to construct a membership function of each facial feature on the fuzzy set;
and determining a semantic description model according to membership functions of all facial features.
In some embodiments, the category semantic description is obtained by:
obtaining a training image data set, wherein a sample label of the training image data set comprises a character label;
classifying samples in the training image dataset according to the character labels to obtain a plurality of category label image datasets;
generating a plurality of candidate fuzzy concept sets of the category label image dataset through a semantic description model;
searching optimal fuzzy concepts from a plurality of candidate fuzzy concept sets by a random coordinate shrinkage optimization method, and using the optimal fuzzy concepts as category semantic description of the character labels.
To achieve the above object, a second aspect of the embodiments of the present application proposes a face information recognition system, including:
a first module for acquiring a facial image;
a second module for determining a plurality of facial feature points in the facial image using a facial feature point positioning technique;
a third module, configured to extract a plurality of facial feature vectors in the facial image according to a plurality of the facial feature points, and form a feature data set of the facial image according to the plurality of facial feature vectors;
a fourth module, configured to input the feature dataset into a semantic description model based on axiom fuzzy set theory, to obtain a facial fuzzy concept description of a facial image;
and a fifth module for determining a character classification result according to the face fuzzy concept description.
To achieve the above object, a third aspect of the embodiments of the present application proposes an electronic device including a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, the program implementing the face information identification method described in the first aspect when executed by the processor.
In order to achieve the above object, a fourth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium, for computer-readable storage, the storage medium storing one or more programs executable by one or more processors to implement the face information recognition method described in the first aspect.
In the face information identification method, system, electronic device, and storage medium, a plurality of face feature points in a face image are determined by adopting a face feature point positioning technology, a plurality of facial feature vectors are extracted from the face image according to the face feature points, and a feature data set of the face image is formed from the facial feature vectors; the feature data set is then input into a semantic description model based on axiom fuzzy set theory to obtain a facial fuzzy concept description of the face image, uncertain features are accurately and quantitatively described through the axiom fuzzy set theory, and quantitative analysis and mapping are carried out on the facial fuzzy concept description to obtain a character classification result. The method and the device achieve quantitative description of facial features through axiom fuzzy set theory and, on that basis, describe the probability of the belonging character category by quantitatively analyzing the facial fuzzy concepts of the image, improving the reliability of facial character information classification.
Drawings
Fig. 1 is a flowchart of a face information recognition method provided in an embodiment of the present application;
fig. 2 is a flowchart of step S102 in fig. 1;
fig. 3 is a flowchart of step S103 in fig. 1;
fig. 4 is a flowchart of step S104 in fig. 1;
fig. 5 is a flowchart of step S105 in fig. 1;
FIG. 6 is a flow chart of a semantic description model building method in step S104 of FIG. 1;
FIG. 7 is a flow chart of a category semantic description determination method of step S501 in FIG. 5;
fig. 8 is a schematic diagram of a face information recognition system provided in an embodiment of the present application;
fig. 9 is a schematic hardware structure of an electronic device according to an embodiment of the present application;
FIG. 10 is a schematic diagram illustrating a description of preset feature attributes provided in an embodiment of the present application;
FIG. 11 is a schematic diagram of a TIPI scale provided in an embodiment of the present application;
fig. 12 is a schematic diagram of data set information after classifying a training image data set according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
It should be noted that although functional block division is performed in a device diagram and a logic sequence is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the block division in the device, or in the flowchart. The terms first, second and the like in the description and in the claims and in the above-described figures, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the present application.
First, several nouns referred to in this application are parsed:
Artificial intelligence (AI): a new technical science that researches and develops theories, methods, technologies, and application systems for simulating, extending, and expanding human intelligence. Artificial intelligence is a branch of computer science that attempts to understand the nature of intelligence and to produce new intelligent machines that can react in a manner similar to human intelligence; research in this field includes robotics, speech recognition, image recognition, natural language processing, and expert systems. Artificial intelligence can simulate the information processes of human consciousness and thinking. Artificial intelligence is also a theory, method, technique, and application system that uses a digital computer, or a machine controlled by a digital computer, to simulate, extend, and expand human intelligence, sense the environment, acquire knowledge, and use knowledge to obtain optimal results.
Image description (image captioning) generates a natural language description for an image and uses the generated description to help applications understand the semantics expressed in the image's visual scene. For example, image description can convert image search into text search, for classifying images and improving image search results. A person usually needs only a quick glance to describe the details of an image's visual scene, but automatically adding descriptions to images is a comprehensive and difficult computer vision task, requiring the complex information contained in the image to be converted into a natural language description. In contrast to common computer vision tasks, image captioning not only requires identifying objects in the image, but also requires associating the identified objects with natural semantics and describing them in natural language. Thus, image description requires extracting deep features of the image, correlating them with semantic features, and transforming them to generate the description.
The random coordinate shrinking (RAndom COordinate Shrinking, RACOS) optimization method is based on a classification idea: it uses a classifier to divide a larger hyperparameter space into a good part and a bad part. It treats a combination of hyperparameters as a coordinate; during operation each dimension is handled independently, and the resulting partition can be pictured as axis-aligned rectangles. For a discrete dimension, a value either stays at or leaves the current point; for a continuous dimension, the interval boundary shifts left or right. Sampling is carried out within the partitioned hyperparameter region with a certain probability, balancing exploration and exploitation, and the coordinates are gradually shrunk toward a certain point, thereby achieving the optimization objective. Specifically, during minimization, the threshold of the classifier is gradually lowered, and the range of the good parameter space gradually shrinks.
The embodiment of the application provides a facial information identification method, a facial information identification system, electronic equipment and a storage medium, aiming at improving the reliability of personality reasoning.
The face information identification method, the system, the electronic device and the storage medium provided in the embodiments of the present application are specifically described through the following embodiments, and the face information identification method in the embodiments of the present application is described first.
The embodiment of the application can acquire and process the related data based on the artificial intelligence technology. Among these, artificial intelligence (Artificial Intelligence, AI) is the theory, method, technique and application system that uses a digital computer or a digital computer-controlled machine to simulate, extend and extend human intelligence, sense the environment, acquire knowledge and use knowledge to obtain optimal results.
Artificial intelligence infrastructure technologies generally include technologies such as sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing technologies, operation/interaction systems, mechatronics, and the like. The artificial intelligence software technology mainly comprises a computer vision technology, a robot technology, a biological recognition technology, a voice processing technology, a natural language processing technology, machine learning/deep learning and other directions.
The embodiment of the application provides a facial information identification method, which relates to the technical field of artificial intelligence, and in particular to the field of image description. The facial information identification method provided by the embodiment of the application can be applied to a terminal, a server, or software running in a terminal or server. In some embodiments, the terminal may be a smartphone, tablet, notebook, desktop computer, etc.; the server may be configured as an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and basic cloud computing services such as big data and artificial intelligence platforms; the software may be an application that implements the facial information identification method, but is not limited to the above forms.
The subject application is operational with numerous general purpose or special purpose computer system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
It should be noted that, in each specific embodiment of the present application, when related processing is required according to user information, user behavior data, user history data, user location information, and other data related to user identity or characteristics, permission or consent of the user is obtained first, and the collection, use, processing, and the like of these data comply with related laws and regulations and standards. In addition, when the embodiment of the application needs to acquire the sensitive personal information of the user, the independent permission or independent consent of the user is acquired through a popup window or a jump to a confirmation page or the like, and after the independent permission or independent consent of the user is explicitly acquired, necessary user related data for enabling the embodiment of the application to normally operate is acquired.
Fig. 1 is an optional flowchart of a facial information recognition method provided in an embodiment of the present application, where the method in fig. 1 may include, but is not limited to, steps S101 to S105.
Step S101, acquiring a face image;
step S102, a plurality of face feature points in a face image are determined by adopting a face feature point positioning technology;
step S103, extracting a plurality of facial feature vectors in the facial image according to the plurality of facial feature points, and forming a feature data set of the facial image according to the plurality of facial feature vectors;
Step S104, inputting the feature data set into a semantic description model based on axiom fuzzy set theory to obtain the facial fuzzy concept description of the facial image;
step S105, determining character classification results according to the facial blur concept description.
In step S102 of some embodiments, the face feature point positioning technology aims to obtain, from a face image, the coordinate information of the key points that describe the features of the face components or the face contour; it is an important image preprocessing step in face-related research such as biometric recognition, emotion understanding, and face animation. Face feature point positioning techniques include, but are not limited to, global-feature-based methods, constrained local model methods, regression-model-based methods, and the like.
In step S103 of some embodiments, a plurality of facial feature vectors such as eye length, mouth area, nose height, etc. in the face image may be extracted by performing coordinate geometry analysis of all face feature points of the face image, and a feature data set of the face image may be formed from all the facial feature vectors.
In step S104 and step S105 of some embodiments, axiom fuzzy set theory (Axiomatic Fuzzy Set, AFS) is a data analysis tool that offers both prediction accuracy and result interpretability, so embodiments of the present application construct a character prediction model based on AFS theory. The general idea of constructing a classification model based on AFS theory is as follows. First, simple concepts are defined on each feature of the data; these can be regarded as the primitives from which semantic descriptions are composed. Second, complex concepts, i.e., facial fuzzy concept descriptions, can be generated by means of the AFS logical operations "∨" (or) and "∧" (and). Finally, for each character category, the complex concepts that accurately describe the characteristics of the category are screened out through an optimization algorithm, and the semantic description of the category is formed through the "∨" operation.
In some embodiments of steps S101 to S105, a plurality of face feature points in a face image are determined by adopting a face feature point positioning technology, a plurality of facial feature vectors are extracted from the face image according to the face feature points, and a feature dataset of the face image is formed from the facial feature vectors; the feature dataset is then input into a semantic description model based on axiom fuzzy set theory to obtain the facial fuzzy concept description of the face image, uncertain features are accurately and quantitatively described through the axiom fuzzy set theory, and quantitative analysis and mapping are then performed on the facial fuzzy concept description to obtain the character classification result. Quantitative description of facial features is thus realized through axiom fuzzy set theory, and on this basis the reliability of facial character information classification is improved by quantitatively analyzing the probability of the character class to which the facial fuzzy concepts of the image belong.
Referring to fig. 2, in some embodiments, step S102 may include, but is not limited to, steps S201 to S203:
step S201, a plurality of first feature points in the facial image are determined by adopting a face feature point positioning technology based on an active shape model;
step S202, a plurality of second feature points in the facial image are determined by adopting a face feature point positioning technology based on an ensemble of regression trees;
Step S203, integrating the plurality of first feature points and the plurality of second feature points to obtain a plurality of face feature points.
In step S201 of some embodiments, the face feature point positioning technique based on the active shape model may be the STASM algorithm. STASM improves on the active shape model (ASM) by using a simplified representation of the scale-invariant feature transform (SIFT) descriptor to replace the profile description used by the template-matching part of the conventional ASM method, and introduces the multivariate adaptive regression splines (MARS) method to improve the matching efficiency of the descriptors around the feature points.
In step S202 of some embodiments, the ensemble-of-regression-trees positioning method cascades regression tree models so that feature point positions are estimated directly from a sparse subset of pixel intensities; the method performs well in both runtime and localization accuracy. This positioning method has been integrated into the well-known C++ open-source toolkit Dlib, so the plurality of second feature points can be determined directly through the Dlib tool.
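As a rough illustration of this step, the following sketch uses Dlib's Python bindings to extract the 68 standard landmarks; the model file name and the choice of the first detected face are assumptions, not details fixed by this embodiment.

```python
# A minimal sketch, assuming Dlib's Python bindings and the publicly
# distributed 68-landmark model file; neither is fixed by this embodiment.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def dlib_feature_points(image_path):
    """Return the (x, y) landmark coordinates of the first detected face."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = detector(gray)
    if len(faces) == 0:
        return []
    shape = predictor(gray, faces[0])
    return [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
```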
In step S203 of some embodiments, the two positioning techniques differ in which face parts and feature points they localize, so the positioning results obtained by the two techniques can be complementarily combined to improve the completeness and accuracy of face feature point extraction.
Referring to fig. 3, in some embodiments, extracting a plurality of facial feature vectors in the facial image according to a plurality of the facial feature points in step S103 may include, but is not limited to, steps S301 to S303:
step S301, a plurality of preset characteristic attributes are obtained;
step S302, selecting a plurality of corresponding face feature points from a plurality of face feature points according to each preset feature attribute;
step S303, analyzing the selected plurality of face feature points through a geometric form analysis method to obtain face feature vectors corresponding to preset feature attributes.
In step S301 of some embodiments, a plurality of preset facial feature attributes associated with character are determined through empirical knowledge; illustratively, as shown in fig. 10, these include but are not limited to beard, mouth size, hair color, etc.
In step S302 of some embodiments, the first feature point set obtained from STASM and the second feature point set obtained from Dlib are combined into a face feature point set, each element of which is a face feature point. A plurality of face feature points are then selected from the set according to each preset feature attribute to calculate the corresponding feature attribute vector. By way of example, the face feature points of the mouth region are selected from the set according to the mouth-size attribute, and geometric calculations such as distance or area are performed on the selected points to obtain the feature vector for mouth size.
In step S303 of some embodiments, a geometric analysis method may be used to derive feature vectors describing the shapes, distances, areas, etc. of the feature attributes. This embodiment takes distance feature analysis and area feature analysis as examples for the semantic description of facial appearance. The distance between two face feature points $p_a=(x_a,y_a)$ and $p_b=(x_b,y_b)$ can be calculated by formula (1):

$$d(p_a,p_b)=\sqrt{(x_a-x_b)^2+(y_a-y_b)^2}\qquad(1)$$

The area of the polygon formed by a plurality of selected face feature points $(x_1,y_1),(x_2,y_2),\ldots,(x_n,y_n)$, taken in order, is calculated by formula (2):

$$S=\frac{1}{2}\left|\sum_{i=1}^{n}\left(x_i y_{i+1}-x_{i+1}y_i\right)\right|,\qquad (x_{n+1},y_{n+1})=(x_1,y_1)\qquad(2)$$

Further, the present embodiment constructs 27 facial geometric features, of which 22 are distance features and 5 are area features. When calculating the facial geometric feature vectors, to account for images of different sizes, all features that are not already ratios are normalized: the interpupillary distance serves as the normalization reference for the distance features, and the face area, which can be obtained by formula (3) as the polygon area enclosed by the face-contour feature points, serves as the normalization reference for the area features.
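To make the geometric computation concrete, the following sketch implements formulas (1) and (2) together with two hypothetical normalized features; the landmark indices follow the common 68-point scheme and are assumptions, since the embodiment does not enumerate its 27 features here.

```python
import math

def point_distance(p, q):
    # Euclidean distance between two feature points, as in formula (1)
    return math.hypot(p[0] - q[0], p[1] - q[1])

def polygon_area(points):
    # Shoelace formula for the polygon enclosed by ordered feature points,
    # as in formula (2)
    n = len(points)
    total = 0.0
    for i in range(n):
        x1, y1 = points[i]
        x2, y2 = points[(i + 1) % n]
        total += x1 * y2 - x2 * y1
    return abs(total) / 2.0

def normalized_mouth_features(landmarks):
    # Hypothetical feature definitions over the common 68-point scheme;
    # the embodiment's 27 features are not enumerated in the text.
    interpupillary = point_distance(landmarks[36], landmarks[45])  # eye corners as stand-ins
    mouth_outline = [landmarks[i] for i in range(48, 60)]          # outer mouth contour
    face_outline = [landmarks[i] for i in range(0, 17)]            # jawline as a rough face contour
    face_area = polygon_area(face_outline)
    return {
        "mouth_width": point_distance(landmarks[48], landmarks[54]) / interpupillary,
        "mouth_area": polygon_area(mouth_outline) / face_area,
    }
```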
referring to fig. 4, in some embodiments, step S104 may include, but is not limited to, steps S401 to S402:
step S401, determining the membership degree of each fuzzy concept in the fuzzy set of each facial feature vector in the feature data set through the membership function of the fuzzy set;
step S402, based on the principle of maximum membership, using the fuzzy concept with the maximum membership on the facial feature vector as the semantic label of the facial feature vector;
Step S403, the semantic tags of all the facial feature vectors are subjected to fuzzy logic operation to obtain the facial fuzzy concept description of the facial image.
In step S401 of some embodiments, a fuzzy set (also called a fuzzy subset) is a set used to express a fuzzy concept; illustratively, the fuzzy concepts in a fuzzy set may be "smaller", "moderate", and "larger". Given a universe U, a mapping from U to the unit interval [0,1] is referred to as a fuzzy set on U, or a fuzzy subset of U. The fuzzy set can be denoted as A, and the mapping μA(·), also written A(·), is called the membership function of the fuzzy set A. For each x ∈ U, μA(x) is called the membership of element x to the fuzzy set A.
In step S402 of some embodiments, taking the face area feature vector as an example: according to the membership functions of the face area on the fuzzy set, the membership degrees of the face area belonging to the "smaller", "moderate", and "larger" concepts are calculated respectively; then, according to the maximum membership principle, the fuzzy concept with the largest membership is used as the semantic label of the face area feature vector. The semantic labels of the plurality of facial feature vectors of the facial image are combined through the "∧" (and) operation of fuzzy logic to obtain the facial fuzzy concept description of the facial image.
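The following sketch illustrates the maximum-membership labeling of a single feature value, using a Gaussian weight as a stand-in for the full AFS membership function described later; the concept names and the handling of σ are assumptions.

```python
import math

def gaussian_weight(x, center, sigma):
    # Degree to which feature value x belongs to the concept centered at
    # `center`; a stand-in for the AFS membership function built from weights
    return math.exp(-((x - center) ** 2) / (2.0 * sigma ** 2))

def semantic_label(x, centers, sigma, names=("smaller", "moderate", "larger")):
    # Maximum-membership principle: keep the concept with the highest degree
    degrees = {name: gaussian_weight(x, c, sigma) for name, c in zip(names, centers)}
    return max(degrees, key=degrees.get), degrees
```

For instance, a face-area value of 0.42 with assumed centers (0.2, 0.4, 0.6) would receive the label "moderate".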
Referring to fig. 5, in some embodiments, step S105 may further include, but is not limited to, steps S501 to S502:
step S501, determining the similarity between the facial fuzzy concept description and each category semantic description according to the membership of the facial fuzzy concept description and the membership of each category semantic description;
step S502, determining the character classification result according to the character label corresponding to the category semantic description with the maximum similarity.
In some embodiments, the AFS classifier includes the category semantic descriptions corresponding to the character labels of a plurality of training datasets, where the category semantic descriptions are obtained by fuzzy-rule statistical analysis and optimization of the facial features in the training datasets; the memberships of the category semantic descriptions and of the facial fuzzy concept description can be calculated based on the AFS logical operation system. After the membership of the facial fuzzy concept description is obtained, the similarity between the facial fuzzy concept description and each category semantic description is determined by analyzing the similarity of their memberships. The character label corresponding to the category semantic description most similar to the facial fuzzy concept description is taken as the character classification result.
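A minimal sketch of this final decision step follows; since this section does not fix a similarity formula, cosine similarity over membership vectors is used here purely as an assumed stand-in.

```python
def classify_character(face_memberships, class_memberships):
    """face_memberships: membership vector of the facial fuzzy concept description;
    class_memberships: dict mapping each character label to the membership vector
    of its category semantic description, evaluated over the same reference samples.
    The cosine similarity here is an assumption, not the embodiment's formula."""
    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = sum(a * a for a in u) ** 0.5
        nv = sum(b * b for b in v) ** 0.5
        return dot / (nu * nv) if nu > 0 and nv > 0 else 0.0
    # return the label whose category description is most similar
    return max(class_memberships, key=lambda lbl: cosine(face_memberships, class_memberships[lbl]))
```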
Referring to fig. 6, in some embodiments, the semantic description model in step S104 is obtained by:
step S601, obtaining a training image data set, wherein a sample label of the training image data comprises a face description label;
step S602, extracting features of the training image data set to obtain each type of facial feature data set;
step S603, determining a feature domain according to the facial feature data set, and constructing a fuzzy set according to the feature domain and a preset fine granularity;
step S604, carrying out statistical analysis on the facial feature data set, and constructing a membership function of each facial feature on the fuzzy set;
step S605, a semantic description model is determined according to membership functions of all facial features.
In some embodiments, the CelebAMask-HQ dataset may be selected as the source of face images. Images are screened on three attributes of the CelebAMask-HQ dataset, "eyes", "smiles", and "slightly open mouth"; face images with neutral expressions and no obvious face occlusion are retained, and 400 images are randomly selected from the screened images to form the training image dataset, in which females account for 40.5% and males for 59.5%. The facial description labels in the training image dataset are used to describe facial features such as a large mouth, high cheekbones, a fuller beard, etc. Further, the sample labels of the training image dataset also include character labels, which can be represented based on the Ten-Item Personality Inventory (TIPI) scale from the field of psychology, as shown in fig. 11.
In some embodiments, during annotation of the training image dataset, the image data may be randomly divided into 4 groups, each containing 100 images. Each group of image data is labeled by several groups of data annotators, so that the labeling result of each image includes the character scoring results of multiple different annotators, and the mean of the different annotators' character scores is computed to obtain the final character score label. After the face description labels of the training image data and the corresponding character scores in five dimensions are obtained, the embodiment of the application constructs the character classifier by mining the relationship between facial appearance and character traits; since the character traits are divided into 5 dimensions, a corresponding dataset needs to be constructed for each dimension. Illustratively, the category part of each dataset is derived by discretizing the obtained trait scores with equal-width binning. The scale scores range over [1,7]; if divided into 3 categories, the three intervals formed are [1,3], [3,5], and [5,7], representing low, medium, and high levels of the corresponding dimension's trait, and the relevant information of the resulting 10 datasets is shown in fig. 12. The embodiment of the application constructs the optimal character-related semantic descriptions through AFS theory: an accurate semantic description model is first constructed from the facial description labels, and after the semantic description model produces fuzzy concept descriptions of the facial features over the dataset, these fuzzy concept descriptions are optimized against the character labels to obtain the optimal fuzzy concepts describing each character label.
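The equal-width discretization of the TIPI scores can be sketched as follows; the bin edges [1,3], [3,5], [5,7] follow the text, while the treatment of exact boundary values is an assumption.

```python
def discretize_score(score, low=1.0, high=7.0, labels=("low", "medium", "high")):
    # Equal-width binning of a TIPI score in [1, 7] into [1,3], (3,5], (5,7];
    # how exact boundary values (3.0, 5.0) are assigned is an assumption.
    width = (high - low) / len(labels)
    idx = min(int((score - low) // width), len(labels) - 1)
    return labels[idx]
```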
In some embodiments, the semantic description model is built based on an AFS framework, which is specifically as follows:
the training image dataset is denoted as X train ={x 1 ,x 2 ,...,x Ntrain The facial feature set is denoted as f= { F } 1 ,f 2 ,...,f Nfeature Ntrain and Nfeature represent the number of samples in the training image dataset and the number of features in the facial feature set, respectively.At f i Define simple concept set (i.e. fuzzy concept set) M above j ={m j,1 ,m j,2 ,...,m j,Nse Nse represents M j The number of simple concepts. />m j,k Can be expressed as "f j Smaller "," f j Moderate "or" f j Larger ", etc.
To measure the degree to which a sample belongs to the simple concept $m_{j,k}$, a weighting function is constructed by combining the labels of the dataset with the semantics represented by $m_{j,k}$; the construction process is as follows.
For any facial feature $f_j\in F$, first obtain the minimum value $f_j^{min}$ and the maximum value $f_j^{max}$ of the dataset on that feature. The range $[f_j^{min},f_j^{max}]$ of facial feature $f_j$ is equally divided into $N_{se}-1$ subintervals, whose width is calculated by formula (4):

$$w_j=\frac{f_j^{max}-f_j^{min}}{N_{se}-1}\qquad(4)$$
where $N_{se}$ reflects the fuzzy granularity of facial feature $f_j$. The set of central values corresponding to the simple concepts in $M_j$ is denoted as $P_j=\{p_{j,1},p_{j,2},\ldots,p_{j,N_{se}}\}$, where $p_{j,k}$ is calculated by formula (5):

$$p_{j,k}=f_j^{min}+(k-1)\,w_j\qquad(5)$$
For any sample $x_i\in X_{train}$ and any simple concept $m_{j,k}\in M_j$, the degree to which $x_i$ belongs to $m_{j,k}$ can be calculated by the Gaussian weighting function of formula (6):

$$\rho_{j,k}(x_i)=\exp\!\left(-\frac{(x_{i,j}-p_{j,k})^2}{2\sigma^2}\right)\qquad(6)$$
where $\sigma$ is an adjustable parameter of the weighting function, obtained by statistical analysis of the facial features in the training dataset. The closer $x_{i,j}$ is to $p_{j,k}$, the higher the degree to which $x_{i,j}$ belongs to $m_{j,k}$; that is, the weighting function reflects the "fuzziness" of the sample with respect to the feature.
For any two samples $x_a,x_b\in X_{train}$, the relationship between $x_a$ and $x_b$ under each simple concept $m_{j,k}\in M_j$ can be represented by the AFS structure $\Theta(M,\tau,X_{train})$. Because the data types involved in this embodiment are numerical, according to the AFS structure, a linear order relation $\geq_{m_{j,k}}$ is used to construct $\tau$, as shown in formula (7):

$$\tau(x_a,x_b)=\{\,m_{j,k}\in M \mid x_a\geq_{m_{j,k}}x_b\,\}\qquad(7)$$

where $x_a\geq_{m_{j,k}}x_b$ means $|x_{a,j}-p_{j,k}|\leq|x_{b,j}-p_{j,k}|$. Through $\tau(x_a,x_b)$, the original numerical dataset is converted into a relational dataset, which reflects the "randomness" of the sample data distribution.
According to the training image dataset, the number of simple concepts, the domain of each facial feature, and the subintervals of the simple concepts are defined; the set of central values is determined from the subintervals; and the Gaussian weight function of each facial feature on each simple concept is constructed from the samples of the training image dataset. Then, based on the Gaussian weight functions and the AFS structure $\Theta(M,\tau,X_{train})$, the AFS membership functions of the facial features on the fuzzy sets are determined. The AFS membership function can essentially be regarded as an aggregation of the weight functions, and the semantic description model describing the facial features is determined from the membership functions of all facial features.
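The construction of the simple-concept centers and the weighting function can be sketched as follows, following formulas (4) and (5); tying σ to the subinterval width is an assumption, since the embodiment derives σ by statistical analysis of the training data.

```python
import numpy as np

def build_simple_concepts(feature_values, n_se=3):
    """Centers p_{j,k} for one facial feature following formulas (4)-(5);
    the sigma heuristic is an assumption, not the embodiment's derivation."""
    f_min = float(np.min(feature_values))
    f_max = float(np.max(feature_values))
    width = (f_max - f_min) / (n_se - 1)                 # formula (4)
    centers = [f_min + k * width for k in range(n_se)]   # formula (5), k = 0..n_se-1
    sigma = width / 2.0                                  # assumed heuristic
    return centers, sigma
```

With n_se = 3, the three centers fall at the minimum, midpoint, and maximum of the feature range, matching the "smaller", "moderate", "larger" concepts.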
Referring to fig. 7, in some embodiments, the category semantic description in step S501 is obtained by:
step S701, a training image data set is obtained, wherein a sample label of the training image data set comprises a character label;
step S702, classifying samples in the training image dataset according to character labels to obtain a plurality of category label image datasets;
step S703, generating a plurality of candidate fuzzy concept sets of the category label image dataset through the semantic description model;
step S704, searching the optimal fuzzy concept from the plurality of candidate fuzzy concept sets by a random coordinate shrinkage optimization method, and taking the optimal fuzzy concept as the category semantic description of the character label.
In some embodiments, within the framework of axiom fuzzy set theory, the core task of the classification problem can be regarded as finding suitable fuzzy concepts to describe each personality category; however, the AFS logic system (EM) in the related art contains too many concepts for this task to be completed efficiently, so this embodiment reduces EM according to the actual situation.
Specifically, let $\varphi$ and $\Phi$ denote two complex concepts in EM combined from simple concepts: $\varphi$ is formed using only "∧", while $\Phi$ is formed using both "∨" and "∧". Illustratively, for $\varphi=m_{1,1}\wedge m_{1,2}$ the semantics translate to "$f_1$ smaller and $f_1$ moderate", which is obviously semantically incoherent.
For $\Phi=\varphi'\vee\varphi''$, the membership can be determined as $\mu_\Phi(x)=\sup\{\mu_{\varphi'}(x),\mu_{\varphi''}(x)\}$; therefore attention should focus on the fuzzy concepts obtained using only "∧". Based on the above analysis, the definition of the reduced EM is given below.
Let M= { M j,k I j e 1, 2..the, nfeature }, k e {1, 2..the, nse }) is a simple set of concepts,is defined as:
wherein EM is - The number of fuzzy concepts may be calculated by equation (9):
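Under the definition of formula (8), the candidate set $EM^-$ can be enumerated as follows; this brute-force generator is for illustration only, since its size grows as in formula (9) and the embodiment instead searches it with an optimizer.

```python
from itertools import product

def enumerate_em_minus(n_feature, n_se):
    """Yield every concept of EM^- as a tuple of (feature_index, concept_index)
    pairs: per feature, either no simple concept or exactly one of the n_se,
    so the total count is (n_se + 1) ** n_feature - 1 as in formula (9)."""
    for choice in product(range(n_se + 1), repeat=n_feature):
        # choice[j] == 0 means feature j contributes nothing; k > 0 selects m_{j,k}
        concept = tuple((j, k) for j, k in enumerate(choice) if k > 0)
        if concept:  # exclude the empty conjunction
            yield concept
```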
Compared with EM, the $EM^-$ of the embodiment of the application is better suited as the candidate fuzzy concept set when generating each category description: on one hand, semantic consistency is guaranteed, and on the other hand, the search space in practical application is greatly reduced. The RACOS optimization method can be adopted to carry out the search over the candidate fuzzy concept set. In order to strike a reasonable balance between prediction accuracy and result interpretability, two strategies for searching $EM^-$ for the optimal fuzzy concept are designed, from the two angles of semantic distinctiveness and training error respectively.
Specifically, samples in the training image dataset are classified according to the character labels to obtain multiple category-label image datasets, e.g., an image dataset of emotionally stable characters, an image dataset of conscientious characters, etc. The facial fuzzy concept description of each sample of a category-label image dataset is obtained through the semantic description model, and based on the AFS logic system ($EM^-$), a plurality of candidate fuzzy concept sets of the category-label image dataset, i.e., complex concept sets of the face image, are generated from the facial fuzzy concept descriptions of the samples. The optimal fuzzy concept is then searched from the candidate fuzzy concept sets by the random coordinate shrinking optimization method, taking global semantic attribute fitness, local semantic attribute fitness, and training error data fitness as the measurement targets, and is used as the category semantic description of the character label. After the category semantic descriptions are obtained, the membership degree of a facial image to each category semantic description can be calculated according to the membership functions, and the character label of the image data is then determined according to the maximum membership principle.
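The selection of the optimal fuzzy concept can be sketched as a plain evaluate-and-keep-best loop; this is only a stand-in for the RACOS optimizer, which additionally learns a good/bad partition of the search space and shrinks its sampling region, and the fitness function combining the semantic and training-error terms is left abstract.

```python
from itertools import islice

def search_best_concept(candidates, fitness, budget=2000):
    """Evaluate-and-keep-best over a budget of candidate fuzzy concepts.
    `fitness` is an abstract callable combining, e.g., semantic-distinctiveness
    and training-error terms; its exact form is not fixed by this sketch,
    and the loop does not reproduce RACOS's coordinate-shrinking behavior."""
    best, best_fit = None, float("-inf")
    for concept in islice(candidates, budget):
        f = fitness(concept)
        if f > best_fit:
            best, best_fit = concept, f
    return best, best_fit
```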
Referring to fig. 8, an embodiment of the present application further provides a facial information recognition system, including:
A first module for acquiring a facial image;
a second module for determining a plurality of facial feature points in the facial image using a facial feature point positioning technique;
a third module for extracting a plurality of facial feature vectors in the facial image according to the plurality of facial feature points, and forming a feature data set of the facial image according to the plurality of facial feature vectors;
a fourth module, configured to input the feature dataset into a semantic description model based on axiom fuzzy set theory, to obtain a facial fuzzy concept description of the facial image;
and a fifth module for determining character classification results according to the face fuzzy concept description.
It can be understood that the contents of the above facial information recognition method embodiments are applicable to this system embodiment; the functions specifically implemented by this system embodiment are the same as those of the above facial information recognition method embodiments, and the beneficial effects achieved are also the same as those achieved by the above method embodiments.
The embodiment of the application also provides electronic equipment, which comprises: the face information recognition method includes a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for realizing connection communication between the processor and the memory, the program realizing the above-described face information recognition method when executed by the processor. The electronic equipment can be any intelligent terminal including a tablet personal computer, a vehicle-mounted computer and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to another embodiment, the electronic device includes:
the processor 901 may be implemented by a general-purpose CPU (central processing unit), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits, etc., and is configured to execute related programs to implement the technical solutions provided by the embodiments of the present application;
the memory 902 may be implemented in the form of read-only memory (ROM), static storage, dynamic storage, or random access memory (RAM). The memory 902 may store an operating system and other application programs; when the technical solutions provided by the embodiments of the present application are implemented in software or firmware, the relevant program code is stored in the memory 902 and invoked by the processor 901 to perform the facial information recognition method of the embodiments of the present application;
an input/output interface 903 for inputting and outputting information;
the communication interface 904 is configured to implement communication interaction between the device and other devices, and may implement communication in a wired manner (e.g. USB, network cable, etc.), or may implement communication in a wireless manner (e.g. mobile network, WIFI, bluetooth, etc.);
A bus 905 that transfers information between the various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 are communicatively coupled to each other within the device via a bus 905.
The embodiment of the application also provides a storage medium, which is a computer readable storage medium and is used for computer readable storage, the storage medium stores one or more programs, and the one or more programs can be executed by one or more processors to realize the facial information recognition method.
The memory, as a non-transitory computer readable storage medium, may be used to store non-transitory software programs as well as non-transitory computer executable programs. In addition, the memory may include high-speed random access memory, and may also include non-transitory memory, such as at least one magnetic disk storage device, flash memory device, or other non-transitory solid state storage device. In some embodiments, the memory optionally includes memory remotely located relative to the processor, the remote memory being connectable to the processor through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
According to the facial information identification method, system, electronic device, and storage medium of the embodiments of the application, a plurality of face feature points in a face image are determined through a face feature point positioning technology, a plurality of facial feature vectors are extracted from the face image according to the face feature points, and a feature data set of the face image is formed from the facial feature vectors; the feature data set is then input into a semantic description model based on axiom fuzzy set theory to obtain a facial fuzzy concept description of the face image, uncertain features are accurately and quantitatively described through the axiom fuzzy set theory, and quantitative analysis and mapping are carried out on the facial fuzzy concept description to obtain a character classification result. The embodiments of the application realize quantitative description of facial features through axiom fuzzy set theory and, on this basis, quantitatively analyze the probability of the character class to which the facial fuzzy concepts of the image belong, thereby improving the reliability of facial character information classification.
The embodiments described in the embodiments of the present application are for more clearly describing the technical solutions of the embodiments of the present application, and do not constitute a limitation on the technical solutions provided by the embodiments of the present application, and as those skilled in the art can know that, with the evolution of technology and the appearance of new application scenarios, the technical solutions provided by the embodiments of the present application are equally applicable to similar technical problems.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1-7 are not limiting to embodiments of the present application and may include more or fewer steps than shown, or certain steps may be combined, or different steps.
The above described apparatus embodiments are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
Those of ordinary skill in the art will appreciate that all or some of the steps of the methods, and the systems and functional modules/units in the devices, disclosed above may be implemented as software, firmware, hardware, or suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the present application and in the above-described figures, if any, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that embodiments of the present application described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in this application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes the association relationship between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: only A exists, only B exists, or both A and B exist, where A and B may be singular or plural. The character "/" generally indicates that the associated objects are in an "or" relationship. "At least one of" and similar expressions refer to any combination of the listed items, including any combination of single items or plural items. For example, "at least one of a, b, or c" may mean: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", where a, b, and c may be single or plural.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division into units is merely a logical functional division, and other divisions are possible in actual implementation; for instance, a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections via interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described above as separate components may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the related art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes multiple instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods of the various embodiments of the present application. The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing a program.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings; this description does not limit the scope of the claims of the embodiments of the present application. Any modifications, equivalent substitutions, and improvements made by those skilled in the art without departing from the scope and spirit of the embodiments of the present application shall fall within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A face information recognition method, characterized by comprising the steps of:
acquiring a face image;
determining a plurality of face feature points in the face image by adopting a face feature point positioning technology;
extracting a plurality of facial feature vectors in the facial image according to the plurality of facial feature points, and forming a feature data set of the facial image according to the plurality of facial feature vectors;
inputting the feature data set into a semantic description model based on axiom fuzzy set theory to obtain a facial fuzzy concept description of the facial image;
and determining a character classification result according to the facial fuzzy concept description.
2. The face information recognition method of claim 1, wherein the determining a plurality of face feature points in the face image by adopting a face feature point positioning technology comprises the steps of:
determining a plurality of first feature points in the facial image by adopting a face feature point positioning technology based on an active shape model;
determining a plurality of second feature points in the facial image by adopting a face feature point positioning technology based on a combined regression tree model;
and integrating the first feature points and the second feature points to obtain face feature points.
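A minimal sketch of the integration step in claim 2, assuming the two localization techniques each return a list of (x, y) landmarks. The distance-based merging rule below is an assumption; the claim only requires that the two point sets be integrated.

```python
from typing import List, Tuple

Point = Tuple[float, float]

def merge_landmarks(asm_pts: List[Point], tree_pts: List[Point],
                    radius: float = 3.0) -> List[Point]:
    """Integrate two landmark sets: a point from each detector falling within
    `radius` pixels of the other is averaged into one point; unmatched points
    from both sets are kept as-is."""
    merged, used = [], set()
    for p in asm_pts:
        match = None
        for j, q in enumerate(tree_pts):
            if j not in used and (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= radius ** 2:
                match = j
                break
        if match is None:
            merged.append(p)
        else:
            used.add(match)
            q = tree_pts[match]
            merged.append(((p[0] + q[0]) / 2, (p[1] + q[1]) / 2))
    merged.extend(q for j, q in enumerate(tree_pts) if j not in used)
    return merged

print(merge_landmarks([(10, 10), (50, 20)], [(11, 11), (80, 40)]))
# -> [(10.5, 10.5), (50, 20), (80, 40)]
```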
3. The face information recognition method according to claim 1, wherein the extracting a plurality of facial feature vectors in the face image from a plurality of the face feature points comprises the steps of:
acquiring a plurality of preset feature attributes;
selecting, for each preset feature attribute, a plurality of corresponding face feature points from the plurality of face feature points;
and analyzing the selected plurality of facial feature points through a geometric form analysis method to obtain the facial feature vector corresponding to each preset feature attribute.
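One way the geometric form analysis of claim 3 might look for a single hypothetical preset attribute ("jaw angle"). The landmark names and the angle-based formula are illustrative assumptions, not the claimed set of attributes.

```python
import math

def jaw_angle_feature(landmarks):
    """Geometric form analysis for one hypothetical attribute: the angle at
    the chin formed by the two jaw corners, normalized to [0, 1]."""
    chin, lj, rj = landmarks["chin"], landmarks["left_jaw"], landmarks["right_jaw"]
    v1 = (lj[0] - chin[0], lj[1] - chin[1])
    v2 = (rj[0] - chin[0], rj[1] - chin[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    cos_a = dot / (math.hypot(*v1) * math.hypot(*v2))
    return math.acos(max(-1.0, min(1.0, cos_a))) / math.pi  # scaled to 0..1

pts = {"chin": (50, 90), "left_jaw": (20, 60), "right_jaw": (80, 60)}
print(round(jaw_angle_feature(pts), 3))  # 0.5 (a right angle at the chin)
```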
4. The face information recognition method according to claim 1, wherein the inputting the feature data set into a semantic description model based on axiom fuzzy set theory to obtain a facial fuzzy concept description of the facial image comprises the steps of:
determining, through the membership functions of a fuzzy set, the membership degree of each facial feature vector in the feature data set to each fuzzy concept in the fuzzy set;
based on the maximum membership principle, taking the fuzzy concept with the maximum membership degree for the facial feature vector as the semantic label of the facial feature vector;
and obtaining the facial fuzzy concept description of the facial image through a fuzzy logic operation on the semantic labels of all the facial feature vectors.
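A sketch of claim 4's three steps: membership computation, max-membership labelling, and a fuzzy logic operation over the labels. Triangular membership functions and min() as the conjunction are assumptions; the exact operation in the axiom fuzzy set framework is not fixed here.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Hypothetical fuzzy sets per feature attribute.
SETS = {
    "eye_spacing": {"narrow": (0.0, 0.2, 0.5), "wide": (0.2, 0.6, 1.0)},
    "jaw_angle":   {"soft":   (0.0, 0.3, 0.6), "sharp": (0.3, 0.7, 1.0)},
}

def facial_fuzzy_description(features):
    """Return (semantic label per feature, overall membership degree of the
    conjunctive facial fuzzy concept description)."""
    labels, degree = {}, 1.0
    for attr, x in features.items():
        mu = {c: tri(x, *p) for c, p in SETS[attr].items()}
        best = max(mu, key=mu.get)       # max-membership principle
        labels[attr] = best
        degree = min(degree, mu[best])   # fuzzy AND over the semantic labels
    return labels, degree

print(facial_fuzzy_description({"eye_spacing": 0.55, "jaw_angle": 0.65}))
# ({'eye_spacing': 'wide', 'jaw_angle': 'sharp'}, 0.875)
```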
5. The face information recognition method of claim 1, wherein the determining a character classification result according to the facial fuzzy concept description comprises the steps of:
determining the similarity between the facial fuzzy concept description and each category semantic description according to the membership degrees of the facial fuzzy concept description and the membership degrees of each category semantic description;
and determining a character classification result according to the category semantic description with the maximum similarity.
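A sketch of the similarity step in claim 5, assuming both descriptions are vectors of membership degrees over a shared concept vocabulary and comparing them with a fuzzy Jaccard similarity; the actual similarity measure is an assumption.

```python
def fuzzy_similarity(mu_a, mu_b):
    """Fuzzy Jaccard similarity: sum of min-memberships over sum of max-memberships."""
    inter = sum(min(mu_a[c], mu_b[c]) for c in mu_a)
    union = sum(max(mu_a[c], mu_b[c]) for c in mu_a)
    return inter / union if union else 0.0

face_desc = {"wide_eyes": 0.9, "sharp_jaw": 0.7, "thin_lips": 0.1}
class_descs = {
    "class_A": {"wide_eyes": 0.8, "sharp_jaw": 0.9, "thin_lips": 0.2},
    "class_B": {"wide_eyes": 0.1, "sharp_jaw": 0.3, "thin_lips": 0.9},
}
# The category semantic description with maximum similarity wins.
best = max(class_descs, key=lambda k: fuzzy_similarity(face_desc, class_descs[k]))
print(best)  # class_A (similarity 0.8 vs 0.2)
```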
6. The face information recognition method according to claim 1, wherein the semantic description model is obtained by:
obtaining a training image data set, wherein the sample labels of the training image data comprise facial description labels;
performing feature extraction on the training image data set to obtain a facial feature data set of each type;
determining a feature domain according to the facial feature data set, and constructing a fuzzy set according to the feature domain and a preset fine granularity;
carrying out statistical analysis on the facial feature data set to construct a membership function of each facial feature on the fuzzy set;
and determining a semantic description model according to membership functions of all facial features.
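A sketch of the statistical-analysis step in claim 6, assuming each membership function is fitted as a triangle centred on the sample mean with a support of two standard deviations; this fitting rule is an assumption, since the claim only requires that the functions be built by statistical analysis of the data set.

```python
import statistics

def fit_membership(samples):
    """Fit a triangular membership function to observed feature values:
    peak at the sample mean, support at mean +/- 2 standard deviations."""
    m = statistics.mean(samples)
    s = statistics.stdev(samples)
    a, b, c = m - 2 * s, m, m + 2 * s
    return lambda x: max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Toy feature values observed for one facial feature in one class of images.
mu_wide = fit_membership([0.62, 0.58, 0.66, 0.60, 0.64])
print(round(mu_wide(0.62), 2), round(mu_wide(0.70), 2))  # 1.0 0.0
```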
7. The face information recognition method of claim 5, wherein the category semantic description is obtained by:
obtaining a training image data set, wherein a sample label of the training image data set comprises a character label;
classifying samples in the training image data set according to the character labels to obtain a plurality of category label image data sets;
generating, through the semantic description model, a plurality of candidate fuzzy concept sets for each category label image data set;
and searching for an optimal fuzzy concept among the plurality of candidate fuzzy concept sets by a random coordinate shrinkage optimization method, and using the optimal fuzzy concept as the category semantic description of the character label.
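Claim 7 names a random coordinate shrinkage optimization method; the sketch below substitutes a generic random coordinate search (toggle one candidate concept at a time, keep improvements) to illustrate the search loop, and is not the published algorithm. The scoring function is a toy stand-in for the description-quality criterion.

```python
import random

def coordinate_search(candidates, score, iters=200, seed=0):
    """Generic random coordinate search over subsets of candidate concepts:
    start from a random subset, toggle one randomly chosen coordinate
    (include/exclude a concept), and keep the change only if it improves
    the score."""
    rng = random.Random(seed)
    current = {c for c in candidates if rng.random() < 0.5}
    best = score(current)
    for _ in range(iters):
        c = rng.choice(candidates)
        trial = current ^ {c}  # toggle one coordinate of the subset
        s = score(trial)
        if s > best:
            current, best = trial, s
    return current, best

# Toy score: reward descriptions containing 'wide_eyes', penalize size.
cands = ["wide_eyes", "sharp_jaw", "thin_lips", "round_face"]
print(coordinate_search(cands, lambda s: 2.0 * ("wide_eyes" in s) - 0.3 * len(s)))
```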
8. A face information recognition system, comprising:
a first module for acquiring a facial image;
a second module for determining a plurality of facial feature points in the facial image using a facial feature point positioning technique;
a third module, configured to extract a plurality of facial feature vectors in the facial image according to a plurality of the facial feature points, and form a feature data set of the facial image according to the plurality of facial feature vectors;
a fourth module, configured to input the feature data set into a semantic description model based on axiom fuzzy set theory, to obtain a facial fuzzy concept description of the facial image;
and a fifth module for determining a character classification result according to the facial fuzzy concept description.
9. An electronic device comprising a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling a connection communication between the processor and the memory, the program when executed by the processor implementing the steps of the facial information recognition method of any one of claims 1 to 7.
10. A storage medium, being a computer-readable storage medium for computer-readable storage, characterized in that the storage medium stores one or more programs, the one or more programs being executable by one or more processors to implement the steps of the face information recognition method of any one of claims 1 to 7.
CN202311069853.6A 2023-08-23 2023-08-23 Face information identification method, system, electronic equipment and storage medium Pending CN117315737A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311069853.6A CN117315737A (en) 2023-08-23 2023-08-23 Face information identification method, system, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311069853.6A CN117315737A (en) 2023-08-23 2023-08-23 Face information identification method, system, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117315737A true CN117315737A (en) 2023-12-29

Family

ID=89283768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311069853.6A Pending CN117315737A (en) 2023-08-23 2023-08-23 Face information identification method, system, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117315737A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination