CN112101121A - Face sensitivity identification method and device, storage medium and computer equipment - Google Patents


Info

Publication number
CN112101121A
CN112101121A
Authority
CN
China
Prior art keywords
face
sensitive
feature
image
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010841596.3A
Other languages
Chinese (zh)
Other versions
CN112101121B (en)
Inventor
陈仿雄
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Original Assignee
Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shuliantianxia Intelligent Technology Co Ltd filed Critical Shenzhen Shuliantianxia Intelligent Technology Co Ltd
Priority to CN202010841596.3A priority Critical patent/CN112101121B/en
Publication of CN112101121A publication Critical patent/CN112101121A/en
Application granted granted Critical
Publication of CN112101121B publication Critical patent/CN112101121B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 Commerce
    • G06Q30/06 Buying, selling or leasing transactions
    • G06Q30/0601 Electronic shopping [e-shopping]
    • G06Q30/0631 Item recommendations
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Accounting & Taxation (AREA)
  • Finance (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Business, Economics & Management (AREA)
  • Strategic Management (AREA)
  • Marketing (AREA)
  • Economics (AREA)
  • Development Economics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The embodiment of the invention discloses a face sensitivity identification method and device, a storage medium, and computer equipment. The method comprises: acquiring a to-be-recognized face image of a user and acquiring first face sensitive feature information of the user, wherein the first face sensitive feature information comprises a collected subjective cognitive sensitive type of the user; inputting the to-be-recognized face image into a face sensitive recognition model and obtaining second face sensitive feature information of the user output by the model, wherein the second face sensitive feature information comprises a detected objective sensitive type of the user; and determining the face sensitivity of the user according to the first and second face sensitive feature information. Because the face sensitivity is determined from both pieces of feature information, the subjectively perceived sensitive type and the objectively detected sensitive type of the user are combined, which improves the accuracy of face sensitivity identification.

Description

Face sensitivity identification method and device, storage medium and computer equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to a method and an apparatus for identifying human face sensitivity, a storage medium, and a computer device.
Background
With the rapid development of mobile communication technology and the improvement of living standards, various intelligent terminals are widely used in people's daily work and life. These terminals can run a variety of application programs, and people have become increasingly accustomed to solving everyday problems with them.
Disclosure of Invention
The invention mainly aims to provide a face sensitivity identification method and device, a storage medium and computer equipment, which can effectively improve the accuracy of face sensitivity identification.
In order to achieve the above object, a first aspect of the present invention provides a face-sensitive recognition method, including:
acquiring a to-be-recognized face image of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
inputting the face image to be recognized into a face sensitive recognition model, and obtaining second face sensitive feature information of the user output by the face sensitive recognition model, wherein the second face sensitive feature information comprises the detected objective sensitive type of the user;
and determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
In order to achieve the above object, a second aspect of the present invention provides a face-sensitive recognition apparatus, including:
an acquisition module, configured to acquire a to-be-recognized face image of a user and acquire first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
the model identification module is used for inputting the face image to be identified into a face sensitive identification model to obtain second face sensitive characteristic information of the user output by the face sensitive identification model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
and the determining module is used for determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
To achieve the above object, a third aspect of the present invention provides a computer-readable storage medium storing a computer program, which, when executed by a processor, causes the processor to perform the steps of the face-sensitive recognition method according to the first aspect.
In order to achieve the above object, a fourth aspect of the present invention provides a computer device, including a memory and a processor, where the memory stores a computer program, and the computer program, when executed by the processor, causes the processor to execute the steps in the face-sensitivity recognition method according to the first aspect.
The embodiment of the invention has the following beneficial effects:
the invention provides a face sensitivity identification method, which comprises the following steps: the method comprises the steps of obtaining a to-be-recognized face image of a user, obtaining first face sensitive feature information of the user, wherein the first sensitive feature information comprises a collected subjective cognitive sensitive type of the user, inputting the to-be-recognized face image into a face sensitive recognition model, obtaining second face sensitive feature information of the user output by the face sensitive recognition model, wherein the second face sensitive feature information comprises a detected objective sensitive type of the user, and determining the face sensitivity of the user according to the first face sensitive feature information and the second face sensitive feature information. The face sensitivity of the user is determined by utilizing the first sensitive characteristic information and the second sensitive characteristic information, so that the subjective cognitive sensitive type of the user and the objectively detected sensitive type of the user can be combined, and the accuracy of face sensitivity identification of the user is improved.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the present invention, and that those skilled in the art can derive other drawings from them without creative effort.
Wherein:
FIG. 1 is a schematic flow chart of a face-sensitive recognition method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another embodiment of a face sensitivity recognition method according to the present invention;
FIG. 3 is a schematic structural diagram of a face-sensitive recognition model according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a feature extraction module according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a feature extraction sub-module according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a face-sensitive recognition model in an embodiment of the present invention;
FIG. 7 is a block diagram of a face-sensitive recognition apparatus according to an embodiment of the present invention;
fig. 8 is a block diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. It is obvious that the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments that a person skilled in the art can derive from the given embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a schematic flow chart of a face-sensitive recognition method according to an embodiment of the present invention, where the method includes:
step 101, acquiring a to-be-recognized face image of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
In an embodiment of the present invention, the face-sensitive recognition method is implemented by a face-sensitive recognition device. The device is a program module stored in a computer-readable storage medium of a computer device; a processor in the computer device can read the program module from the storage medium and run it, thereby carrying out the face-sensitive recognition method.
When face-sensitive recognition needs to be performed for a user, a to-be-recognized face image of the user can be acquired. The to-be-recognized face image is an image of the user's face taken from the front, in which the area occupied by the face is larger than a preset threshold; for example, the preset threshold may be 70% of the image area.
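The front-face area check described above can be sketched as follows (a minimal sketch; the bounding-box helper and the 70% default are illustrative, since the patent only states that the face area must exceed a preset threshold):

```python
def face_area_ratio(face_box, image_size):
    """Fraction of the image area covered by the face bounding box."""
    x1, y1, x2, y2 = face_box
    w, h = image_size
    return ((x2 - x1) * (y2 - y1)) / (w * h)

def is_valid_face_image(face_box, image_size, threshold=0.70):
    # The patent requires the face region to be larger than a preset
    # threshold of the image area; 70% is used here as an example.
    return face_area_ratio(face_box, image_size) > threshold
```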
In addition, first face sensitive characteristic information of the user can be obtained; it comprises the collected subjective cognitive sensitive types of the user. Sensitive types corresponding to different facial characteristics are preset, specifically an erythema sensitive type corresponding to an erythema characteristic of the face, a scaling sensitive type corresponding to a scaling characteristic of the face, and an acne sensitive type corresponding to an acne characteristic of the face. A face skin information table can also be preset; it is mainly used to collect skin information subjectively perceived by the user, including erythema, scaling and acne, and possibly also burning, stinging pain, itching, tightness and the like. The first face sensitive characteristic information may be empty, or may include at least one of the erythema sensitive type, the scaling sensitive type and the acne sensitive type.
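The collection of the first face sensitive characteristic information from the face skin information table can be sketched as follows (all names are illustrative; the patent does not prescribe a data format):

```python
# Mapping from subjectively reported skin conditions to the preset
# sensitive types (identifiers are illustrative, not from the patent).
SYMPTOM_TO_TYPE = {
    "erythema": "erythema_sensitive",
    "scaling": "scaling_sensitive",
    "acne": "acne_sensitive",
}

def collect_first_feature_info(reported_symptoms):
    """Build the first face sensitive characteristic information from the
    user's self-reported skin information.  Symptoms with no corresponding
    sensitive type (burning, itching, tightness, ...) are ignored here;
    the result may be empty, matching the 'null' case in the patent."""
    return sorted({SYMPTOM_TO_TYPE[s] for s in reported_symptoms
                   if s in SYMPTOM_TO_TYPE})
```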
Step 102, inputting the face image to be recognized into a face sensitive recognition model, and obtaining second face sensitive feature information of the user output by the face sensitive recognition model, wherein the second face sensitive feature information comprises a detected objective sensitive type of the user;
In the embodiment of the invention, a face sensitive recognition model is preset. The model identifies the to-be-recognized face image of the user to determine the user's second face sensitive characteristic information. It is obtained by training on a face sensitive sample data set comprising a plurality of face sensitive sample images, each labeled with a sensitive type; to improve the accuracy of the trained model, the position region corresponding to the sensitive type is also labeled on each sample image. With such a data set, a face sensitive recognition model capable of recognizing at least the erythema sensitive type, the scaling sensitive type and the acne sensitive type can be trained. It can be understood that, because the model trained on this data set identifies the face image to be recognized, the second face sensitive feature information it produces is objective information, specifically the detected objective sensitive type of the user.
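One possible shape for a labeled sample in the face sensitive sample data set, assuming bounding boxes for the position regions (the patent does not specify an annotation format, so everything here is illustrative):

```python
# A hypothetical annotation record for one face sensitive sample image:
# each labeled sensitive type also carries the position region where it
# appears, as the patent describes.
sample_annotation = {
    "image": "sample_0001.jpg",
    "labels": [
        {"type": "erythema_sensitive", "box": [120, 80, 180, 140]},
        {"type": "acne_sensitive", "box": [200, 150, 240, 190]},
    ],
}

def labeled_types(annotation):
    """Distinct sensitive types annotated on one sample image."""
    return sorted({lab["type"] for lab in annotation["labels"]})
```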
The terms "first" and "second" in the first face sensitive characteristic information and the second face sensitive characteristic information merely distinguish different face sensitive characteristic information: the first represents the user's subjectively perceived sensitive types, and the second represents the objectively detected sensitive types; no other limitation is implied.
Step 103, determining the face sensitivity of the user according to the first face sensitive feature information and the second face sensitive feature information.
In the embodiment of the invention, a to-be-recognized face image of a user is acquired and first face sensitive characteristic information of the user is obtained, the first face sensitive characteristic information comprising the acquired subjective cognitive sensitive type of the user; the to-be-recognized face image is input into a face sensitive recognition model, and second face sensitive characteristic information of the user output by the model is obtained, the second face sensitive characteristic information comprising the detected objective sensitive type of the user; and the face sensitivity of the user is determined according to the first and second face sensitive characteristic information. Because the face sensitivity is determined from both pieces of characteristic information, the subjectively perceived sensitive type and the objectively detected sensitive type of the user are combined, improving the accuracy of face sensitivity identification.
For better understanding of the technical solution in the embodiment of the present invention, please refer to fig. 2, which is another schematic flow chart of a face-sensitive recognition method in the embodiment of the present invention, the method is obtained based on the method in the embodiment shown in fig. 1, and includes:
step 201, acquiring a to-be-recognized face image of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
in the embodiment of the present invention, the content of step 201 is similar to that described in step 101 in the embodiment shown in fig. 1, and specific reference may be made to the related content in step 101 in the embodiment shown in fig. 1, which is not described herein again.
Step 202, inputting the face image to be recognized into a face sensitive recognition model, and obtaining second face sensitive feature information of the user output by the face sensitive recognition model, wherein the second face sensitive feature information comprises a detected objective sensitive type of the user;
In the embodiment of the present invention, the preset face sensitive recognition model may be an improved yolov3 model, where the improvement replaces the feature extraction network Darknet53 of the original yolov3 model with an Inverted Residual Block (IRB) structure.
Please refer to fig. 3, which is a schematic diagram of a feasible structure of the face sensitive recognition model in an embodiment of the present invention. The model includes a convolutional layer module and N feature extraction modules; the N feature extraction modules are cascaded in order from the 1st to the Nth, and the convolutional layer module is connected to the 1st feature extraction module. It should be noted that a feature extraction module may also be called an IRB module; in the embodiment of the present invention, N IRB modules (feature extraction modules) replace the feature extraction network Darknet53 in the yolov3 model, yielding the improved yolov3 model.
Each feature extraction module comprises a plurality of sequentially cascaded feature extraction sub-modules. Specifically, the 1st to (N-1)th feature extraction modules each comprise a plurality of sequentially cascaded feature extraction sub-modules plus a down-sampling module connected to the last sub-module; the Nth feature extraction module comprises only the cascaded sub-modules, and the output of its last sub-module forms part of the output of the face sensitive recognition model. In addition, the first feature extraction sub-module in the Mth feature extraction module is connected to the down-sampling module in the (M-1)th feature extraction module, where M ranges over [2, N] and N is greater than 3.
Specifically, please refer to fig. 4, which is a schematic structural diagram of a feature extraction module in an embodiment of the present invention. Fig. 4 depicts the 1st to (N-1)th feature extraction modules; the Nth feature extraction module differs only in that it lacks the final down-sampling module present in the first N-1 modules. As shown in fig. 4, a feature extraction module comprises a plurality of feature extraction sub-modules, with the last sub-module connected to the down-sampling module.
Further, for each feature extraction sub-module, the size of the feature image input to the sub-module equals the size of the feature image it outputs; for example, if the input feature image is 26 × 26, the output feature image is also 26 × 26. Each feature extraction sub-module comprises at least three sequentially cascaded convolutional layers whose channel counts, in the direction of feature propagation, first increase and then decrease. Expanding the channel count augments the feature maps so that more features are extracted (feature map augmentation processing), and the subsequent feature map compression processing strengthens the fusion of features. For a better understanding of the feature extraction sub-module, please refer to fig. 5, which shows one possible implementation. Fig. 5 uses convolutional layers of 1 × 1 × 32, 3 × 3 × 64 and 1 × 1 × 8 as an example, where 1 × 1 and 3 × 3 denote the convolution kernel sizes and 32, 64 and 8 denote the channel counts. It will be appreciated that, in practice, the kernel size and channel count of each convolutional layer in the sub-module can be set as needed, provided that the input and output feature-image sizes of the sub-module coincide and the channel counts first increase and then decrease, which effectively strengthens feature fusion.
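The size-preserving, expand-then-compress channel pattern of a sub-module can be checked with simple shape bookkeeping (a sketch only; it tracks shapes and performs no convolution, and the 8-channel input is an assumption):

```python
def submodule_shapes(in_shape, layers):
    """Track (channels, height, width) through the cascaded convolutional
    layers of one feature extraction sub-module.  Every conv is assumed
    stride-1 with 'same' padding, so only the channel count changes --
    matching the requirement that the sub-module's input and output
    feature-image sizes coincide."""
    c, h, w = in_shape
    shapes = [(c, h, w)]
    for _kernel, out_c in layers:
        shapes.append((out_c, h, w))  # padding preserves h and w
    return shapes

# The fig. 5 example: 1x1x32, 3x3x64, 1x1x8 layers, here applied to a
# hypothetical 26 x 26 input with 8 channels.
fig5_layers = [(1, 32), (3, 64), (1, 8)]
```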
In the embodiment of the present invention, the face image to be recognized of the user may be input to the face sensitive recognition model, specifically to its convolutional layer module, which performs convolution processing on the image to obtain an initial feature image.
Further, taking the qth feature extraction sub-module in the ith feature extraction module as an example: when the qth sub-module receives an input feature image, it sequentially performs feature map augmentation processing and feature map compression processing on that image and outputs its output feature image. Here, the ith feature extraction module is any one of the N feature extraction modules, the qth sub-module is any sub-module within the ith module, the input feature image of the 1st sub-module in the 1st feature extraction module is the initial feature image, and q is a positive integer.
Specifically, after the initial feature image is obtained, it is input to the 1st feature extraction sub-module in the 1st feature extraction module, which sequentially performs feature map augmentation processing and feature map compression processing and outputs its output feature image. That output serves as the input feature image of the 2nd sub-module in the 1st module, and each subsequent sub-module in the 1st module proceeds analogously, until the output feature image of the last sub-module in the 1st module is obtained. This image is input to the down-sampling module in the 1st feature extraction module, whose output feature image in turn serves as the input feature image of the 1st sub-module in the 2nd feature extraction module, and so on, completing the recognition process performed by the face sensitive recognition model.
In the embodiment of the invention, the second face sensitive feature information of the face image to be recognized is determined from three outputs of the face sensitive recognition model: a first feature image output by the down-sampling module in the (N-2)th feature extraction module, a second feature image output by the down-sampling module in the (N-1)th feature extraction module, and a third feature image output by the last feature extraction sub-module in the Nth feature extraction module.
Further, a feature image obtained by fusing the first feature image output by the (N-2)th feature extraction module with its down-sampled image is used as the input feature image of the (N-1)th feature extraction module. In this way the size of the model can be further reduced while fusion of features of different sizes is increased, strengthening feature fusion.
Furthermore, a feature image obtained by fusing a down-sampled image produced by twice down-sampling the first feature image output by the (N-2)th feature extraction module, the second feature image output by the (N-1)th feature extraction module, and the down-sampled image of the second feature image is used as the input feature image of the Nth feature extraction module.
In order to better understand the technical solution in the embodiment of the present invention, a specific face sensitive recognition model is described below. Please refer to fig. 6, which is a schematic structural diagram of the face sensitive recognition model in an embodiment of the present invention. Fig. 6 takes N = 5 as an example, that is, the model includes 5 feature extraction modules, feature extraction module 1 to feature extraction module 5; the convolutional layer module is connected to feature extraction module 1, and feature extraction modules 1 to 5 are sequentially cascaded.
Suppose the size of the face image to be recognized is 416 × 416 and its pixel values range from 0 to 255. The data in the face sensitive recognition model is of float type with values in the range 0 to 1; a pixel value greater than 1 would be displayed as white and could not convey valid image information. The pixel values of the face image to be recognized therefore need to be normalized, i.e. pixel values in the range 0 to 255 are normalized to the range 0 to 1, so that the image information is expressed correctly.
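The normalization step can be sketched as:

```python
def normalize_pixels(pixels):
    """Map 8-bit pixel values (0-255) to floats in [0, 1], as required
    before feeding the image to the float-valued model."""
    return [p / 255.0 for p in pixels]
```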
The face image to be recognized input to the convolutional layer module is 416 × 416, and the output feature image is 208 × 208; the feature image input to feature extraction module 1 is thus 208 × 208, and its output feature image is 104 × 104. The input and output sizes of the other feature extraction modules follow analogously, and the feature image output by feature extraction module 5 is finally 13 × 13. Accordingly, the first feature image is 26 × 26, the second feature image is 13 × 13, and the third feature image is 13 × 13.
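The size progression follows from repeated halving by the convolutional layer module and by the down-sampling modules of the first four feature extraction modules; a sketch:

```python
def feature_map_sizes(input_size=416, num_modules=5):
    """Spatial sizes after the convolutional layer module and after each
    feature extraction module.  The conv module and the down-sampling
    modules of the first num_modules - 1 feature extraction modules each
    halve the size; the last module keeps it unchanged."""
    sizes = [input_size]
    sizes.append(sizes[-1] // 2)          # convolutional layer module
    for _ in range(num_modules - 1):      # modules 1 .. num_modules - 1
        sizes.append(sizes[-1] // 2)      # each ends in a down-sampler
    sizes.append(sizes[-1])               # last module: no down-sampling
    return sizes
```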
After the first, second and third feature images are obtained, they are mapped into the face image to be recognized, and the coordinate values of the mapped pixel points in the face image to be recognized, together with the sensitive types corresponding to those pixel points, are determined. In this way, the second face sensitive characteristic information of the face image to be recognized can be determined.
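A hypothetical mapping from a feature-map cell back to pixel coordinates in the original image (the patent states only that the feature images are mapped into the face image to be recognized, so the uniform-stride mapping below is an assumption):

```python
def grid_to_image_coords(cell_x, cell_y, feature_size, image_size=416):
    """Map one feature-map cell to the pixel region it covers in the
    face image to be recognized, assuming a uniform stride."""
    stride = image_size // feature_size  # e.g. 416 // 13 = 32
    return (cell_x * stride, cell_y * stride,
            (cell_x + 1) * stride, (cell_y + 1) * stride)
```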
Step 203, determining a first number of the sensitive types contained in the first face sensitive characteristic information, and determining a second number of the sensitive types contained in the second face sensitive characteristic information;
Step 204, determining the face sensitivity of the user according to the first quantity and the second quantity.
In an embodiment of the present invention, a first number of sensitive types included in the first face sensitive characteristic information may be determined, and a second number of sensitive types included in the second face sensitive characteristic information may be determined, where the sensitive types include erythema, scaling, acne and the like. For example, if the first face sensitive characteristic information includes the erythema sensitive type, the first number is 1; if the second face sensitive characteristic information includes the erythema sensitive type and the acne sensitive type, the second number is 2. It is understood that the number of sensitive types included in the first or second face sensitive characteristic information may also be 0.
In the embodiment of the present invention, the face sensitivity of the user may be determined according to the first number and the second number.
Specifically, when the first number is smaller than a preset first threshold and the second number is smaller than a preset second threshold, the face sensitivity of the user is determined to be slightly sensitive.
When the first number is greater than or equal to the preset first threshold and smaller than a preset third threshold and the second number is smaller than the preset second threshold, or when the first number is smaller than the preset first threshold and the second number is greater than or equal to the preset second threshold and smaller than a preset fourth threshold, the face sensitivity of the user is determined to be moderately sensitive, where the third threshold is larger than the first threshold and the fourth threshold is larger than the second threshold.
When the first number is greater than or equal to the preset third threshold and the second number is greater than or equal to the preset second threshold, or when the second number is greater than or equal to the preset fourth threshold, the face sensitivity is determined to be heavily sensitive.
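The threshold rules above can be sketched in Python. The default threshold values and the function name are illustrative assumptions; the description only requires that the third threshold exceed the first and the fourth exceed the second:

```python
def face_sensitivity(first_count, second_count,
                     t1=1, t2=1, t3=3, t4=3):
    """Grade face sensitivity from the subjective (first) and objective
    (second) counts of sensitive types, following the threshold rules
    above. Default thresholds are illustrative assumptions; the rules
    only require t3 > t1 and t4 > t2."""
    # Heavy sensitivity is checked first so it takes priority.
    if (first_count >= t3 and second_count >= t2) or second_count >= t4:
        return "heavily sensitive"
    # Moderate: one count in its middle band while the other stays low.
    if (t1 <= first_count < t3 and second_count < t2) or \
       (first_count < t1 and t2 <= second_count < t4):
        return "moderately sensitive"
    if first_count < t1 and second_count < t2:
        return "slightly sensitive"
    # Combinations not covered by the rules as literally stated
    # (e.g. first_count >= t3 with second_count < t2).
    return "undetermined"
```

Note that, taken literally, the stated rules leave some combinations ungraded (for example a high first number with a second number of zero); as the description says, the grading rule can be adjusted to specific needs in practice.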
It should be understood that, in practical applications, a rule for determining the face sensitivity may be set according to specific needs, and is not limited herein.
In the embodiment of the invention, using the improved YOLOv3 model improves the accuracy with which the second face sensitive feature information is determined, and collecting the first face sensitive feature information allows the objective second face sensitive feature information to be combined with the subjective first face sensitive feature information when determining the face sensitivity, improving the accuracy of face sensitivity recognition.
Please refer to fig. 7, which is a schematic structural diagram of a face-sensitive recognition apparatus according to an embodiment of the present invention, the apparatus includes:
the acquiring module 701 is configured to acquire a to-be-recognized face image of a user, and acquire first face sensitive feature information of the user, where the first face sensitive feature information includes an acquired subjective cognitive sensitivity type of the user;
a model identification module 702, configured to input the facial image to be identified into a face-sensitive identification model, and obtain second face-sensitive feature information of the user output by the face-sensitive identification model, where the second face-sensitive feature information includes a detected objective sensitive type of the user;
a determining module 703, configured to determine the face sensitivity of the user according to the first face sensitive feature information and the second face sensitive feature information.
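The three modules of the apparatus can be sketched as a small class. This is a hedged illustration of the structure in Fig. 7: the class, method and variable names are assumptions, and the detection model is stubbed rather than a real face-sensitive recognition model:

```python
class FaceSensitiveRecognizer:
    """Illustrative sketch of the apparatus in Fig. 7; all names are
    assumptions and the detection model is a stub."""

    def __init__(self, model):
        # model: callable mapping an image to an iterable of detected
        # (objective) sensitive types; stands in for the trained model.
        self.model = model

    def acquire(self, image, reported_types):
        """Acquiring module: the face image to be recognized plus the
        user's self-reported (subjective cognitive) sensitive types."""
        return image, set(reported_types)

    def identify(self, image):
        """Model identification module: objective sensitive types
        output by the face sensitive recognition model."""
        return set(self.model(image))

    def determine(self, first_info, second_info):
        """Determining module: here simply the two counts that the
        threshold rules of the method embodiment would consume."""
        return len(first_info), len(second_info)

stub_model = lambda img: ["erythema", "acne"]  # stand-in for the model
rec = FaceSensitiveRecognizer(stub_model)
_, first = rec.acquire("face.jpg", ["erythema"])
second = rec.identify("face.jpg")
print(rec.determine(first, second))  # → (1, 2)
```

The modules mirror the three method steps one-to-one, which is why the embodiment refers back to the method description for their details.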
It can be understood that, in the embodiment of the present invention, contents related to the obtaining module 701, the model identifying module 702, and the determining module 703 are similar to those described in the foregoing embodiment of the method for face-sensitive identification, and specific reference may be made to related contents in the embodiment of the face-sensitive identification method, which are not described herein again.
In the embodiment of the invention, a face image of a user to be recognized is acquired, and first face sensitive feature information of the user is acquired, the first face sensitive feature information including the acquired subjective cognitive sensitive type of the user. The face image to be recognized is input into a face sensitive recognition model, and second face sensitive feature information of the user output by the model is obtained, the second face sensitive feature information including the detected objective sensitive type of the user. The face sensitivity of the user is then determined according to the first face sensitive feature information and the second face sensitive feature information. Because both pieces of information are used, the user's subjective cognitive sensitive types and the objectively detected sensitive types can be combined, improving the accuracy of face sensitivity identification for the user.
FIG. 8 is a diagram illustrating the internal structure of a computer device in one embodiment. The computer device may specifically be a terminal, or may be a server. As shown in FIG. 8, the computer device includes a processor, a memory, and a network interface connected by a system bus. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium of the computer device stores an operating system and may also store a computer program that, when executed by the processor, causes the processor to implement the face sensitivity identification method. The internal memory may also have a computer program stored therein which, when executed by the processor, causes the processor to perform the face sensitivity identification method. Those skilled in the art will appreciate that the structure shown in FIG. 8 is merely a block diagram of part of the structure related to the present application and does not limit the computer devices to which the present application applies; a particular computer device may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is proposed, comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of:
acquiring a to-be-recognized face image of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
inputting the face image to be recognized into a face sensitive recognition model, and obtaining second face sensitive feature information of the user output by the face sensitive recognition model, wherein the second face sensitive feature information comprises the detected objective sensitive type of the user;
and determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
In one embodiment, a computer-readable storage medium is proposed, in which a computer program is stored which, when executed by a processor, causes the processor to carry out the steps of:
acquiring a to-be-recognized face image of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
inputting the face image to be recognized into a face sensitive recognition model, and obtaining second face sensitive feature information of the user output by the face sensitive recognition model, wherein the second face sensitive feature information comprises the detected objective sensitive type of the user;
and determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory, among others. Non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDRSDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), direct bus dynamic RAM (DRDRAM), and memory bus dynamic RAM (RDRAM).
The technical features of the above embodiments can be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the above embodiments are not described, but should be considered as the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the present application. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. A face-sensitive recognition method, comprising:
acquiring a to-be-recognized face image of a user, and acquiring first face sensitive characteristic information of the user, wherein the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
inputting the face image to be recognized into a face sensitive recognition model, and obtaining second face sensitive feature information of the user output by the face sensitive recognition model, wherein the second face sensitive feature information comprises the detected objective sensitive type of the user;
and determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
2. The method of claim 1, wherein determining the face sensitivity of the user based on the first face sensitivity characteristic information and the second face sensitivity characteristic information comprises:
determining a first number of sensitive types contained in the first face sensitive characteristic information and determining a second number of sensitive types contained in the second face sensitive characteristic information;
determining the face sensitivity of the user according to the first quantity and the second quantity.
3. The method of claim 2, wherein determining the face sensitivity of the user based on the first number and the second number comprises:
when the first number is smaller than a preset first threshold value and the second number is smaller than a preset second threshold value, determining that the face sensitivity of the user is slightly sensitive;
when the first number is greater than or equal to the preset first threshold and smaller than a preset third threshold and the second number is smaller than the preset second threshold, or when the first number is smaller than the preset first threshold and the second number is greater than or equal to the preset second threshold and smaller than a preset fourth threshold, determining that the face sensitivity of the user is moderately sensitive, wherein the third threshold is larger than the first threshold and the fourth threshold is larger than the second threshold;
when the first number is greater than or equal to the preset third threshold and the second number is greater than or equal to the preset second threshold, or when the second number is greater than or equal to the preset fourth threshold, determining that the face sensitivity is heavily sensitive.
4. The method according to claim 1, wherein the face-sensitive recognition model comprises a convolutional layer module and N feature extraction modules, the feature extraction modules are sequentially cascaded in order from the 1st to the Nth, the convolutional layer module is connected with the 1st feature extraction module, each of the 1st to (N-1)th feature extraction modules comprises a plurality of sequentially cascaded feature extraction sub-modules and further comprises a down-sampling module connected with the last feature extraction sub-module, the Nth feature extraction module comprises a plurality of sequentially cascaded feature extraction sub-modules, and the Mth feature extraction module is connected with the down-sampling module in the (M-1)th feature extraction module, where M takes any value in [2, N] and N is greater than 3.
5. The method according to claim 4, wherein the inputting the facial image to be recognized into a face-sensitive recognition model, and the obtaining second face-sensitive feature information of the user output by the face-sensitive recognition model comprises:
inputting the face image to be recognized into the convolution layer module of the face sensitive recognition model, and performing convolution processing on the face image to be recognized through the convolution layer module to obtain an initial characteristic image;
when the q-th feature extraction submodule in the ith feature extraction module has an input feature image, sequentially performing feature map increasing processing and feature map compressing processing on the input feature image through the q-th feature extraction submodule, and outputting an output feature image of the q-th feature extraction submodule; the ith feature extraction module is any one of N feature extraction modules, the qth feature extraction submodule is any one of the ith feature extraction modules, an input feature image of the 1 st feature extraction submodule of the 1 st feature extraction module is the initial feature image, and the size of the input feature image of the qth feature extraction submodule is the same as that of the output feature image;
and determining second face sensitive feature information of the face image to be recognized according to a first feature image output by the down-sampling module in the (N-2)th feature extraction module of the face sensitive recognition model, a second feature image output by the down-sampling module in the (N-1)th feature extraction module, and a third feature image output by the last feature extraction submodule in the (N-1)th feature extraction module.
6. The method according to claim 4, wherein the first feature image output by the (N-2)th feature extraction module is fused with a down-sampled image of the first feature image to obtain a feature image, and the feature image is used as the input feature image of the (N-1)th feature extraction module.
7. The method according to claim 4, wherein a down-sampled image obtained by two-times down-sampling of the first feature image output by the (N-2)th feature extraction module, the second feature image output by the (N-1)th feature extraction module, and a down-sampled image of the second feature image are fused to obtain a feature image, which is used as the input feature image of the Nth feature extraction module.
8. A face-sensitive recognition apparatus, the apparatus comprising:
the system comprises an acquisition module, a judgment module and a processing module, wherein the acquisition module is used for acquiring a to-be-recognized face image of a user and acquiring first face sensitive characteristic information of the user, and the first face sensitive characteristic information comprises an acquired subjective cognitive sensitive type of the user;
the model identification module is used for inputting the face image to be identified into a face sensitive identification model to obtain second face sensitive characteristic information of the user output by the face sensitive identification model, wherein the second face sensitive characteristic information comprises the detected objective sensitive type of the user;
and the determining module is used for determining the face sensitivity of the user according to the first face sensitive characteristic information and the second face sensitive characteristic information.
9. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
10. A computer device comprising a memory and a processor, characterized in that the memory stores a computer program which, when executed by the processor, causes the processor to carry out the steps of the method according to any one of claims 1 to 7.
CN202010841596.3A 2020-08-19 2020-08-19 Face sensitive identification method and device, storage medium and computer equipment Active CN112101121B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010841596.3A CN112101121B (en) 2020-08-19 2020-08-19 Face sensitive identification method and device, storage medium and computer equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010841596.3A CN112101121B (en) 2020-08-19 2020-08-19 Face sensitive identification method and device, storage medium and computer equipment

Publications (2)

Publication Number Publication Date
CN112101121A true CN112101121A (en) 2020-12-18
CN112101121B CN112101121B (en) 2024-04-30

Family

ID=73753932

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010841596.3A Active CN112101121B (en) 2020-08-19 2020-08-19 Face sensitive identification method and device, storage medium and computer equipment

Country Status (1)

Country Link
CN (1) CN112101121B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921128A (en) * 2018-07-19 2018-11-30 厦门美图之家科技有限公司 Cheek sensitivity flesh recognition methods and device
WO2019080580A1 (en) * 2017-10-26 2019-05-02 深圳奥比中光科技有限公司 3d face identity authentication method and apparatus
CN110059546A (en) * 2019-03-08 2019-07-26 深圳神目信息技术有限公司 Vivo identification method, device, terminal and readable medium based on spectrum analysis
CN110674748A (en) * 2019-09-24 2020-01-10 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and readable storage medium
WO2020037898A1 (en) * 2018-08-23 2020-02-27 平安科技(深圳)有限公司 Face feature point detection method and apparatus, computer device, and storage medium


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PENG Qiang; ZHANG Xiaofei: "Sensitive image recognition technology based on feature vectors", Journal of Southwest Jiaotong University, no. 01
ZHAO Yunfeng; YIN Yixin: "Research on face recognition with infrared and visible-light images based on decision fusion", Laser & Infrared, no. 06
HAO Junshou; DING Yanhui: "Dynamic face tracking based on intelligent vision", Modern Electronics Technique, no. 24

Also Published As

Publication number Publication date
CN112101121B (en) 2024-04-30

Similar Documents

Publication Publication Date Title
CN110135406B (en) Image recognition method and device, computer equipment and storage medium
CN109829506B (en) Image processing method, image processing device, electronic equipment and computer storage medium
CN109063742B (en) Butterfly identification network construction method and device, computer equipment and storage medium
CN110427852B (en) Character recognition method and device, computer equipment and storage medium
CN109886077B (en) Image recognition method and device, computer equipment and storage medium
CN108399052B (en) Picture compression method and device, computer equipment and storage medium
US20220165053A1 (en) Image classification method, apparatus and training method, apparatus thereof, device and medium
CN110287836B (en) Image classification method and device, computer equipment and storage medium
CN111968134B (en) Target segmentation method, device, computer readable storage medium and computer equipment
CN113159143A (en) Infrared and visible light image fusion method and device based on jump connection convolution layer
CN112183295A (en) Pedestrian re-identification method and device, computer equipment and storage medium
CN111144285B (en) Fat and thin degree identification method, device, equipment and medium
CN113160087B (en) Image enhancement method, device, computer equipment and storage medium
WO2023065503A1 (en) Facial expression classification method and electronic device
CN113496208B (en) Video scene classification method and device, storage medium and terminal
CN111666932A (en) Document auditing method and device, computer equipment and storage medium
CN111666931B (en) Mixed convolution text image recognition method, device, equipment and storage medium
CN112016502B (en) Safety belt detection method, safety belt detection device, computer equipment and storage medium
CN114266946A (en) Feature identification method and device under shielding condition, computer equipment and medium
CN112115860A (en) Face key point positioning method and device, computer equipment and storage medium
CN111709415A (en) Target detection method, target detection device, computer equipment and storage medium
CN111612732B (en) Image quality evaluation method, device, computer equipment and storage medium
CN112101121A (en) Face sensitivity identification method and device, storage medium and computer equipment
CN116129881A (en) Voice task processing method and device, electronic equipment and storage medium
CN112699809B (en) Vaccinia category identification method, device, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant