CN116883670A - Anti-occlusion face image segmentation method - Google Patents

Anti-occlusion face image segmentation method

Info

Publication number
CN116883670A
CN116883670A (application CN202311013748.0A)
Authority
CN
China
Prior art keywords
face
image
occlusion
segmentation
graph
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202311013748.0A
Other languages
Chinese (zh)
Other versions
CN116883670B (en)
Inventor
刘伟华
李娇娇
左勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Athena Eyes Co Ltd
Original Assignee
Athena Eyes Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Athena Eyes Co Ltd filed Critical Athena Eyes Co Ltd
Priority to CN202311013748.0A
Publication of CN116883670A
Application granted
Publication of CN116883670B
Legal status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Molecular Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Databases & Information Systems (AREA)
  • Human Computer Interaction (AREA)
  • Image Analysis (AREA)

Abstract

In the anti-occlusion face image segmentation method provided by the application, an original image is segmented and labeled to obtain an occluder label map and a face label map, and an initial image segmentation model is trained on the original image, the occluder label map, and the face label map to obtain a trained image segmentation model. When a face segmentation image is needed, the pending image is preprocessed and passed through the trained network model to obtain the face segmentation image. By decoupling the occluder from the face, the method can effectively distinguish the adjacent boundary between the occluder and the face, improves the model's segmentation accuracy on difficult samples such as occluded faces and weak-light, strong-light, and blurred images, and accelerates model inference.

Description

Anti-occlusion face image segmentation method
Technical Field
The application relates to the technical field of computer vision and face segmentation, and in particular to an anti-occlusion face image segmentation method.
Background
There are many methods for face segmentation, such as fully convolutional methods and encoder-decoder architectures. A fully convolutional network (FCN) uses deconvolution layers to upsample the feature map of the last convolutional layer back to the size of the input image, so it can accept inputs of arbitrary size and does not require all training and test images to share the same dimensions. Another popular face segmentation technique is the encoder-decoder architecture, in which the encoder gradually reduces the spatial dimensions through pooling and the decoder gradually recovers the spatial dimensions and detail information; Unet is one of the most popular such architectures.
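For illustration only, a minimal sketch of the FCN-style upsampling tail described above (PyTorch is an assumed implementation choice; the channel count, class count, and scale factor are arbitrary, not taken from the patent):

```python
import torch.nn as nn

# Minimal FCN-style tail: a transposed convolution ("deconvolution")
# upsamples the last feature map back to the input resolution.
class FCNTail(nn.Module):
    def __init__(self, in_ch=64, num_classes=2, scale=4):
        super().__init__()
        self.classifier = nn.Conv2d(in_ch, num_classes, kernel_size=1)
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=2 * scale,
                                           stride=scale, padding=scale // 2)

    def forward(self, feats):            # feats: (N, 64, H/4, W/4)
        # Output spatial size: (in - 1)*stride - 2*pad + kernel = scale * in.
        return self.upsample(self.classifier(feats))  # (N, classes, H, W)
```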
Existing anti-occlusion face image segmentation methods process images efficiently when the foreground and the background differ clearly, but accurately segmenting face regions that overlap heavily with an occluder is very challenging, especially when the occluder's color, texture, and contour are highly similar to those of the face.
Therefore, an anti-occlusion face image segmentation method that can effectively distinguish the adjacent boundary between the face and the occluder while improving segmentation accuracy and efficiency is a problem to be solved urgently by those skilled in the art.
Disclosure of Invention
The application aims to provide an anti-occlusion face image segmentation method that is logically clear, safe, effective, reliable, and simple to operate, and that improves both the segmentation accuracy on difficult samples such as occluded faces and the overall efficiency of face image segmentation.
Based on the above purpose, the technical scheme provided by the application is as follows:
an anti-occlusion face image segmentation method comprises the following steps:
segmenting and labeling an original image to obtain an occluder label map and a face label map;
training an initial image segmentation model according to the original image, the occluder label map, and the face label map to obtain a trained image segmentation model;
acquiring a face segmentation image according to a preprocessed pending image and the trained image segmentation model;
wherein the original image and the pending image each contain a face and an occluder.
Preferably, training the initial image segmentation model according to the original image, the occluder label map, and the face label map to obtain a trained image segmentation model includes the following steps:
acquiring an occluder confidence map and a face confidence map, respectively, according to the preprocessed original image and the initial image segmentation model;
obtaining a model loss according to the occluder label map, the occluder confidence map, the face label map, and the face confidence map;
and reducing the model loss through multiple iterations so as to adjust preset parameters in the initial image segmentation model and obtain the trained image segmentation model.
Preferably, the initial image segmentation model comprises:
a first sub-network module and a second sub-network module of the same structure;
the first sub-network module comprising a first feature extraction module and a first segmentation module;
the second sub-network module comprising a second feature extraction module and a second segmentation module.
Preferably, acquiring the occluder confidence map and the face confidence map, respectively, according to the preprocessed original image and the initial image segmentation model includes the following steps:
preprocessing the original image to obtain the preprocessed original image;
extracting preprocessed occluder image features with the first feature extraction module;
segmenting the preprocessed occluder image features with the first segmentation module to obtain the occluder confidence map;
extracting preprocessed face image features with the second feature extraction module;
and segmenting the preprocessed face image features with the second segmentation module to obtain the face confidence map.
Preferably, obtaining the model loss according to the occluder label map, the occluder confidence map, the face label map, and the face confidence map includes the following steps:
calculating a first loss according to the occluder label map and the occluder confidence map;
calculating a second loss according to the face label map and the face confidence map;
and weighting and summing the first loss and the second loss to obtain the model loss.
Preferably, the first loss is calculated from the occluder label map and the occluder confidence map according to a formula in which pre_occ is the occluder confidence map, label_occ is the occluder label map, loss_occ is the first loss, and n is the amount of input data.
Preferably, the second loss is calculated from the face label map and the face confidence map according to a formula in which pre_seg is the face confidence map, label_seg is the face label map, loss_seg is the second loss, and n is the amount of input data.
Preferably, after acquiring the occluder confidence map and the face confidence map according to the preprocessed original image and the initial image segmentation model, the method further includes the following steps:
defining a plurality of features of the face;
and establishing a mapping relation between each feature of the face and a preset pixel value.
Preferably, acquiring the face segmentation image according to the preprocessed pending image and the trained image segmentation model includes the following steps:
acquiring a pending face confidence map according to the preprocessed pending image and the trained image segmentation model;
and for each pixel, outputting the face feature corresponding to the maximum confidence value, thereby producing the face segmentation image.
In the anti-occlusion face image segmentation method provided by the application, an original image is segmented and labeled to obtain an occluder label map and a face label map, and an initial image segmentation model is trained on the original image, the occluder label map, and the face label map to obtain a trained image segmentation model; when a face segmentation image is needed, the pending image is preprocessed and passed through the trained network model to obtain the face segmentation image.
Compared with the prior art, the method decouples the occluder from the face and thereby effectively distinguishes the adjacent boundary between the occluder and the face, improves the model's segmentation accuracy on difficult samples such as occluded faces and weak-light, strong-light, and blurred images, and accelerates model inference.
Drawings
In order to more clearly illustrate the embodiments of the application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings described below show only some embodiments of the application, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a flowchart of an anti-occlusion face image segmentation method provided by an embodiment of the present application;
FIG. 2 is a flowchart of step S2 provided in an embodiment of the present application;
FIG. 3 is a flowchart of step A1 provided in an embodiment of the present application;
FIG. 4 is a flowchart of step A2 provided in an embodiment of the present application;
FIG. 5 is a flowchart of step S3 provided in an embodiment of the present application.
Detailed Description
The following describes the embodiments of the present application clearly and completely with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The embodiments of the application are described in a progressive manner.
The embodiment of the application provides an anti-occlusion face image segmentation method. It mainly solves the technical problem that, in the prior art, segmentation accuracy is low on difficult samples in which the face and an occluder overlap heavily.
As shown in FIG. 1, an anti-occlusion face image segmentation method includes the following steps:
S1. segmenting and labeling an original image to obtain an occluder label map and a face label map;
S2. training an initial image segmentation model according to the original image, the occluder label map, and the face label map to obtain a trained image segmentation model;
S3. acquiring a face segmentation image according to a preprocessed pending image and the trained image segmentation model;
wherein the original image and the pending image each contain a face and an occluder.
In step S1, an original image is acquired by a capture device and processed with a segmentation labeling tool to obtain an occluder segmentation label image and a face segmentation label image, which are scaled to 256×256;
in step S2, an initial image segmentation model is constructed and trained on the original image, the occluder segmentation label image, and the face segmentation label image to obtain a trained image segmentation model;
in this embodiment, the training set contains 30,000 labeled samples, the number of training iterations is set to 140, and the learning rate is 1e-3;
in step S3, the pending image is preprocessed and input into the trained image segmentation model to obtain the face segmentation result.
In this embodiment, the twin parallel network model consists of two sub-networks of the same structure: one for occluder segmentation and the other for face segmentation. Each sub-network consists of a feature extraction module and a segmentation module; the feature extraction module uses Unet for feature extraction, and a pixel difference convolution (PDC) structure is designed into the segmentation module for fine segmentation. The face segmentation sub-network fuses the occluder features with the face features and then performs face segmentation through the PDC segmentation module.
The specific implementation is as follows: the input is a 256×256×64 feature map. It passes through conv1, 64 central difference convolutions of size 5×5, giving 64 feature maps of 256×256×1 after a normalization layer and ReLU activation; these pass through conv2, 64 central difference convolutions of size 3×3, and the conv2 result is added to the 256×256×64 input feature map; the sum passes through conv3, 12 ordinary 1×1 convolutions, giving a 256×256×12 result that is finally output through a softmax normalized exponential function.
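Read literally, this describes a residual head built from central difference convolutions, one common form of pixel difference convolution. A minimal PyTorch sketch of such a head follows; the batch normalization and the central-difference weighting factor theta are assumptions (the text only says "regularization activation function relu"), not the patent's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CentralDifferenceConv2d(nn.Module):
    """Central difference convolution: a vanilla convolution minus a
    theta-weighted term that applies each kernel's element sum to the
    centre pixel. theta = 0.7 is an assumed value."""
    def __init__(self, in_ch, out_ch, kernel_size, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)
        # Collapse each kernel to its element sum -> equivalent 1x1 kernels.
        kernel_sum = self.conv.weight.sum(dim=(2, 3), keepdim=True)
        return out - self.theta * F.conv2d(x, kernel_sum)

class PDCSegmentationHead(nn.Module):
    """Sketch of the described segmentation module: 5x5 CDC -> normalized
    ReLU -> 3x3 CDC -> residual add with the input -> 12 ordinary 1x1
    convolutions -> softmax."""
    def __init__(self, channels=64, num_classes=12):
        super().__init__()
        self.conv1 = CentralDifferenceConv2d(channels, channels, 5)
        self.bn1 = nn.BatchNorm2d(channels)   # normalization choice assumed
        self.conv2 = CentralDifferenceConv2d(channels, channels, 3)
        self.conv3 = nn.Conv2d(channels, num_classes, kernel_size=1)

    def forward(self, x):                     # x: (N, 64, 256, 256)
        y = F.relu(self.bn1(self.conv1(x)))   # conv1: 64 5x5 CDCs
        y = self.conv2(y)                     # conv2: 64 3x3 CDCs
        y = y + x                             # add the 256x256x64 input back
        y = self.conv3(y)                     # conv3: 12 1x1 convs
        return torch.softmax(y, dim=1)        # (N, 12, 256, 256) confidences
```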
As shown in FIG. 2, step S2 preferably includes the following steps:
A1. acquiring an occluder confidence map and a face confidence map, respectively, according to the preprocessed original image and the initial image segmentation model;
A2. obtaining a model loss according to the occluder label map, the occluder confidence map, the face label map, and the face confidence map;
A3. reducing the model loss through multiple iterations so as to adjust preset parameters in the initial image segmentation model and obtain the trained image segmentation model.
In step A1, the preprocessed original image is input into the initial image segmentation model for prediction, yielding the occluder confidence map and the face confidence map;
in step A2, the model loss is calculated by combining the occluder label map with the occluder confidence map and the face label map with the face confidence map;
in step A3, the model loss is reduced through multiple iterations, so that the preset parameters in the initial image segmentation model are adjusted and the trained image segmentation model is obtained.
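A minimal training-loop sketch of steps A1 to A3 (assuming PyTorch, a model that returns the two confidence maps, binary and multi-class cross-entropy losses as reconstructed below, equal loss weights, an Adam optimizer, and 140 iterations interpreted as epochs; all of these are assumptions, not details fixed by the text):

```python
import torch

def train(model, loader, epochs=140, lr=1e-3, w_occ=1.0, w_seg=1.0):
    # model(image) is assumed to return (occluder confidence map in [0,1],
    # 12-class softmax face confidence map). label_occ: float (N,1,H,W);
    # label_seg: long class indices (N,H,W).
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = torch.nn.BCELoss()
    nll = torch.nn.NLLLoss()
    for _ in range(epochs):
        for image, label_occ, label_seg in loader:
            pre_occ, pre_seg = model(image)        # A1: forward prediction
            loss_occ = bce(pre_occ, label_occ)     # first loss
            loss_seg = nll(torch.log(pre_seg.clamp_min(1e-8)), label_seg)
            loss = w_occ * loss_occ + w_seg * loss_seg  # A2: weighted sum
            opt.zero_grad()
            loss.backward()                        # A3: iterate to reduce loss
            opt.step()
    return model
```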
Preferably, the initial image segmentation model comprises:
a first sub-network module and a second sub-network module of the same structure;
the first sub-network module comprising a first feature extraction module and a first segmentation module;
the second sub-network module comprising a second feature extraction module and a second segmentation module.
As shown in FIG. 3, step A1 preferably includes the following steps:
B1. preprocessing the original image to obtain the preprocessed original image;
B2. extracting preprocessed occluder image features with the first feature extraction module;
B3. segmenting the preprocessed occluder image features with the first segmentation module to obtain the occluder confidence map;
B4. extracting preprocessed face image features with the second feature extraction module;
B5. segmenting the preprocessed face image features with the second segmentation module to obtain the face confidence map.
In actual application, the initial image segmentation model is designed with two parallel sub-network modules of the same structure; each sub-network module is provided with a feature extraction module and a segmentation module. After the feature extraction modules extract the corresponding features and the features are fused, face segmentation is performed by the segmentation module.
In step B1, image preprocessing refers to the processing performed on the input image before feature extraction, segmentation, and matching in image analysis.
The main purpose of image preprocessing is to eliminate irrelevant information in the image, recover useful real information, enhance the detectability of relevant information, and simplify the data as much as possible, thereby improving the reliability of feature extraction, image segmentation, matching, and recognition.
In this embodiment, the acquired original image, its occluder segmentation label image, and its face segmentation label image are all scaled to 256×256.
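A one-line preprocessing sketch matching the scaling described here (OpenCV is an assumed tooling choice; the patent does not name a library):

```python
import cv2

def preprocess(path, size=(256, 256), is_label=False):
    # Scale an original image or a label image to 256x256. For label maps,
    # nearest-neighbour interpolation avoids blending class index values.
    image = cv2.imread(path)
    interp = cv2.INTER_NEAREST if is_label else cv2.INTER_LINEAR
    return cv2.resize(image, size, interpolation=interp)
```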
Steps B2 to B5 extract the corresponding image features from the preprocessed original image through the feature extraction modules of the first and second sub-network modules and segment those features with the corresponding segmentation modules to obtain the occluder and face confidence maps.
In this embodiment, a twin parallel network model is constructed, comprising two sub-networks of the same structure, each consisting of a feature extraction module and a segmentation module.
The feature extraction module uses Unet to extract features;
the segmentation module is designed with a pixel difference convolution (PDC) structure, which uses difference information to represent abrupt changes and detail features in the edge context for fine edge segmentation;
specifically, one sub-network uses Unet to extract occluder features and segments them through the PDC segmentation module to obtain an occluder segmentation confidence map (whose values are continuous in [0,1]); the other sub-network uses Unet to extract face features, fuses them with the occluder features, and then performs face segmentation through the PDC segmentation module to obtain a 12-class segmentation confidence map (the 12 classes being skin, left eyebrow, right eyebrow, left eye, right eye, nose, upper lip, lower lip, oral cavity, glasses, occluder, and background).
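A structural sketch of this twin parallel model, reusing the PDCSegmentationHead from the sketch above. The fusion operator (channel concatenation followed by a 1×1 convolution), the 64-channel Unet outputs, and the simplified sigmoid occluder head are all assumptions, since the text fixes neither:

```python
import torch
import torch.nn as nn

class TwinParallelSegmenter(nn.Module):
    """unet_occ / unet_face are assumed to map the input image to
    64-channel feature maps of the same spatial size."""
    def __init__(self, unet_occ, unet_face, channels=64):
        super().__init__()
        self.unet_occ = unet_occ              # occluder feature extractor (Unet)
        self.unet_face = unet_face            # face feature extractor (Unet)
        self.occ_head = nn.Sequential(        # occluder confidence map in [0,1]
            nn.Conv2d(channels, 1, kernel_size=1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.face_head = PDCSegmentationHead(channels, num_classes=12)

    def forward(self, image):
        f_occ = self.unet_occ(image)          # occluder features
        pre_occ = self.occ_head(f_occ)        # occluder segmentation branch
        f_face = self.unet_face(image)        # face features
        fused = self.fuse(torch.cat([f_face, f_occ], dim=1))  # feature fusion
        pre_seg = self.face_head(fused)       # 12-class face confidence maps
        return pre_occ, pre_seg
```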
As shown in FIG. 4, step A2 preferably includes the following steps:
C1. calculating a first loss according to the occluder label map and the occluder confidence map;
C2. calculating a second loss according to the face label map and the face confidence map;
C3. weighting and summing the first loss and the second loss to obtain the model loss.
Preferably, the first loss is calculated from the occluder label map and the occluder confidence map according to a formula in which pre_occ is the occluder confidence map, label_occ is the occluder label map, loss_occ is the first loss, and n is the amount of input data.
Preferably, the second loss is calculated from the face label map and the face confidence map according to a formula in which pre_seg is the face confidence map, label_seg is the face label map, loss_seg is the second loss, and n is the amount of input data.
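The formula images referenced here are not reproduced in this text. A plausible reconstruction consistent with the surrounding definitions, assuming a pixel-averaged binary cross-entropy for the occluder branch and a multi-class cross-entropy for the face branch (the exact forms, and the weighting coefficients w_occ and w_seg, may differ in the original publication):

```latex
\mathrm{loss}_{occ} = -\frac{1}{n}\sum_{i=1}^{n}\Big[\mathrm{label}_{occ}^{(i)}\log \mathrm{pre}_{occ}^{(i)}
  + \big(1-\mathrm{label}_{occ}^{(i)}\big)\log\big(1-\mathrm{pre}_{occ}^{(i)}\big)\Big]

\mathrm{loss}_{seg} = -\frac{1}{n}\sum_{i=1}^{n}\sum_{c=1}^{12}\mathrm{label}_{seg,c}^{(i)}\log \mathrm{pre}_{seg,c}^{(i)}

\mathrm{loss} = w_{occ}\,\mathrm{loss}_{occ} + w_{seg}\,\mathrm{loss}_{seg}
```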
In steps C1 to C3, the preprocessed face image is input into the twin parallel network model for prediction to obtain the occluder segmentation confidence map pre_occ and the face segmentation confidence map pre_seg; the loss loss_occ between pre_occ and the corresponding occluder segmentation label map label_occ and the loss loss_seg between pre_seg and the corresponding face segmentation label map label_seg are calculated; the two losses are weighted and summed to obtain the final model loss, which is reduced by continuous iteration so that the parameters of the twin parallel network model are adjusted and the trained twin parallel network model is obtained.
Preferably, after step A1, the method further includes the following steps:
defining a plurality of features of the face;
and establishing a mapping relation between each feature of the face and a preset pixel value.
In this embodiment, in the occluder segmentation label map, a pixel value of 1 indicates an occluder and a pixel value of 0 indicates a non-occluder. In the face segmentation label map, a pixel value of 0 represents the background, 1 the skin, 2 the left eyebrow, 3 the right eyebrow, 4 the left eye, 5 the right eye, 6 the nose, 7 the upper lip, 8 the lower lip, 9 the oral cavity, 10 glasses, and 11 an occluder.
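The mapping above, written out as a lookup table for convenience (the identifier names are direct translations of the classes listed; the constant name is an assumption):

```python
# Face segmentation classes: pixel value -> semantic class.
FACE_CLASSES = {
    0: "background", 1: "skin", 2: "left_eyebrow", 3: "right_eyebrow",
    4: "left_eye", 5: "right_eye", 6: "nose", 7: "upper_lip",
    8: "lower_lip", 9: "oral_cavity", 10: "glasses", 11: "occluder",
}

# Occluder segmentation labels are binary: 1 = occluder, 0 = non-occluder.
```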
As shown in FIG. 5, step S3 preferably includes the following steps:
D1. acquiring a pending face confidence map according to the preprocessed pending image and the trained image segmentation model;
D2. for each pixel, outputting the face feature corresponding to the maximum confidence value, thereby producing the face segmentation image.
In steps D1 to D2, the image whose face is to be segmented is preprocessed and then input into the trained twin parallel network model, which outputs the face segmentation confidence map; the index of the maximum value at each pixel of the face segmentation confidence map is calculated to obtain the face segmentation result.
In this embodiment, the face segmentation confidence map consists of 12 confidence maps of size 256×256 (12 classes in total); each pixel outputs the class with the greatest confidence at that pixel, i.e., the index corresponding to the per-pixel maximum is calculated, and a 1×256×256 face segmentation map is finally output (its values are integers in [0,11]).
In the embodiments provided in the present application, it should be understood that the disclosed method may be implemented in other manners. The method embodiments described above are merely illustrative. For example, the division into modules is merely a logical functional division, and there may be other divisions in actual implementation: multiple modules or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the components shown or discussed may be coupled, directly coupled, or communicatively connected to each other through interfaces, and the indirect coupling or communicative connection between devices or modules may be electrical, mechanical, or take other forms.
In addition, the functional modules in the embodiments of the present application may be integrated into one processor, each module may exist separately as one device, or two or more modules may be integrated into one device; the functional modules may be implemented in hardware or in the form of hardware plus software functional units.
Those of ordinary skill in the art will appreciate that all or part of the steps for implementing the above method embodiments may be completed by program instructions together with associated hardware. The program instructions may be stored in a computer-readable storage medium and, when executed, perform the steps of the above method embodiments; the aforementioned storage medium includes various media that can store program code, such as a removable storage device, a read-only memory (ROM), a magnetic disk, or an optical disk.
It should be appreciated that the use of "systems," "devices," "units," and/or "modules" in this disclosure is but one way to distinguish between different components, elements, parts, portions, or assemblies at different levels. However, if other words can achieve the same purpose, the word can be replaced by other expressions.
As used in the specification and in the claims, the terms "a," "an," "the," and/or "the" are not specific to a singular, but may include a plurality, unless the context clearly dictates otherwise. In general, the terms "comprises" and "comprising" merely indicate that the steps and elements are explicitly identified, and they do not constitute an exclusive list, as other steps or elements may be included in a method or apparatus. The inclusion of an element defined by the phrase "comprising one … …" does not exclude the presence of additional identical elements in a process, method, article, or apparatus that comprises an element.
The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature.
If a flowchart is used in the present application, the flowchart is used to describe the operations performed by a system according to an embodiment of the present application. It should be appreciated that the preceding or following operations are not necessarily performed in order precisely. Rather, the steps may be processed in reverse order or simultaneously. Also, other operations may be added to or removed from these processes.
The anti-occlusion face image segmentation method provided by the application has been described in detail above. The preceding description of the disclosed embodiments enables any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (9)

1. An anti-occlusion face image segmentation method is characterized by comprising the following steps:
segmenting and labeling an original image to obtain an occluder label map and a face label map;
training an initial image segmentation model according to the original image, the occluder label map, and the face label map to obtain a trained image segmentation model;
acquiring a face segmentation image according to a preprocessed pending image and the trained image segmentation model;
wherein the original image and the pending image each contain a face and an occluder.
2. The anti-occlusion face image segmentation method of claim 1, wherein training the initial image segmentation model according to the original image, the occluder label map, and the face label map to obtain a trained image segmentation model comprises the following steps:
acquiring an occluder confidence map and a face confidence map, respectively, according to the preprocessed original image and the initial image segmentation model;
obtaining a model loss according to the occluder label map, the occluder confidence map, the face label map, and the face confidence map;
and reducing the model loss through multiple iterations so as to adjust preset parameters in the initial image segmentation model and obtain the trained image segmentation model.
3. The anti-occlusion face image segmentation method of claim 2, wherein the initial image segmentation model comprises:
a first sub-network module and a second sub-network module of the same structure;
the first sub-network module comprising a first feature extraction module and a first segmentation module;
the second sub-network module comprising a second feature extraction module and a second segmentation module.
4. The anti-occlusion face image segmentation method of claim 3, wherein acquiring the occluder confidence map and the face confidence map, respectively, according to the preprocessed original image and the initial image segmentation model comprises the following steps:
preprocessing the original image to obtain the preprocessed original image;
extracting preprocessed occluder image features with the first feature extraction module;
segmenting the preprocessed occluder image features with the first segmentation module to obtain the occluder confidence map;
extracting preprocessed face image features with the second feature extraction module;
and segmenting the preprocessed face image features with the second segmentation module to obtain the face confidence map.
5. The anti-occlusion face image segmentation method of claim 4, wherein obtaining the model loss according to the occluder label map, the occluder confidence map, the face label map, and the face confidence map comprises the following steps:
calculating a first loss according to the occluder label map and the occluder confidence map;
calculating a second loss according to the face label map and the face confidence map;
and weighting and summing the first loss and the second loss to obtain the model loss.
6. The anti-occlusion face image segmentation method of claim 5, wherein the first loss is calculated from the occluder label map and the occluder confidence map according to a formula in which pre_occ is the occluder confidence map, label_occ is the occluder label map, loss_occ is the first loss, and n is the amount of input data.
7. The anti-occlusion face image segmentation method of claim 6, wherein the second loss is calculated from the face label map and the face confidence map according to a formula in which pre_seg is the face confidence map, label_seg is the face label map, loss_seg is the second loss, and n is the amount of input data.
8. The anti-occlusion face image segmentation method of claim 2, wherein after acquiring the occluder confidence map and the face confidence map, respectively, according to the preprocessed original image and the initial image segmentation model, the method further comprises the following steps:
defining a plurality of features of the face;
and establishing a mapping relation between each feature of the face and a preset pixel value.
9. The anti-occlusion face image segmentation method of claim 8, wherein acquiring the face segmentation image according to the preprocessed pending image and the trained image segmentation model comprises the following steps:
acquiring a pending face confidence map according to the preprocessed pending image and the trained image segmentation model;
and for each pixel, outputting the face feature corresponding to the maximum confidence value, thereby producing the face segmentation image.
CN202311013748.0A 2023-08-11 2023-08-11 Anti-occlusion face image segmentation method Active CN116883670B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311013748.0A CN116883670B (en) 2023-08-11 2023-08-11 Anti-occlusion face image segmentation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311013748.0A CN116883670B (en) 2023-08-11 2023-08-11 Anti-occlusion face image segmentation method

Publications (2)

Publication Number Publication Date
CN116883670A (en) 2023-10-13
CN116883670B (en) 2024-05-14

Family

ID=88255069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311013748.0A Active CN116883670B (en) 2023-08-11 2023-08-11 Anti-shielding face image segmentation method

Country Status (1)

Country Link
CN (1) CN116883670B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109299658A (en) * 2018-08-21 2019-02-01 Tencent Technology (Shenzhen) Co., Ltd. Face area detecting method, face image rendering method, device and storage medium
CN110363134A (en) * 2019-07-10 2019-10-22 University of Electronic Science and Technology of China A kind of face blocked area localization method based on semantic segmentation
CN111310718A (en) * 2020-03-09 2020-06-19 Chengdu Chuanda Kehong New Technology Research Institute High-accuracy detection and comparison method for face-shielding image
CN111339874A (en) * 2020-02-18 2020-06-26 Guangzhou Mailun Information Technology Co., Ltd. Single-stage face segmentation method
CN112016464A (en) * 2020-08-28 2020-12-01 China Mobile (Hangzhou) Information Technology Co., Ltd. Method and device for detecting face shielding, electronic equipment and storage medium
CN112613374A (en) * 2020-12-16 2021-04-06 Xiamen Meitu Zhijia Technology Co., Ltd. Face visible region analyzing and segmenting method, face making-up method and mobile terminal
CN113723414A (en) * 2021-08-12 2021-11-30 Institute of Information Engineering, Chinese Academy of Sciences Mask face shelter segmentation method and device
CN115457624A (en) * 2022-08-18 2022-12-09 Zhongke Tianwang (Guangdong) Technology Co., Ltd. Mask wearing face recognition method, device, equipment and medium with local and overall face features cross-fused
CN116309643A (en) * 2023-03-23 2023-06-23 Shanghai Yuncong Enterprise Development Co., Ltd. Face shielding score determining method, electronic equipment and medium


Also Published As

Publication number Publication date
CN116883670B (en) 2024-05-14


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information
Address after: No. 205, Building B1, Huigu Science and Technology Industrial Park, No. 336 Bachelor Road, Bachelor Street, Yuelu District, Changsha City, Hunan Province, 410000, China
Applicant after: Wisdom Eye Technology Co.,Ltd.
Address before: Building 14, Phase I, Changsha Zhongdian Software Park, No. 39 Jianshan Road, Changsha High tech Development Zone, Changsha City, Hunan Province, 410205, China
Applicant before: Wisdom Eye Technology Co.,Ltd.
GR01 Patent grant