CN112836653A - Face privacy method, device and apparatus and computer storage medium - Google Patents

Face privacy method, device and apparatus and computer storage medium

Info

Publication number
CN112836653A
CN112836653A (application number CN202110167982.3A)
Authority
CN
China
Prior art keywords
privacy
face
face detection
image
generate
Prior art date
Legal status
Pending
Application number
CN202110167982.3A
Other languages
Chinese (zh)
Inventor
谈继勇
刘根
杨洪光
李元伟
孙熙
杨道文
李冰
Current Assignee
Shenzhen Hanwei Intelligent Medical Technology Co ltd
Original Assignee
Shenzhen Hanwei Intelligent Medical Technology Co ltd
Priority date: 2021-02-05
Filing date: 2021-02-05
Publication date: 2021-05-25
Application filed by Shenzhen Hanwei Intelligent Medical Technology Co ltd filed Critical Shenzhen Hanwei Intelligent Medical Technology Co ltd
Priority to CN202110167982.3A
Publication of CN112836653A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The invention discloses a face privacy method, device, apparatus, and computer storage medium, wherein the method comprises the following steps: performing a face detection operation on an input image based on a face detection model to generate an area to be subjected to privacy processing; and performing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image. The method solves the problem of protecting patient privacy during automated breast ultrasound scanning and achieves the technical effect of blurring the facial area to be subjected to privacy processing, avoiding disclosure of the patient's private information.

Description

Face privacy method, device and apparatus and computer storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to a method, an apparatus, a device, and a computer storage medium for human face privacy.
Background
During automated breast ultrasound scanning, the camera lens and the object distance make it easy to capture partial or complete face information. As the principal distinguishing characteristic of a human body, face information is private patient information; however, hospitals currently do not process this information during automated breast ultrasound scanning, which may lead to disclosure of the examinee's identity, physical signs, and other private information.
Face detection is mainly implemented in two ways: (1) traditional machine learning, which combines hand-crafted features such as Haar-like features and HOG (Histogram of Oriented Gradients) with classifiers such as AdaBoost and SVM (Support Vector Machine), neither list being exhaustive; (2) deep learning, which collects and curates a data set of face images, labels it accordingly, and uses a common face detection model such as MTCNN (Multi-Task Cascaded Convolutional Neural Network) or RetinaFace (again, not limited to these models) to train a model that detects faces in images or videos. Traditional machine learning has obvious drawbacks: features and classifiers must be selected manually, and robustness is low. Common deep-learning face detection models perform well in real-world scenes, but they target dense face detection, so they require large amounts of data collection and curation, and some of the network models are complex, which limits training and inference speed.
Disclosure of Invention
In view of this, a face privacy method, device, apparatus, and computer storage medium are provided to solve the privacy protection problem of patients during the automated breast ultrasound scanning process.
The embodiment of the application provides a face privacy method, which comprises the following steps:
based on a face detection model, executing face detection operation on an input image to generate a region to be subjected to privacy;
and executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
In an embodiment, before the step of performing a face detection operation on the input image based on the face detection model to generate the region to be privatized, the method further includes:
creating a face detection model, specifically comprising:
inputting the images in the training set into the face detection model to generate a face detection result;
comparing the face detection result with an image label and calculating an error;
reversely propagating the error, and updating parameters of the face detection model;
and generating a face detection model until the error meets a preset threshold value.
In an embodiment, the performing, by using a face detection model, a face detection operation on an input image to generate a region to be privatized includes:
sequentially carrying out a first number of preset structure operations on the input image to generate a first intermediate feature map;
performing convolution operation on the first intermediate feature map to generate a second intermediate feature map;
inputting the second intermediate feature map into a first full-connection layer to generate a first feature vector;
inputting the first feature vector into a second full-connection layer to carry out face classification, and generating a face classification prediction result;
inputting the first feature vector into a third full-connection layer to perform bounding box regression, and generating a bounding box prediction result;
inputting the first feature vector into a fourth full-connection layer to carry out key point regression, and generating a key point prediction result;
if the face classification prediction result and the key point prediction result meet preset conditions, taking the boundary frame prediction result as the area to be subjected to privacy;
wherein the preset structure operation is a convolution operation and a preset pooling operation; the face classification uses a first loss function; the bounding box regression and the keypoint regression use a second loss function.
In an embodiment, the performing an image privacy processing on the area to be privacy-treated includes:
dividing the area to be private into a preset number of pixel block areas;
calculating the average red pixel value, the average green pixel value and the average blue pixel value of all pixel points in the current pixel block area;
replacing the original red pixel values, original green pixel values and original blue pixel values of all the pixel points in the current pixel block with the average red pixel value, the average green pixel value and the average blue pixel value;
and generating a blurred privacy area until all the pixel block areas are calculated and replaced.
In one embodiment, the generating the face-privacy image includes:
and covering the area to be subjected to privacy processing with the blurred privacy area.
In an embodiment, the training set constructing process includes:
collecting images meeting a preset standard;
and labeling the image based on a preset method to generate a training image with the image label.
In one embodiment, the formatting process of the input image includes:
and preprocessing the input image based on the input format of the face detection model.
In order to achieve the above object, there is also provided a face privacy apparatus including:
the face detection module is used for executing face detection operation on the input image based on the face detection model to generate a region to be subjected to privacy;
and the face privacy module is used for executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
To achieve the above object, there is also provided a computer storage medium having stored thereon a face privacy method program that, when executed by a processor, implements the steps of any of the above methods.
In order to achieve the above object, there is also provided a face privacy apparatus, including a memory, a processor, and a face privacy method program stored in the memory and executable on the processor, where the processor implements any of the above steps of the method when executing the face privacy method program.
One or more technical solutions provided in the embodiments of the present application have at least the following technical effects or advantages:
Based on a face detection model, a face detection operation is performed on the input image to generate an area to be subjected to privacy processing; the face detection model extracts this area from the input image, and the correct extraction of the bounding box during face detection guarantees that the area is generated correctly, providing a correct privacy area for the subsequent privacy processing.
Image privacy processing is then performed on the area to be subjected to privacy processing to generate a face privacy image; the privacy processing blurs the area, so the generated face privacy image protects the patient's privacy during the ultrasound scanning process.
The method thus solves the problem of protecting patient privacy during automated breast ultrasound scanning and achieves the technical effect of blurring the facial area to be subjected to privacy processing, avoiding disclosure of the patient's private information.
Drawings
Fig. 1 is a schematic flowchart of a first embodiment of a face privacy method according to the present application;
FIG. 2 is a diagram illustrating a bounding box detection and privacy result of the face privacy method of the present application;
fig. 3 is a schematic flowchart of a second embodiment of a face privacy method according to the present application;
fig. 4 is a schematic flowchart illustrating a specific implementation step of step S210 in a second embodiment of the face privacy method according to the present application;
fig. 5 is a schematic flowchart illustrating a specific implementation step of step S110 in the first embodiment of the face privacy method according to the present application;
fig. 6 is a schematic diagram of a network structure of the face privacy method according to the present application;
fig. 7 is a schematic flowchart illustrating a specific implementation step of step S120 in the first embodiment of the face privacy method according to the present application;
FIG. 8 is a process of constructing a training set in the face privacy method of the present application;
fig. 9 is a schematic flowchart of a third embodiment of a face privacy method according to the present application;
FIG. 10 is a flow chart of a face privacy method of the present application;
FIG. 11 is a schematic structural diagram of a face privacy apparatus according to the present application;
fig. 12 is a schematic hardware architecture diagram of a face privacy apparatus according to an embodiment of the present application.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
The main solution of the embodiment of the invention is as follows: based on a face detection model, a face detection operation is performed on an input image to generate an area to be subjected to privacy processing; image privacy processing is then performed on that area to generate a face privacy image. This solves the problem of protecting patient privacy during automated breast ultrasound scanning and achieves the technical effect of blurring the facial area to be subjected to privacy processing, avoiding disclosure of the patient's private information.
In order to better understand the technical solution, the technical solution will be described in detail with reference to the drawings and the specific embodiments.
Referring to fig. 1, fig. 1 is a first embodiment of a face privacy method according to the present application, where the method includes:
step S110: and executing face detection operation on the input image based on the face detection model to generate a region to be subjected to privacy protection.
Specifically, in this embodiment, the input image may be an image containing complete or partial face information captured during automated breast ultrasound scanning, an image containing complete or partial face information captured during another medical examination, or an image containing face information requiring privacy processing obtained from another source; the input is not limited here and may be adjusted dynamically according to the corresponding service.
Specifically, in this embodiment, the area to be privatized may be an area including face information that needs to be privatized.
Step S120: and executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
Specifically, in this embodiment, the image privacy processing may be privacy processing of key parts, that is, blurring the private and sensitive parts of an image or video to protect the user's identity, physical signs, and other private information. Common approaches are Gaussian blur, pixel blur, and the like applied to local regions of the image. This embodiment therefore adopts local pixel blurring, i.e., the mosaic technique, but the invention is not limited to mosaics; other image blurring techniques may be adopted and adjusted dynamically according to the corresponding business requirements.
As shown in fig. 2, the rectangular frame in the left image of fig. 2 is a region to be private generated by performing a face detection operation on an input image, and in the right image of fig. 2, an image privacy processing is performed on the region to be private to generate a face privacy image.
In the above embodiment, there are advantageous effects of: based on a face detection model, executing face detection operation on an input image to generate a region to be subjected to privacy; the method comprises the steps that a region to be subjected to privacy is extracted from an input image through a face detection model, a boundary frame is correctly extracted in face detection, the correctness of generation of the region to be subjected to privacy is guaranteed, and a correct privacy region is provided for privacy processing of the region to be subjected to privacy subsequently.
And executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image. Through privacy processing, the area to be subjected to privacy processing is fuzzified, so that the generated face privacy image protects the privacy of the patient in the ultrasonic scanning process.
The embodiment solves the privacy protection problem of the patient in the automatic breast ultrasound scanning process, and achieves the technical effects of fuzzifying the area to be treated by the face to avoid privacy disclosure of the patient.
Referring to fig. 3, fig. 3 is a second embodiment of the face privacy method according to the present application, where before the step of performing a face detection operation on an input image based on a face detection model to generate a region to be privacy-enhanced, the method further includes:
step S210: a face detection model is created.
Specifically, in this embodiment, the face detection model is created based on a simple network structure and parameters, so that the accuracy and precision of face detection are ensured, the training time of the face detection model is reduced, and the detection speed of the face detection model is increased.
In the process of creating the face detection model, the model is trained on the training set until it has fully learned the characteristics of the training set, which ensures the detection performance of the face detection model.
Step S220: and executing face detection operation on the input image based on the face detection model to generate a region to be subjected to privacy protection.
Step S230: and executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
Compared with the first embodiment, the method includes step S210, and other steps are the same as those in the first embodiment and are not described herein again.
In the above embodiment, there are advantageous effects of: the human face detection model established based on the simple network structure and the parameters detects the human face, and improves the training model and the reasoning calculation speed on the basis of ensuring the human face detection effect.
Referring to fig. 4, fig. 4 is a specific implementation step of step S210 in the second embodiment of the face privacy method of the present application, which specifically includes:
step S211: and inputting the images in the training set into the face detection model to generate a face detection result.
Specifically, in this embodiment, the training set is based on more than ten thousand images acquired by the screening device (11,765 images in this training set); naturally, the data grows over time, and the data set is not limited to this size but is adjusted according to the acquired images and the business requirements.
The images in the collected data set are labeled as required; in this embodiment, the positions of the face bounding boxes, the coordinates of the key points, and the face classification are labeled in each image.
Step S212: and comparing the face detection result with the image label and calculating an error.
Specifically, in the back propagation process, parameters of the face detection model are updated according to errors between a face detection result and the image label.
Step S213: and reversely propagating the error, and updating the parameters of the face detection model.
Step S214: and generating a face detection model until the error meets a preset threshold value.
Specifically, the preset threshold is not limited herein, and is dynamically adjusted according to specific requirements.
In the above embodiment, there are advantageous effects: the human face detection model based on the simple network structure and the parameters has faster convergence speed in the training process, and improves the human face detection speed on the basis of ensuring the human face detection effect.
Referring to fig. 5, fig. 5 is a specific implementation step of step S110 in the first embodiment of the face privacy method of the present application, where the performing, by using a face detection model, a face detection operation on an input image to generate a region to be privacy-protected includes:
step S111: and sequentially carrying out a first number of preset structure operations on the input image to generate a first intermediate feature map.
Specifically, in this embodiment, as shown in fig. 6, a schematic diagram of a network structure of a face detection model is shown, where the first number of preset structure operations may be 4 preset structure operations, or other number of preset structure operations, which is not limited herein and may be specifically selected according to specific settings.
Specifically, the preset structure operation is a convolution operation and a preset pooling operation. In this embodiment, the predetermined pooling operation is a maximum pooling operation, and may be an average pooling operation or other pooling operations, which is not limited herein and may be specifically selected according to specific settings.
Specifically, in the present embodiment, as shown in fig. 6, the input image sequentially passes through the first convolution layer, the first maximum pooling layer, the second convolution layer, the second maximum pooling layer, the third convolution layer, the third maximum pooling layer, the fourth convolution layer, and the fourth maximum pooling layer to generate the first intermediate feature map.
For the convolutional layers, different convolution kernels extract different features, producing feature information richer than that of traditional machine learning and outputting a high-dimensional feature map; combined with the PReLU activation function, this increases the non-linearity of the model and accelerates the convergence of model training.
For the max-pooling layers, the feature map is rapidly downsampled with the specified stride, which improves the generalization ability of the model's features.
Step S112: and performing convolution operation on the first intermediate feature map to generate a second intermediate feature map.
Specifically, as shown in fig. 6, the first intermediate feature map is calculated by the fifth convolution layer, and a second intermediate feature map is generated.
Step S113: and inputting the second intermediate feature map into a first full-connection layer to generate a first feature vector.
Specifically, as shown in fig. 6, the second intermediate feature map is input into the first fully connected layer; in this embodiment, the first fully connected layer outputs a 256-dimensional feature vector, but the vector dimension is not limited to 256 and is adapted to the specific model.
Step S114: and inputting the first feature vector into a second full-connection layer for face classification to generate a face classification prediction result.
Specifically, as shown in fig. 6, the first feature vector is input into the second fully connected layer for face classification. In this embodiment, the second fully connected layer outputs 2 score values: the score that the bounding box prediction contains a face, and the score that it does not.
Step S115: and inputting the first feature vector into a third full-connection layer to perform bounding box regression, and generating a bounding box prediction result.
Specifically, as shown in fig. 6, the first feature vector is input into the third fully-connected layer to perform bounding box regression, where in this implementation, the output of the third fully-connected layer corresponding to the bounding box regression is 4 values, which respectively represent the abscissa and ordinate of the upper left corner of the bounding box, and the width and height of the bounding box.
Step S116: and inputting the first feature vector into a fourth full-connection layer to perform key point regression, and generating a key point prediction result.
Specifically, as shown in fig. 6, the first feature vector is input into the fourth fully-connected layer for performing the keypoint regression, wherein in this embodiment, the output of the fourth fully-connected layer corresponding to the keypoint regression is 10 values, which respectively correspond to the abscissa and ordinate of 5 keypoints, and the 5 keypoints represent 2 eyes, 2 corners of the mouth, and 1 tip of the nose.
It should be noted that the number of key points is not limited to the 5 used in this embodiment; it could, for example, be 7 (2 eyes, 2 mouth corners, 2 eyebrows, and 1 nose tip), in which case the fourth fully connected layer for keypoint regression outputs 14 values. The number of key points may be adjusted dynamically as needed.
Step S117: and if the face classification prediction result and the key point prediction result meet preset conditions, taking the boundary frame prediction result as the area to be subjected to privacy protection.
Specifically, in this embodiment, the face classification prediction result and the key point prediction result satisfy a preset condition, which may be that the boundary box prediction result is a face and the boundary box prediction result includes 5 key points, and if the preset condition is satisfied, the corresponding boundary box prediction result is used as the to-be-privatized area.
It should be noted that the preset conditions are not limited to the above conditions, and are dynamically adjusted according to the requirements and the model.
Wherein the preset structure operation is a convolution operation and a preset pooling operation; the face classification uses a first loss function; the bounding box regression and the keypoint regression use a second loss function.
Specifically, in this embodiment, the face classification uses a first loss function, which may be the focal loss (Focal Loss); the bounding box regression and the keypoint regression use a second loss function, which may be the mean squared error loss. The first and second loss functions are not limited to these choices and are adapted to the model requirements.
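As an illustration of the two loss functions named above, the sketch below pairs a focal loss for the face/non-face classification with mean squared error for the bounding-box and keypoint regressions; the weighting factors and the alpha and gamma values are assumptions, not values given in the patent.

```python
# Hedged sketch of the combined loss: focal loss for classification,
# mean squared error for bounding-box and keypoint regression.
import torch
import torch.nn.functional as F

def focal_loss(cls_logits, cls_targets, alpha=0.25, gamma=2.0):
    # cls_logits: (N, 2) scores; cls_targets: (N,) with 1 = face, 0 = non-face
    ce = F.cross_entropy(cls_logits, cls_targets, reduction="none")
    p_t = torch.exp(-ce)                       # probability assigned to the true class
    return (alpha * (1.0 - p_t) ** gamma * ce).mean()

def detection_loss(preds, labels, w_box=1.0, w_kpt=1.0):
    cls_logits, box_pred, kpt_pred = preds     # shapes (N, 2), (N, 4), (N, 10)
    cls_true, box_true, kpt_true = labels
    loss_cls = focal_loss(cls_logits, cls_true)        # first loss function
    loss_box = F.mse_loss(box_pred, box_true)          # second loss function (bounding box)
    loss_kpt = F.mse_loss(kpt_pred, kpt_true)          # second loss function (keypoints)
    return loss_cls + w_box * loss_box + w_kpt * loss_kpt
```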
Face detection in general scenes must handle large scale variation, with faces spanning from a few pixels to thousands of pixels, and therefore requires a complex model. In the setting of this invention, however, the scene is fixed, the camera and object distance are relatively fixed, and the faces in the acquired data are incomplete and occupy a small proportion of the image, but are relatively consistent. Considering the model complexity, data complexity, and robustness required in this setting, the designed model therefore comprises 5 convolutional layers, 4 max-pooling layers, and 4 fully connected layers, of which 3 fully connected layers respectively perform the face classification task, the face bounding box regression task, and the face keypoint regression task.
Referring to table 1, table 1 shows specific parameters of the face detection model in the present application.
TABLE 1 (the layer-by-layer parameters of the face detection model are reproduced only as an image in the original publication)
In the above embodiment, there are beneficial effects of: according to the characteristics of the background of the invention, the input image is detected based on the simple face detection model network structure and parameters, and the face detection speed is increased on the basis of ensuring the accuracy of the extraction of the area to be privatized.
Referring to fig. 7, fig. 7 is a specific implementation step of step S120 in the first embodiment of the face privacy method of the present application, where performing image privacy processing on the area to be privacy-treated includes:
step S121: and dividing the area to be private into a preset number of pixel block areas.
Specifically, the region R to be subjected to privacy processing is obtained from the position of the face bounding box; the width and the height of the bounding box are each divided into B equal parts, producing B × B pixel blocks. In this example B is set to 10, so the preset number is 100; the value of B and the preset number are not limited to these values and are adjusted dynamically as required.
Step S122: and calculating the average red pixel value, the average green pixel value and the average blue pixel value of all pixel points in the current pixel block area.
Specifically, the color value of each pixel in the RGB-mode image is determined by three values, i.e., R (red pixel value), G (green pixel value), and B (blue pixel value), each of which ranges from 0 to 255.
It should be further noted that, when calculating the average red pixel value in this embodiment, the red pixel values of all pixel points in the pixel block region are first obtained and summed, and the sum is then divided by the total number of pixel points in the region; the average green pixel value and the average blue pixel value are calculated in the same way and are not described again here.
Step S123: and replacing the original red pixel values, the original green pixel values and the original blue pixel values of all the pixel points in the current pixel block with the average red pixel value, the average green pixel value and the average blue pixel value.
Specifically, after step S123 is executed, the red pixel values, the green pixel values, and the blue pixel values of all the pixel points in the pixel block are the same, and the color values corresponding to all the pixel values are the same.
Step S124: and generating a blurred privacy area until all the pixel block areas are calculated and replaced.
Specifically, steps S122 and S123 are executed in a loop; once all pixel block regions have been calculated and replaced, the area to be subjected to privacy processing consists of the preset number of blurred pixel blocks, and the blurred privacy area is thereby generated.
Specifically, in the present embodiment, the image local area privacy processing may be implemented based on an OpenCV design.
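A minimal NumPy sketch of steps S121 to S124, compatible with the OpenCV-based implementation mentioned above, is given below; the image is assumed to be an H × W × 3 array and the box to be the detected face bounding box in (x, y, width, height) form.

```python
# Pixel-block blurring sketch for steps S121-S124 (mosaic over the face region).
import numpy as np

def pixelate_region(image, box, B=10):
    x, y, w, h = box                               # region R from the face bounding box
    region = image[y:y + h, x:x + w].astype(np.float32)
    ys = np.linspace(0, h, B + 1, dtype=int)       # split the height into B equal parts
    xs = np.linspace(0, w, B + 1, dtype=int)       # split the width into B equal parts
    for i in range(B):
        for j in range(B):
            block = region[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            if block.size == 0:
                continue
            # S122/S123: replace every pixel's R, G, B values with the block averages
            block[...] = block.reshape(-1, 3).mean(axis=0)
    image[y:y + h, x:x + w] = region.astype(image.dtype)   # cover the original region (S124)
    return image
```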
In the above embodiment, there are advantageous effects of: the implementation step of executing the image privacy processing on the area to be subjected to privacy processing is specifically given, so that the correctness of fuzzification processing on the area to be subjected to privacy processing is ensured, and the privacy area of the patient is protected.
In one embodiment, the generating the face-privacy image includes:
and covering the area to be subjected to privacy processing with the blurred privacy area.
Specifically, the blurred privacy area generated by the privacy processing is overlaid onto the area to be subjected to privacy processing, thereby generating the face privacy image.
In the above embodiment, there are advantageous effects of: the generation of the face privacy image ensures that the privacy area of the patient is protected, and the privacy information of the patient is prevented from being revealed.
Referring to fig. 8, fig. 8 is a process for constructing a training set in the face privacy method of the present application, where the process for constructing the training set includes:
step S310: and collecting the image which meets the preset standard.
Specifically, an image that meets the preset standard may be an image containing the user's face captured by a screening device, such as a breast ultrasound scanner or another examination device that may involve other private patient information.
Step S320: and labeling the image based on a preset method to generate a training image with the image label.
Specifically, all collected images meeting the preset standard are run through an open-source RetinaFace face detection model to obtain detection results; the detection results are manually reviewed to screen out qualified image data (in this embodiment, 11,765 images were acquired according to the preset standard, of which 11,698 were retained as qualified); the detection results of the qualified images are then converted into the format required by the face detection model of this embodiment (face classification, key points, and bounding box), generating training images with image labels (the training set).
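As a purely illustrative sketch of the conversion step, the function below maps one reviewed detection result into the label format used here (face classification, bounding box, and 5 keypoints); the field names of the detection dictionary are hypothetical and do not necessarily match the output of the open-source RetinaFace implementation.

```python
# Hypothetical conversion of a reviewed detection result into a training label.
def to_training_label(detection):
    x1, y1, x2, y2 = detection["box"]         # assumed corner-format bounding box
    return {
        "face": 1,                            # face classification label
        "bbox": [x1, y1, x2 - x1, y2 - y1],   # x, y, width, height of the bounding box
        "keypoints": [c for point in detection["landmarks"] for c in point],  # 5 (x, y) pairs
    }
```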
In the above embodiment, there are advantageous effects of: the construction quality of the training set directly influences the detection result of the face detection model, and the correctness of the construction of the training set ensures the correctness of the face detection result, so that the correct acquisition of the area to be subjected to privacy is ensured.
In one embodiment, the formatting process of the input image includes:
and preprocessing the input image based on the input format of the face detection model.
Specifically, in this embodiment, the input format of the face detection model may be 256 × 256 × 3, and then image preprocessing such as resizing and normalization is performed on the image acquired by the screening device, so as to complete formatting of the input image.
In the above embodiment, there are advantageous effects of: and formatting the input image to ensure that the input image can be successfully detected in the face detection model.
Referring to fig. 9, fig. 9 is a third embodiment of a face privacy method according to the present application, where the method includes:
step S410: a face detection model is created.
Step S420: and executing face detection operation on the input image based on the face detection model to generate a region to be subjected to privacy protection.
Step S430: and executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
Step S440: the face-private image is displayed.
Specifically, the bounding box coordinates are converted back to the scale of the original input image, image privacy processing is performed on the area to be subjected to privacy processing, and image or video data whose face information and sensitive information have been privacy-processed are presented at the output end of the screening device.
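A small sketch of this coordinate conversion, assuming the box was predicted on the 256 × 256 network input and follows the (x, y, width, height) format used above, could look as follows.

```python
# Convert a bounding box from the 256 x 256 network input back to the original image scale.
def box_to_original_scale(box, original_width, original_height, input_size=256):
    x, y, w, h = box
    sx = original_width / float(input_size)
    sy = original_height / float(input_size)
    return (int(round(x * sx)), int(round(y * sy)),
            int(round(w * sx)), int(round(h * sy)))
```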
Compared with the second embodiment, step S440 is added; the other steps are the same as those in the second embodiment and are not described again here.
Referring to fig. 10, fig. 10 is a flowchart of a face privacy method according to the present application.
In the above embodiment, there are advantageous effects of: the face privacy image is displayed, and the privacy of the patient is protected on the basis of ensuring that the examination of the patient is smoothly carried out.
To achieve the above object, there is also provided a face privacy apparatus 20 including:
the face detection module is used for executing face detection operation on the input image based on the face detection model to generate a region to be subjected to privacy;
and the face privacy module is used for executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
The apparatus shown in fig. 11 includes a face detection module 21 and a face privacy module 22, and the apparatus may perform the methods of the embodiments shown in fig. 1, fig. 3, fig. 4, fig. 5, fig. 7, fig. 8 and fig. 9, and parts not described in detail in this embodiment may refer to the related descriptions of the embodiments shown in fig. 1, fig. 3, fig. 4, fig. 5, fig. 7, fig. 8 and fig. 9. The implementation process and technical effect of the technical solution are described in the embodiments shown in fig. 1, fig. 3, fig. 4, fig. 5, fig. 7, fig. 8, and fig. 9, and are not described again here.
To achieve the above object, there is also provided a computer storage medium having stored thereon a face privacy method program that, when executed by a processor, implements the steps of any of the above methods.
In order to achieve the above object, there is also provided a face privacy apparatus, including a memory, a processor, and a face privacy method program stored in the memory and executable on the processor, where the processor implements any of the above steps of the method when executing the face privacy method program.
The present application relates to a face privacy apparatus 010, including as shown in fig. 12: at least one processor 012, memory 011.
The processor 012 may be an integrated circuit chip having signal processing capability. In implementation, the steps of the method may be performed by hardware integrated logic circuits or instructions in the form of software in the processor 012. The processor 012 may be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The software module may be located in ram, flash memory, rom, prom, or eprom, registers, etc. storage media as is well known in the art. The storage medium is located in the memory 011, and the processor 012 reads the information in the memory 011 and completes the steps of the method in combination with the hardware.
It is to be understood that the memory 011 in embodiments of the present invention can be either volatile memory or nonvolatile memory, or can include both volatile and nonvolatile memory. The non-volatile Memory may be a Read Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash Memory. Volatile Memory can be Random Access Memory (RAM), which acts as external cache Memory. By way of illustration and not limitation, many forms of RAM are available, such as Static random access memory (Static RAM, SRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic random access memory (Synchronous DRAM, SDRAM), Double data rate Synchronous Dynamic random access memory (ddr DRAM), Enhanced Synchronous SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 011 of the systems and methods described in connection with the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It should be noted that in the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.

Claims (10)

1. A face privacy method, the method comprising:
based on a face detection model, executing face detection operation on an input image to generate a region to be subjected to privacy;
and executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
2. The method for privacy of human face according to claim 1, wherein before the step of performing human face detection operation on the input image based on the human face detection model to generate the region to be privacy, the method further comprises:
creating a face detection model, specifically comprising:
inputting the images in the training set into the face detection model to generate a face detection result;
comparing the face detection result with an image label and calculating an error;
reversely propagating the error, and updating parameters of the face detection model;
and generating a face detection model until the error meets a preset threshold value.
3. The method for privacy of human face according to claim 1, wherein the generating the area to be privacy-enhanced by performing the human face detection operation on the input image by using the human face detection model comprises:
sequentially carrying out a first number of preset structure operations on the input image to generate a first intermediate feature map;
performing convolution operation on the first intermediate feature map to generate a second intermediate feature map;
inputting the second intermediate feature map into a first full-connection layer to generate a first feature vector;
inputting the first feature vector into a second full-connection layer to carry out face classification, and generating a face classification prediction result;
inputting the first feature vector into a third full-connection layer to perform bounding box regression, and generating a bounding box prediction result;
inputting the first feature vector into a fourth full-connection layer to carry out key point regression, and generating a key point prediction result;
if the face classification prediction result and the key point prediction result meet preset conditions, taking the boundary frame prediction result as the area to be subjected to privacy;
wherein the preset structure operation is a convolution operation and a preset pooling operation; the face classification uses a first loss function; the bounding box regression and the keypoint regression use a second loss function.
4. The method for privacy of human faces according to claim 1, wherein the performing of the image privacy processing on the area to be privacy-treated comprises:
dividing the area to be private into a preset number of pixel block areas;
calculating the average red pixel value, the average green pixel value and the average blue pixel value of all pixel points in the current pixel block area;
replacing the original red pixel value, the original green pixel value and the original blue pixel value of all the pixel points in the current pixel block with the average red pixel value, the average green pixel value and the average blue pixel value;
and generating a blurred privacy area until all the pixel block areas are calculated and replaced.
5. The method of claim 4, wherein the generating the face-privatized image comprises:
and covering the area to be subjected to privacy processing with the blurred privacy area.
6. The method for privacy of human faces according to claim 2, wherein the training set construction process comprises:
collecting images meeting a preset standard;
and labeling the image based on a preset method to generate a training image with the image label.
7. The method for privacy of human faces according to claim 1, wherein the formatting process of the input image comprises:
and preprocessing the input image based on the input format of the face detection model.
8. A face privacy apparatus, comprising:
the face detection module is used for executing face detection operation on the input image based on the face detection model to generate a region to be subjected to privacy;
and the face privacy module is used for executing image privacy processing on the area to be subjected to privacy processing to generate a face privacy image.
9. A computer storage medium, characterized in that the computer storage medium has stored thereon a face privacy method program that, when executed by a processor, implements the steps of the face privacy method of any one of claims 1-7.
10. A face privacy device comprising a memory, a processor and a face privacy method program stored in the memory and executable on the processor, wherein the processor implements the steps of the face privacy method of any one of claims 1-7 when executing the face privacy method program.
CN202110167982.3A, filed 2021-02-05 (priority 2021-02-05): Face privacy method, device and apparatus and computer storage medium. Status: Pending. Published as CN112836653A.

Priority Applications (1)

Application Number: CN202110167982.3A; Priority Date: 2021-02-05; Filing Date: 2021-02-05; Title: Face privacy method, device and apparatus and computer storage medium

Applications Claiming Priority (1)

Application Number: CN202110167982.3A; Priority Date: 2021-02-05; Filing Date: 2021-02-05; Title: Face privacy method, device and apparatus and computer storage medium

Publications (1)

Publication Number: CN112836653A; Publication Date: 2021-05-25

Family

ID=75932654

Family Applications (1)

Application Number: CN202110167982.3A; Title: Face privacy method, device and apparatus and computer storage medium; Priority Date: 2021-02-05; Filing Date: 2021-02-05; Status: Pending

Country Status (1)

Country: CN (1 publication): CN112836653A

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223101A (en) * 2021-05-28 2021-08-06 支付宝(杭州)信息技术有限公司 Image processing method, device and equipment based on privacy protection
CN113283377A (en) * 2021-06-10 2021-08-20 重庆师范大学 Face privacy protection method, system, medium and electronic terminal
CN113313026A (en) * 2021-05-28 2021-08-27 支付宝(杭州)信息技术有限公司 Face recognition interaction method, device and equipment based on privacy protection
CN113488143A (en) * 2021-06-28 2021-10-08 上海联影智能医疗科技有限公司 Medical scanning method, medical scanning apparatus, and computer-readable storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018041293A (en) * 2016-09-08 2018-03-15 キヤノン株式会社 Image processing apparatus, image processing method, and program
CN107851192A (en) * 2015-05-13 2018-03-27 北京市商汤科技开发有限公司 For detecting the apparatus and method of face part and face
CN110135195A (en) * 2019-05-21 2019-08-16 司马大大(北京)智能系统有限公司 Method for secret protection, device, equipment and storage medium
CN110866490A (en) * 2019-11-13 2020-03-06 复旦大学 Face detection method and device based on multitask learning
CN111783749A (en) * 2020-08-12 2020-10-16 成都佳华物链云科技有限公司 Face detection method and device, electronic equipment and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107851192A (en) * 2015-05-13 2018-03-27 北京市商汤科技开发有限公司 For detecting the apparatus and method of face part and face
JP2018041293A (en) * 2016-09-08 2018-03-15 キヤノン株式会社 Image processing apparatus, image processing method, and program
CN110135195A (en) * 2019-05-21 2019-08-16 司马大大(北京)智能系统有限公司 Method for secret protection, device, equipment and storage medium
CN110866490A (en) * 2019-11-13 2020-03-06 复旦大学 Face detection method and device based on multitask learning
CN111783749A (en) * 2020-08-12 2020-10-16 成都佳华物链云科技有限公司 Face detection method and device, electronic equipment and storage medium

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113223101A (en) * 2021-05-28 2021-08-06 支付宝(杭州)信息技术有限公司 Image processing method, device and equipment based on privacy protection
CN113313026A (en) * 2021-05-28 2021-08-27 支付宝(杭州)信息技术有限公司 Face recognition interaction method, device and equipment based on privacy protection
CN113223101B (en) * 2021-05-28 2022-12-09 支付宝(杭州)信息技术有限公司 Image processing method, device and equipment based on privacy protection
CN113283377A (en) * 2021-06-10 2021-08-20 重庆师范大学 Face privacy protection method, system, medium and electronic terminal
CN113283377B (en) * 2021-06-10 2022-11-11 重庆师范大学 Face privacy protection method, system, medium and electronic terminal
CN113488143A (en) * 2021-06-28 2021-10-08 上海联影智能医疗科技有限公司 Medical scanning method, medical scanning apparatus, and computer-readable storage medium

Similar Documents

Publication Publication Date Title
Hsu et al. Ratio-and-scale-aware YOLO for pedestrian detection
CN110378381B (en) Object detection method, device and computer storage medium
CN108229490B (en) Key point detection method, neural network training method, device and electronic equipment
CN109325954B (en) Image segmentation method and device and electronic equipment
US11830230B2 (en) Living body detection method based on facial recognition, and electronic device and storage medium
EP3937481A1 (en) Image display method and device
CN112836653A (en) Face privacy method, device and apparatus and computer storage medium
US20220198230A1 (en) Auxiliary detection method and image recognition method for rib fractures based on deep learning
CN111160269A (en) Face key point detection method and device
JP2018092610A (en) Image recognition device, image recognition method, and program
JP6688277B2 (en) Program, learning processing method, learning model, data structure, learning device, and object recognition device
EP4006773A1 (en) Pedestrian detection method, apparatus, computer-readable storage medium and chip
US20220148291A1 (en) Image classification method and apparatus, and image classification model training method and apparatus
CN110909618B (en) Method and device for identifying identity of pet
US20220157046A1 (en) Image Classification Method And Apparatus
CN109815931B (en) Method, device, equipment and storage medium for identifying video object
CN111914748B (en) Face recognition method, device, electronic equipment and computer readable storage medium
CN111754531A (en) Image instance segmentation method and device
CN112396050B (en) Image processing method, device and storage medium
CN115631112B (en) Building contour correction method and device based on deep learning
CN111209873A (en) High-precision face key point positioning method and system based on deep learning
CN113793301A (en) Training method of fundus image analysis model based on dense convolution network model
CN111353325A (en) Key point detection model training method and device
CN112991281A (en) Visual detection method, system, electronic device and medium
CN113012030A (en) Image splicing method, device and equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination