CN117171767A - Desensitization processing method, device, equipment and medium for image data - Google Patents

Desensitization processing method, device, equipment and medium for image data

Info

Publication number
CN117171767A
CN117171767A (application CN202310966731.0A)
Authority
CN
China
Prior art keywords
face
image
detection frame
face detection
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310966731.0A
Other languages
Chinese (zh)
Inventor
张正欣
李含锐
肖春亮
王豪
何坤
张宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhou Lvmeng Chengdu Technology Co ltd
Nsfocus Technologies Group Co Ltd
Original Assignee
Shenzhou Lvmeng Chengdu Technology Co ltd
Nsfocus Technologies Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhou Lvmeng Chengdu Technology Co ltd, Nsfocus Technologies Group Co Ltd filed Critical Shenzhou Lvmeng Chengdu Technology Co ltd
Priority to CN202310966731.0A
Publication of CN117171767A
Legal status: Pending

Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to the technical field of network security, and in particular to a desensitization processing method, device, equipment and medium for image data, used to accurately desensitize the area enclosed by face key points in image data, improve the usability of the desensitized data, improve the accuracy of face detection and face key point detection, and enhance the security of the desensitized data. The method comprises the following steps: acquiring an image to be processed; performing face detection in the image to be processed using a first type of algorithm and, if a face is detected, determining a face detection frame containing the face in the image to be processed; performing face key point recognition in the face detection frame using a second type of algorithm and determining the face key points in the face detection frame; and determining the positions of the face key points in the image to be processed and desensitizing the area enclosed by the face key points in the image to be processed.

Description

Desensitization processing method, device, equipment and medium for image data
Technical Field
The present application relates to the field of network security technologies, and in particular, to a method, an apparatus, a device, and a medium for desensitizing image data.
Background
With the development of big data, artificial intelligence and the Internet of Things, data has become an indispensable part of people's lives, and the security of important information is of great significance to individuals and even to nations.
The Internet of Vehicles is built on the in-vehicle network, the inter-vehicle network and the vehicle-mounted mobile Internet. It is a large system network that, following agreed communication protocols and data-interaction standards, enables wireless communication and information exchange between vehicles and X (where X refers to vehicles, roads, pedestrians, the Internet and so on). As an integrated network capable of intelligent traffic management, intelligent dynamic information services and intelligent vehicle control, it is a typical application of Internet of Things technology in the field of traffic systems.
Internet of Vehicles data mainly comes from information collected from vehicle terminals and users. While a vehicle is in use, driver information, information outside the vehicle and the like can be collected by cameras, sensors and other devices mounted on the vehicle and uploaded to the cloud (for example, a cloud server). Among Internet of Vehicles data, face data (including image data and video data) is mainly used for identity recognition, authentication and similar purposes, and is the most sensitive private data of a user. Such face data is protected mainly by desensitizing it, where desensitization means hiding or blurring important data so that it can no longer be recognized.
In the related art, traditional face recognition mostly relies on geometric-feature algorithms, which convert the coordinate information of facial organs such as the nose, eyes and mouth into corresponding face features and then compare those features by similarity. However, this approach requires the positions of the facial organs to be located manually, which is labor-intensive, and geometric-feature algorithms struggle to recognize faces correctly under changes in facial expression, age and other factors, making them impractical.
With the development of technology, face recognition methods based on neural networks have been proposed in recent years. Combined with desensitization techniques, a neural network algorithm can be used to recognize the faces in an image or video and then perform simple color-block replacement on the recognized result (the face block) to complete the desensitization.
The desensitization methods in the related art thus desensitize the entire face region in an image or video on the basis of face recognition; the granularity is coarse and the processing is rough.
Disclosure of Invention
The application aims to provide a desensitization processing method, device, equipment and medium for image data, used to accurately desensitize the area enclosed by face key points in image data and improve the usability of the desensitized data, while the combination of a first type of algorithm and a second type of algorithm improves the accuracy of face detection and face key point detection and enhances the security of the desensitized data.
In a first aspect, the present application provides a desensitization processing method of image data, including:
acquiring an image to be processed;
performing face detection in the image to be processed by using a first type of algorithm, and if a face is detected, determining a face detection frame containing the face in the image to be processed;
performing face key point recognition in the face detection frame by using a second type algorithm, and determining the face key points in the face detection frame, wherein the detection effect of the first type algorithm on the face frame in the image is superior to that of the second type algorithm, and the recognition effect of the second type algorithm on the face key points in the face detection frame is superior to that of the first type algorithm;
and determining the position of the face key point in the image to be processed, and performing desensitization treatment on the area surrounded by the face key point in the image to be processed.
In a possible implementation manner, the performing face key point recognition in the face detection frame by using a second type of algorithm, and determining the face key point in the face detection frame includes:
and when the face detection frame is determined to contain a face based on the characteristics of the face detection frame, carrying out face key point recognition in the face detection frame by utilizing a second type algorithm, and determining the face key points in the face detection frame.
In a possible implementation manner, when the face detection frame is determined to meet the following conditions based on the characteristics of the face detection frame, the face detection frame is determined to contain a face:
the ratio of the area of the face detection frame to the area of the image to be processed is smaller than a preset proportion threshold value; or the similarity between the feature vector of the face detection frame and at least one feature vector in a pre-stored feature vector library is larger than a preset similarity threshold.
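The two screening conditions above can be sketched in code. This is a minimal illustration, not the patent's implementation: the function names, the example thresholds (`max_area_ratio`, `min_similarity`) and the use of cosine similarity as the similarity measure are assumptions for demonstration; the patent only requires an area-ratio check and a similarity check against a pre-stored feature vector library.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors (an assumed metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def frame_contains_face(box, image_size, feature, library,
                        max_area_ratio=0.5, min_similarity=0.6):
    """Decide whether a detection frame really contains a face:
    either its area ratio to the whole image is below a preset
    threshold, or its feature vector is sufficiently similar to at
    least one vector in the pre-stored library."""
    x1, y1, x2, y2 = box
    w, h = image_size
    area_ok = (x2 - x1) * (y2 - y1) / (w * h) < max_area_ratio
    sim_ok = any(cosine_similarity(feature, ref) > min_similarity
                 for ref in library)
    return area_ok or sim_ok
```

Either condition alone is sufficient, matching the "or" in the claim above.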
In one possible implementation manner, the performing face key point recognition in the face detection frame by using a second type of algorithm, and determining the face key point in the face detection frame includes:
extracting a target image of the area where the face detection frame is located in the image to be processed, and scaling the target image to obtain a target image of a preset size;
performing face key point recognition in the target image of the preset size by using the second type of algorithm, and determining the face key points in the target image;
the determining the position of the face key point in the image to be processed includes:
and determining the position of the face key point in the image to be processed according to the position information of each pixel point in the pre-recorded face detection frame in the image to be processed and the pixel point of the face key point in the target image.
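The coordinate bookkeeping described above — recording where the detection frame sits in the original image and undoing the scaling of the crop — can be sketched as follows. The function name and argument layout are illustrative, not from the patent:

```python
def map_keypoints_back(keypoints, box, target_size):
    """Map face key points detected in the scaled target image back to
    the coordinate system of the original image to be processed.

    keypoints   -- (x, y) pixel positions in the scaled crop
    box         -- (x1, y1, x2, y2) corners of the face detection frame,
                   recorded when the crop was taken from the original image
    target_size -- (width, height) that the crop was scaled to
    """
    x1, y1, x2, y2 = box
    tw, th = target_size
    sx = (x2 - x1) / tw   # horizontal scale factor back to the original
    sy = (y2 - y1) / th   # vertical scale factor back to the original
    return [(x * sx + x1, y * sy + y1) for x, y in keypoints]
```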
In one possible implementation, the first type of algorithm is one of the following: the RetinaFace recognition algorithm, the HOG face localization algorithm, the BlazeFace recognition algorithm, and the PyramidBox detection algorithm.
In one possible implementation, the second type of algorithm is one of the following: the PFLD algorithm and the YOLO algorithm.
In a possible implementation manner, the desensitizing the area surrounded by the key points of the face in the image to be processed includes:
masking and/or Gaussian blur is applied to the area enclosed by some or all of the face key points in the image to be processed.
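As one illustration of this step (not the patent's implementation), the region enclosed by the key points can be treated as a polygon and only the pixels inside it masked or blurred. The even-odd point-in-polygon test and the 3x3 mean filter standing in for a Gaussian blur are simplifications; a production pipeline would more likely use an image library's polygon fill and Gaussian blur:

```python
def point_in_polygon(x, y, poly):
    """Even-odd rule: count polygon edges crossing a ray to the right."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y < y1) != (y < y2):  # edge spans this row (y1 != y2 here)
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def desensitize_region(image, poly, mode="mask", fill=0):
    """Mask (solid fill) or blur (3x3 neighbourhood mean, a crude
    stand-in for a Gaussian blur) only the pixels enclosed by the
    key-point polygon; everything outside is left untouched."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not point_in_polygon(x, y, poly):
                continue
            if mode == "mask":
                out[y][x] = fill
            else:
                nb = [image[j][i]
                      for j in range(max(0, y - 1), min(h, y + 2))
                      for i in range(max(0, x - 1), min(w, x + 2))]
                out[y][x] = sum(nb) / len(nb)
    return out
```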
In a second aspect, an embodiment of the present application provides a desensitization processing apparatus for image data, including:
an acquisition unit, configured to acquire an image to be processed;
the first processing unit is used for carrying out face detection in the image to be processed by utilizing a first type of algorithm, and if the face is detected, a face detection frame containing the face is determined in the image to be processed;
the second processing unit is used for recognizing the key points of the human face in the human face detection frame by using a second type algorithm and determining the key points of the human face in the human face detection frame, wherein the detection effect of the first type algorithm on the human face frame in the image is better than that of the second type algorithm, and the recognition effect of the second type algorithm on the key points of the human face in the human face detection frame is better than that of the first type algorithm;
and the third processing unit is used for determining the position of the face key point in the image to be processed and carrying out desensitization processing on the area surrounded by the face key point in the image to be processed.
In a third aspect, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, the method according to the first aspect is implemented.
In a fourth aspect, an embodiment of the present application provides a computer readable storage medium, in which a computer program is stored, which when executed by a processor, implements the method according to the first aspect.
The application has the following beneficial effects:
according to the embodiment of the application, after the image to be processed is acquired, the first type of algorithm is firstly utilized to perform face detection in the image to be processed, if the face is detected, the face detection frame containing the face is determined in the image to be processed, then the second type of algorithm is utilized to perform face key point recognition in the face detection frame, the face key points in the face detection frame are determined, then the positions of the face key points in the image to be processed are determined, the area surrounded by the face key points in the image to be processed is subjected to desensitization processing, compared with the prior art, the face key points are further recognized in the face detection frame after the face detection frame is recognized by simple face part, so that the face key points are combined, the area surrounded by the face key points in the image data is accurately subjected to desensitization processing, the usability of the data after desensitization is improved, meanwhile, the face in the image is recognized by the first type of algorithm with good face key point detection effect, the face detection algorithm is further improved, and the face data in the second type of algorithm is combined with the face key point recognition algorithm.
Additional features and advantages of the application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the application. The objectives and other advantages of the application will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are needed in the embodiments of the present application will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is an application scenario of a desensitization processing method for image data according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of a method for desensitizing image data according to an embodiment of the application;
FIG. 3 is a schematic diagram of determining a face detection frame in an image to be processed according to an embodiment of the present application;
FIG. 4 is a schematic diagram of determining key points of a face in a face detection frame according to an embodiment of the present application;
FIG. 5 is a schematic diagram of desensitizing an area surrounded by key points of a face according to an embodiment of the present application;
FIG. 6A is a schematic diagram of Gaussian blur processing for an area surrounded by all face key points according to an embodiment of the present application;
FIG. 6B is a schematic diagram of Gaussian blur processing for a region surrounded by part of face key points according to an embodiment of the present application;
FIG. 7 is a schematic diagram of another embodiment of determining a face detection frame in an image to be processed;
FIG. 8 is a schematic flow chart of a desensitizing process for video according to an embodiment of the present application;
fig. 9 is a schematic structural diagram of a desensitizing apparatus for image data according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be described in further detail below with reference to the accompanying drawings, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The following briefly describes the design concept of the embodiment of the present application:
With the development of big data, artificial intelligence and the Internet of Things, data has become an indispensable part of people's lives, and the security of important information is of great significance to individuals and even to nations.
The Internet of Vehicles is built on the in-vehicle network, the inter-vehicle network and the vehicle-mounted mobile Internet. It is a large system network that, following agreed communication protocols and data-interaction standards, enables wireless communication and information exchange between vehicles and X (where X refers to vehicles, roads, pedestrians, the Internet and so on). As an integrated network capable of intelligent traffic management, intelligent dynamic information services and intelligent vehicle control, it is a typical application of Internet of Things technology in the field of traffic systems.
Internet of Vehicles data mainly comes from information collected from vehicle terminals and users. While a vehicle is in use, driver information, information outside the vehicle and the like can be collected by cameras, sensors and other devices mounted on the vehicle and uploaded to the cloud (for example, a cloud server). Among Internet of Vehicles data, face data (including image data and video data) is mainly used for identity recognition, authentication and similar purposes, and is the most sensitive private data of a user. Such face data is protected mainly by desensitizing it, where desensitization means hiding or blurring important data so that it can no longer be recognized.
In the related art, traditional face recognition mostly relies on geometric-feature algorithms, which convert the coordinate information of facial organs such as the nose, eyes and mouth into corresponding face features and then compare those features by similarity. However, this approach requires the positions of the facial organs to be located manually, which is labor-intensive, and geometric-feature algorithms struggle to recognize faces correctly under changes in facial expression, age and other factors, making them impractical.
With the development of technology, face recognition methods based on neural networks have been proposed in recent years. Combined with desensitization techniques, a neural network algorithm can be used to recognize the faces in an image or video and then perform simple color-block replacement on the recognized result (the face block) to complete the desensitization.
The desensitization methods in the related art thus desensitize the entire face region in an image or video on the basis of face recognition; the granularity is coarse and the processing is rough.
In view of this, the embodiments of the present application provide a method, apparatus, device, and medium for desensitizing image data. In the embodiment of the application, after an image to be processed is acquired, face detection is firstly performed in the image to be processed by using a first type algorithm, if a face is detected, a face detection frame containing the face is determined in the image to be processed, then face key point identification is performed in the face detection frame by using a second type algorithm, the face key point in the face detection frame is determined, then the position of the face key point in the image to be processed is determined, and the area surrounded by the face key point in the image to be processed is subjected to desensitization processing.
The preferred embodiments of the present application will be described below with reference to the accompanying drawings of the specification, it being understood that the preferred embodiments described herein are for illustration and explanation only, and not for limitation of the present application, and embodiments of the present application and features of the embodiments may be combined with each other without conflict.
Fig. 1 is a schematic diagram of an application scenario in an embodiment of the present application. The scenario includes a camera 10, a terminal device 11 and a server 12, wherein:
the camera 10 is used for acquiring image data or video data, wherein the image data or the video data may contain sensitive data such as a face of a user.
The terminal device 11 is configured to receive the image data or the video data collected by the camera 10, perform operations such as identity authentication and authentication on a user by using the image data or the video data, and upload the image data or the video data to the server 12.
In practical applications, in order to protect user privacy, the terminal device 11 may desensitize the image data or video data before uploading it to the server 12. Taking image data as an example, the terminal device 11 may perform face detection in the image to be processed using a first type of algorithm; if a face is detected, determine a face detection frame containing the face in the image to be processed; then perform face key point recognition in the face detection frame using a second type of algorithm to determine the face key points; then determine the positions of the face key points in the image to be processed and desensitize the area enclosed by them; and finally upload the desensitized image data to the server 12.
Of course, the above-described desensitization of image data or video data may also be performed in the server 12 instead of the terminal device 11, for example when the device performance and processing capability of the terminal device 11 are limited; the embodiment of the present application is not limited in this respect.
In an alternative embodiment, the communication network is a wired network or a wireless network.
It should be noted that fig. 1 does not limit the number of terminal devices 11 and servers 12 or the manner in which they communicate. When there are multiple servers 12, they may be organized as a blockchain, with each server acting as a node on the chain. A server 12 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing cloud services, cloud databases, big data, artificial intelligence platforms and other basic cloud computing services; the embodiment of the present application places no particular limitation on this.
In order to further explain the technical solutions provided by the embodiments of the present application, they are described in detail below with reference to the accompanying drawings and specific implementations. Although the embodiments of the present application provide the method steps shown in the following embodiments or figures, the method may include more or fewer steps based on routine or non-inventive labor. For steps between which there is logically no necessary causal relationship, the execution order is not limited to that provided by the embodiments of the present application. When the method is executed in an actual process or apparatus, its steps may be performed sequentially or in parallel in accordance with the methods shown in the embodiments or the drawings.
Fig. 2 shows a flowchart of a method for desensitizing image data according to an embodiment of the present application. As shown in fig. 2, the method may include the steps of:
s201, acquiring an image to be processed.
The image to be processed refers to an image which needs to be subjected to desensitization processing, and the image to be processed can be an image acquired by an image acquisition device in real time or can be a pre-stored image. The image to be processed may be an image frame in a video or a picture in a preset format, where the preset format may include, but is not limited to, a JPEG format, a PNG format, a BMP format, or a GIF format, which is not limited in the embodiment of the present application.
In the implementation, the image to be processed may be an image acquired by a camera or a sensor in real time, or may be an image stored in a terminal device or a server, or may be an image obtained by frame splitting in a video acquired by the camera or the sensor in real time, or may be an image obtained by frame splitting in a video stored in the terminal device or the server, which is not limited in the embodiment of the present application.
It should be noted that if the image to be processed is obtained by splitting a video into frames, then after each frame has been desensitized, the desensitized frames need to be recombined into a video; the embodiment of the present application does not limit the specific manner of frame splitting and recombination.
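A minimal sketch of that frame-split/recombine flow, with the codec details abstracted away (the `desensitize_frame` callable and the in-memory frame list are stand-ins; a real pipeline would decode from and re-encode to a video container):

```python
def desensitize_video(frames, desensitize_frame):
    """Split a video into frames, desensitize every frame, and recombine
    the processed frames in their original order.  `frames` stands in for
    the decoded frame sequence; in practice the frames would come from a
    video decoder and the returned list would be re-encoded into a video."""
    processed = []
    for frame in frames:                 # frame-by-frame desensitization
        processed.append(desensitize_frame(frame))
    return processed                     # recombined, order preserved
```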
S202, performing face detection in an image to be processed by using a first type of algorithm, and if a face is detected, determining a face detection frame containing the face in the image to be processed.
In specific implementation, the first type of algorithm may be any one of the RetinaFace face recognition algorithm, the HOG face localization algorithm, the BlazeFace face recognition algorithm, and the PyramidBox detection algorithm, which is not limited in the embodiments of the present application. In the following embodiments, the RetinaFace algorithm is taken as an example of the first type of algorithm.
The RetinaFace recognition algorithm has two backbone feature-extraction networks, MobileNet-0.25 and ResNet-50. ResNet-50 offers higher precision but places higher performance requirements on the processing device, so it is suitable for terminal devices or servers with stronger processing capability. Mobile terminals have less memory and usually have requirements on processing time, so in practical applications the MobileNet-0.25 backbone can be adopted: MobileNet-0.25 is a lightweight deep neural network designed for mobile terminals and embedded devices.
In the embodiment of the application, when the RetinaFace recognition algorithm is used to detect a face in the image to be processed, if the image contains a face, a face detection frame containing the face is determined in the image. In practical applications, the output of the RetinaFace recognition algorithm can be two coordinate values, namely the coordinates of the top-left and bottom-right corners of the face detection frame (with a coordinate system constructed by taking one vertex of the image to be processed as the origin). Once these two corner coordinates are obtained, the rectangular face detection frame is determined.
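Given the two corner coordinates described above, extracting the rectangular detection frame from the image is a simple slice. A small illustrative helper (not from the patent):

```python
def crop_detection_frame(image, box):
    """Cut out the rectangular face detection frame given the two corner
    coordinates output by the detector: (x1, y1) is the top-left corner
    and (x2, y2) the bottom-right corner, with the origin at the top-left
    vertex of the image to be processed."""
    x1, y1, x2, y2 = box
    return [row[x1:x2] for row in image[y1:y2]]
```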
In one example, as shown in fig. 3, which is a rear-seat image of a vehicle, two face detection frames, face detection frame 30 and face detection frame 31, are obtained when face detection is performed using the first type of algorithm.
It should be noted that the HOG face localization algorithm, the BlazeFace recognition algorithm and the PyramidBox detection algorithm can all produce a face detection frame, and any other algorithm that can produce a face detection frame is also applicable to this step; the step is not limited to the algorithms listed in the embodiments of the present application. In addition, the detected face detection frame is not limited to a rectangle and may also be another polygon or an irregular shape.
S203, performing face key point recognition in the face detection frame by using a second type algorithm, and determining the face key points in the face detection frame, wherein the detection effect of the first type algorithm on the face frame in the image is superior to that of the second type algorithm, and the recognition effect of the second type algorithm on the face key points in the face detection frame is superior to that of the first type algorithm.
Wherein the second type of algorithm may employ one of the following algorithms: the PFLD algorithm and the YOLO algorithm. Of course, other algorithms capable of detecting face key points are also applicable to the embodiment of the present application.
It should be noted that the statement that the first type of algorithm detects face frames in an image better than the second type of algorithm can be verified by comparison on the same image: when face detection is performed on the same image to be processed, the face detection frame recognized by the first type of algorithm is better than that recognized by the second type of algorithm, or the accuracy or precision of the face detection frame recognized by the first type of algorithm is higher than that of the second type of algorithm.
Similarly, the statement that the second type of algorithm recognizes face key points in a face detection frame better than the first type of algorithm can be verified by comparison on the same face detection frame: when face key points are recognized in the same face detection frame, the key points recognized by the second type of algorithm are better than those recognized by the first type of algorithm; for example, the second type of algorithm recognizes more face key points in the same face detection frame.
In practical applications, the YOLO algorithm can be used on both mobile and fixed terminals. When it is used on a mobile terminal, performance may be affected to some extent; in that case the model can be pruned and quantized by adjusting model parameters and the like, and, provided the model can still run normally, the precision, the required number of key points and so on can be reduced appropriately while still obtaining a face key point detection result.
In an example, taking the PFLD algorithm: in a specific implementation, a PFLD neural network model is trained with sample pictures annotated with 98 face key points to obtain a model supporting 98-point face key point recognition, and this model is then used to detect the face key points of the face in the face detection frame. As shown in fig. 4, 98 face key points can be obtained in the face detection frame.
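A hedged sketch of how such a 98-point model might be wired in. The `model` callable is a hypothetical stand-in for the trained PFLD network, assumed here to return key points normalised to [0, 1]; real PFLD implementations differ in input size and output format:

```python
def detect_keypoints(crop, model, n_points=98):
    """Run a 98-point key-point model (e.g. a PFLD network trained on
    98-point annotations) on a face crop.  `model` is a hypothetical
    stand-in for the trained network, assumed to return n_points (x, y)
    pairs normalised to [0, 1]; they are rescaled here to pixel
    coordinates of the crop."""
    h, w = len(crop), len(crop[0])
    pts = model(crop)                   # normalised (x, y) pairs
    assert len(pts) == n_points
    return [(x * w, y * h) for x, y in pts]
```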
In the embodiment of the application, after the face detection frame is obtained by the Retinaface recognition algorithm, the Pfld algorithm detects face key points only within that frame. This reduces the computation of the Pfld neural network model and increases the detection speed of the face key points; because the Pfld neural network model searches only inside the face detection frame, the detection range of the face key points is greatly narrowed and their detection accuracy is improved.
Meanwhile, the detection effect of the Retinaface recognition algorithm on face frames in an image is superior to that of the Pfld algorithm, while the face key point recognition effect of the Pfld algorithm is superior to that of the Retinaface recognition algorithm. The embodiment of the application combines the advantages of the two, improving the accuracy of both face detection and face key point detection and further enhancing the accuracy and security of face data desensitization processing.
S204, determining the positions of the face key points in the image to be processed, and performing desensitization processing on the area surrounded by the face key points in the image to be processed.
After the face key points in the face detection frame are obtained, their positions in the image to be processed are determined and the area they surround is desensitized; other parts within the face detection frame are left unprocessed, which improves the usability of the image data after desensitization.
In one example, taking the face detection frame shown in fig. 4 as an example, in the embodiment of the present application, during desensitization processing, as shown in fig. 5, only the area surrounded by the detected face key points may be desensitized, while other areas of the face are left unprocessed.
In order to make the image data after desensitization more visually acceptable and improve user experience, the embodiment of the application may, when desensitizing the area surrounded by the face key points, perform masking and/or Gaussian blur processing on the area surrounded by part or all of the face key points in the image to be processed.
Specifically, masking and/or Gaussian blur desensitization may be applied to regions surrounded by some of the face key points (for example, the region surrounded by the key points around the eyes, the region around the nose, or the region around the mouth), or to the region surrounded by all of the face key points.
In one example, taking the face detection frame shown in fig. 4 as an example, during desensitization, as shown in fig. 6A, Gaussian blur is applied to the area surrounded by all detected face key points, achieving the face desensitization effect.
In another example, taking the face detection frame shown in fig. 4 as an example, during desensitization, as shown in fig. 6B, Gaussian blur is applied to regions surrounded by some of the detected face key points, that is, the eye regions surrounded by some of the key points around the eyes and the mouth region surrounded by some of the key points around the mouth, achieving the face desensitization effect.
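The region-only desensitization described above can be sketched as follows. This is a minimal NumPy-only illustration, not the patent's implementation: a real system would typically rasterize the key point outline with cv2.fillPoly and blur with cv2.GaussianBlur; here a half-plane test and a box blur stand in, assuming the key points form a convex outline listed in clockwise screen order.

```python
import numpy as np

def polygon_mask(h, w, pts):
    # Rasterize a convex key point outline into a boolean mask using
    # half-plane tests; pts is a sequence of (x, y) vertices given in
    # clockwise order as seen on screen (y axis pointing down).
    ys, xs = np.mgrid[0:h, 0:w]
    mask = np.ones((h, w), dtype=bool)
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        # Keep only pixels on the interior side of each directed edge.
        mask &= (x1 - x0) * (ys - y0) - (y1 - y0) * (xs - x0) >= 0
    return mask

def blur_region(img, mask, k=7):
    # Crude k-by-k box blur applied only inside the mask; pixels outside
    # the key point region are copied through unchanged, which is the
    # point of desensitizing only the enclosed area.
    pad = k // 2
    padded = np.pad(img.astype(float), ((pad, pad), (pad, pad), (0, 0)),
                    mode="edge")
    blurred = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= k * k
    out = img.copy()
    out[mask] = blurred[mask].astype(img.dtype)
    return out
```

Partial desensitization as in fig. 6B would simply build one mask per key point subset (eyes, mouth) and OR them together before calling `blur_region`.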
In the embodiment of the application, before face key point recognition is performed in the face detection frame with the second type of algorithm, whether the frame actually contains a face can first be determined based on the features of the face detection frame. Only when it does is face key point recognition performed in it with the second type of algorithm, which reduces the false desensitization rate.
It should be noted that in practical application a plurality of face detection frames may be detected in the image to be processed, and for each of them it is necessary to determine, based on its features, whether it contains a face.
In one example, as shown in fig. 7, assume two face detection frames, face detection frame 70 and face detection frame 71, are detected in the image to be processed. Before face key points are identified in these frames with the second type of algorithm, each frame must be checked for a face. If it is determined that face detection frame 70 contains a face and face detection frame 71 does not, face key point recognition with the second type of algorithm is performed only in face detection frame 70, and face detection frame 71 is skipped.
In a specific implementation, when judging whether a face detection frame contains a face, the frame may be determined to contain a face when, based on its features, it satisfies one or more of the following conditions:
Condition 1: the ratio of the area of the face detection frame to the area of the image to be processed is smaller than a preset ratio threshold.
The preset ratio threshold can be set according to the scene in which the image was acquired or according to empirical values; the embodiment of the application does not limit this. For example, for an exterior scene image shot by a vehicle-mounted camera, the preset ratio threshold may be 0.2.
In practical application, as shown in fig. 7, a suitable preset ratio threshold may be selected such that the ratio of the area of face detection frame 70 to the image area is smaller than the threshold while the ratio for face detection frame 71 is larger, so that face detection frame 70 is determined to contain a face and face detection frame 71 is not.
Condition 2: the similarity between the feature vector of the face detection frame and at least one feature vector in a pre-stored feature vector library is greater than a preset similarity threshold.
It should be noted that the pre-stored feature vector library may include feature vectors of face detection frames from multiple scenes, with at least one face detection frame per scene. The preset similarity threshold may be set according to empirical values, for example 0.9 or 0.85.
In practical application, as shown in fig. 7, the similarity between the feature vector of face detection frame 70 and at least one feature vector in the pre-stored feature vector library may be greater than the preset similarity threshold, while the similarity between the feature vector of face detection frame 71 and every feature vector in the library stays below it, so that face detection frame 70 is determined to contain a face and face detection frame 71 is not.
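The two conditions above can be sketched as a small filter function. This is an illustrative sketch under assumed conventions (boxes as `(x1, y1, x2, y2)` tuples, cosine similarity for condition 2, thresholds 0.2 and 0.9 taken from the examples in the text); none of these names come from the patent's implementation.

```python
import numpy as np

def contains_face(box, image_shape, box_vec, vec_library,
                  ratio_thresh=0.2, sim_thresh=0.9):
    # Condition 1: the box area relative to the whole image must be
    # below the preset ratio threshold.
    x1, y1, x2, y2 = box
    h, w = image_shape[:2]
    area_ok = ((x2 - x1) * (y2 - y1)) / (h * w) < ratio_thresh
    # Condition 2: cosine similarity between the box's feature vector
    # and at least one vector in the pre-stored library must exceed
    # the preset similarity threshold.
    v = box_vec / np.linalg.norm(box_vec)
    lib = vec_library / np.linalg.norm(vec_library, axis=1, keepdims=True)
    sim_ok = float((lib @ v).max()) > sim_thresh
    # The text allows "one or more" of the conditions to hold.
    return area_ok or sim_ok
```

A frame that fails both tests (like frame 71 in fig. 7) would be treated as a misrecognition and skipped before key point detection.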
In practical application, when the second type of algorithm is used to recognize face key points in the face detection frame, the frame can first be scaled to a preset size in order to improve the accuracy of the key point recognition.
In a specific implementation, for a detected face detection frame, the position of each of its pixels in the image to be processed is recorded, and the target image of the area where the frame is located is extracted from the image to be processed. The target image is scaled to the preset size, and the second type of algorithm then performs face key point recognition on the scaled target image to determine the face key points in it. Afterwards, the positions of these key points in the image to be processed are computed from the recorded pixel positions and from the pixels on which the key points fall in the target image, and finally the area surrounded by the face key points in the image to be processed is desensitized.
The preset size is a size suitable for face key point detection by the second type of algorithm and may be set according to the specific algorithm used, for example 112×112 or 98×98.
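The crop-resize-detect-map-back step can be sketched as follows. This is a minimal illustration, not the patent's code: nearest-neighbour sampling stands in for a proper resize (e.g. cv2.resize), the keypoint model itself is omitted, and the box convention `(x1, y1, x2, y2)` is an assumption.

```python
import numpy as np

def crop_and_resize_nn(img, box, size=112):
    # Extract the target image for the detection box and resize it to
    # the preset size using nearest-neighbour index sampling.
    x1, y1, x2, y2 = box
    crop = img[y1:y2, x1:x2]
    h, w = crop.shape[:2]
    ys = (np.arange(size) * h / size).astype(int)
    xs = (np.arange(size) * w / size).astype(int)
    return crop[ys][:, xs]

def map_keypoints_back(kps, box, size=112):
    # Map key points found in the resized target image back to
    # coordinates in the original image to be processed, using the
    # recorded box position and the scale factor of the resize.
    x1, y1, x2, y2 = box
    sx = (x2 - x1) / size
    sy = (y2 - y1) / size
    return [(x1 + kx * sx, y1 + ky * sy) for kx, ky in kps]
```

The key point model (e.g. a Pfld network) would run on the output of `crop_and_resize_nn`, and its predictions would be fed to `map_keypoints_back` before the masking/blur step.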
In order to facilitate understanding of the scheme of the embodiment of the present application, the flow of the desensitization processing method for image data is described in detail below, taking as an example an image to be processed obtained by splitting frames from a video, with the Retinaface recognition algorithm as the first type of algorithm and the Pfld algorithm as the second type of algorithm.
As shown in fig. 8, a specific flow of a desensitization processing method for image data provided by an embodiment of the present application includes:
S801, acquiring a video which needs to be subjected to desensitization processing.
S802, frame splitting processing is carried out on the video to obtain a plurality of images.
In a specific implementation, the frame splitting of the video may be done with tools such as the Opencv library or FFMPEG; the embodiment of the present application does not limit this.
S803, face detection is carried out on each image by utilizing a Retinaface recognition algorithm, and a face detection frame is marked in the image containing the face.
S804, for each image containing face detection frames, based on the features of the face detection frames, judging whether one or more face detection frames contained in the image contain faces, if at least one face detection frame in the image contains faces, executing step S805, otherwise, if all face detection frames in the image do not contain faces, executing step S810.
In this step, if the image includes a plurality of face detection frames, and it is determined that a certain face detection frame does not include a face based on the features of the face detection frames, the face detection frame may be deleted from the image.
Specifically, whether a face detection frame in the image contains a face is judged based on the features of the face detection frame: the frame is determined to contain a face when it satisfies one or more of the following conditions. Condition 1: the ratio of the area of the face detection frame to the area of the whole image is smaller than a preset ratio threshold. Condition 2: the similarity between the feature vector of the face detection frame and at least one feature vector in a pre-stored feature vector library is greater than a preset similarity threshold.
S805, for each face detection frame, recording the position information of each pixel point in the face detection frame in the image to which the pixel point belongs.
S806, respectively extracting target images of the areas where the face detection frames are located, and performing scaling processing on the target images to obtain target images with preset sizes.
S807, performing face key point recognition in a target image with a preset size by using a Pfld algorithm to obtain face key points in each face detection frame.
S808, determining the positions of the face key points of each face detection frame in the image containing that frame, according to the recorded position of each pixel of the frame within the image and the pixels on which the key points fall within the frame.
S809, performing masking and/or Gaussian blur processing, i.e. desensitization processing, on the area surrounded by part or all of the face key points in the image containing the face detection frame.
S810, if none of the face detection frames contained in the image contains a face, the frames in the image are determined to be misrecognitions and the image does not need desensitization processing. Of course, if no face is detected in the image at all, that is, no face detection frame is determined, the image likewise does not need desensitization processing.
S811, converting the desensitized images and the images that did not need desensitization back into a video according to the original frame order obtained during frame splitting, yielding the desensitized video.
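Steps S803 through S810 amount to a per-frame pipeline, which can be sketched as the orchestration below. All function names here are illustrative stand-ins passed in as callables (the Retinaface detector, the face filter of S804, the Pfld key point model, and the masking/blur step), not the patent's actual implementation; frame splitting and re-encoding (S802/S811) are assumed to happen outside this function.

```python
def desensitize_video_frames(frames, detect_faces, contains_face,
                             find_keypoints, desensitize):
    # S803: detect face boxes per frame; S804: drop misrecognized boxes;
    # S805-S809: find key points in each remaining box and blur the
    # enclosed region; S810: frames with no valid box pass through.
    out = []
    for frame in frames:
        boxes = [b for b in detect_faces(frame) if contains_face(b, frame)]
        for box in boxes:
            frame = desensitize(frame, find_keypoints(frame, box))
        out.append(frame)  # original frame order is preserved for S811
    return out
```

Because the callables are injected, the same skeleton works whether the detector runs on a fixed terminal or a pruned/quantized model on a mobile terminal.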
Based on the same inventive concept, an embodiment of the present application provides a desensitization processing apparatus for image data, as shown in fig. 9, including:
an acquisition unit 901 for acquiring an image to be processed.
The first processing unit 902 is configured to perform face detection in an image to be processed by using a first type of algorithm, and if a face is detected, determine a face detection frame including the face in the image to be processed.
The second processing unit 903 is configured to perform face key point recognition in the face detection frame by using a second type of algorithm and determine the face key points in the face detection frame, where the detection effect of the first type of algorithm on face frames in the image is better than that of the second type of algorithm, and the recognition effect of the second type of algorithm on face key points in the face detection frame is better than that of the first type of algorithm.
And the third processing unit 904 is configured to determine a position of the face key point in the image to be processed, and perform desensitization processing on an area surrounded by the face key point in the image to be processed.
In a possible implementation manner, the second processing unit 903 is specifically configured to:
when the face detection frame is determined to contain a face based on the characteristics of the face detection frame, the face key points are identified in the face detection frame by utilizing a second class algorithm, and the face key points are determined in the face detection frame.
In a possible implementation manner, the second processing unit 903 is specifically configured to:
When the face detection frame is determined to meet the following conditions based on the characteristics of the face detection frame, the face detection frame is determined to contain a face:
the ratio of the area of the face detection frame to the area of the image to be processed is smaller than a preset proportion threshold value; or the similarity between the feature vector of the face detection frame and at least one feature vector in a pre-stored feature vector library is larger than a preset similarity threshold.
In a possible implementation manner, the second processing unit 903 is specifically configured to:
extracting a target image of an area where a face detection frame is located in an image to be processed, and performing scaling treatment on the target image to obtain a target image with a preset size;
performing face key point recognition in a target image with a preset size by using a second type algorithm, and determining face key points in the target image;
the third processing unit 904 is specifically configured to:
and determining the position of the face key point in the image to be processed according to the position information of each pixel point in the pre-recorded face detection frame in the image to be processed and the pixel point of the face key point in the target image.
In one possible implementation, the first type of algorithm is one of the following: retinaface recognition algorithm, hoG face positioning algorithm, blazeface recognition algorithm, and Pyramidbox detection algorithm.
In one possible implementation, the second type of algorithm is one of the following algorithms: pfld algorithm and Yolo algorithm.
In a possible implementation manner, the third processing unit 904 is specifically configured to: and (3) masking and/or Gaussian blur processing is carried out on the area surrounded by part or all of the face key points in the image to be processed.
Based on the same inventive concept, an embodiment of the present application provides an electronic device, including a memory and a processor, where the memory stores a computer program executable on the processor, and when the computer program is executed by the processor, implements the desensitization processing method of any one of the image data in the above embodiment.
Based on the same inventive concept, an embodiment of the present application provides a computer-readable storage medium, which when instructions in the storage medium are executed by a processor, enables the processor to perform the desensitization processing method of any one of the image data of the above embodiments.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. A method of desensitizing image data, comprising:
acquiring an image to be processed;
performing face detection in the image to be processed by using a first type of algorithm, and if a face is detected, determining a face detection frame containing the face in the image to be processed;
performing face key point recognition in the face detection frame by using a second type algorithm, and determining the face key points in the face detection frame, wherein the detection effect of the first type algorithm on the face frame in the image is superior to that of the second type algorithm, and the recognition effect of the second type algorithm on the face key points in the face detection frame is superior to that of the first type algorithm;
And determining the position of the face key point in the image to be processed, and performing desensitization treatment on the area surrounded by the face key point in the image to be processed.
2. The method of claim 1, wherein the performing face keypoint identification in the face detection frame using the second class of algorithms, determining the face keypoints in the face detection frame comprises:
and when the face detection frame is determined to contain a face based on the characteristics of the face detection frame, carrying out face key point recognition in the face detection frame by utilizing a second type algorithm, and determining the face key points in the face detection frame.
3. The method according to claim 2, wherein the face detection frame is determined to include a face when the face detection frame is determined to satisfy the following conditions based on the features of the face detection frame:
the ratio of the area of the face detection frame to the area of the image to be processed is smaller than a preset proportion threshold value; or the similarity between the feature vector of the face detection frame and at least one feature vector in a pre-stored feature vector library is larger than a preset similarity threshold.
4. The method of claim 1, wherein using the second class of algorithms to identify face keypoints in the face detection box, determining face keypoints in the face detection box comprises:
extracting a target image of an area where a face detection frame is located in the image to be processed, and performing scaling treatment on the target image to obtain a target image with a preset size;
performing face key point recognition in the target image with the preset size by using a second type algorithm, and determining face key points in the target image;
the determining the position of the face key point in the image to be processed includes:
and determining the position of the face key point in the image to be processed according to the position information of each pixel point in the pre-recorded face detection frame in the image to be processed and the pixel point of the face key point in the target image.
5. The method of claim 1, wherein the first type of algorithm is one of the following algorithms: retinaface recognition algorithm, hoG face positioning algorithm, blazeface recognition algorithm, and Pyramidbox detection algorithm.
6. The method of claim 1, wherein the second class of algorithms is one of the following algorithms: pfld algorithm and Yolo algorithm.
7. The method according to any one of claims 1 to 6, wherein the desensitizing the region surrounded by the face keypoints in the image to be processed includes:
and masking and/or Gaussian blur processing is carried out on the area surrounded by part or all of the face key points in the image to be processed.
8. A desensitizing processing apparatus of image data, characterized by comprising:
an acquisition unit configured to acquire an image to be processed;
the first processing unit is used for carrying out face detection in the image to be processed by utilizing a first type of algorithm, and if the face is detected, a face detection frame containing the face is determined in the image to be processed;
the second processing unit is used for recognizing the key points of the human face in the human face detection frame by using a second type algorithm and determining the key points of the human face in the human face detection frame, wherein the detection effect of the first type algorithm on the human face frame in the image is better than that of the second type algorithm, and the recognition effect of the second type algorithm on the key points of the human face in the human face detection frame is better than that of the first type algorithm;
And the third processing unit is used for determining the position of the face key point in the image to be processed and carrying out desensitization processing on the area surrounded by the face key point in the image to be processed.
9. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program executable on the processor, the computer program, when executed by the processor, implementing the method of any of claims 1-7.
10. A computer-readable storage medium having a computer program stored therein, characterized in that: the computer program, when executed by a processor, implements the method of any of claims 1-7.
CN202310966731.0A 2023-08-02 2023-08-02 Desensitization processing method, device, equipment and medium for image data Pending CN117171767A (en)

Publications (1)

Publication Number CN117171767A, Publication Date 2023-12-05

Family ID 88936551


Legal Events

PB01 Publication
SE01 Entry into force of request for substantive examination