CN116129496A - Image shielding method and device, computer equipment and storage medium - Google Patents


Info

Publication number
CN116129496A
CN116129496A (application CN202211704338.6A)
Authority
CN
China
Prior art keywords
image
shielding
target face
region
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211704338.6A
Other languages
Chinese (zh)
Inventor
刘畅
朱树磊
殷俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202211704338.6A priority Critical patent/CN116129496A/en
Publication of CN116129496A publication Critical patent/CN116129496A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172Classification, e.g. identification
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761Proximity, similarity or dissimilarity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)
  • Collating Specific Patterns (AREA)

Abstract

The invention discloses an image shielding method and device, computer equipment and a storage medium, relating to the technical field of computer vision. The method comprises the following steps: extracting features of a target face image to obtain a first feature vector; dividing the target face image into regions to obtain a plurality of regions in the target face image; selecting one region as a shielding region; shielding the shielding region in the target face image to obtain a shielding image; extracting features of the shielding image to obtain a second feature vector; determining the degree of similarity between the first feature vector and the second feature vector; and when the degree of similarity is higher than a preset degree, taking the shielding image as the output image. In this way, the problem of low image shielding precision can be solved.

Description

Image shielding method and device, computer equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to an image shielding method, an image shielding device, a computer device, and a storage medium.
Background
The target face image contains a large amount of private information, so target face information has extremely high privacy protection requirements. To protect the target's privacy, the common practice is to add a mask manually in post-processing. However, in scenarios such as live rebroadcasting, this approach yields low image shielding precision.
Disclosure of Invention
The invention mainly solves the technical problem of providing an image shielding method, an image shielding device, computer equipment and a storage medium, which can solve the problem of low image shielding precision.
In order to solve the technical problems, the invention adopts a technical scheme that: there is provided an image occlusion method, the method comprising: extracting features of the target face image to obtain a first feature vector; dividing the target face image into areas to obtain a plurality of areas in the target face image; selecting a region as a shielding region; shielding the shielding region in the target face image to obtain a shielding image; extracting features of the shielding image to obtain a second feature vector; determining the similarity degree of the first feature vector and the second feature vector; and when the similarity degree is higher than the preset degree, taking the shielding image as an output image.
In one embodiment, selecting an area as the occlusion area includes: ranking a plurality of regions in the target face image; and taking each area as a shielding area in sequence according to the area level.
In an embodiment, determining the degree of similarity of the first feature vector and the second feature vector includes: and when the similarity is lower than the preset degree, returning to execute the selection of the area as the shielding area until the similarity of the second characteristic vector and the first characteristic vector corresponding to the shielding area is higher than the preset degree.
In an embodiment, the area division is performed on the target face image to obtain a plurality of areas in the target face image, including: and dividing the target face image into areas according to the parts to obtain a plurality of areas in the target face image.
In one embodiment, feature extraction of the target face image includes: extracting features of the target face image to obtain a first feature vector of the target face image and target face key points;
the method for dividing the region of the target face image according to the part comprises the following steps: and dividing the target face image according to the parts based on the target face key points to obtain a plurality of areas in the target face image.
In one embodiment, the plurality of regions in the target face image include a target eye region, a target mouth region, a target nose bridge region, a target chin region, and a target cheek region.
In one embodiment, determining the degree of similarity between the first feature vector and the second feature vector includes: calculating a spatial distance between the first feature vector and the second feature vector, the spatial distance being used to characterize the degree of similarity;
and taking the shielding image as the output image when the degree of similarity is higher than the preset degree includes: if the spatial distance is smaller than a preset threshold, the degree of similarity is higher than the preset degree, and the shielding image is taken as the output image.
In order to solve the above problems, another technical solution adopted in the present application is to provide an image shielding device, the device comprising: an extraction module for extracting features of the target face image to obtain a first feature vector, and for extracting features of the shielding image to obtain a second feature vector; a classification module for dividing the target face image into regions to obtain a plurality of regions in the target face image; a shielding module for selecting one region as a shielding region and shielding it in the target face image to obtain a shielding image; a calculation module for determining the degree of similarity between the first feature vector and the second feature vector; and an output module for taking the shielding image as the output image when the degree of similarity is higher than the preset degree.
In order to solve the above problem, another technical solution adopted in the present application is to provide a computer device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the image occlusion method of any one of the above.
To solve the above problem, another aspect adopted in the present application is to provide a computer readable storage medium, where a computer program is stored, and the computer program is executed by a processor to implement an image occlusion method as any one of the above.
The beneficial effects of the invention are as follows: unlike the prior art, the method divides the target face image into a plurality of regions; selects one of the regions for the shielding operation to obtain a shielding image; determines the degree of similarity between the feature vector extracted from the target face image (i.e., the unshielded image) and the feature vector extracted from the shielding image; and outputs the shielding image when the degree of similarity is higher than a preset degree. The shielding region is determined through region division and region selection, and the shielding operation is applied to the target face image in a fine, controllable manner. In this way, the method and device of the invention can solve the problem of low image shielding precision.
Drawings
In order to more clearly describe the technical solutions in the embodiments of the present application, the drawings required for the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are merely some embodiments of the present application, and that other drawings may be obtained from them without inventive effort by a person of ordinary skill in the art. Wherein:
FIG. 1 is a flow chart of an embodiment of an image occlusion method of the present invention;
FIG. 2 is a schematic diagram of a feature extraction model of a target facial image of the present invention;
FIG. 3 is a schematic diagram of the present invention for region division of a target face image;
FIG. 4 is a flow chart of an embodiment of an image occlusion method of the present invention;
FIG. 5 is a schematic view of an embodiment of an image shielding device according to the present invention;
FIG. 6 is a schematic diagram of a computer device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the specific embodiments described herein are for purposes of illustration only and are not limiting. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present application are shown in the drawings. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present disclosure.
Referring to fig. 1, fig. 1 is a flowchart of an embodiment of an image shielding method according to the present invention, where the method includes:
step 101: and extracting the characteristics of the target face image to obtain a first characteristic vector.
Optionally, feature extraction may be performed on the target face image using a feature extraction model to obtain a first feature vector of the target face image. The feature extraction model may be an RNN, a CNN, or a ResNet; the model type is not limited here. In other embodiments, texture features or gradient features of the target face image may be extracted and used as the first feature vector of the target face image.
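As a concrete illustration of the texture-feature variant just mentioned, the following toy sketch extracts a normalized intensity histogram as the feature vector. It is a stand-in for the patent's feature extraction model (in practice an RNN, CNN or ResNet would be used); the function name and bin count are illustrative assumptions.

```python
def extract_features(image, bins=16):
    """Toy feature extractor: a normalized grayscale-intensity histogram.

    Stands in for the patent's feature extraction model; any extractor
    that maps an image to a fixed-length vector fits steps 101 and 104.
    `image` is a 2-D list of grayscale values in [0, 255].
    """
    hist = [0] * bins
    total = 0
    for row in image:
        for px in row:
            hist[min(px * bins // 256, bins - 1)] += 1
            total += 1
    # Normalize so the vector is independent of image size.
    return [h / total for h in hist]
```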
Furthermore, before step 101, the target face image may be preprocessed so that its size matches the input size of the feature extraction model. For example, the target face image may be resized.
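A minimal sketch of this resizing step, assuming nearest-neighbor sampling (the patent does not fix the interpolation method); `resize_nearest` is a hypothetical helper name.

```python
def resize_nearest(image, out_h, out_w):
    """Resize a 2-D grayscale image (list of lists) to (out_h, out_w)
    by nearest-neighbor sampling, so it matches the feature extraction
    model's input size before step 101."""
    in_h, in_w = len(image), len(image[0])
    return [[image[y * in_h // out_h][x * in_w // out_w]
             for x in range(out_w)]
            for y in range(out_h)]
```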
The target face image may be acquired before feature extraction is performed on it. In one implementation, the target face image may be acquired from a video, so that feature extraction is performed on the acquired target face image in step 101. Alternatively, the target face image may be cropped directly from a single image.
The target face image may be, but is not limited to, a human face image or a non-human face image. A non-human face image may be an image of an animal face, such as a pig face or a dog face.
Step 102: and carrying out region division on the target face image to obtain a plurality of regions in the target face image.
After the target face image is acquired, the target face image may be divided into regions to obtain a plurality of regions in the target face image.
In one implementation, the regions may be partitioned according to the location to determine candidate regions for subsequent occlusion steps.
In an example, in step 102, the target face image may be identified by identifying the target face portion to determine the region where each target face portion is located, so that the region where each target face portion is located is used as a candidate region for the subsequent masking step, so as to obtain a plurality of regions in the target face image.
In another example, the target face image may be divided into regions by region based on the target face key points, to obtain multiple regions in the target face image, i.e., to obtain candidate regions for subsequent occlusion steps.
In this example, in step 101, the target face key points in the target face image may be determined while performing feature extraction on the target face image. Specifically, as shown in fig. 2, fig. 2 is a schematic structural diagram of the feature extraction model for the target face image of the present invention. The target face image may be input into the feature extraction model; the backbone network of the feature extraction model extracts the target face and key point features, and the output layer of the feature extraction model then learns class positions, key point positions and the like from the feature representations and supervision information of the previous layer. In this way, the first feature vector (bbox) and the target face key points (keypoints) of the target face image may be extracted using the feature extraction model.
Taking a human face image as an example: the face image is input into a preset face key point detection model, which outputs the coordinate information of the detected key points; these coordinate positions are taken as the face key point information, i.e., the coordinate points corresponding to the eye corners, nose center point, mouth corners, chin and face contour. Specifically, the face key point detection model detects the positions of key points such as the eye corners, nose center point, mouth corners, chin and face contour in the face image. The positions of the face key points are recorded in an XY coordinate system: features are extracted through a convolutional neural network to obtain a set of feature maps, the features are formed into a feature vector through a fully connected layer, and the face key point positions are finally obtained by regression. In addition, the face key point detection model can be trained using face images annotated with the corresponding face key points as the training set.
As shown in fig. 3, the region division of the target face image according to the region based on the target face key points may be represented as: and determining the region of each part of the target face based on the key points on the target face image, and taking the region of each part of the target face as a candidate region of the subsequent shielding step.
In the above-described scheme of region division according to the parts, the target face image may be region-divided according to the set division requirement, to obtain at least one face part region. For example, if the set dividing requirement is to divide the eye region, the mouth region and the nose bridge region, the target eye region, the mouth region and the nose bridge region may be obtained in step 102. For another example, if the set dividing requirement is to divide the eye region, the mouth region, the chin region, the cheek region and the nose bridge region, the target eye region a, the target mouth region B, the target nose bridge region C, the target chin region D and the target cheek region E can be obtained in step 102.
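The part-based division described above can be sketched as mapping each group of detected key points to a padded bounding box. The part names, the box representation and the padding margin are illustrative assumptions; the patent only requires that each face part yields a candidate region.

```python
def regions_from_keypoints(keypoints, pad=10):
    """Derive candidate regions ((x0, y0, x1, y1) boxes) from face key
    points grouped by part, as in step 102. `keypoints` maps a part
    name (e.g. "eye", "mouth") to a list of (x, y) coordinates."""
    regions = {}
    for part, pts in keypoints.items():
        xs = [p[0] for p in pts]
        ys = [p[1] for p in pts]
        # Pad the tight bounding box so the whole part is covered.
        regions[part] = (min(xs) - pad, min(ys) - pad,
                         max(xs) + pad, max(ys) + pad)
    return regions
```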
In another implementation manner, the target face image may be divided in a horizontal direction or a vertical direction, so as to obtain a plurality of regions in the target face image.
Step 103: selecting one area as a shielding area.
After determining a plurality of areas in the target face image based on the steps, one area can be selected as an occlusion area, so that an output occlusion image can be determined based on the occlusion area.
In one implementation, an area may be randomly selected as the occlusion area from among a plurality of areas in the target face image.
In another implementation, a certain region may be designated as an occlusion region as desired. Specifically, an area of the preset face part may be taken as an occlusion area.
In yet another implementation, multiple regions in the target face image may be ranked; the occlusion regions may then be selected in accordance with a rank order of the plurality of regions in the target face image.
In an example, each region may be sequentially taken as an occlusion region in order of region level from high to low. Under the condition that the level sequence represents the importance degree, the operation of shielding the unnecessary area is reduced in the mode, and the time cost is saved.
In another example, each region may be sequentially taken as an occlusion region in order of region level from low to high.
Step 104: shielding the shielding region in the target face image to obtain a shielding image; and extracting features of the shielding image to obtain a second feature vector.
Specifically, after a plurality of areas in the target face image are obtained through division, one area is selected as an occlusion area, and occlusion operation is carried out on the area to obtain an occlusion image after the occlusion operation.
Alternatively, the occlusion operation may be blurring the region. In other embodiments, the occlusion operation may also be superimposing a shielding strip on the shielding region in the target face image.
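The two occlusion variants can be sketched as follows. Representing the image as nested lists, using value 0 for the shielding strip, and using a mean-intensity fill as the "blur" are simplifying assumptions; a real implementation would operate on image arrays with a proper blur kernel.

```python
def occlude(image, region, mode="bar"):
    """Apply the occlusion operation of step 104 to `region`,
    given as an (x0, y0, x1, y1) box on a 2-D grayscale image.

    "bar"  overlays a solid strip (value 0) on the region;
    "blur" replaces the region with its mean intensity, a crude
           stand-in for the blurring variant.
    """
    x0, y0, x1, y1 = region
    if mode == "blur":
        pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
        fill = sum(pixels) // len(pixels)
    else:
        fill = 0
    out = [row[:] for row in image]  # leave the input image untouched
    for y in range(y0, y1):
        for x in range(x0, x1):
            out[y][x] = fill
    return out
```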
After the occlusion image is obtained, feature extraction can be performed on the occlusion image to obtain a second feature vector.
And extracting the characteristics of the shielding image through the characteristic extraction model to obtain a second characteristic vector. The feature extraction model may be RNN, CNN, or resnet, and the model variety is not limited. In other embodiments, it is also possible to extract a texture feature or gradient feature of the occlusion image, and use the extracted texture feature or gradient feature as the second feature vector.
Step 105: a degree of similarity of the first feature vector and the second feature vector is determined.
Specifically, after the feature extraction is performed on the target face image and the occlusion image to obtain the corresponding first feature vector and second feature vector, the similarity degree of the first feature vector and the second feature vector can be determined, that is, the features respectively extracted from the target face image and the occlusion image are compared, so that whether the occlusion image is to be used as an output image or not can be judged through the similarity degree.
In one embodiment, in step 105, a spatial distance between the first feature vector and the second feature vector may be calculated, and the calculated spatial distance may be indicative of a degree of similarity between the first feature vector and the second feature vector.
In this embodiment, the greater the spatial distance between the first feature vector and the second feature vector, the lower the degree of similarity between the target face image and the occlusion image. Thus, in this embodiment, if the spatial distance between the first feature vector and the second feature vector is smaller than the preset threshold, it may be characterized that the degree of similarity between the first feature vector and the second feature vector is higher than the preset degree, in which case step 106 may be performed to take the occlusion image as the output image.
Conversely, the smaller the spatial distance between the first feature vector and the second feature vector, the higher the degree of similarity between the target face image and the occlusion image. Thus, in this embodiment, if the spatial distance between the first feature vector and the second feature vector is not smaller than the preset threshold, the degree of similarity between the first feature vector and the second feature vector is lower than the preset degree.
Alternatively, the spatial distance may be Euclidean distance, chebyshev distance, minkowski distance, manhattan distance, or cosine distance, etc., without limitation herein.
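Two of the listed distances, plus the threshold test of step 106, can be sketched as follows; the default threshold value is an arbitrary illustration, not a value fixed by the patent.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def cosine_distance(a, b):
    """Cosine distance: 0 for parallel vectors, up to 2 for opposite ones."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / (na * nb)

def similar_enough(a, b, threshold=0.5, dist=euclidean):
    """Smaller spatial distance means higher similarity, so the
    occlusion image is accepted when the distance is below the
    preset threshold (steps 105-106)."""
    return dist(a, b) < threshold
```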
In another embodiment, the degree of similarity between the first feature vector and the second feature vector may be determined by a similarity calculation model: the first feature vector and the second feature vector are input into the similarity calculation model to obtain their degree of similarity. In this embodiment, if the determined degree of similarity between the first feature vector and the second feature vector is higher than the preset degree, step 106 is performed.
Step 106: and when the similarity degree is higher than the preset degree, taking the shielding image as an output image.
When the similarity is higher than the preset degree, the occlusion image is indicated to reach the occlusion standard, and the occlusion image is directly output. Therefore, the similarity between the characteristics of the target face in the image after shielding and the characteristics of the target face in the image before shielding can be higher under the condition of fine shielding.
In the present embodiment, a plurality of regions are obtained by dividing the target face image into parts; selecting one region from the multiple regions for shielding operation to obtain a shielding image; determining the similarity between the extracted features of the target face image (i.e. the image which is not shielded) and the extracted features of the shielding image, outputting the shielding image when the similarity is greater than a preset degree, determining the shielding region in a region dividing and region selecting mode, performing shielding operation on the target face image in a fine and controllable mode, and outputting the features of the shielding image when the similarity is greater than the preset degree, so that the similarity between the features of the target face in the shielding image and the features of the target face in the image before shielding is higher when the shielding is fine.
Moreover, the higher the degree of similarity between the first feature vector and the second feature vector, the greater the probability that the facial features of the target face image and the facial features in the occlusion image are judged to belong to the same individual; the lower the degree of similarity, the smaller the probability that they are judged to belong to the same individual, and the greater the probability that they are judged to belong to different individuals. The image shielding method therefore ensures that the features of the output occlusion image remain similar to the features of the target face in the image before occlusion, so that the probability that they are judged to belong to the same individual is high.
Referring to fig. 4, fig. 4 is a flowchart of an embodiment of an image shielding method according to the present invention, which includes:
step 301: and extracting the characteristics of the target face image to obtain a first characteristic vector.
Step 302: and carrying out region division on the target face image to obtain a plurality of regions in the target face image.
Step 303: the plurality of regions in the target face image are ranked.
Specifically, the plurality of regions obtained after dividing the target face image are ranked, i.e., the regions are assigned levels from high to low (a region level order).
For example, the eye region level is highest, the mouth region level is next highest, the nasal bridge region level is medium, the chin region level is slightly low, and the cheek region level is lowest.
Alternatively, the plurality of regions in the target face image may be respectively assigned levels according to the importance of the plurality of regions in the target face image.
Alternatively, a plurality of regions in the target face image may be randomly assigned a level. Of course, the respective areas may be assigned a level by other means, which is not limited herein.
Step 304: and taking each area as a shielding area in sequence according to the area level.
The occlusion regions may be selected according to a rank order of the plurality of regions in the target face image. The order of the levels may be a low to high order or may be a high to low order.
For example, each region may be sequentially taken as the occlusion region in order of the levels of the plurality of regions in the target face image from high to low.
Step 305: shielding the shielding region in the target face image to obtain a shielding image; and extracting features of the shielding image to obtain a second feature vector.
Step 306: a degree of similarity of the first feature vector and the second feature vector is determined.
Step 307: and when the similarity degree is higher than the preset degree, taking the shielding image as an output image.
When the similarity is higher than the preset degree, the occlusion image is indicated to reach the occlusion standard, and the occlusion image is directly output.
Optionally, the degree of similarity between the first feature vector extracted from the target face image and the second feature vector of the occlusion image may be determined, and if the degree of similarity is higher than a threshold value, the occlusion image is output, so that the degree of similarity between the feature of the target face in the image after occlusion and the feature of the target face in the image before occlusion is higher under the condition of fine occlusion.
Illustratively, when the degree of similarity between the first feature vector and the second feature vector is determined by calculating the spatial distance between the first feature vector and the second feature vector, if the spatial distance is smaller than the preset threshold, it is indicated that the degree of similarity is higher than the preset degree, and in step 307, the occlusion image may be taken as the output image.
In addition, if the degree of similarity between the first feature vector and the second feature vector is lower than the preset degree, the process returns to step 304, in which each region is sequentially taken as the occlusion region according to the region level order: the next region after the current occlusion region in the region level order is taken as the new occlusion region, and image occlusion is performed with it, until the degree of similarity between the second feature vector corresponding to an occlusion region and the first feature vector is higher than the preset degree.
In a specific example, the target eye region A, the target mouth region B, the target nose bridge region C, the target chin region D and the target cheek region E correspond to levels 5, 4, 3, 2 and 1, respectively. If the occlusion image obtained by occluding the target eye region A cannot be output, the highest-level remaining region among the target mouth region B, the target nose bridge region C, the target chin region D and the target cheek region E is selected for the occlusion operation, and so on, until the degree of similarity between the second feature vector corresponding to an occlusion region and the first feature vector is higher than the preset degree.
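The level-ordered search of steps 303-307 can be sketched as a single loop. The callables are stand-ins for the components described above (feature extraction, similarity test, occlusion operation); returning None when every region fails is one possible handling of the exhausted case, which the patent leaves open.

```python
def select_occlusion(image, regions, levels, extract, similar, occlude_fn):
    """Try each region as the occlusion region in descending level
    order; return the first occlusion image whose features remain
    similar enough to the unoccluded image's, else None."""
    base = extract(image)  # first feature vector (step 301)
    for name in sorted(levels, key=levels.get, reverse=True):
        candidate = occlude_fn(image, regions[name])   # step 305
        if similar(base, extract(candidate)):          # steps 306-307
            return candidate
    return None
```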
In summary, in the image shielding method provided by the invention, the target face image can be divided into a plurality of regions, and the regions are assigned levels from high to low; each region is taken as the occlusion region in turn according to the region level order; image occlusion is performed based on the occlusion region to obtain the occlusion image corresponding to that region; if the occlusion image corresponding to the occlusion region meets the output condition, it is output and the process ends; if it does not meet the output condition, the process returns to step 304, takes the next region in the region level order as the new occlusion region, and continues to perform image occlusion, until the occlusion image corresponding to some occlusion region meets the output condition or all regions have been traversed.
In other embodiments, each region may be sequentially used as an occlusion region according to a region level order, and whether an occlusion image corresponding to each region meets an output condition may be determined; and outputting the occlusion image meeting the output conditions.
An occlusion image meets the output condition if the similarity between its second feature vector and the first feature vector of the target face image is higher than the preset degree.
Referring to fig. 5, fig. 5 is a schematic structural diagram of an embodiment of an image shielding device according to the present invention, where the device includes an extracting module 41, a classifying module 42, a shielding module 43, a calculating module 44 and an output module 45.
The extracting module 41 is configured to perform feature extraction on the target face image to obtain a first feature vector; and extracting features of the shielding image to obtain a second feature vector.
The classification module 42 is configured to perform region division on the target face image to obtain a plurality of regions in the target face image.
The shielding module 43 is configured to select a region as the shielding region, and to shield the shielding region in the target face image to obtain a shielding image.
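A minimal sketch of the masking step performed by the shielding module, assuming a region is represented as an axis-aligned (x0, y0, x1, y1) box over a 2-D pixel grid (this representation is an assumption for illustration, not specified by the patent):

```python
def occlude_region(image, region, fill=0):
    """Black out an axis-aligned box (x0, y0, x1, y1) in a 2-D image,
    given as a list of pixel rows; the box format is an assumption."""
    x0, y0, x1, y1 = region
    out = [row[:] for row in image]   # copy so the input image is untouched
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            out[y][x] = fill
    return out
```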
The calculation module 44 is configured to determine a degree of similarity between the first feature vector and the second feature vector.
The output module 45 is configured to take the occlusion image as an output image when the similarity is higher than a preset degree.
Optionally, the classification module 42 is further configured to rank the plurality of regions of the target face image. The occlusion module 43 may be configured to treat each region in turn as an occlusion region in a region level order.
Optionally, the calculating module 44 may be configured to, when the similarity is lower than the preset degree, return to the step of selecting a region as the shielding region, until the similarity between the second feature vector corresponding to a shielding region and the first feature vector is higher than the preset degree.
Alternatively, the classification module 42 may be configured to divide the target face image into regions by facial part, so as to obtain a plurality of regions in the target face image.
Further, the extracting module 41 is further configured to perform feature extraction on the target face image to obtain the first feature vector of the target face image and the target face key points; the classification module 42 may be configured to divide the target face image into regions by facial part based on the target face key points, so as to obtain a plurality of regions in the target face image.
Wherein the plurality of regions in the target facial image include a target eye region, a target mouth region, a target nose bridge region, a target chin region, and a target cheek region.
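One way the keypoint-driven, part-based division could be realized is to take a bounding box over each part's key points. The part names and the (x, y) keypoint format here are illustrative assumptions, not taken from the patent:

```python
def regions_from_keypoints(keypoints):
    """Derive one bounding-box region per facial part from its key points.
    `keypoints` maps a part name (e.g. "eye", "mouth") to a list of
    (x, y) points; this format is an assumption for illustration."""
    regions = {}
    for part, points in keypoints.items():
        xs = [x for x, _ in points]
        ys = [y for _, y in points]
        regions[part] = (min(xs), min(ys), max(xs), max(ys))
    return regions
```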
Further, the calculating module 44 is further configured to calculate a spatial distance between the first feature vector and the second feature vector, where the spatial distance characterizes the similarity; the output module 45 is further configured to determine, if the spatial distance is smaller than the preset threshold, that the similarity is higher than the preset degree, and to take the shielding image as the output image.
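The distance-based output condition can be sketched as follows, using Euclidean distance as one possible choice of spatial distance (the patent does not fix a particular metric):

```python
import math

def spatial_distance(v1, v2):
    """Euclidean distance between the first and second feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

def meets_output_condition(v1, v2, threshold):
    # a smaller distance means higher similarity, so the occlusion image
    # is taken as the output image when the distance is below the threshold
    return spatial_distance(v1, v2) < threshold
```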
It should be noted that, because the content of information interaction and execution process between the modules is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and details are not repeated herein.
It will be apparent to those skilled in the art that the above division into modules is illustrated merely for convenience and brevity of description; in practical applications, the above functions may be allocated to different modules as needed, i.e., the internal structure of the apparatus may be divided into different modules to perform all or part of the functions described above. The modules in the embodiment may be integrated in one processing unit, each unit may exist alone physically, or two or more units may be integrated in one unit; the integrated units may be implemented in the form of hardware or in the form of software functional units. In addition, the specific names of the modules are only for distinguishing them from each other and are not used to limit the protection scope of the present application. For the specific working process of the modules in the above system, reference may be made to the corresponding process in the foregoing method embodiments, which is not repeated here.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of a computer device according to the present invention; the computer device includes a processor 50 and a memory 51 coupled to each other, which cooperate to implement the image occlusion method described in any of the above embodiments. The memory 51 also stores at least one computer program 52 that runs on the processor 50, and the processor 50 implements the steps of any of the image occlusion method embodiments described above when executing the computer program 52.
The processor 50 may be a central processing unit (Central Processing Unit, CPU); the processor 50 may also be another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 51 may be a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash Card (Flash Card), or the like. Further, the memory 51 may also include both an internal storage unit and an external storage device. The memory 51 is used to store an operating system, application programs, boot loader (BootLoader), data, and other programs, etc., such as program codes of computer programs, etc. The memory 51 may also be used to temporarily store data that has been output or is to be output.
The embodiments of the present application also provide a computer readable storage medium storing a computer program, where the computer program when executed by a processor implements steps of the foregoing method embodiments.
The embodiments of the present application also provide a computer program product which, when run on a terminal device, causes the terminal device to perform the steps of the respective method embodiments described above.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such an understanding, all or part of the flow in the methods of the above embodiments may be implemented by a computer program instructing related hardware; the computer program may be stored in a computer readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include at least: any entity or device capable of carrying the computer program code to a terminal device, a recording medium, a computer memory, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunication signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk.
The foregoing description is only of embodiments of the present application, and is not intended to limit the scope of the patent application, and all equivalent structures or equivalent processes using the descriptions and the drawings of the present application or direct or indirect application in other related technical fields are included in the scope of the present application.

Claims (10)

1. An image occlusion method, comprising:
extracting features of the target face image to obtain a first feature vector;
performing region division on the target face image to obtain a plurality of regions in the target face image;
selecting one region as a shielding region;
shielding the shielding region in the target face image to obtain a shielding image;
extracting features of the shielding image to obtain a second feature vector;
determining the similarity degree of the first characteristic vector and the second characteristic vector;
and when the similarity degree is higher than a preset degree, taking the shielding image as an output image.
2. The method of claim 1, wherein selecting the region as an occlusion region comprises:
ranking a plurality of regions in the target face image;
and taking each region as the shielding region in turn according to the region level order.
3. The method of claim 2, wherein after the determining the degree of similarity of the first feature vector and the second feature vector, the method further comprises:
and when the similarity is lower than a preset degree, returning to execute the selection of the area as the shielding area until the similarity of the second characteristic vector corresponding to the shielding area and the first characteristic vector is higher than the preset degree.
4. The method of claim 1, wherein the performing region division on the target face image to obtain a plurality of regions in the target face image includes:
and dividing the target face image according to the parts to obtain a plurality of areas in the target face image.
5. The method of claim 4, wherein the feature extraction of the target facial image comprises:
extracting features of the target face image to obtain a first feature vector of the target face image and target face key points;
the dividing the target face image according to the region comprises the following steps:
and dividing the target face image according to the parts based on the target face key points to obtain a plurality of areas in the target face image.
6. The method of claim 4, wherein the plurality of regions in the target facial image comprise a target eye region, a target mouth region, a target nose bridge region, a target chin region, and a target cheek region.
7. The method of claim 1, wherein the determining a degree of similarity of the first feature vector and the second feature vector comprises:
calculating a spatial distance between the first feature vector and the second feature vector, wherein the spatial distance is used for representing the similarity degree;
and when the similarity is higher than a preset degree, taking the shielding image as an output image, wherein the method comprises the following steps:
and if the spatial distance is smaller than a preset threshold, the similarity is higher than a preset degree, and the shielding image is used as an output image.
8. An image occlusion device, the device comprising:
the extraction module is used for extracting the characteristics of the target face image to obtain a first characteristic vector; extracting features of the shielding image to obtain a second feature vector;
the classification module is used for carrying out region division on the target face image to obtain a plurality of regions in the target face image;
the shielding module is configured to select one region as a shielding region, and shield the shielding region in the target face image to obtain a shielding image;
the computing module is used for determining the similarity degree of the first characteristic vector and the second characteristic vector;
and the output module is used for taking the shielding image as an output image when the similarity degree is higher than a preset degree.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the image occlusion method of any of claims 1 to 7 when executing the computer program.
10. A computer readable storage medium storing a computer program, wherein the computer program when executed by a processor implements the image occlusion method of any of claims 1 to 7.
CN202211704338.6A 2022-12-23 2022-12-23 Image shielding method and device, computer equipment and storage medium Pending CN116129496A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211704338.6A CN116129496A (en) 2022-12-23 2022-12-23 Image shielding method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211704338.6A CN116129496A (en) 2022-12-23 2022-12-23 Image shielding method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116129496A true CN116129496A (en) 2023-05-16

Family

ID=86302149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211704338.6A Pending CN116129496A (en) 2022-12-23 2022-12-23 Image shielding method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116129496A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116798099A (en) * 2023-07-07 2023-09-22 广州广旭科技有限公司 Intelligent identification and management method and system for identities of labor workers
CN116798099B (en) * 2023-07-07 2024-01-12 广州广旭科技有限公司 Intelligent identification and management method and system for identities of labor workers

Similar Documents

Publication Publication Date Title
CN107944020B (en) Face image searching method and device, computer device and storage medium
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN109815788B (en) Picture clustering method and device, storage medium and terminal equipment
CN110378235B (en) Fuzzy face image recognition method and device and terminal equipment
CN109829448B (en) Face recognition method, face recognition device and storage medium
CN105144239B (en) Image processing apparatus, image processing method
CN112419170B (en) Training method of shielding detection model and beautifying processing method of face image
CN109829396B (en) Face recognition motion blur processing method, device, equipment and storage medium
CN111080654B (en) Image lesion region segmentation method and device and server
CN108765315B (en) Image completion method and device, computer equipment and storage medium
CN110781770B (en) Living body detection method, device and equipment based on face recognition
CN109284757A (en) A kind of licence plate recognition method, device, computer installation and computer readable storage medium
CN113706502B (en) Face image quality assessment method and device
CN110796135A (en) Target positioning method and device, computer equipment and computer storage medium
CN112836653A (en) Face privacy method, device and apparatus and computer storage medium
CN111881789B (en) Skin color identification method, device, computing equipment and computer storage medium
CN113705294A (en) Image identification method and device based on artificial intelligence
CN116129496A (en) Image shielding method and device, computer equipment and storage medium
CN113228105A (en) Image processing method and device and electronic equipment
CN110610131A (en) Method and device for detecting face motion unit, electronic equipment and storage medium
CN112200004B (en) Training method and device for image detection model and terminal equipment
CN111736988A (en) Heterogeneous acceleration method, equipment and device and computer readable storage medium
CN116137061A (en) Training method and device for quantity statistical model, electronic equipment and storage medium
CN113012030A (en) Image splicing method, device and equipment
CN105224957A (en) A kind of method and system of the image recognition based on single sample

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination