CN113627339A - Privacy protection method, device and equipment - Google Patents

Privacy protection method, device and equipment

Info

Publication number
CN113627339A
CN113627339A (application CN202110917455.XA)
Authority
CN
China
Prior art keywords
target
privacy
area
scene image
zone
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110917455.XA
Other languages
Chinese (zh)
Inventor
张德钧
牛永岭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TP Link Technologies Co Ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN202110917455.XA priority Critical patent/CN113627339A/en
Publication of CN113627339A publication Critical patent/CN113627339A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/21 — Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 — Computing arrangements based on biological models
    • G06N 3/02 — Neural networks
    • G06N 3/04 — Architecture, e.g. interconnection topology
    • G06N 3/045 — Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of computers and provides a privacy protection method, device, and equipment. The method includes: acquiring a current scene image; inputting the current scene image into a target privacy area detection model for detection to obtain the target privacy area corresponding to the current scene image; if a target object is detected in the target privacy area, acquiring the position information of the target object relative to the target privacy area; and if the position information indicates that the target object is located within the target privacy area, performing distortion processing on the target privacy area. With this method, a target privacy area detection model matched to the current scene image can be obtained in a targeted manner, the corresponding target privacy area is detected by that model, and the target privacy area is adjusted and protected directly as soon as it is obtained. Private content of others no longer needs to be captured first and removed afterwards, so the risk of invading privacy is reduced.

Description

Privacy protection method, device and equipment
Technical Field
The present application belongs to the technical field of computers, and in particular relates to a privacy protection method, apparatus, and device.
Background
With the rapid development of the social economy and of electronic science and technology, the security monitoring market has expanded quickly, and the number of surveillance cameras deployed in all kinds of settings keeps growing. Using these cameras can easily infringe on the privacy of others; for example, a smart doorbell carries the legal risk of invading a neighbor's privacy.
In the prior art, to protect privacy, a user must manually designate a privacy area after the image information has been obtained, and the information within that area is then removed to avoid the invasion of privacy. This approach first captures the private content of others and only removes it afterwards, so in practice the risk of invading privacy still exists.
Disclosure of Invention
The embodiments of the present application provide a privacy protection method, device, and equipment, which can solve the above problems.
In a first aspect, an embodiment of the present application provides a privacy protection method, including:
acquiring a current scene image;
inputting the current scene image into a target privacy area detection model for detection to obtain a target privacy area corresponding to the current scene image;
if a target object is detected in the target privacy zone, acquiring the position information of the target object relative to the target privacy zone;
and if the position information indicates that the target object is located within the target privacy area, performing distortion processing on the target privacy area.
Further, before the current scene image is input to a target privacy area detection model for detection to obtain a target privacy area corresponding to the current scene image, the method further includes:
acquiring scene characteristic information according to a current scene image;
determining a current application scene according to the current scene image and the scene characteristic information;
and taking the privacy area detection neural network model corresponding to the current application scene as a target privacy area detection model.
Further, before the taking of the privacy area detection neural network model corresponding to the current application scenario as the target privacy area detection model, the method further includes:
acquiring sample training sets of different application scenes; the sample training set comprises a sample scene image and a privacy zone label corresponding to the sample scene image;
and training an initial neural network model by using the sample training sets to obtain trained privacy area detection neural network models corresponding to the different application scenes.
Further, the training of the initial neural network model by using the sample training sets to obtain the trained privacy area detection neural network models corresponding to different application scenes includes:
inputting the sample scene image into an initial neural network model for processing to obtain a sample privacy area corresponding to the sample scene image;
calculating a target loss value according to the sample privacy zone, the privacy zone label and a preset loss function;
if the target loss value does not meet a preset stopping condition, updating the initial neural network model according to the target loss value, and returning to the step of inputting the sample scene image into the initial neural network model for processing to obtain the sample privacy area corresponding to the sample scene image;
and if the target loss value meets the preset stopping condition, outputting the trained privacy area detection neural network model.
Further, after the training of the initial neural network model by using the sample training sets to obtain the trained privacy area detection neural network models corresponding to different application scenarios, the method further includes:
and if an update of the sample training set is detected, optimizing the privacy area detection neural network model according to the updated sample training set.
Further, after the acquiring of the position information of the target object relative to the target privacy zone, the method further includes:
and if the position information indicates that part of the target object is located in a non-privacy area, displaying the target object normally, and performing distortion processing on the portion of the target privacy area outside the target area occupied by the target object.
Further, if the location information identifies that the target object is located in the target privacy area, performing distortion processing on the target privacy area, including:
and if the target object is determined to be located within the target privacy zone according to preset privacy zone empirical parameters and the position information, performing distortion processing on the target privacy zone.
Further, before the distortion processing is performed on the target privacy zone if the position information identifies that the target object is located in the target privacy zone, the method further includes:
acquiring the object type of the target object;
if the position information identifies that the target object is located in the target privacy area, performing distortion processing on the target privacy area, including:
and if the object type and the position information indicate that the target object is located within the target privacy area, performing distortion processing on the target privacy area.
Further, the distorting the target privacy zone includes:
and compressing the target privacy area according to a preset compression strategy.
In a second aspect, an embodiment of the present application provides an apparatus for privacy protection, including:
the first acquisition unit is used for acquiring a current scene image;
the first processing unit is used for inputting the current scene image into a target privacy area detection model for detection to obtain a target privacy area corresponding to the current scene image;
the second processing unit is used for acquiring the position information of the target object relative to the target privacy area if the target object is detected in the target privacy area;
and the third processing unit is used for performing distortion processing on the target privacy area if the position information identifies that the target object is located in the target privacy area.
Further, the privacy protecting apparatus further includes:
the second acquisition unit is used for acquiring scene characteristic information according to the current scene image;
a first determining unit, configured to determine a current application scene according to the current scene image and the scene feature information;
and the fourth processing unit is used for taking the privacy area detection neural network model corresponding to the current application scene as a target privacy area detection model.
Further, the privacy protecting apparatus further includes:
the third acquisition unit is used for acquiring sample training sets of different application scenes; the sample training set comprises a sample scene image and a privacy zone label corresponding to the sample scene image;
and the fifth processing unit is used for training an initial neural network model by using the sample training sets to obtain trained privacy area detection neural network models corresponding to the different application scenes.
Further, the fifth processing unit is specifically configured to:
inputting the sample scene image into an initial neural network model for processing to obtain a sample privacy area corresponding to the sample scene image;
calculating a target loss value according to the sample privacy zone, the privacy zone label and a preset loss function;
if the target loss value does not meet a preset stopping condition, updating the initial neural network model according to the target loss value, and returning to the step of inputting the sample scene image into the initial neural network model for processing to obtain the sample privacy area corresponding to the sample scene image;
and if the target loss value meets the preset stopping condition, outputting the trained privacy area detection neural network model.
Further, the privacy protecting apparatus further includes:
and the sixth processing unit is used for optimizing the privacy area detection neural network model according to the updated sample training set if an update of the sample training set is detected.
Further, the privacy protecting apparatus further includes:
and the seventh processing unit is used for displaying the target object normally and performing distortion processing on the portion of the target privacy area outside the target area occupied by the target object, if the position information indicates that part of the target object is located in a non-privacy area.
Further, the third processing unit is specifically configured to:
and if the target object is determined to be located within the target privacy zone according to preset privacy zone empirical parameters and the position information, performing distortion processing on the target privacy zone.
Further, the privacy protecting apparatus further includes:
a fourth acquisition unit configured to acquire an object type of the target object;
the third processing unit is specifically configured to:
and if the object type and the position information indicate that the target object is located within the target privacy area, performing distortion processing on the target privacy area.
Further, the third processing unit is specifically configured to:
and compressing the target privacy area according to a preset compression strategy.
In a third aspect, an embodiment of the present application provides a privacy-preserving apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the privacy-preserving method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the method for privacy protection as described in the first aspect above is implemented.
In the embodiments of the present application, a current scene image is acquired; the current scene image is input into a target privacy area detection model for detection to obtain the target privacy area corresponding to it; if a target object is detected in the target privacy area, the position information of the target object relative to the target privacy area is acquired; and if the position information indicates that the target object is located within the target privacy area, distortion processing is performed on the target privacy area. With this method, a target privacy area detection model matched to the current scene image can be obtained in a targeted manner, the corresponding target privacy area is detected by that model, and the target privacy area is adjusted and protected directly as soon as it is obtained. Private content of others no longer needs to be captured first and removed afterwards, so the risk of invading privacy is reduced.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings required by the embodiments or the prior-art description are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic flow chart diagram of a method for privacy protection provided in a first embodiment of the present application;
FIG. 2 is a schematic diagram of an apparatus for privacy protection provided by a second embodiment of the present application;
fig. 3 is a schematic diagram of a privacy-preserving apparatus according to a third embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when," "upon," "in response to determining," or "in response to detecting." Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining," "in response to determining," "upon detecting [the described condition or event]," or "in response to detecting [the described condition or event]."
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used only to distinguish between descriptions and are not to be understood as indicating or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Referring to fig. 1, fig. 1 is a schematic flow chart of a privacy protection method according to a first embodiment of the present application. The execution subject of the privacy protection method in this embodiment is a device with a privacy protection function, such as a smart doorbell or a surveillance camera device. The privacy protection method shown in fig. 1 may include:
s101: and acquiring a current scene image.
The device acquires a current scene image. The current scene image may be an image frame captured directly from the current scene, or an image frame obtained by extracting frames from a video of the current scene after that video has been captured.
For example, when the device is a smart doorbell and a person presses the doorbell, the image of the doorway captured by the doorbell's camera is the current scene image. When the device is a vehicle-mounted camera terminal, the images of vehicles on the road and of the road conditions captured by the terminal are the current scene image.
In this embodiment, when the device itself has a camera function, it can directly capture a still image or a video under the corresponding trigger condition to obtain the current scene image. For example, when the device is a smart doorbell, pressing the doorbell triggers it to start image acquisition, and the doorway image captured by its camera is the current scene image. When the device itself has no camera function, it can receive a current scene image captured by a separate camera device. For example, when the device is a server with a privacy protection function, the server may receive, over a network, the current scene image captured by a front-end camera device.
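The framing step above — obtaining image frames from a captured scene video — can be sketched as sampling the decoded frame sequence at a fixed stride. This is a hedged illustration only; the frame representation and the stride value are assumptions, not part of the application:

```python
def extract_frames(frames, stride=5):
    """Sample every `stride`-th frame from a decoded video sequence.

    `frames` is any sequence of decoded image frames; in a real device
    these would come from the camera pipeline (illustrative assumption).
    """
    return [frame for i, frame in enumerate(frames) if i % stride == 0]
```

Each sampled frame would then serve as the current scene image fed to the detection step.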
S102: inputting the current scene image into a target privacy area detection model for detection to obtain the target privacy area corresponding to the current scene image.
A target privacy area detection model is stored in the device in advance and is used to identify the target privacy area corresponding to the current scene image. The device inputs the current scene image into the target privacy area detection model for detection and obtains the target privacy area corresponding to the current scene image.
The target privacy area detection model may be trained in advance by another device, after which the frozen, trained model is stored in the device. Alternatively, the device itself performs personalized training on a sample training set to obtain the target privacy area detection model.
The target privacy area detection model may include an input layer, a hidden layer, and an output layer (loss function layer). The input layer includes input nodes for receiving the current scene image. The hidden layer processes the current scene image and extracts the corresponding target privacy area. The output layer outputs the target privacy area corresponding to the current scene image.
In one possible implementation, different devices define privacy areas differently in different application scenes. For example, a home outdoor security camera shoots outdoor scenes, where the doors and windows of neighboring homes all belong to other people's privacy areas; when the device is applied to indoor home monitoring and shoots indoor scenes, the doors and windows of the house itself are not privacy areas. For a vehicle-mounted camera terminal observing road conditions, picture regions such as faces belong to other people's privacy, whereas the security camera of a shop or building needs to capture faces. Therefore, to improve the accuracy of target privacy area determination, different target privacy area detection models can be configured for different scenes. A number of application scenes and the privacy area detection neural network models corresponding to them can be stored in the device in advance; the device recognizes the current application scene and determines the target privacy area detection model accordingly. Because the detection models are divided by application scene, each application scene has its own privacy area detection neural network model, and privacy areas can be determined more accurately.
Specifically, the device acquires scene feature information from the current scene image. Scene feature information is any information that reflects the features of the current scene; this embodiment does not limit its form, as long as the scene features are reflected. For example, the scene feature information may include the items present in the current scene, their types, and their number. When the current scene image shows an office area, the device may extract scene feature information such as office desks and chairs, their number, and office staff. When the current scene image shows the area in front of an apartment door, the device may extract scene feature information such as the doors and windows of the indoor area, corridor tiles, and elevator doors.
After the device extracts the scene feature information, it can judge the current application scene from that information, with the current scene image as reference: the current application scene is determined from the current scene image together with the scene feature information. For example, when the acquired scene feature information consists of vehicles and a road, and recognition of the current scene image shows an outdoor environment, the device may determine that the current application scene is one of recognizing vehicles and road conditions outdoors.
The device then obtains the privacy area detection neural network model corresponding to the current application scene from the stored correspondence between application scenes and models, and takes it as the target privacy area detection model.
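The correspondence between application scenes and detection models can be sketched as a simple registry lookup. The scene labels and model names below are illustrative assumptions, not identifiers from the application:

```python
# Hypothetical registry mapping application-scene labels to pre-trained
# privacy-area detection models (labels and model names are assumptions).
SCENE_TO_MODEL = {
    "outdoor_doorway": "doorway_privacy_detector",
    "indoor_home": "indoor_privacy_detector",
    "vehicle_road": "road_privacy_detector",
}

def select_target_model(scene_label, default="generic_privacy_detector"):
    """Pick the privacy-area detection model for the recognized scene."""
    return SCENE_TO_MODEL.get(scene_label, default)
```

In a real deployment the registry values would be loaded model objects rather than name strings; the lookup itself is the point being illustrated.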
In one possible implementation, the device may recognize the scene image with a preset scene recognition model to obtain the application scene corresponding to it. The scene recognition model can be implemented with a neural network classification algorithm. The scene data set for the recognition model can be constructed based on the product's intended range of application: for an indoor monitoring camera, for example, whose application scenes include home, corridor, and entrance (doorway), a large number of application scene images are collected and labeled with category labels to build the product's application scene data set. When training the scene recognition model, classification model training is performed on this data set; the classifier may be a machine learning model or a deep learning model. The trained scene classification model is then deployed on the product or on a server for scene recognition.
In one possible implementation, the privacy area detection neural network models corresponding to different application scenarios are trained in advance by the device itself. The training method can be as follows:
The device obtains sample training sets for the different application scenes; each sample training set comprises sample scene images and the privacy area labels corresponding to them. The initial neural network model is trained with these sample training sets to obtain trained privacy area detection neural network models for the different application scenes.
The device trains on the sample training set of each scene independently. During training, a sample scene image and its privacy area label are input into the initial neural network model as training data, and the model is improved iteratively according to its loss function, finally yielding the privacy area detection neural network models corresponding to the different application scenes.
Specifically, the device inputs a sample scene image into the initial neural network model for processing and obtains the sample privacy area corresponding to it. A preset loss function is stored in the device in advance, and a target loss value is calculated from the sample privacy area, the privacy area label, and the preset loss function. If the target loss value does not meet the preset stopping condition, the initial neural network model is updated according to the target loss value, and the procedure returns to inputting the sample scene image into the model for processing; if the target loss value meets the preset stopping condition, the trained privacy area detection neural network model is output.
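The loop just described — compute a target loss, stop when the preset condition is met, otherwise update the model and repeat — can be sketched with a one-parameter stand-in for the neural network. The learning rate, threshold, and squared-error loss are illustrative assumptions:

```python
def train(samples, labels, lr=0.1, threshold=1e-4, max_iters=1000):
    """Toy training loop mirroring the described procedure: compute a loss,
    stop when it satisfies the preset condition, otherwise update and repeat."""
    w = 0.0  # single parameter standing in for the neural network's weights
    for _ in range(max_iters):
        preds = [w * x for x in samples]
        loss = sum((p - y) ** 2 for p, y in zip(preds, labels)) / len(samples)
        if loss < threshold:          # preset stopping condition met
            break
        grad = sum(2 * (p - y) * x
                   for p, y, x in zip(preds, labels, samples)) / len(samples)
        w -= lr * grad                # update the model from the loss value
    return w
```

In the patented method the parameter update would be backpropagation through the full detection network; only the loss/stop/update control flow is being illustrated.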
In one embodiment, when a user marks a new privacy area, the device can learn autonomously: it trains the current privacy area detection neural network model on the new data and thereby optimizes it. Specifically, when a user marks a new privacy area, the device can automatically update the sample training set of the corresponding application scene with the user's input; if the device detects that a sample training set has been updated, it optimizes the privacy area detection neural network model according to the updated set. In this way the model for each scene can be updated in real time according to the user's needs, further improving the applicability of the whole method.
S103: if a target object is detected in the target privacy area, acquiring the position information of the target object relative to the target privacy area.
After the device identifies the target privacy area with the target privacy area detection model, the region of the current scene image outside the target privacy area is the non-privacy area.
The device detects whether a target object is present in the target privacy area. To protect privacy it does not acquire the specific content inside the area; it only makes a preliminary detection of whether a target object exists there.
When the device detects a target object in the target privacy area, it acquires the position information of the target object relative to the target privacy area.
This position information is determined with the position of the target privacy area as reference, and may include the positions of the target object and the target privacy area, their contour positions, their size information, and so on, all of which can serve as reference parameters for subsequently judging whether the target object is inside the target privacy area.
It is understood that objects may also exist in the non-privacy area; but since objects there are not protected, what matters is detecting whether a target object exists in the target privacy area.
S104: if the position information indicates that the target object is located within the target privacy area, performing distortion processing on the target privacy area.
In this embodiment, the device judges whether the target object is located within the target privacy area based on the position information. For example, when the size of the target object is smaller than the size of the target privacy area and the outline of the target object lies entirely within the outline of the target privacy area, the device determines that the target object is completely inside the target privacy area.
If the device judges from the position information that the target object is located inside the target privacy area, the target object must be fully protected and not exposed, so the target privacy area can be distorted directly and protected in its entirety.
Whether the target object is located in the target privacy area can be judged from preset privacy area empirical parameters together with the position information. In this embodiment, the target position of the target object relative to the target privacy area may be obtained with the help of an empirical model: preset privacy area empirical parameters are stored in the device in advance, including actual size parameters of common privacy areas, for example the area sizes of typical doors and windows and the typical size of a human face.
The device can then obtain the target position of the target object relative to the target privacy zone from these preset empirical parameters. For example, when the target object is a person standing in the same distance plane as the corresponding target privacy zone, the ratio of the object's size to the zone's size is close to a preset empirical ratio. If the target object lies within the zone and its ratio is noticeably smaller than the empirical ratio, its target position is inside the privacy zone, and the object is masked and protected together with the zone; if the target object lies outside the zone and its ratio is noticeably larger than the empirical ratio, its target position is outside the privacy zone, the object is a foreground target, and it is displayed normally.
Specifically, when performing the distortion processing, the device may compress the target privacy area according to a preset compression policy so that the area is completely distorted. This protects privacy and, because the area is compressed, also saves storage space; the saved space can even be allocated to the non-privacy area to improve its image quality.
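One possible "distortion by compression" is block averaging (pixelation): the zone is effectively downscaled by a preset factor, which both obscures detail and shrinks the data that needs storing. The block-averaging strategy and the `block` parameter are assumptions; the patent only says "compress according to a preset compression policy".

```python
import numpy as np

def distort_zone(image, zone, block=8):
    """Pixelate image[y:y+h, x:x+w] in place using block averaging."""
    x, y, w, h = zone
    region = image[y:y+h, x:x+w].astype(float)
    for by in range(0, h, block):
        for bx in range(0, w, block):
            blk = region[by:by+block, bx:bx+block]
            blk[...] = blk.mean()          # every pixel gets the block's mean
    image[y:y+h, x:x+w] = region.astype(image.dtype)
    return image
```

With a large `block`, the zone collapses to a few flat tiles, so the original content cannot be recovered from the displayed frame.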
In another embodiment, if the position information identifies that the target object is partially located in the non-privacy area, the target object can be regarded as a foreground object that overlaps both the target privacy area and the non-privacy area. In this case the target object is displayed normally, and distortion processing is applied to the part of the target privacy area outside the target object's region. For example, suppose a smart doorbell captures a person in the neighbor's doorway: the neighbor's door area is the target privacy area and the person is the target object. If the person is judged to be inside the door, the entire privacy area is protected, i.e., fully compressed and distorted; if part of the person is outside the door, the person's region is displayed normally and only the remaining, unoccluded part of the privacy area is compressed and distorted. Displaying target objects that are not in the privacy area normally while protecting the rest of the area effectively protects privacy without losing information of interest.
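The doorbell example could be sketched with boolean masks standing in for the privacy zone and the detected person region; setting distorted pixels to a fill value is a stand-in for the actual compression/distortion, and all names here are illustrative.

```python
import numpy as np

def protect_zone_except_object(image, zone_mask, object_mask, fill=0):
    """Distort zone pixels not covered by the object; show the object normally."""
    to_distort = zone_mask & ~object_mask   # zone minus the object's region
    out = image.copy()
    out[to_distort] = fill                  # stand-in for compression/distortion
    return out
```

The object's pixels survive untouched even where they overlap the zone, while every other zone pixel is distorted, matching "normally display the target object and distort the target privacy area except the target area of the target object".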
In addition, the device may acquire the object type of the target object as well as the position information, and then judge from both whether the target object is located in the target privacy area. The object type can be recognized from the target object's outline, with the recognition rules determined from empirical data.
In the embodiment of the application, the method obtains a current scene image; inputs it into a target privacy area detection model to obtain the target privacy area corresponding to the image; if a target object is detected in that area, acquires the object's position information relative to the area; and if that information identifies the object as located within the area, applies distortion processing to the area. Because the detection model is chosen to match the current scene image, the corresponding target privacy area is detected directly and protected immediately, without first capturing other people's private content and removing it afterwards, which reduces the risk of privacy intrusion.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 2, fig. 2 is a schematic diagram of an apparatus for privacy protection according to a second embodiment of the present application. The apparatus includes units for performing the steps in the embodiment corresponding to fig. 1; please refer to the related description of that embodiment. For convenience of explanation, only the portions relevant to this embodiment are shown. Referring to fig. 2, the privacy protection apparatus 2 includes:
a first obtaining unit 210, configured to obtain a current scene image;
the first processing unit 220 is configured to input the current scene image into a target privacy area detection model for detection, so as to obtain a target privacy area corresponding to the current scene image;
a second processing unit 230, configured to, if a target object is detected in the target privacy area, obtain location information of the target object relative to the target privacy area;
a third processing unit 240, configured to perform distortion processing on the target privacy area if the location information identifies that the target object is located in the target privacy area.
Further, the privacy protection apparatus 2 further includes:
the second acquisition unit is used for acquiring scene characteristic information according to the current scene image;
a first determining unit, configured to determine a current application scene according to the current scene image and the scene feature information;
and the fourth processing unit is used for taking the privacy area detection neural network model corresponding to the current application scene as a target privacy area detection model.
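The three units above (extract scene features, determine the application scene, select that scene's model) might be sketched as follows. The tag-overlap matching and all names are hypothetical; the patent does not say how scene feature information is matched to an application scene.

```python
def determine_scene(feature_tags, scene_signatures):
    """Pick the application scene whose signature tags best overlap the
    scene feature information extracted from the current scene image."""
    return max(scene_signatures,
               key=lambda s: len(scene_signatures[s] & set(feature_tags)))

def select_target_model(scene, models_by_scene):
    """Use the scene's privacy zone detection model as the target model."""
    return models_by_scene[scene]
```

For example, an image tagged with "door" would map to a doorbell scene and retrieve the doorbell-trained detection model.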
Further, the privacy protection apparatus 2 further includes:
the third acquisition unit is used for acquiring sample training sets of different application scenes; the sample training set comprises a sample scene image and a privacy zone label corresponding to the sample scene image;
and the fifth processing unit is used for training the initial neural network model by using the sample training set to obtain privacy area detection neural network detection models corresponding to different trained application scenes.
Further, the fifth processing unit is specifically configured to:
inputting the sample scene image into an initial neural network model for processing to obtain a sample privacy area corresponding to the sample scene image;
calculating a target loss value according to the sample privacy zone, the privacy zone label and a preset loss function;
if the target loss value does not meet the preset suspension condition, updating the initial neural network model according to the target loss value, and returning to execute the input of the sample scene image into the initial neural network model for processing to obtain a sample privacy area corresponding to the sample scene image;
and if the target loss value meets a preset suspension condition, outputting a trained privacy area detection neural network detection model.
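The training loop performed by the fifth processing unit (compute loss, check the preset stopping condition, otherwise update and repeat) could be sketched generically. The `step_fn`/`loss_fn` callbacks, the scalar threshold, and the iteration cap are stand-ins; the patent fixes neither the model architecture nor the loss function.

```python
def train_until_converged(model, samples, step_fn, loss_fn,
                          threshold, max_iters=1000):
    """Iterate until the target loss value meets the preset stopping condition.

    loss_fn(model, image, label) scores one sample; step_fn(model, loss)
    returns the updated model.
    """
    for _ in range(max_iters):
        loss = sum(loss_fn(model, img, label) for img, label in samples)
        if loss <= threshold:          # preset stopping condition met
            return model               # output the trained detection model
        model = step_fn(model, loss)   # update the model from the loss value
    return model
```

In practice `step_fn` would be a gradient step in a training framework; here it is abstract so the control flow of the unit stands out.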
Further, the privacy protection apparatus 2 further includes:
and the sixth processing unit is used for optimizing the privacy zone detection neural network detection model according to the updated sample training set if it is detected that the sample training set has been updated.
Further, the privacy protection apparatus 2 further includes:
and the seventh processing unit is used for normally displaying the target object and carrying out distortion processing on the target privacy area except the target area of the target object if the position information identifies that the target object part is located in a non-privacy area.
Further, the third processing unit 240 is specifically configured to:
and if the target object is judged to be located in the target privacy zone according to the preset privacy zone experience parameters, carrying out distortion processing on the target privacy zone.
Further, the privacy protection apparatus 2 further includes:
a fourth acquisition unit configured to acquire an object type of the target object;
the third processing unit 240 is specifically configured to:
and if the object type and the position information mark that the target object is located in the target privacy area, performing distortion processing on the target privacy area.
Further, the third processing unit 240 is specifically configured to:
and compressing the target privacy area according to a preset compression strategy.
Fig. 3 is a schematic diagram of a privacy protection device according to a third embodiment of the present application. As shown in fig. 3, the privacy protection device 3 of this embodiment includes: a processor 30, a memory 31, and a computer program 32, such as a privacy protection program, stored in the memory 31 and executable on the processor 30. When executing the computer program 32, the processor 30 implements the steps in the privacy protection method embodiments described above, such as steps S101 to S104 shown in fig. 1. Alternatively, when executing the computer program 32, the processor 30 implements the functions of the modules/units in the above device embodiments, such as the functions of the modules 210 to 240 shown in fig. 2.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution of the computer program 32 in the privacy protecting device 3. For example, the computer program 32 may be divided into a first acquiring unit, a first processing unit, a second processing unit, and a third processing unit, and the specific functions of each unit are as follows:
the first acquisition unit is used for acquiring a current scene image;
the first processing unit is used for inputting the current scene image into a target privacy area detection model for detection to obtain a target privacy area corresponding to the current scene image;
the second processing unit is used for acquiring the position information of the target object relative to the target privacy area if the target object is detected in the target privacy area;
and the third processing unit is used for performing distortion processing on the target privacy area if the position information identifies that the target object is located in the target privacy area.
The privacy protection device may include, but is not limited to, the processor 30 and the memory 31. Those skilled in the art will appreciate that fig. 3 is merely an example of the privacy protection device 3 and does not limit it; the device may include more or fewer components than those shown, combine certain components, or use different components. For example, the privacy protection device may also include input/output devices, network access devices, buses, etc.
The processor 30 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 31 may be an internal storage unit of the privacy protection device 3, such as a hard disk or internal memory of the device 3. The memory 31 may also be an external storage device of the device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card equipped on the device 3. Further, the memory 31 may include both an internal storage unit and an external storage device of the device 3. The memory 31 is used to store the computer program and the other programs and data required by the device, and may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
An embodiment of the present application further provides a network device, where the network device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application also provide a computer program product which, when run on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the above method embodiments. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, etc. The computer-readable medium may include at least: any entity or device capable of carrying the computer program code to the photographing device/terminal device, a recording medium, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB flash drive, a removable hard disk, a magnetic disk, or an optical disk. In certain jurisdictions, in accordance with legislation and patent practice, computer-readable media may not include electrical carrier signals or telecommunications signals.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (12)

1. A method of privacy protection, comprising:
acquiring a current scene image;
inputting the current scene image into a target privacy area detection model for detection to obtain a target privacy area corresponding to the current scene image;
if a target object is detected in the target privacy zone, acquiring the position information of the target object relative to the target privacy zone;
and if the position information identifies that the target object is located in the target privacy zone, performing distortion processing on the target privacy zone.
2. The method for protecting privacy of claim 1, wherein before the inputting the current scene image into a target privacy zone detection model for detection to obtain a target privacy zone corresponding to the current scene image, the method further comprises:
acquiring scene characteristic information according to a current scene image;
determining a current application scene according to the current scene image and the scene characteristic information;
and taking the privacy area detection neural network model corresponding to the current application scene as a target privacy area detection model.
3. The method of privacy protection as claimed in claim 2, wherein before the taking the privacy zone detection neural network detection model corresponding to the current application scenario as a target privacy zone detection model, further comprising:
acquiring sample training sets of different application scenes; the sample training set comprises a sample scene image and a privacy zone label corresponding to the sample scene image;
and training an initial neural network model by using the sample training set to obtain privacy area detection neural network detection models corresponding to different trained application scenes.
4. The method of claim 3, wherein the training an initial neural network model using the sample training set to obtain the trained privacy zone detection neural network detection models corresponding to different application scenarios comprises:
inputting the sample scene image into an initial neural network model for processing to obtain a sample privacy area corresponding to the sample scene image;
calculating a target loss value according to the sample privacy zone, the privacy zone label and a preset loss function;
if the target loss value does not meet the preset suspension condition, updating the initial neural network model according to the target loss value, and returning to execute the input of the sample scene image into the initial neural network model for processing to obtain a sample privacy area corresponding to the sample scene image;
and if the target loss value meets a preset suspension condition, outputting a trained privacy area detection neural network detection model.
5. The method of claim 3, wherein after the training of the initial neural network model using the sample training set to obtain the trained privacy zone detection neural network detection models corresponding to different application scenarios, the method further comprises:
and if it is detected that the sample training set has been updated, optimizing the privacy zone detection neural network detection model according to the updated sample training set.
6. The privacy protection method of claim 1, further comprising, after the acquiring of the position information of the target object relative to the target privacy zone:
and if the position information identifies that the target object is partially located in a non-privacy zone, normally displaying the target object, and performing distortion processing on the target privacy zone except the target area of the target object.
7. The privacy protection method of claim 1, wherein if the location information identifies that the target object is located within the target privacy zone, distorting the target privacy zone comprises:
and if the target object is judged to be located in the target privacy zone according to the preset privacy zone experience parameters and the position information, carrying out distortion processing on the target privacy zone.
8. The privacy protection method of claim 1, wherein before the distorting the target privacy zone if the location information identifies that the target object is located within the target privacy zone, further comprising:
acquiring the object type of the target object;
if the position information identifies that the target object is located in the target privacy area, performing distortion processing on the target privacy area, including:
and if the object type and the position information mark that the target object is located in the target privacy area, performing distortion processing on the target privacy area.
9. The privacy protection method of claim 1, wherein the distorting the target privacy zone comprises:
and compressing the target privacy area according to a preset compression strategy.
10. An apparatus for privacy protection, comprising:
the first acquisition unit is used for acquiring a current scene image;
the first processing unit is used for inputting the current scene image into a target privacy area detection model for detection to obtain a target privacy area corresponding to the current scene image;
the second processing unit is used for acquiring the position information of the target object relative to the target privacy area if the target object is detected in the target privacy area;
and the third processing unit is used for performing distortion processing on the target privacy area if the position information identifies that the target object is located in the target privacy area.
11. A privacy preserving apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method of any one of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1 to 9.
CN202110917455.XA 2021-08-11 2021-08-11 Privacy protection method, device and equipment Pending CN113627339A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110917455.XA CN113627339A (en) 2021-08-11 2021-08-11 Privacy protection method, device and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110917455.XA CN113627339A (en) 2021-08-11 2021-08-11 Privacy protection method, device and equipment

Publications (1)

Publication Number Publication Date
CN113627339A true CN113627339A (en) 2021-11-09

Family

ID=78384288

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110917455.XA Pending CN113627339A (en) 2021-08-11 2021-08-11 Privacy protection method, device and equipment

Country Status (1)

Country Link
CN (1) CN113627339A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114519818A (en) * 2022-01-14 2022-05-20 杭州未名信科科技有限公司 Method and device for detecting home scene, electronic equipment and medium
WO2023109299A1 (en) * 2021-12-13 2023-06-22 中兴通讯股份有限公司 Image processing method and apparatus, and device and storage medium
WO2024045591A1 (en) * 2022-08-31 2024-03-07 比亚迪股份有限公司 Privacy-glass control method and apparatus for rail vehicle, and rail vehicle
CN118509700A (en) * 2024-07-19 2024-08-16 浙江德施曼科技智能股份有限公司 Privacy processing method, device and equipment for intelligent lock lens and storage medium
CN118509700B (en) * 2024-07-19 2024-10-29 浙江德施曼科技智能股份有限公司 Privacy processing method, device and equipment for intelligent lock lens and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102473283A (en) * 2010-07-06 2012-05-23 松下电器产业株式会社 Image delivery device
CN108781277A (en) * 2016-03-23 2018-11-09 日本电气株式会社 Monitoring system, image processing equipment, image processing method and program recorded medium
CN111586361A (en) * 2020-05-19 2020-08-25 浙江大华技术股份有限公司 Image processing method and related device
CN112347512A (en) * 2020-11-13 2021-02-09 支付宝(杭州)信息技术有限公司 Image processing method, device, equipment and storage medium


Similar Documents

Publication Publication Date Title
CN113627339A (en) Privacy protection method, device and equipment
CN108038176B (en) Method and device for establishing passerby library, electronic equipment and medium
CN108268867B (en) License plate positioning method and device
CN108492431B (en) Intelligent access control system
CN109784274B (en) Method for identifying trailing and related product
CN103258191A (en) Community access control system based on face recognition
CN109116129B (en) Terminal detection method, detection device, system and storage medium
CN111444555B (en) Temperature measurement information display method and device and terminal equipment
US10373015B2 (en) System and method of detecting moving objects
CN112115979B (en) Fusion method and device of infrared image and visible image
CN110599514B (en) Image segmentation method and device, electronic equipment and storage medium
CN111612104A (en) Vehicle loss assessment image acquisition method, device, medium and electronic equipment
CN114387548A (en) Video and liveness detection method, system, device, storage medium and program product
CN105844649A (en) Statistical method, apparatus and system for the quantity of people
CN115147705B (en) Face copying detection method and device, electronic equipment and storage medium
CN111079687A (en) Certificate camouflage identification method, device, equipment and storage medium
CN113971821A (en) Driver information determination method and device, terminal device and storage medium
CN107832598B (en) Unlocking control method and related product
CN114898443A (en) Face data acquisition method and device
CN113869115A (en) Method and system for processing face image
CN112560683A (en) Method and device for identifying copied image, computer equipment and storage medium
CN116881967A (en) Privacy protection method, device and equipment
CN110751163B (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN112601054B (en) Pickup picture acquisition method and device, storage medium and electronic equipment
CN115719469A (en) Target identification method and device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination