CN113793258A - Privacy protection method and device for monitoring video image - Google Patents

Info

Publication number: CN113793258A
Application number: CN202111110376.4A
Authority: CN (China)
Prior art keywords: image, style, monitoring, original, loss function
Legal status: Pending
Other languages: Chinese (zh)
Inventors: 闫军, 丁丽珠
Current Assignee: Super Vision Technology Co Ltd
Original Assignee: Super Vision Technology Co Ltd
Application filed by: Super Vision Technology Co Ltd
Priority to: CN202111110376.4A
Publication of: CN113793258A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/04 Context-preserving transformations, e.g. by using an importance map
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/60 Protecting data
    • G06F 21/62 Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F 21/6218 Protecting access to data via a platform, e.g. using keys or access control rules to a system of files or objects, e.g. local or distributed file system or database
    • G06F 21/6245 Protecting personal data, e.g. for financial or medical purposes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/40 Filling a planar surface by adding surface attributes, e.g. colour or texture

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Bioethics (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computer Security & Cryptography (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses a privacy protection method and device for monitoring video images. The method comprises the following steps: generating a first generated image and a second generated image according to a monitoring original image and a style image; forming a first cyclic link from the monitoring original image, the style image, the original style migration model, the first generated image, the style original migration model and a third generated image, and forming a second cyclic link from the style image, the monitoring original image, the style original migration model, the second generated image, the original style migration model and a fourth generated image; adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in an initial monitoring privacy protection model to form a stable monitoring privacy protection model; and converting the monitoring original image into a privacy protection image according to the stable monitoring privacy protection model.

Description

Privacy protection method and device for monitoring video image
Technical Field
The invention relates to the technical field of image processing, in particular to a privacy protection method and a privacy protection device for monitoring video images.
Background
With the development of science and technology, surveillance cameras are now deployed throughout streets and lanes. The monitoring video images captured by these cameras play an important role in many tasks such as urban traffic management, vehicle identification, parking lot charging management, violation processing and pedestrian re-identification. In different application scenarios, or when different people view the same video image, privacy protection needs to be applied to the privacy area information in the frame image outside the target area that the viewer actually needs. The privacy area information can include private information about pedestrians, vehicles, license plates and the like in the monitoring video image.
However, conventional privacy protection methods for monitoring video images protect user privacy by transmitting only the image of the moving object, with the background rendered completely black or completely white rather than the scene actually captured by the camera. Because the background is filled with solid black or solid white and only the image of the moving target is kept, the realism of the monitoring video image in downstream applications is reduced.
Disclosure of Invention
The invention aims to solve the technical problem that conventional privacy protection methods for monitoring video images produce results of low authenticity. To this end, the present invention provides a privacy protection method for monitoring video images and a corresponding device.
The invention provides a privacy protection method for monitoring video images, which comprises the following steps:
acquiring a style image set and a monitoring original image set;
generating a first generated image and a second generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set, wherein the first generated image is obtained by inputting the monitoring original image and the style image into an original style migration model, and the second generated image is obtained by inputting the style image and the monitoring original image into a style original migration model;
forming a first cyclic link according to the monitoring original image, the style image, the original style migration model, the first generated image, the style original migration model and a third generated image, and forming a second cyclic link according to the style image, the monitoring original image, the style original migration model, the second generated image, the original style migration model and a fourth generated image, wherein the third generated image is obtained by inputting the first generated image into the style original migration model, and the fourth generated image is obtained by inputting the second generated image into the original style migration model;
adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in an initial monitoring privacy protection model to form a stable monitoring privacy protection model;
and converting the monitoring original image into a privacy protection image according to the stable monitoring privacy protection model.
In one embodiment, generating a first generated image from the monitoring original image in the monitoring original image set and the style image in the style image set comprises:
inputting the monitoring original image and the style image into an original style generator network, and outputting the first generated image;
inputting the style image and the first generated image into an original style discriminator network, and discriminating the style image and the first generated image;
wherein the primitive style generator network and the primitive style discriminator network form the primitive style migration model.
In one embodiment, generating a second generated image from the monitoring original image in the monitoring original image set and the style image in the style image set comprises:
inputting the style image and the monitoring original image into a style original generator network, and outputting the second generated image;
inputting the monitoring original image and the second generated image into a style original discriminator network, and discriminating the monitoring original image and the second generated image;
wherein the style original generator network and the style original discriminator network form the style original migration model.
In one embodiment, a first cyclic link is formed according to the monitoring original image, the first generated image and a third generated image, and the third generated image is obtained by inputting the first generated image into the style original generator network.
In one embodiment, a second cyclic link is formed according to the style image, the second generated image and a fourth generated image, and the fourth generated image is obtained by inputting the second generated image into the original style generator network.
In one embodiment, adjusting a loss function of the primitive style migration model, a loss function of the style primitive migration model, a loss function of the first cyclic link, and a loss function of the second cyclic link in an initial monitoring privacy protection model to form a stable monitoring privacy protection model includes:
adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in the initial monitoring privacy protection model according to the loss function of the monitoring privacy protection model;
the loss function of the monitoring privacy protection model is as follows:
Figure BDA0003270336100000033
wherein X represents the monitoring raw image, Y represents the style image, G represents the raw style generator network, DYRepresenting the primitive style discriminator network, F representing the style primitive generator network, DXRepresenting said network of primitive discriminators, LGAN(G,DYX, Y) represents a loss function of the primitive style migration model, LGAN(F,DXY, X) represents a loss function of the style primitive migration model, λ is a weight parameter,
Figure BDA0003270336100000031
representing the first cycleThe loss function of the link is then determined,
Figure BDA0003270336100000032
a loss function representing the second cyclic link;
adjusting the loss function of the monitoring privacy protection model to a target balance point to form a stable monitoring privacy protection model;
the target balance points are:
(G*, F*) = arg min_{G, F} max_{D_X, D_Y} L(G, F, D_X, D_Y)

wherein G* represents the optimized weights of the original style generator network, F* represents the optimized weights of the style original generator network, and the min-max formulation means that each discriminator network is trained to maximize the separation between the feature distribution of the real input images and the distribution of images produced by the corresponding generator network, while each generator network is trained to minimize that separation.
In one embodiment, the privacy protection method for monitoring video images further includes:
and replacing the target image area of the privacy protection image with the target image area of the monitoring original image to obtain the monitoring video image after privacy protection.
In one embodiment, the present invention provides a privacy preserving apparatus for monitoring video images, comprising:
the acquisition module is used for acquiring the style image set and the monitoring original image set;
the image generation module is used for generating a first generated image and a second generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set, wherein the first generated image is obtained by inputting the monitoring original image and the style image into an original style migration model, and the second generated image is obtained by inputting the style image and the monitoring original image into a style original migration model;
a cyclic link module, configured to form a first cyclic link according to the monitoring original image, the first generated image, and a third generated image, and form a second cyclic link according to the style image, the second generated image, and a fourth generated image, where the third generated image is obtained by inputting the first generated image into the style original migration model, and the fourth generated image is obtained by inputting the second generated image into the original style migration model;
the model optimization module is used for adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in an initial monitoring privacy protection model to form a stable monitoring privacy protection model;
and the privacy protection image generation module is used for converting the monitoring original image into a privacy protection image according to the stable monitoring privacy protection model.
In one embodiment, the image generation module includes a primitive style migration module. The original style migration module is used for inputting the monitoring original image and the style image into an original style generator network, outputting the first generated image, inputting the style image and the first generated image into an original style discriminator network, and discriminating the style image and the first generated image.
In one embodiment, the image generation module further comprises a style raw migration module. The style primitive migration module is configured to input the style image and the monitoring primitive image into a style primitive generator network, output the second generated image, input the monitoring primitive image and the second generated image into a style primitive discriminator network, and discriminate the monitoring primitive image and the second generated image.
In one embodiment, the third generated image in the cyclic link module is obtained by inputting the first generated image into the style original generator network.
In one embodiment, in the cyclic link module, the fourth generated image is obtained by inputting the second generated image into the primitive style generator network.
In one embodiment, the model optimization module comprises:
a parameter adjusting module, configured to adjust a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link, and a loss function of the second cyclic link in an initial monitoring privacy protection model according to a loss function of the monitoring privacy protection model;
wherein the loss function of the monitoring privacy protection model is:

L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ (L_cyc1(G, F) + L_cyc2(F, G)),

where the cyclic-link terms are

L_cyc1(G, F) = E_x[ ||F(G(x)) - x|| ]  and  L_cyc2(F, G) = E_y[ ||G(F(y)) - y|| ];

X represents the monitoring original image, Y represents the style image, G represents the original style generator network, D_Y represents the original style discriminator network, F represents the style original generator network, D_X represents the style original discriminator network, L_GAN(G, D_Y, X, Y) represents the loss function of the original style migration model, L_GAN(F, D_X, Y, X) represents the loss function of the style original migration model, λ is a weight parameter, L_cyc1(G, F) represents the loss function of the first cyclic link, and L_cyc2(F, G) represents the loss function of the second cyclic link;
the model generation module is used for adjusting the loss function of the monitoring privacy protection model to a target balance point to form a stable monitoring privacy protection model;
the target balance points are:
(G*, F*) = arg min_{G, F} max_{D_X, D_Y} L(G, F, D_X, D_Y)

wherein G* represents the optimized weights of the original style generator network, F* represents the optimized weights of the style original generator network, and the min-max formulation means that each discriminator network is trained to maximize the separation between the feature distribution of the real input images and the distribution of images produced by the corresponding generator network, while each generator network is trained to minimize that separation.
In one embodiment, the apparatus further comprises:
and the monitoring video image generating module is used for replacing the target image area of the privacy protection image with the target image area of the monitoring original image to obtain the monitoring video image after privacy protection.
In the privacy protection method for monitoring video images, the style image set and the monitoring original image set are constructed, and model optimization and iteration are performed on the original style migration model, the style original migration model, the first cyclic link and the second cyclic link according to the corresponding loss functions to obtain the monitoring privacy protection model. A monitoring original image that needs style migration can then be fed into the monitoring privacy protection model to obtain the style-migrated image, namely the privacy protection image. The privacy protection image is similar to the style image in stylistic aspects such as texture and color, sensitive information such as person features and license plate numbers is desensitized, the content and scene information of the monitoring original image are still retained, and the authenticity of the monitoring video image in application is improved.
Drawings
FIG. 1 is a schematic flow chart of a privacy protection method for monitoring video images provided by the present invention;
FIG. 2 is a schematic diagram of a first cyclic link and a second cyclic link provided by the present invention;
FIG. 3 is a schematic flow chart of a privacy protection method for monitoring video images provided by the present invention;
fig. 4 is a schematic structural diagram of a privacy protecting apparatus for monitoring video images provided by the present invention.
Detailed Description
The technical solution of the present invention is further described in detail by the accompanying drawings and embodiments.
Referring to fig. 1, the present invention provides a privacy protection method for monitoring video images, including:
s10, acquiring a style image set and a monitoring original image set;
s20, generating a first generated image and a second generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set, wherein the first generated image is obtained by inputting the monitoring original image and the style image into an original style migration model, and the second generated image is obtained by inputting the style image and the monitoring original image into the style original migration model and outputting the style image and the monitoring original image;
s30, forming a first cyclic link according to the monitoring original image, the style image, the original style migration model, the first generated image, the original style migration model, and a third generated image, and forming a second cyclic link according to the style image, the monitoring original image, the original style migration model, the second generated image, the original style migration model, and a fourth generated image, where the third generated image is obtained by inputting the first generated image into the original style migration model, and the fourth generated image is obtained by inputting the second generated image into the original style migration model;
s40, adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in an initial monitoring privacy protection model to form a stable monitoring privacy protection model;
s50, converting the monitoring original image into a privacy protection image according to the stable monitoring privacy protection model.
In S10, the stylistic image set includes a plurality of stylistic images. The style images can be collected by means of network collection, video frame interception and the like. The style image set can be an abstract style such as cartoon wind, cut-in wind, oil painting wind and the like. The abstract style can be animation wind, insertion drawing wind, oil drawing wind and other styles.
In S20, the monitoring original image and the style image are input into the original style migration model, and the first generated image is output, where the original style migration model is used to map the monitoring original image to the style image and distinguish the style image from the first generated image. The number of the monitoring original images input into the original style migration model can be one, two or three, and the like. The number of the style images input to the primitive style migration model may be one, two, three, or the like. The first generated image is an image obtained after the monitoring original image is mapped to the style image. The primitive style migration model may be used to determine whether the generated first generated image is real by discriminating between the style image and the first generated image. Whether the first generated image is authentic may be embodied by a loss function of the primitive style migration model.
Inputting the style image and the monitoring original image into the style original migration model, and outputting the second generated image, wherein the style original migration model is used for mapping the style image to the monitoring original image and distinguishing the monitoring original image from the second generated image. The number of the style images input to the style primitive migration model may be one, two, three, or the like. The number of the monitoring original images input into the style original migration model can be one, two or three, and the like. The second generated image is an image obtained after the style image is mapped to the monitoring original image. The style original migration model may be used to determine whether the generated second generated image is real by determining the monitoring original image and the second generated image. Whether the second generated image is real or not can be embodied by a loss function of the style primitive migration model.
In S30, the first generated image is input into the style original migration model, and the third generated image is output. Referring to fig. 2, the monitoring original image, the style image, the original style migration model, the first generated image, the style original migration model and the third generated image form a loop, i.e., the first cyclic link. During model training, this loop learns the mapping of the monitoring original image to the style image and back to the third generated image, which is highly similar to the monitoring original image. If X denotes the monitoring original image, Y denotes the style image, and the third generated image, owing to its high similarity to the monitoring original image, is denoted X', an X-Y-X' cyclic link is formed. The similarity between the third generated image X' and the monitoring original image X is reflected by the loss function of the first cyclic link, which demonstrates the feasibility of the monitoring privacy protection model of the present invention.
The second generated image is input into the original style migration model, and the fourth generated image is output. Referring to fig. 2, the style image, the monitoring original image, the style original migration model, the second generated image, the original style migration model and the fourth generated image form a loop, i.e., the second cyclic link. During model training, this loop simultaneously learns the mapping of the style image to the monitoring original image and back to the fourth generated image, which is highly similar to the style image. The similarity between the fourth generated image and the style image is reflected by the loss function of the second cyclic link, which likewise demonstrates the feasibility of the monitoring privacy protection model.
In S40, by adjusting parameters of the original style migration model and the style original migration model involved in S20 to S30, adjustment of a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link, and a loss function of the second cyclic link in the initial monitoring privacy protection model can be achieved, iterative optimization of the monitoring privacy protection model is achieved, and a more accurate and stable monitoring privacy protection model is formed. Therefore, the monitoring privacy protection model generates a more real privacy protection image, and the privacy of the monitoring video image is better protected.
In S50, the monitoring original image may be a monitoring original image from a specific application scenario, for example a parking lot scene, an intersection scene or a highway scene. The monitoring original image is input into the stable monitoring privacy protection model, and the privacy protection image is output.
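As a concrete illustration of S50, the following is a minimal PyTorch-style sketch of this inference step. It assumes the stable original style generator network has already been trained and saved as a whole module; the checkpoint path, the image file name and the 256 x 256 input size are hypothetical choices for the example, not values fixed by this disclosure.

```python
import torch
from PIL import Image
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the trained original style generator network G (hypothetical checkpoint path).
G = torch.load("original_style_generator.pt", map_location=device)
G.eval()

# Preprocess one monitoring original image to the size assumed during training.
preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),                       # HWC uint8 -> CHW float in [0, 1]
    transforms.Normalize([0.5] * 3, [0.5] * 3),  # map to [-1, 1]
])
x = preprocess(Image.open("parking_lot_frame.jpg").convert("RGB")).unsqueeze(0).to(device)

# Style migration: the output is the privacy protection image.
with torch.no_grad():
    privacy_image = G(x)

# Map back to [0, 1] for saving or display.
privacy_image = (privacy_image.clamp(-1, 1) + 1) / 2
```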
According to the privacy protection method for monitoring video images, the style image set and the monitoring original image set are constructed, and model optimization and iteration are performed on the original style migration model, the style original migration model, the first cyclic link and the second cyclic link according to the corresponding loss functions to obtain the monitoring privacy protection model. A monitoring original image that needs style migration can then be fed into the monitoring privacy protection model to obtain the style-migrated image, namely the privacy protection image. The privacy protection image is similar to the style image in stylistic aspects such as texture and color, sensitive information such as person features and license plate numbers is desensitized, the content and scene information of the monitoring original image are still retained, and the authenticity of the monitoring video image in application is improved.
Referring to fig. 3, in an embodiment, the method for protecting privacy of a surveillance video image further includes:
and S60, replacing the target image area of the privacy protection image with the target image area of the monitoring original image to obtain the monitoring video image after privacy protection.
In S60, the target image area of the privacy protection image refers to the characteristic area of the monitoring original image that needs to be preserved. For example, if the monitoring original image contains a target image area A that needs to be kept and an image area B that needs privacy protection, the target image area of the privacy protection image is replaced with the real target image area A, while the image area B that needs privacy protection keeps its stylized image features. Information that requires privacy protection may include sensitive information such as the facial features of a pedestrian or the license plate number of a vehicle. In this way, the privacy protection method for monitoring video images keeps a real target image area while protecting privacy-sensitive image areas, and also protects the privacy of the surrounding scene.
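The replacement in S60 can be implemented as a simple mask-based composition, as in the sketch below. It assumes the target image area (for example, a detected vehicle of interest) is already available as a binary mask aligned with the frame; how that mask is obtained (detection, tracking, manual annotation) is outside this snippet, and the function and variable names are illustrative.

```python
import numpy as np

def restore_target_region(original: np.ndarray,
                          privacy: np.ndarray,
                          target_mask: np.ndarray) -> np.ndarray:
    """Replace the target image area of the privacy protection image with the
    corresponding area of the monitoring original image.

    original, privacy: H x W x 3 arrays of the same size.
    target_mask:       H x W array, 1 inside target area A, 0 elsewhere.
    """
    mask = target_mask[..., None].astype(original.dtype)  # H x W x 1, broadcasts over channels
    # Target area A keeps the real pixels; the rest (area B) keeps the stylized pixels.
    return mask * original + (1.0 - mask) * privacy

# Usage: frame, stylized and vehicle_mask are assumed to be pre-computed arrays.
# protected_frame = restore_target_region(frame, stylized, vehicle_mask)
```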
In one embodiment, the style image set may consist of style images by the same creator, such as a set of animation-style images from the same director or a set of painting-style images by the same artist. Using a style image set from a single creator gives the set a consistent style, which facilitates the stylization of the monitoring original images.
In one embodiment, the size of the style image in the style image set is the same as the size of the monitoring original image in the monitoring original image set, which is beneficial for combining the style image and the monitoring video image to form the privacy protection image.
In one embodiment, the S20 includes:
s210, inputting the monitoring original image and the style image into an original style generator network, and outputting the first generated image;
s220, inputting the style image and the first generated image into an original style discriminator network for discriminating the style image and the first generated image;
wherein the primitive style generator network and the primitive style discriminator network form the primitive style migration model.
In S20, the original style migration model may be constructed as a Generative Adversarial Network (GAN). A GAN comprises a generator network and a discriminator network, and the two networks are trained iteratively against each other. Based on the generative adversarial network, the original style migration model can support applications such as image colorization, image style migration and image super-resolution.
In S210, the original style generator network may be a generator network, configured to generate a stylized image, map the monitoring original image to the style image, and output to generate the first generated image. The first generated image is a generated stylized image.
In S220, the primitive style discriminator network may be a discriminator network, configured to discriminate the style image from the first generated image, and further determine whether the generated first generated image (i.e., the generated stylized image) is real. The original style generator network and the original style discriminator network adjust respective parameters through mutual game, so that a loss function of the original style migration model reaches a target balance point, a better privacy protection image is generated, and better protection of privacy information is realized.
In one embodiment, in S210, the image feature size of the monitoring original image is 256 x 256 x 3 (H x W x C, Height x Width x Channels). The monitoring original image passes through several convolution layers, normalization layers and nonlinear activation layers to form a feature map whose spatial size is reduced to 64 x 64 with 256 channels. This feature map then undergoes further feature extraction through a residual convolution module and several deconvolution layers, restoring the image feature size to the original 256 x 256 x 3 and producing the first generated image (that is, the generated stylized image).
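A minimal PyTorch sketch of a generator with this overall structure is given below: strided convolutions with instance normalization and ReLU for downsampling, a stack of residual blocks at the 64 x 64 x 256 resolution, and transposed convolutions that restore the 256 x 256 x 3 output. The exact layer counts, kernel sizes and residual-block count are illustrative assumptions, not figures taken from this disclosure.

```python
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual convolution block: two 3x3 convolutions with a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        return x + self.body(x)

class StyleGenerator(nn.Module):
    """Illustrative generator: 256x256x3 image -> 64x64x256 features -> 256x256x3 image."""
    def __init__(self, n_residual: int = 6):
        super().__init__()
        self.down = nn.Sequential(                        # downsampling path
            nn.Conv2d(3, 64, 7, stride=1, padding=3),
            nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1),   # 256 -> 128
            nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, 3, stride=2, padding=1),  # 128 -> 64, 256 channels
            nn.InstanceNorm2d(256), nn.ReLU(inplace=True),
        )
        self.res = nn.Sequential(*[ResidualBlock(256) for _ in range(n_residual)])
        self.up = nn.Sequential(                          # upsampling back to input size
            nn.ConvTranspose2d(256, 128, 3, stride=2, padding=1, output_padding=1),  # 64 -> 128
            nn.InstanceNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, 3, stride=2, padding=1, output_padding=1),   # 128 -> 256
            nn.InstanceNorm2d(64), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 7, padding=3),
            nn.Tanh(),                                    # output image in [-1, 1]
        )

    def forward(self, x):
        return self.up(self.res(self.down(x)))
```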
In one embodiment, the residual convolution module includes a plurality of residual convolutional neural network blocks connected in series. The normalization layer can be an instance normalization layer or an adaptive instance normalization layer. The nonlinear activation layer uses a nonlinear activation function such as the Rectified Linear Unit (ReLU).
In one embodiment, in S220, the original style discriminator network is used to discriminate the authenticity of the first generated image. It is a binary classification network whose classification target is to distinguish data of the style images from data of the first generated images. The network has relatively few layers: the image features input to the original style discriminator network have size 256 x 256 x 3 (H x W x C, Height x Width x Channels), the feature map is down-sampled through several convolution operations, and the resulting features are then classified with a binary classification loss function, such as a cross-entropy loss function or a logarithmic loss function.
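A matching discriminator sketch follows. The LeakyReLU activation, the per-patch logit output and the specific channel widths are common choices assumed here for illustration; the disclosure itself only specifies a shallow stack of downsampling convolutions followed by a binary classification loss.

```python
import torch
import torch.nn as nn

class StyleDiscriminator(nn.Module):
    """Illustrative discriminator: strided convolutions that downsample the
    256x256x3 input, followed by a 1-channel real/fake logit map."""
    def __init__(self):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, 4, stride=2, padding=1),
                nn.InstanceNorm2d(c_out),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.features = nn.Sequential(
            block(3, 64),      # 256 -> 128
            block(64, 128),    # 128 -> 64
            block(128, 256),   # 64  -> 32
            block(256, 512),   # 32  -> 16
        )
        self.classifier = nn.Conv2d(512, 1, 4, padding=1)  # per-patch logits

    def forward(self, x):
        return self.classifier(self.features(x))

# Binary cross-entropy on the logits separates real images from generated ones.
bce = nn.BCEWithLogitsLoss()

def d_loss(d, real, fake):
    """Discriminator loss: real images labeled 1, generated images labeled 0."""
    real_logits, fake_logits = d(real), d(fake.detach())
    return bce(real_logits, torch.ones_like(real_logits)) + \
           bce(fake_logits, torch.zeros_like(fake_logits))
```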
In one embodiment, the S20 further includes:
s230, inputting the style image and the monitoring original image into a style original generator network, and outputting the second generated image;
s240, inputting the monitoring original image and the second generated image into a style original discriminator network for discriminating the monitoring original image and the second generated image;
wherein the style original generator network and the style original discriminator network form the style original migration model.
In S20, the style original migration model may likewise be constructed as a generative adversarial network. Based on the generative adversarial network, the style original migration model can also support applications such as image colorization, image style migration and image super-resolution.
In S230, the style original generator network may be a generator network configured to map the style image to the domain of the monitoring original image and output the second generated image. The second generated image is therefore an image generated in the monitoring original image domain.
In S240, the style original discriminator network may be a discriminator network configured to discriminate the monitoring original image from the second generated image, and thereby determine whether the generated second generated image is real. The style original generator network and the style original discriminator network adjust their respective parameters by playing against each other, so that the loss function of the style original migration model reaches the target balance point, a better privacy protection image is generated, and the privacy information is better protected.
In an embodiment, each functional layer of the style original generator network in S230 is the same as the corresponding functional layer of the original style generator network in S210, and the input image feature size is also the same; for details, refer to the description of S210 in the foregoing embodiment.
In an embodiment, each functional layer of the style original discriminator network in S240 is the same as the corresponding functional layer of the original style discriminator network in S220, and the input image feature size is also the same; for details, refer to the description of S220 in the foregoing embodiment.
In one embodiment, in S30 the third generated image is obtained by inputting the first generated image into the style original generator network and taking its output. The monitoring original image, the style image, the original style generator network, the first generated image, the style original generator network and the third generated image form the first cyclic link. If X denotes the monitoring original image, Y denotes the style image, and the third generated image, which is highly similar to the monitoring original image, is denoted X', an X-Y-X' cyclic link is formed.
In one embodiment, in S30, the fourth generated image is obtained by inputting the second generated image into the primitive style generator network.
The style image, the monitoring original image, the style original generator network, the second generated image, the original style generator network and the fourth generated image form the second cyclic link. The fourth generated image, which is highly similar to the style image, is denoted Y', so a Y-X-Y' cyclic link is formed.
Thus, the first cyclic link and the second cyclic link adopt the training scheme of cycle-consistent generative adversarial networks (CycleGAN). The scheme uses unpaired training data and two mirror-symmetric generative adversarial networks, and during model training it simultaneously learns the process of mapping the monitoring original image X to the style image Y and back to the third generated image X', and the process of mapping the style image Y to the monitoring original image X and back to the fourth generated image Y'.
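The two cyclic links can be expressed directly as cycle-consistency terms. The sketch below computes the X-Y-X' and Y-X-Y' reconstruction losses with an L1 norm (an L2 norm could be used instead, see the note on loss functions further below); G and F stand for the original style and style original generator networks, and the helper name is an assumption for illustration.

```python
import torch.nn.functional as F_loss  # aliased to avoid clashing with the generator F

def cycle_losses(G, F, x, y):
    """x: batch of monitoring original images, y: batch of style images."""
    fake_y = G(x)              # first generated image   (X -> Y)
    rec_x = F(fake_y)          # third generated image   (X -> Y -> X')
    fake_x = F(y)              # second generated image  (Y -> X)
    rec_y = G(fake_x)          # fourth generated image  (Y -> X -> Y')
    loss_cycle_1 = F_loss.l1_loss(rec_x, x)  # first cyclic link:  ||F(G(x)) - x||
    loss_cycle_2 = F_loss.l1_loss(rec_y, y)  # second cyclic link: ||G(F(y)) - y||
    return loss_cycle_1, loss_cycle_2
```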
In one embodiment, the S40 includes:
s410, adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in the initial monitoring privacy protection model according to the loss function of the monitoring privacy protection model;
wherein the loss function of the monitoring privacy protection model is:

L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ (L_cyc1(G, F) + L_cyc2(F, G)),

where the cyclic-link terms are

L_cyc1(G, F) = E_x[ ||F(G(x)) - x|| ]  and  L_cyc2(F, G) = E_y[ ||G(F(y)) - y|| ];

X represents the monitoring original image, Y represents the style image, G represents the original style generator network, D_Y represents the original style discriminator network, F represents the style original generator network, D_X represents the style original discriminator network, L_GAN(G, D_Y, X, Y) represents the loss function of the original style migration model, L_GAN(F, D_X, Y, X) represents the loss function of the style original migration model, λ is a weight parameter, L_cyc1(G, F) represents the loss function of the first cyclic link, and L_cyc2(F, G) represents the loss function of the second cyclic link;
s420, adjusting the loss function of the monitoring privacy protection model to a target balance point to form a stable monitoring privacy protection model;
the target balance points are:
(G*, F*) = arg min_{G, F} max_{D_X, D_Y} L(G, F, D_X, D_Y)

wherein G* represents the optimized weights of the original style generator network, F* represents the optimized weights of the style original generator network, and the min-max formulation means that each discriminator network is trained to maximize the separation between the feature distribution of the real input images and the distribution of images produced by the corresponding generator network, while each generator network is trained to minimize that separation.
In S410, the monitoring privacy protection model is trained and optimized by adjusting its loss function. L_GAN(G, D_Y, X, Y) can also be understood as the loss function of the original style generator network and the original style discriminator network, and L_GAN(F, D_X, Y, X) as the loss function of the style original generator network and the style original discriminator network. L_cyc1(G, F) can be understood as the loss of mapping the monitoring original image to the style image and back to the third generated image (which is highly similar to the monitoring original image), and L_cyc2(F, G) as the loss of mapping the style image to the monitoring original image and back to the fourth generated image (which is highly similar to the style image). By adjusting the parameters of the original style generator network G, the original style discriminator network D_Y, the style original generator network F and the style original discriminator network D_X, the terms L_GAN(G, D_Y, X, Y), L_GAN(F, D_X, Y, X), L_cyc1(G, F) and L_cyc2(F, G) can be adjusted, and thereby the loss function of the monitoring privacy protection model is adjusted.
In S420, the target balance point is the adjustment target of the loss function of the monitoring privacy protection model. The discriminator networks in the min-max formulation comprise the original style discriminator network and the style original discriminator network, and the generator networks comprise the original style generator network and the style original generator network.
Specifically, the min-max formulation means: the original style discriminator network is trained to maximize the separation between the feature distribution of the real style images and the distribution of images produced by the original style generator network, while the original style generator network is trained to minimize that separation; likewise, the style original discriminator network is trained to maximize the separation between the feature distribution of the real monitoring original images and the distribution of images produced by the style original generator network, while the style original generator network is trained to minimize that separation.
By adjusting the parameters of the original style generator network G, the original style discriminator network D_Y, the style original generator network F and the style original discriminator network D_X so that the loss function of the monitoring privacy protection model reaches the target balance point, the monitoring original image is stylized, sensitive information such as person features and license plates is desensitized and protected, the target image area is later replaced with the original image features, and the authenticity of the monitoring original image is kept.
In one embodiment, when searching for the target balance point, the difference between the feature distribution of the input images and the distribution generated by the generator networks may be measured with functions such as the KL divergence (Kullback-Leibler divergence) or the JS divergence (Jensen-Shannon divergence).
In one embodiment, the cycle-consistency loss (the norm used in L_cyc1 and L_cyc2 above) may be an L1 norm loss function or an L2 norm loss function.
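Putting these pieces together, one possible training step for the initial monitoring privacy protection model alternates between updating the two generators with the adversarial terms plus λ times the two cycle-consistency terms (the min part of the min-max objective) and updating the two discriminators (the max part). This is only a sketch under stated assumptions: the optimizer settings, the value of λ and the helper names (cycle_losses, d_loss, bce from the earlier sketches) are illustrative, not prescribed by the disclosure.

```python
import itertools
import torch

# G: original style generator, F: style original generator
# D_Y: original style discriminator, D_X: style original discriminator
def make_optimizers(G, F, D_X, D_Y, lr=2e-4):
    opt_g = torch.optim.Adam(itertools.chain(G.parameters(), F.parameters()),
                             lr=lr, betas=(0.5, 0.999))
    opt_d = torch.optim.Adam(itertools.chain(D_X.parameters(), D_Y.parameters()),
                             lr=lr, betas=(0.5, 0.999))
    return opt_g, opt_d

def train_step(G, F, D_X, D_Y, x, y, opt_g, opt_d, bce, lam=10.0):
    # 1) Generator update: fool the discriminators and keep the cycles consistent.
    fake_y, fake_x = G(x), F(y)
    pred_y, pred_x = D_Y(fake_y), D_X(fake_x)
    adv_g = bce(pred_y, torch.ones_like(pred_y))   # L_GAN(G, D_Y, X, Y), generator side
    adv_f = bce(pred_x, torch.ones_like(pred_x))   # L_GAN(F, D_X, Y, X), generator side
    cyc1, cyc2 = cycle_losses(G, F, x, y)          # recomputes the forward passes for clarity
    loss_g = adv_g + adv_f + lam * (cyc1 + cyc2)   # lam plays the role of the weight λ
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # 2) Discriminator update: separate real images from generated ones.
    loss_d = d_loss(D_Y, y, fake_y) + d_loss(D_X, x, fake_x)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    return loss_g.item(), loss_d.item()
```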
Referring to fig. 4, in one embodiment, the invention provides a privacy protecting apparatus 100 for monitoring video images. The privacy protecting apparatus 100 for monitoring video images comprises an acquisition module 10, an image generation module 20, a cyclic link module 30, a model optimization module 40 and a privacy protection image generation module 50. The acquisition module 10 is configured to acquire a style image set and a monitoring original image set. The image generation module 20 is configured to generate a first generated image and a second generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set, wherein the first generated image is obtained by inputting the monitoring original image and the style image into an original style migration model, and the second generated image is obtained by inputting the style image and the monitoring original image into a style original migration model.
The cyclic link module 30 is configured to form a first cyclic link according to the monitoring original image, the first generated image, and a third generated image, and form a second cyclic link according to the style image, the second generated image, and a fourth generated image, where the third generated image is obtained by inputting the first generated image into the style original migration model, and the fourth generated image is obtained by inputting the second generated image into the original style migration model. The model optimization module 40 is configured to adjust a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link, and a loss function of the second cyclic link in the initial monitoring privacy protection model to form a stable monitoring privacy protection model. The privacy-preserving image generation module 50 is configured to convert the monitoring original image into a privacy-preserving image according to the stable monitoring privacy-preserving model.
In the privacy protecting apparatus 100 for monitoring video images, the obtaining module 10, the image generating module 20, the cyclic link module 30, the model optimizing module 40, and the privacy protecting image generating module 50 are respectively in one-to-one correspondence with the steps S10 to S50 in the privacy protecting method for monitoring video images, and the description in the above embodiments may be referred to for relevant description.
In one embodiment, the image generation module 20 includes a primitive style migration module (not shown). The original style migration module is used for inputting the monitoring original image and the style image into an original style generator network, outputting the first generated image, inputting the style image and the first generated image into an original style discriminator network, and discriminating the style image and the first generated image.
The primitive style migration modules correspond to the S210 and the S220 in a one-to-one manner, and the related description may refer to the description in the above embodiments.
In one embodiment, the image generation module 20 further comprises a style raw migration module (not shown). The style original migration module is used for inputting the style image and the monitoring original image into a style original generator network, outputting the second generated image, inputting the monitoring original image and the second generated image into a style original discriminator network, and discriminating the monitoring original image and the second generated image.
The style primitive migration modules correspond to the S230 and the S240 in a one-to-one manner, and the related description may refer to the description in the above embodiments.
In one embodiment, the third generated image in the cyclic link module is obtained by inputting the first generated image into the style original generator network; the relevant description may refer to the description of S30 in the above embodiment.
In one embodiment, in the cyclic link module, the fourth generated image is obtained by inputting the second generated image into the primitive style generator network, and the relevant description may refer to the relevant description in the above embodiment in the S30.
In one embodiment, the model optimization module 40 includes a parameter adjustment module (not shown) and a model generation module (not shown). The parameter adjusting module is configured to adjust a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link, and a loss function of the second cyclic link in the initial monitoring privacy protection model according to the loss function of the monitoring privacy protection model. The model generation module is used for adjusting the loss function of the monitoring privacy protection model to a target balance point to form a stable monitoring privacy protection model.
Wherein the loss function of the monitoring privacy protection model is:

L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ (L_cyc1(G, F) + L_cyc2(F, G)),

where the cyclic-link terms are

L_cyc1(G, F) = E_x[ ||F(G(x)) - x|| ]  and  L_cyc2(F, G) = E_y[ ||G(F(y)) - y|| ];

X represents the monitoring original image, Y represents the style image, G represents the original style generator network, D_Y represents the original style discriminator network, F represents the style original generator network, D_X represents the style original discriminator network, L_GAN(G, D_Y, X, Y) represents the loss function of the original style migration model, L_GAN(F, D_X, Y, X) represents the loss function of the style original migration model, λ is a weight parameter, L_cyc1(G, F) represents the loss function of the first cyclic link, and L_cyc2(F, G) represents the loss function of the second cyclic link.
The target balance points are:
(G*, F*) = arg min_{G, F} max_{D_X, D_Y} L(G, F, D_X, D_Y)

wherein G* represents the optimized weights of the original style generator network, F* represents the optimized weights of the style original generator network, and the min-max formulation means that each discriminator network is trained to maximize the separation between the feature distribution of the real input images and the distribution of images produced by the corresponding generator network, while each generator network is trained to minimize that separation.
And the privacy protection image generation module is used for inputting the monitoring original image into the stable monitoring privacy protection model and outputting the privacy protection image.
The parameter adjusting modules correspond to the S410 in a one-to-one manner, and the description in the above embodiments may be referred to for related description. The model generation modules correspond to the S420 in a one-to-one manner, and the description in the above embodiments may be referred to for related description.
Referring to fig. 4, in an embodiment, the privacy protecting apparatus 100 for monitoring video images further includes a monitoring video image generating module 60. The surveillance video image generation module 60 is configured to replace the target image area of the privacy-protected image with the target image area of the surveillance original image, so as to obtain a surveillance video image after privacy protection.
The monitoring video image generation module 60 corresponds to the S60 one by one, and the related description may refer to the description in the above embodiment.
In an embodiment, the obtaining module 10, the image generating module 20, the cyclic link module 30, the model optimizing module 40, the privacy protecting image generating module 50, and the monitoring video image generating module 60 may include, but are not limited to, a Central Processing Unit (CPU), an embedded MicroController Unit (MCU), an embedded microprocessor Unit (MPU), and an embedded System on Chip (SoC).
In one embodiment, the invention provides a computer apparatus comprising a processor and a memory for storing a computer program. The processor is configured to execute the program code stored in the memory to implement the method steps in any of the above embodiments. Computer devices may differ in configuration or performance. The memory stores at least one instruction that is loaded and executed by the processor to implement the methods provided by the various embodiments described above. The computer device may also have components such as a wired or wireless network interface, a keyboard, and an input-output interface for input and output, and may further include other components for implementing the device functions, which are not described herein.
In one embodiment, the invention provides a computer-readable storage medium. The computer-readable storage medium has stored therein a program code, which when executed by a processor implements the method steps of any of the above embodiments. The computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, and the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
In one embodiment, the privacy protection method and the privacy protection device for the monitoring video image can be applied to scenes of viewing the monitoring video image, providing vehicle violation evidence for an owner and the like. The privacy protection method and the privacy protection device of the monitoring video image can carry out privacy protection on privacy areas except for the target image area of the monitoring original image and present a real target image.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (14)

1. A privacy protection method for monitoring video images is characterized by comprising the following steps:
acquiring a style image set and a monitoring original image set;
generating a first generated image and a second generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set, wherein the first generated image is obtained by inputting the monitoring original image and the style image into an original style migration model, and the second generated image is obtained by inputting the style image and the monitoring original image into a style original migration model;
forming a first cyclic link according to the monitoring original image, the style image, the original style migration model, the first generated image, the style original migration model and a third generated image, and forming a second cyclic link according to the style image, the monitoring original image, the style original migration model, the second generated image, the original style migration model and a fourth generated image, wherein the third generated image is obtained by inputting the first generated image into the style original migration model, and the fourth generated image is obtained by inputting the second generated image into the original style migration model;
adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in an initial monitoring privacy protection model to form a stable monitoring privacy protection model;
and converting the monitoring original image into a privacy protection image according to the stable monitoring privacy protection model.
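As a reading aid for claim 1, the sketch below traces the two generated images and the two cyclic links with tiny stand-in generators, in the manner of cycle-consistent adversarial training. Python/PyTorch, the toy architectures and the tensor sizes are illustrative assumptions, not the patented networks.

```python
import torch
import torch.nn as nn

def tiny_generator():
    # stand-in image-to-image generator (3-channel in, 3-channel out); placeholder architecture
    return nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 3, 3, padding=1), nn.Tanh())

G = tiny_generator()   # original style generator: monitoring original -> style domain
F = tiny_generator()   # style original generator: style -> monitoring original domain

x = torch.randn(1, 3, 128, 128)   # monitoring original image (normalized)
y = torch.randn(1, 3, 128, 128)   # style image (normalized)

first  = G(x)       # first generated image  (original -> style)
second = F(y)       # second generated image (style -> original)
third  = F(first)   # closes the first cyclic link:  x -> G -> F -> third, should reconstruct x
fourth = G(second)  # closes the second cyclic link: y -> F -> G -> fourth, should reconstruct y
```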
2. The method of claim 1, wherein generating the first generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set comprises:
inputting the monitoring original image and the style image into an original style generator network, and outputting the first generated image;
inputting the style image and the first generated image into an original style discriminator network, and discriminating the style image and the first generated image;
wherein the original style generator network and the original style discriminator network form the original style migration model.
3. The method of claim 2, wherein generating the second generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set comprises:
inputting the style image and the monitoring original image into a style original generator network, and outputting the second generated image;
inputting the monitoring original image and the second generated image into a style original discriminator network, and discriminating the monitoring original image and the second generated image;
wherein the style original generator network and the style original discriminator network form the style original migration model.
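Continuing the sketch above, claims 2 and 3 pair each generator with a discriminator. The fragment below shows one way the discrimination step could look; the patch-style discriminator and the least-squares adversarial loss are assumptions borrowed from common cycle-consistent GAN practice, not taken from the claims.

```python
import torch
import torch.nn as nn

def tiny_discriminator():
    # stand-in patch discriminator: image -> map of realness scores (placeholder architecture)
    return nn.Sequential(nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
                         nn.Conv2d(16, 1, 4, stride=2, padding=1))

D_Y = tiny_discriminator()  # original style discriminator: style images vs. first generated images
D_X = tiny_discriminator()  # style original discriminator: monitoring originals vs. second generated images

mse = nn.MSELoss()  # least-squares GAN criterion (an assumption)

def adversarial_losses(real_style, first_generated, real_original, second_generated):
    # original style migration model: D_Y discriminates the style image and the first generated image
    pred_real_y = D_Y(real_style)
    pred_fake_y = D_Y(first_generated.detach())
    d_y_loss = mse(pred_real_y, torch.ones_like(pred_real_y)) + \
               mse(pred_fake_y, torch.zeros_like(pred_fake_y))
    # style original migration model: D_X discriminates the monitoring original and the second generated image
    pred_real_x = D_X(real_original)
    pred_fake_x = D_X(second_generated.detach())
    d_x_loss = mse(pred_real_x, torch.ones_like(pred_real_x)) + \
               mse(pred_fake_x, torch.zeros_like(pred_fake_x))
    return d_y_loss, d_x_loss
```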
4. The method of claim 3, wherein the first cyclic link is formed according to the monitoring original image, the first generated image and a third generated image, and the third generated image is obtained by inputting the first generated image into the style original generator network.
5. The method of claim 4, wherein the second cyclic link is formed according to the style image, the second generated image and a fourth generated image, and the fourth generated image is obtained by inputting the second generated image into the original style generator network.
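Claims 4 and 5 tie the cyclic links to reconstruction of the inputs. A common way to score them, assumed here since the claims do not fix the norm, is an L1 penalty between each cycle output and the image it should reproduce, as sketched below with placeholder names.

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cyclic_link_losses(x, y, third_generated, fourth_generated):
    # first cyclic link:  x -> G -> first -> F -> third, so third should reproduce x
    loss_cyc_1 = l1(third_generated, x)
    # second cyclic link: y -> F -> second -> G -> fourth, so fourth should reproduce y
    loss_cyc_2 = l1(fourth_generated, y)
    return loss_cyc_1, loss_cyc_2
```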
6. The method according to claim 5, wherein the adjusting the loss function of the original style migration model, the loss function of the style original migration model, the loss function of the first cyclic link, and the loss function of the second cyclic link in the initial monitoring privacy protection model to form a stable monitoring privacy protection model comprises:
adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in the initial monitoring privacy protection model according to the loss function of the monitoring privacy protection model;
wherein the loss function of the monitoring privacy protection model is:
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ (L_cyc1(G, F) + L_cyc2(F, G))
wherein X represents the monitoring original image, Y represents the style image, G represents the original style generator network, D_Y represents the original style discriminator network, F represents the style original generator network, D_X represents the style original discriminator network, L_GAN(G, D_Y, X, Y) represents the loss function of the original style migration model, L_GAN(F, D_X, Y, X) represents the loss function of the style original migration model, and λ is a weight parameter;
L_cyc1(G, F) = E_{x~p_data(x)}[ ||F(G(x)) - x||_1 ] represents the loss function of the first cyclic link, and
L_cyc2(F, G) = E_{y~p_data(y)}[ ||G(F(y)) - y||_1 ] represents the loss function of the second cyclic link;
adjusting the loss function of the monitoring privacy protection model to a target balance point to form a stable monitoring privacy protection model;
the target balance point is:
(G*, F*) = arg min_{G, F} max_{D_X, D_Y} L(G, F, D_X, D_Y)
wherein G* represents the optimal weights of the original style generator network, F* represents the optimal weights of the style original generator network, and the min-max formulation represents training the discriminator networks to maximize the difference between the feature distribution of the input images and the generated distribution of the generator networks, while training the generator networks to minimize that difference.
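Putting the pieces of claim 6 together, the sketch below shows one possible alternating min-max training step over the combined loss. It reuses the placeholder networks and helpers from the earlier sketches (G, F, D_X, D_Y, mse, l1, adversarial_losses); the optimizer, the learning rate and the weight λ = 10 are illustrative assumptions, not values from the patent.

```python
import itertools
import torch

lam = 10.0  # weight parameter lambda (illustrative value)

opt_G = torch.optim.Adam(itertools.chain(G.parameters(), F.parameters()), lr=2e-4)
opt_D = torch.optim.Adam(itertools.chain(D_X.parameters(), D_Y.parameters()), lr=2e-4)

def training_step(x, y):
    # generator (min) step: fool both discriminators and keep both cyclic links consistent
    first, second = G(x), F(y)
    third, fourth = F(first), G(second)
    pred_y, pred_x = D_Y(first), D_X(second)
    loss_g = (mse(pred_y, torch.ones_like(pred_y))     # generator part of L_GAN(G, D_Y, X, Y)
              + mse(pred_x, torch.ones_like(pred_x))   # generator part of L_GAN(F, D_X, Y, X)
              + lam * (l1(third, x) + l1(fourth, y)))  # first and second cyclic link losses
    opt_G.zero_grad(); loss_g.backward(); opt_G.step()

    # discriminator (max) step: push the real and generated distributions apart
    loss_d_y, loss_d_x = adversarial_losses(y, first, x, second)
    opt_D.zero_grad(); (loss_d_y + loss_d_x).backward(); opt_D.step()
    return loss_g.item(), (loss_d_y + loss_d_x).item()
```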
7. The privacy protection method for surveillance video images as claimed in claim 1, further comprising:
and replacing the target image area of the privacy protection image with the target image area of the monitoring original image to obtain the monitoring video image after privacy protection.
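Claim 7 restores only the target image area. The earlier pipeline sketch pasted a rectangular box; the fragment below generalizes that to an arbitrary mask. The boolean-mask representation is an assumption, since the claim does not fix the shape of the region.

```python
import numpy as np

def restore_target_region(privacy_image: np.ndarray, original_image: np.ndarray,
                          target_mask: np.ndarray) -> np.ndarray:
    """target_mask: boolean HxW array, True inside the target image area."""
    out = privacy_image.copy()
    out[target_mask] = original_image[target_mask]   # real pixels inside the target area
    return out                                       # stylized (privacy-protected) elsewhere
```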
8. A privacy preserving apparatus for monitoring video images, comprising:
the acquisition module is used for acquiring the style image set and the monitoring original image set;
the image generation module is used for generating a first generated image and a second generated image according to the monitoring original image in the monitoring original image set and the style image in the style image set, wherein the first generated image is obtained by inputting the monitoring original image and the style image into an original style migration model, and the second generated image is obtained by inputting the style image and the monitoring original image into a style original migration model;
a cyclic link module, configured to form a first cyclic link according to the monitoring original image, the first generated image, and a third generated image, and form a second cyclic link according to the style image, the second generated image, and a fourth generated image, where the third generated image is obtained by inputting the first generated image into the style original migration model, and the fourth generated image is obtained by inputting the second generated image into the original style migration model;
the model optimization module is used for adjusting a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link and a loss function of the second cyclic link in an initial monitoring privacy protection model to form a stable monitoring privacy protection model;
and the privacy protection image generation module is used for converting the monitoring original image into a privacy protection image according to the stable monitoring privacy protection model.
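As a structural aid for claim 8, the skeleton below shows one way the five modules could cooperate at run time; every class, method and attribute name is a placeholder invented for illustration, not the claimed apparatus itself.

```python
class MonitoringPrivacyApparatus:
    """Placeholder composition of the modules recited in claim 8."""

    def __init__(self, acquisition, image_generation, cyclic_link, model_optimization, protection):
        self.acquisition = acquisition                # acquisition module: style set + monitoring original set
        self.image_generation = image_generation      # produces the first and second generated images
        self.cyclic_link = cyclic_link                # builds the first and second cyclic links
        self.model_optimization = model_optimization  # adjusts the four loss terms until stable
        self.protection = protection                  # converts originals into privacy protection images

    def protect(self, monitoring_original):
        styles, originals = self.acquisition.load()
        model = self.model_optimization.train(self.image_generation, self.cyclic_link,
                                              styles, originals)
        return self.protection.convert(model, monitoring_original)
```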
9. The apparatus for protecting privacy of surveillance video images according to claim 8, wherein the image generation module comprises:
and the original style migration module is used for inputting the monitoring original image and the style image into an original style generator network, outputting the first generated image, inputting the style image and the first generated image into an original style discriminator network, and discriminating the style image and the first generated image.
10. The apparatus for privacy protection of surveillance video images according to claim 9, wherein the image generation module further comprises:
and the style original migration module is used for inputting the style image and the monitoring original image into a style original generator network, outputting the second generated image, inputting the monitoring original image and the second generated image into a style original discriminator network, and discriminating the monitoring original image and the second generated image.
11. The apparatus for privacy protection of surveillance video images as claimed in claim 10, wherein the third generated image in the cyclic link module is obtained by inputting the first generated image into the style original generator network.
12. The apparatus for privacy protection of surveillance video images as claimed in claim 11, wherein the fourth generated image in the cyclic link module is obtained by inputting the second generated image into the original style generator network.
13. The apparatus for privacy protection of surveillance video images as claimed in claim 12, wherein the model optimization module comprises:
a parameter adjusting module, configured to adjust a loss function of the original style migration model, a loss function of the style original migration model, a loss function of the first cyclic link, and a loss function of the second cyclic link in an initial monitoring privacy protection model according to a loss function of the monitoring privacy protection model;
wherein the loss function of the monitoring privacy protection model is:
L(G, F, D_X, D_Y) = L_GAN(G, D_Y, X, Y) + L_GAN(F, D_X, Y, X) + λ (L_cyc1(G, F) + L_cyc2(F, G))
wherein X represents the monitoring original image, Y represents the style image, G represents the original style generator network, D_Y represents the original style discriminator network, F represents the style original generator network, D_X represents the style original discriminator network, L_GAN(G, D_Y, X, Y) represents the loss function of the original style migration model, L_GAN(F, D_X, Y, X) represents the loss function of the style original migration model, and λ is a weight parameter;
L_cyc1(G, F) = E_{x~p_data(x)}[ ||F(G(x)) - x||_1 ] represents the loss function of the first cyclic link, and
L_cyc2(F, G) = E_{y~p_data(y)}[ ||G(F(y)) - y||_1 ] represents the loss function of the second cyclic link;
the model generation module is used for adjusting the loss function of the monitoring privacy protection model to a target balance point to form a stable monitoring privacy protection model;
the target balance point is:
(G*, F*) = arg min_{G, F} max_{D_X, D_Y} L(G, F, D_X, D_Y)
wherein G* represents the optimal weights of the original style generator network, F* represents the optimal weights of the style original generator network, and the min-max formulation represents training the discriminator networks to maximize the difference between the feature distribution of the input images and the generated distribution of the generator networks, while training the generator networks to minimize that difference.
14. The apparatus for privacy protection of surveillance video images as claimed in claim 13, further comprising:
and the monitoring video image generating module is used for replacing the target image area of the privacy protection image with the target image area of the monitoring original image to obtain the monitoring video image after privacy protection.
CN202111110376.4A 2021-09-18 2021-09-18 Privacy protection method and device for monitoring video image Pending CN113793258A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111110376.4A CN113793258A (en) 2021-09-18 2021-09-18 Privacy protection method and device for monitoring video image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111110376.4A CN113793258A (en) 2021-09-18 2021-09-18 Privacy protection method and device for monitoring video image

Publications (1)

Publication Number Publication Date
CN113793258A true CN113793258A (en) 2021-12-14

Family

ID=79184163

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111110376.4A Pending CN113793258A (en) 2021-09-18 2021-09-18 Privacy protection method and device for monitoring video image

Country Status (1)

Country Link
CN (1) CN113793258A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463992A (en) * 2022-02-11 2022-05-10 超级视线科技有限公司 Night roadside parking management video conversion method and device

Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220929A (en) * 2017-06-23 2017-09-29 深圳市唯特视科技有限公司 A kind of non-paired image method for transformation using the consistent confrontation network of circulation
CN108615073A (en) * 2018-04-28 2018-10-02 北京京东金融科技控股有限公司 Image processing method and device, computer readable storage medium, electronic equipment
CN109829849A (en) * 2019-01-29 2019-05-31 深圳前海达闼云端智能科技有限公司 A kind of generation method of training data, device and terminal
CN109859096A (en) * 2018-12-28 2019-06-07 北京达佳互联信息技术有限公司 Image Style Transfer method, apparatus, electronic equipment and storage medium
CN110363183A (en) * 2019-07-30 2019-10-22 贵州大学 Service robot visual method for secret protection based on production confrontation network
CN110458216A (en) * 2019-07-31 2019-11-15 中山大学 The image Style Transfer method of confrontation network is generated based on condition
CN110766638A (en) * 2019-10-31 2020-02-07 北京影谱科技股份有限公司 Method and device for converting object background style in image
CN110930295A (en) * 2019-10-25 2020-03-27 广东开放大学(广东理工职业学院) Image style migration method, system, device and storage medium
CN111161132A (en) * 2019-11-15 2020-05-15 上海联影智能医疗科技有限公司 System and method for image style conversion
CN111179299A (en) * 2018-11-09 2020-05-19 珠海格力电器股份有限公司 Image processing method and device
CN111402151A (en) * 2020-03-09 2020-07-10 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111524205A (en) * 2020-04-23 2020-08-11 北京信息科技大学 Image coloring processing method and device based on loop generation countermeasure network
CN111753908A (en) * 2020-06-24 2020-10-09 北京百度网讯科技有限公司 Image classification method and device and style migration model training method and device
CN112330522A (en) * 2020-11-09 2021-02-05 深圳市威富视界有限公司 Watermark removal model training method and device, computer equipment and storage medium
US20210241498A1 (en) * 2020-06-12 2021-08-05 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for processing image, related electronic device and storage medium
CN113259583A (en) * 2020-02-13 2021-08-13 北京小米移动软件有限公司 Image processing method, device, terminal and storage medium
CN113271469A (en) * 2021-07-16 2021-08-17 南京大学 Safety and reversible video privacy safety protection system and protection method

Patent Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107220929A (en) * 2017-06-23 2017-09-29 深圳市唯特视科技有限公司 A kind of non-paired image method for transformation using the consistent confrontation network of circulation
CN108615073A (en) * 2018-04-28 2018-10-02 北京京东金融科技控股有限公司 Image processing method and device, computer readable storage medium, electronic equipment
CN111179299A (en) * 2018-11-09 2020-05-19 珠海格力电器股份有限公司 Image processing method and device
CN109859096A (en) * 2018-12-28 2019-06-07 北京达佳互联信息技术有限公司 Image Style Transfer method, apparatus, electronic equipment and storage medium
US20200242409A1 (en) * 2019-01-29 2020-07-30 Cloudminds (Shenzhen) Robotics Systems Co., Ltd. Method, device and terminal for generating training data
CN109829849A (en) * 2019-01-29 2019-05-31 深圳前海达闼云端智能科技有限公司 A kind of generation method of training data, device and terminal
CN110363183A (en) * 2019-07-30 2019-10-22 贵州大学 Service robot visual method for secret protection based on production confrontation network
CN110458216A (en) * 2019-07-31 2019-11-15 中山大学 The image Style Transfer method of confrontation network is generated based on condition
CN110930295A (en) * 2019-10-25 2020-03-27 广东开放大学(广东理工职业学院) Image style migration method, system, device and storage medium
CN110766638A (en) * 2019-10-31 2020-02-07 北京影谱科技股份有限公司 Method and device for converting object background style in image
CN111161132A (en) * 2019-11-15 2020-05-15 上海联影智能医疗科技有限公司 System and method for image style conversion
CN113259583A (en) * 2020-02-13 2021-08-13 北京小米移动软件有限公司 Image processing method, device, terminal and storage medium
CN111402151A (en) * 2020-03-09 2020-07-10 北京字节跳动网络技术有限公司 Image processing method, image processing device, electronic equipment and computer readable medium
CN111524205A (en) * 2020-04-23 2020-08-11 北京信息科技大学 Image coloring processing method and device based on loop generation countermeasure network
US20210241498A1 (en) * 2020-06-12 2021-08-05 Beijing Baidu Netcom Science And Technology Co., Ltd. Method and device for processing image, related electronic device and storage medium
CN111753908A (en) * 2020-06-24 2020-10-09 北京百度网讯科技有限公司 Image classification method and device and style migration model training method and device
CN112330522A (en) * 2020-11-09 2021-02-05 深圳市威富视界有限公司 Watermark removal model training method and device, computer equipment and storage medium
CN113271469A (en) * 2021-07-16 2021-08-17 南京大学 Safety and reversible video privacy safety protection system and protection method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
钟跃崎 (Zhong Yueqi): "人工智能技术原理与应用" (Principles and Applications of Artificial Intelligence Technology), vol. 1, 30 September 2020, 东华大学出版社 (Donghua University Press), page 240 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114463992A (en) * 2022-02-11 2022-05-10 超级视线科技有限公司 Night roadside parking management video conversion method and device

Similar Documents

Publication Publication Date Title
Elhoseiny et al. Weather classification with deep convolutional neural networks
González et al. On-board object detection: Multicue, multimodal, and multiview random forest of local experts
CN108256547A (en) Generate the training image for the object recognition system based on machine learning
CN109657715B (en) Semantic segmentation method, device, equipment and medium
CN105404886A (en) Feature model generating method and feature model generating device
CN111160481B (en) Adas target detection method and system based on deep learning
Bai et al. Inconspicuous adversarial patches for fooling image-recognition systems on mobile devices
CN117157679A (en) Perception network, training method of perception network, object recognition method and device
Xu et al. Reliability of gan generated data to train and validate perception systems for autonomous vehicles
CN113793258A (en) Privacy protection method and device for monitoring video image
CN111833360A (en) Image processing method, device, equipment and computer readable storage medium
CN114301850A (en) Military communication encrypted flow identification method based on generation countermeasure network and model compression
CN104008374B (en) Miner's detection method based on condition random field in a kind of mine image
CN116977484A (en) Image desensitizing method, device, electronic equipment and storage medium
Özyurt et al. A new method for classification of images using convolutional neural network based on Dwt-Svd perceptual hash function
CN113177917B (en) Method, system, equipment and medium for optimizing snap shot image
CN114140674B (en) Electronic evidence availability identification method combined with image processing and data mining technology
JP6778625B2 (en) Image search system, image search method and image search program
Kezebou et al. Joint image enhancement and localization framework for vehicle model recognition in the presence of non-uniform lighting conditions
Lin et al. Generating synthetic training data for object detection using multi-task generative adversarial networks
TWI801717B (en) A physical image generation method and device, device, non-transitory computer-readable storage medium and computer program product
CN114445916A (en) Living body detection method, terminal device and storage medium
Liu et al. Weather recognition of street scene based on sparse deep neural networks
Zhao et al. Small object detection of imbalanced traffic sign samples based on hierarchical feature fusion
KR102589306B1 (en) Method and device for generating blackbox image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination