CN109284684A - A kind of information processing method, device and computer storage medium - Google Patents


Info

Publication number
CN109284684A
Authority
CN
China
Prior art keywords
image
sensitive information
training
processed
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810956986.8A
Other languages
Chinese (zh)
Other versions
CN109284684B (en)
Inventor
张弓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810956986.8A (patent CN109284684B)
Publication of CN109284684A
Application granted
Publication of CN109284684B
Legal status: Expired - Fee Related
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G06V40/166: Detection; Localisation; Normalisation using acquisition arrangements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00: Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/60: Protecting data
    • G06F21/62: Protecting access to data via a platform, e.g. using keys or access control rules
    • G06F21/6218: Protecting access to data via a platform, e.g. using keys or access control rules, to a system of files or objects, e.g. local or distributed file system or database
    • G06F21/6245: Protecting personal data, e.g. for financial or medical purposes
    • G06F21/6254: Protecting personal data, e.g. for financial or medical purposes, by anonymising data, e.g. decorrelating personal data from the owner's identification
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172: Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Bioethics (AREA)
  • Medical Informatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Computer Hardware Design (AREA)
  • Databases & Information Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the invention disclose an information processing method, an information processing device, and a computer storage medium. An image to be processed is acquired; based on a first training model, sensitive information is detected in the image to be processed and the type of the sensitive information is obtained; based on a second training model corresponding to that type, replacement information corresponding to the sensitive information is generated; the sensitive information in the image to be processed is then replaced with the replacement information to obtain a processed image. The user's real sensitive information is thus hidden, and it is protected without reducing the aesthetic quality of the original image.

Description

Information processing method and device and computer storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an information processing method and apparatus, and a computer storage medium.
Background
With the development of science and technology, people place ever more emphasis on the security of personal information, and how to prevent its leakage is a current research focus; the wide application of deep learning in particular has given information hiding further development and broader prospects. For example, in selfies and everyday photographs, people often obscure areas containing personal information, such as identification numbers, license plates, flight numbers, or even facial images, by applying a mosaic or covering them with small stickers. Mosaicing and covering with small stickers do serve to hide personal information, but they also often make parts of the picture aesthetically unappealing.
Disclosure of Invention
The invention mainly aims to provide an information processing method, an information processing device, and a computer storage medium that hide a user's real sensitive information by generating virtual replacement information realistic enough to pass as genuine, so that the real sensitive information is protected without reducing the aesthetic quality of the original image.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an information processing method, where the method includes:
acquiring an image to be processed;
detecting sensitive information from the image to be processed based on a first training model and obtaining the type of the sensitive information;
generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information;
and replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image.
In the foregoing scheme, the acquiring an image to be processed specifically includes:
receiving a camera opening instruction to open the camera;
and receiving a photographing instruction, and acquiring the image to be processed according to the photographing instruction.
In the above scheme, before the detecting sensitive information from the image to be processed based on the first training model and obtaining the type of the sensitive information, the method further includes:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
carrying out block division on each image in the image sample to obtain a divided image block set;
marking each image block in the divided image block set to obtain a marked image block set;
taking the marked image block set as a first training set, and training a mark corresponding to each image block in the first training set and the first training set to obtain a first training model; the first training model is used for detecting whether the image to be processed contains sensitive information and the type of the sensitive information.
In the foregoing scheme, the training a label corresponding to each image block in the first training set and the first training set to obtain a first training model specifically includes:
inputting the first training set into a convolutional neural network model, and performing image recognition based on the convolutional neural network model to determine the category of each image block in the first training set;
determining a value of a loss function based on the determined category of each image block and the mark corresponding to each image block;
based on the value of the loss function, performing parameter adjustment on the convolutional neural network model, re-determining the category of each image block in the first training set according to the convolutional neural network model after parameter adjustment, and re-determining the value of the loss function, until the value of the loss function is smaller than a preset threshold value, determining that the training of the convolutional neural network model is completed;
and obtaining a first training model based on the trained convolutional neural network model.
In the above solution, before the generating, based on the second training model corresponding to the type of the sensitive information, the replacement information corresponding to the sensitive information, the method further includes:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
sensitive information detection is carried out on each image in the image sample based on a first training model;
acquiring the type of the sensitive information in each image based on the detection result;
grouping the image samples based on the acquired multiple types to obtain multiple groups of image sets;
taking the multiple groups of image sets as a second training set, and performing individual training on each group of image sets in the second training set to obtain multiple groups of second training models; wherein the plurality of sets of second training models have corresponding relationships with the plurality of types.
In the foregoing scheme, the generating, based on the second training model corresponding to the type of the sensitive information, the replacement information corresponding to the sensitive information specifically includes:
obtaining a second training model corresponding to the type of the sensitive information according to the corresponding relation between the plurality of groups of second training models and the plurality of types;
and performing generative training on the sensitive information based on the corresponding second training model to obtain the replacement information corresponding to the sensitive information.
In the foregoing scheme, the replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image specifically includes:
determining a sensitive information area corresponding to the sensitive information in the image to be processed;
and covering the replacement information on the sensitive information area to obtain a processed image.
In the above aspect, the method further includes:
and based on a first training model, if sensitive information is not detected from the image to be processed, ending the information processing flow of the image to be processed.
In the above scheme, after the detecting sensitive information from the image to be processed based on the first training model and obtaining the type of the sensitive information, the method further includes:
sending a consultation instruction, wherein the consultation instruction is used for confirming whether the sensitive information needs to be processed or not;
if a confirmation instruction is received, continuing the information processing flow of the image to be processed according to the confirmation instruction;
and if a cancel instruction is received, ending the information processing flow of the image to be processed according to the cancel instruction.
In a second aspect, an embodiment of the present invention provides an information processing apparatus, including: a network interface, a memory, and a processor; wherein,
the network interface is configured to receive and send signals while exchanging information with other external network elements;
the memory is configured to store a computer program operable on the processor;
the processor is configured to, when executing the computer program, perform the steps of the information processing method according to any one of the first aspect.
In a third aspect, an embodiment of the present invention provides a computer storage medium, where an information processing program is stored, and the information processing program, when executed by at least one processor, implements the steps of the method for processing information according to any one of the first aspect.
The embodiment of the invention provides an information processing method, an information processing device and a computer storage medium, wherein images to be processed are obtained; detecting sensitive information from the image to be processed based on a first training model and obtaining the type of the sensitive information; generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information; replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image; therefore, the real sensitive information of the user is hidden, and the real sensitive information of the user is protected under the condition that the aesthetic feeling of the original image is not reduced.
Drawings
FIG. 1 is a schematic flowchart of an information processing method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a hardware structure of a mobile terminal according to an embodiment of the present invention;
FIG. 3 is a detailed flowchart of an information processing method according to an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of an original identity card image according to an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of the block division of an original identity card image according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a processed identity card image according to an embodiment of the present invention;
FIG. 7 is a block diagram of an information processing apparatus according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of another information processing apparatus according to an embodiment of the present invention;
FIG. 9 is a schematic diagram of a component structure of another information processing apparatus according to an embodiment of the present invention;
FIG. 10 is a block diagram of another information processing apparatus according to an embodiment of the present invention;
FIG. 11 is a block diagram of another information processing apparatus according to an embodiment of the present invention;
FIG. 12 is a schematic diagram of a specific hardware structure of an information processing apparatus according to an embodiment of the present invention.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention.
A Convolutional Neural Network (CNN) is a feed-forward neural network that has been successfully applied to image recognition. The basic structure of a CNN comprises two kinds of layers: feature extraction layers, in which the input of each neuron is connected to the local receptive field of the previous layer so that local features are extracted, and feature mapping layers, in which each computation layer of the network is composed of multiple feature maps, each map is a plane, and all neurons on a plane share equal weights. A CNN is mainly used to recognize two-dimensional patterns that are invariant to displacement, scaling, and other forms of distortion; because its feature extraction layers learn from the training data, explicit hand-crafted feature extraction is avoided and features are learned implicitly. Moreover, since neurons on the same feature map share the same weights, the network can learn in parallel, which is a great advantage of convolutional networks over networks in which neurons are fully connected to one another. With its special structure of locally shared weights, the convolutional neural network has unique advantages in speech recognition and image processing; its layout is closer to that of an actual biological neural network, weight sharing reduces the complexity of the network, and in particular the fact that an image can be input directly into the network as a multi-dimensional vector avoids the complexity of data reconstruction during feature extraction and classification.
The Generative Adversarial Network (GAN) is a deep learning model proposed in 2014 and one of the most promising methods of recent years for unsupervised learning on complex distributions. Based on ideas from game theory, the framework constructs (at least) two models that are trained simultaneously: a generative model (G) that captures the data distribution, and a discriminative model (D) that estimates the probability that a sample came from the training data. Through the dynamic "game process" formed by G and D, at the optimal state G can generate data realistic enough to pass as genuine, while D cannot correctly distinguish the generated data from real data.
Deep learning is a powerful technique derived from neural networks; by constructing multiple levels of neurons and repeatedly training on large numbers of data samples, it is increasingly applied to image recognition and information hiding. In the embodiments of the invention, personal sensitive information, such as an identification number, license plate number, flight number, or face image, is detected in the acquired image by a first training model (such as a convolutional neural network model). A set of realistic virtual replacement information is then generated by a second training model (such as a generative adversarial network model), and the virtual replacement information is used to cover the real sensitive information in the original image, so that the user's real sensitive information is hidden and protected without reducing the aesthetic quality of the original image. Embodiments of the present invention are described in detail below with reference to the accompanying drawings.
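The detect-generate-cover flow described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the model objects and their `detect`/`generate` methods are hypothetical stand-ins for the first and second training models, and images are represented as plain nested lists for simplicity.

```python
def crop(image, bbox):
    """Extract the region bounded by bbox = (top, left, bottom, right)."""
    top, left, bottom, right = bbox
    return [row[left:right] for row in image[top:bottom]]

def overlay(image, patch, bbox):
    """Return a copy of `image` with `patch` covering the bbox region."""
    top, left, bottom, right = bbox
    out = [row[:] for row in image]          # copy rows; original stays intact
    for i, patch_row in enumerate(patch):
        out[top + i][left:right] = patch_row
    return out

def process_image(image, first_model, second_models):
    """Detect sensitive regions, generate a per-type replacement, cover each."""
    detections = first_model.detect(image)   # hypothetical: [(type, bbox), ...]
    if not detections:
        return image                         # no sensitive info: flow ends here
    for info_type, bbox in detections:
        generator = second_models[info_type]             # model matched by type
        replacement = generator.generate(crop(image, bbox))
        image = overlay(image, replacement, bbox)        # hide the real data
    return image
```

With stub models standing in for the trained networks, `process_image` replaces only the detected region and leaves the rest of the image untouched.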
Example one
Referring to fig. 1, an information processing method according to an embodiment of the present invention is shown, where the method may include:
s101: acquiring an image to be processed;
s102: detecting sensitive information from the image to be processed based on a first training model and obtaining the type of the sensitive information;
s103: generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information;
s104: and replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image.
Based on the technical scheme shown in FIG. 1, an image to be processed is obtained; detecting sensitive information from the image to be processed based on a first training model and obtaining the type of the sensitive information; generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information; replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image; therefore, the real sensitive information of the user is hidden, and the real sensitive information of the user is protected under the condition that the aesthetic feeling of the original image is not reduced.
For the technical solution shown in fig. 1, in a possible implementation manner, the acquiring an image to be processed specifically includes:
receiving a camera opening instruction to open the camera;
and receiving a photographing instruction, and acquiring the image to be processed according to the photographing instruction.
It should be noted that an image to be processed is obtained, and the image to be processed may contain sensitive information. The image to be processed may be obtained from an existing image library, downloaded from the Internet, or captured directly with a camera; for example, after the camera is turned on, the image to be processed can be acquired in response to a photographing instruction.
For the technical solution shown in fig. 1, in a possible implementation manner, before the detecting sensitive information from the image to be processed based on the first training model and obtaining the type of the sensitive information, the method further includes:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
carrying out block division on each image in the image sample to obtain a divided image block set;
marking each image block in the divided image block set to obtain a marked image block set;
taking the marked image block set as a first training set, and training a mark corresponding to each image block in the first training set and the first training set to obtain a first training model; the first training model is used for detecting whether the image to be processed contains sensitive information and the type of the sensitive information.
In the above implementation, specifically, the label includes a category of the image block and a boundary of the sensitive information; wherein the categories include types of sensitive information and non-sensitive information.
It should be noted that the acquired image sample includes a plurality of images, and each image includes sensitive information; the sensitive information is privacy information used for representing personal identity, such as an identification number, a mobile phone number, a flight number, a license plate number, a face image and the like. Specifically, the image sample may be obtained by downloading an image from an image library on the internet or locally, or may be obtained from a pre-established image sample library; to facilitate uniformity of the image samples, all images in the image samples may be normalized to a uniform resolution, such as a resolution of 448 × 448 or a resolution of 224 × 224; however, in the embodiment of the present invention, the resolution of the image sample is not particularly limited.
It should be further noted that, after the image samples are obtained, in order to facilitate subsequent labeling and model training, each image in the image samples may be divided into blocks, for example into 10 × 10 image blocks; each image block is then marked, that is, a mark is added to each image block, and the mark includes two parts: the category of the image block and the boundary of the sensitive information. The category comprises the types of sensitive information (such as identity card, contact phone, flight, license plate, face, and the like) and non-sensitive information; for non-sensitive information, the boundary (bounding box) of the sensitive information is invalid. When an image block is marked, the marked content serves as the label of the image block and can be used to determine whether the block contains sensitive information as well as the type and boundary of that information. The marked image block set is used as a first training set, and the first training set together with its marks is trained with an existing convolutional neural network model, so that a first training model, such as a VGG model (Visual Geometry Group network, VGGNet), can be obtained.
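The block division and marking step can be sketched with numpy as below. The 10 × 10 grid and the two-part mark (category plus bounding box) follow the description above; the function names, the `'non-sensitive'` category string, and the bbox tuple convention are illustrative assumptions.

```python
import numpy as np

def divide_into_blocks(image, grid=10):
    """Split an image array into grid x grid blocks. np.array_split tolerates
    sizes that are not exact multiples, so a 448 x 448 image yields 100 blocks
    of roughly 44-45 pixels per side."""
    blocks = []
    for band in np.array_split(image, grid, axis=0):
        blocks.extend(np.array_split(band, grid, axis=1))
    return blocks

def mark_block(category, bbox=None):
    """A mark has two parts: the block's category (a sensitive-information type
    such as 'id_card', or 'non-sensitive') and the sensitive-information
    boundary; the bounding box is invalid (None) for non-sensitive blocks."""
    return {"category": category,
            "bbox": None if category == "non-sensitive" else bbox}
```

Every pixel of the original image ends up in exactly one block, which keeps the marked block set a faithful partition of each sample.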
It can be understood that, for a specific training process of the first training model, taking a convolutional neural network model as an example, in the foregoing implementation manner, specifically, the obtaining the first training model by training the label corresponding to each image block in the first training set and the first training set includes:
inputting the first training set into a convolutional neural network model, and performing image recognition based on the convolutional neural network model to determine the category of each image block in the first training set;
determining a value of a loss function based on the determined category of each image block and the mark corresponding to each image block;
based on the value of the loss function, performing parameter adjustment on the convolutional neural network model, re-determining the category of each image block in the first training set according to the convolutional neural network model after parameter adjustment, and re-determining the value of the loss function, until the value of the loss function is smaller than a preset threshold value, determining that the training of the convolutional neural network model is completed;
and obtaining a first training model based on the trained convolutional neural network model.
It should be noted that the preset threshold is a criterion used to decide whether training of the convolutional neural network model is complete. In the embodiment of the present invention, let the determined category of each image block be denoted Ŷᵢ and the mark corresponding to each image block be denoted Yᵢ, for i = 1, 2, …, N, where N is the number of image blocks in the first training set. Using the L2 norm, the value of the loss function is the sum of the squared differences between Ŷᵢ and Yᵢ, i.e. Loss = Σᵢ₌₁ᴺ (Ŷᵢ − Yᵢ)². The calculated value of the loss function is compared with the preset threshold; if it is greater than the preset threshold, the parameters of the convolutional neural network model continue to be adjusted, and the value of the loss function is re-determined with the adjusted model. When the value of the loss function is smaller than the preset threshold, training of the convolutional neural network model can be deemed complete, and the first training model is obtained.
For the technical solution shown in fig. 1, in a possible implementation manner, before the generating, based on the second training model corresponding to the type of the sensitive information, the replacement information corresponding to the sensitive information, the method further includes:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
sensitive information detection is carried out on each image in the image sample based on a first training model;
acquiring the type of the sensitive information in each image based on the detection result;
grouping the image samples based on the acquired multiple types to obtain multiple groups of image sets;
taking the multiple groups of image sets as a second training set, and performing individual training on each group of image sets in the second training set to obtain multiple groups of second training models; wherein the plurality of sets of second training models have corresponding relationships with the plurality of types.
It should be noted that the acquired image sample includes a plurality of images, and each image includes sensitive information; at this time, sensitive information detection can be performed on each image in the image sample according to the first training model, so that sensitive information of each image and the type of the sensitive information can be obtained; grouping the image samples according to the obtained types, so that a plurality of groups of image sets can be obtained; such as an image set corresponding to an identity card type, an image set corresponding to a license plate type, an image set corresponding to a contact phone type and the like; the multiple groups of image sets are used as a second training set, each group of image set in the second training set is trained independently, and after training, multiple groups of second training models are obtained and correspond to the types of the sensitive information; for example, after the image set corresponding to the identity card type is trained, a second training model corresponding to the identity card type is obtained; after the image set corresponding to the license plate type is trained, a second training model corresponding to the license plate type is obtained; and after the image set corresponding to the contact phone type is trained, obtaining a second training model corresponding to the contact phone type.
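The grouping step above amounts to partitioning the sample by detected type. In this sketch, `detect_type` is a hypothetical stand-in for running the first training model on an image and reading off the type of the sensitive information it contains.

```python
from collections import defaultdict

def group_by_type(image_samples, detect_type):
    """Return {type: [images of that type]}, one image set per detected type,
    e.g. one set for identity cards, one for license plates, one for phones."""
    groups = defaultdict(list)
    for image in image_samples:
        groups[detect_type(image)].append(image)
    return dict(groups)
```

Each resulting image set is then trained on its own, so that the collection of second training models stays in one-to-one correspondence with the sensitive-information types.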
It should be further noted that, for the specific training process of the second training model, the embodiment of the present invention takes a generative adversarial network model as an example, which comprises a generative model G and a discriminative model D. When a group of image sets in the second training set is trained with the generative adversarial network model, the objective of G is to generate images close enough to reality to deceive the discriminative network D, while the objective of D is to distinguish the images generated by G from the real images in that group as well as possible; G and D thus form a dynamic "game process". The final result of training is that G generates images realistic enough to pass as genuine, while D can hardly judge whether an image generated by G is real, its judgment probability being 0.5 for each outcome; this yields a second training model that can generate realistic virtual information.
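The adversarial "game process" can be sketched structurally as an alternating loop. The update callables and the accuracy probe are hypothetical abstractions: the patent specifies the objectives of G and D but not a concrete architecture, so only the alternation and the chance-level stopping condition are shown.

```python
def train_adversarially(update_d, update_g, d_accuracy, rounds=1000, tol=0.05):
    """Alternate D updates (learn to separate real from generated images) and
    G updates (learn to fool the current D); stop once D's accuracy on a mixed
    batch approaches the chance level of 0.5, i.e. generated images pass as
    real and D is reduced to guessing."""
    for _ in range(rounds):
        update_d()
        update_g()
        if abs(d_accuracy() - 0.5) < tol:
            break          # equilibrium: D can no longer tell the difference
    return d_accuracy()
```

The 0.5 target mirrors the statement above that, at the end of training, D's judgment probability for each outcome is 0.5.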
In the foregoing implementation manner, specifically, the generating, based on the second training model corresponding to the type of the sensitive information, the replacement information corresponding to the sensitive information includes:
obtaining a second training model corresponding to the type of the sensitive information according to the corresponding relation between the plurality of groups of second training models and the plurality of types;
and performing generation on the sensitive information based on the corresponding second training model to obtain the replacement information corresponding to the sensitive information.
It should be noted that after the type of the sensitive information in the image to be processed is determined, the second training model corresponding to that type can be obtained according to the corresponding relationship between the plurality of groups of second training models and the plurality of types; the sensitive information is then input into the corresponding second training model, which generates virtual replacement information closely approximating the sensitive information, i.e., the replacement information corresponding to the sensitive information.
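The lookup of a type-specific second training model can be sketched as a simple registry keyed by type. The type names and model identifiers below are hypothetical placeholders, not names used by the patent.

```python
# Hypothetical registry mapping sensitive-information types to their
# dedicated second training models (names are illustrative only).
second_models = {
    "identity_card": "gan_identity_card",
    "license_plate": "gan_license_plate",
    "contact_phone": "gan_contact_phone",
}

def select_second_model(info_type, registry):
    """Return the second training model matching the detected type."""
    model = registry.get(info_type)
    if model is None:
        raise KeyError(f"no second training model registered for type {info_type!r}")
    return model

print(select_second_model("license_plate", second_models))  # gan_license_plate
```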
As to the technical solution shown in fig. 1, in a possible implementation manner, the performing replacement processing on the sensitive information in the image to be processed according to the replacement information to obtain a processed image specifically includes:
determining a sensitive information area corresponding to the sensitive information in the image to be processed;
and covering the replacement information on the sensitive information area to obtain a processed image.
It should be noted that after the replacement information corresponding to the sensitive information is obtained, the sensitive information in the image to be processed can be hidden. A sensitive information area corresponding to the sensitive information in the image to be processed is determined, the sensitive information area being obtained based on a bounding box of the sensitive information; the replacement information is then overlaid on the sensitive information area, so that a processed image is obtained. In the processed image, the real sensitive information is hidden by the replacement information, so that the user's real sensitive information is protected without degrading the aesthetic quality of the original image.
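The covering step above can be sketched as follows, assuming the image is a plain 2-D array of pixel values and the bounding box is given as (top, left, height, width). This is an illustrative sketch of the overlay, not the patent's implementation.

```python
def overlay_patch(image, patch, bbox):
    """Cover the sensitive information area (bbox) of `image` with `patch`.

    `image` is a 2-D list of pixel values, `bbox` is (top, left, height, width),
    and `patch` must have exactly `height` rows of `width` pixels.
    """
    top, left, height, width = bbox
    out = [row[:] for row in image]  # work on a copy, keep the original intact
    for r in range(height):
        for c in range(width):
            out[top + r][left + c] = patch[r][c]
    return out

image = [[0] * 6 for _ in range(4)]   # toy 4 x 6 "image"
patch = [[9, 9, 9], [9, 9, 9]]        # generated replacement information
processed = overlay_patch(image, patch, bbox=(1, 2, 2, 3))
print(processed[1])  # [0, 0, 9, 9, 9, 0]
```

Because only the bounding-box region is touched, the rest of the original image is left pixel-for-pixel intact, which is what preserves the image's appearance.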
It can be understood that the image to be processed may or may not contain sensitive information; if the image to be processed does not contain sensitive information, the information processing flow of the image to be processed does not need to be executed. Therefore, for the technical solution shown in fig. 1, in a possible implementation manner, the method further includes:
and based on a first training model, if sensitive information is not detected from the image to be processed, ending the information processing flow of the image to be processed.
It should be noted that, based on the first training model, if sensitive information is detected from the image to be processed, the information processing flow continues: the type of the sensitive information is obtained, replacement information corresponding to the sensitive information is generated based on the second training model corresponding to that type, and the sensitive information in the image to be processed is replaced according to the replacement information to obtain a processed image. If, based on the first training model, no sensitive information is detected from the image to be processed, the information processing flow of the image to be processed is ended, and no further image processing is required.
It can be understood that, even when the image to be processed is detected to contain sensitive information, the information may not be particularly sensitive, or the image may never be disclosed to the public; in such cases the sensitive information does not need to be processed, and the information processing flow of the image to be processed need not be executed. Therefore, for the technical solution shown in fig. 1, in a possible implementation manner, the method further includes:
after the detecting sensitive information from the image to be processed based on the first training model and obtaining the type of the sensitive information, the method further includes:
sending a consultation instruction, wherein the consultation instruction is used for confirming whether the sensitive information needs to be processed or not;
if a confirmation instruction is received, continuing the information processing flow of the image to be processed according to the confirmation instruction;
and if a cancel instruction is received, ending the information processing flow of the image to be processed according to the cancel instruction.
It should be noted that when it is determined that the image to be processed contains sensitive information, a consultation instruction may be sent, asking the user to confirm whether the sensitive information needs to be processed. When a confirmation instruction is received, it is confirmed that the sensitive information needs to be processed, and the information processing flow of the image to be processed continues; when a cancel instruction is received, it is determined that the sensitive information does not need to be processed, and the information processing flow of the image to be processed ends. In this way, the user's real sensitive information can be protected without degrading the aesthetic quality of the original image, unnecessary hiding operations can be avoided, and system resources are saved.
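The consultation flow above might be modeled as the small decision function below; `ask_user` is a hypothetical stand-in for the dialog-box interaction, returning True for a confirmation instruction and False for a cancel instruction.

```python
def handle_sensitive_image(has_sensitive_info, ask_user):
    """Consultation step: only process when the user confirms.

    `ask_user` models the consultation instruction (e.g. a pop-up dialog) and
    returns True for a confirmation instruction, False for a cancel instruction.
    """
    if not has_sensitive_info:
        return "flow_ended"            # nothing to hide
    if ask_user():
        return "processing_continued"  # confirmation instruction received
    return "flow_ended"                # cancel instruction received

print(handle_sensitive_image(True, lambda: True))   # processing_continued
print(handle_sensitive_image(True, lambda: False))  # flow_ended
```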
The embodiment provides an information processing method, which comprises the steps of: obtaining an image to be processed; detecting sensitive information from the image to be processed based on a first training model and obtaining the type of the sensitive information; generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information; and replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image. In this way, the user's real sensitive information is hidden and protected without degrading the aesthetic quality of the original image.
Example two
Based on the same inventive concept of the foregoing embodiment, referring to fig. 2, a mobile terminal structure example that can be applied to the technical solution of the foregoing embodiment is shown, the mobile terminal 200 has a photographing function and a display function, and may be, but is not limited to, a portable electronic device such as a mobile phone, a tablet computer, a personal digital assistant, an electronic book reader, a multimedia playing device, a smart photographing device, and a wearable device. As shown in fig. 2, the structure of the mobile terminal 200 may include: a Radio Frequency (RF) unit 210, a memory 220, an input unit 230, a display unit 240, a camera 250, a sensor 260, a processor 270, a power supply 280, and the like; the main functions of the components of the mobile terminal shown in fig. 2 are described as follows:
The RF unit 210 is used for receiving and transmitting information, or for receiving and transmitting signals during a call. The radio frequency unit 210 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a duplexer, and the like. Specifically, after downlink information of the base station is received, it is sent to the processor 270 for processing; in addition, uplink data is sent to the base station;
the memory 220 is used for storing software programs and various data, the memory 220 mainly includes a storage program area and a storage data area, wherein the storage program area can store an operating system, application programs required by at least one function (such as an application program for taking a picture, a first training model application program for detecting sensitive information and sensitive information types, a second training model application program for generating replacement information, and the like), and the like; the storage data area may store data created according to use of the mobile terminal;
the input unit 230 is used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal; specifically, the input unit 230 may include a touch panel 231 (also referred to as a touch screen) and other input devices 232 (including but not limited to a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a mouse, etc.).
The display unit 240 is used for displaying information input by a user or information provided to the user, the display unit 240 includes a display panel 241, and the touch panel 231 may cover the display panel 241; although in fig. 2, the touch panel 231 and the display panel 241 are implemented as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 231 and the display panel 241 may be integrated to implement the input and output functions of the mobile terminal;
The camera 250 is used for still pictures, continuous shooting, or short video shooting. The camera 250 may be built-in or external: a built-in camera is arranged inside the mobile terminal, which is more convenient to use; an external camera is connected through a data line or an interface of the mobile terminal to complete the shooting function, which can reduce the weight of the mobile terminal. The camera 250 generally has functions of video capture/transmission and still image capture, and transmits captured information to the memory 220 through a serial-parallel port or another interface;
the mobile terminal further includes at least one sensor 260, such as a light sensor, a gravity sensor, a gyroscope, and other sensors; when a gravity sensor or a gyroscope inside the mobile terminal detects that the mobile terminal is in a shaking state, a corresponding photographing processing mode, such as starting an anti-shaking function, may be executed by the processor 270.
The processor 270 is a control center of the mobile terminal, connects various parts of the mobile terminal by using various interfaces and lines, and executes various functions and processes data of the mobile terminal by running or executing software programs and/or modules stored in the memory 220 and calling data stored in the memory 220, thereby implementing overall monitoring of the mobile terminal;
the mobile terminal also includes a power supply 280 (e.g., a battery) for powering the various components, which may be logically coupled to the processor 270 via a power management system that may be configured to manage charging, discharging, and power consumption.
Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 2 is not intended to be limiting of mobile terminals, and that a mobile terminal may include more or fewer components than shown, or some components may be combined, or a different arrangement of components.
Based on the structure example of the mobile terminal 200, referring to fig. 3, a detailed flow of an information processing method provided by an embodiment of the present invention is shown, where the detailed flow may include:
s301: receiving a camera opening instruction to open the camera;
s302: receiving a photographing instruction, and acquiring the image to be processed according to the photographing instruction;
For example, taking the mobile terminal 200 shown in fig. 2 as an example: when the mobile terminal 200 needs to take a picture, the camera 250 is first turned on; a photographing instruction is then input through the input unit 230 (such as a physical key or a photographing touch button on the touch panel 231); an original image is obtained according to the photographing instruction, and the original image serves as the image to be processed for subsequent sensitive information detection and processing. Taking an original image of an identity card as an example, referring to fig. 4, a schematic diagram of an original image of an identity card shot by a mobile terminal according to an embodiment of the present invention is shown.
S303: sensitive information detection is carried out on the image to be processed based on a first training model;
s304: if sensitive information is detected from the image to be processed, acquiring the type of the sensitive information;
s305: if sensitive information is not detected from the image to be processed, ending the information processing flow of the image to be processed;
For example, taking the mobile terminal 200 shown in fig. 2 as an example, the memory 220 stores an application program of the first training model and an application program of the second training model in advance. With reference to the above example, taking the original image of the identity card shown in fig. 4 as an example, the processor 270 performs sensitive information detection on the original image of the identity card through the pre-stored first training model. In the detection process, the processor 270 may further divide the original image into image blocks; for example, as shown in fig. 5, the original image of the identity card is divided into 5 × 5 image blocks, and the first training model then performs sensitive information detection on each image block, which is not specifically limited in this embodiment of the present invention. After the detection by the first training model, if no sensitive information is detected from the original image, for example, no identification number, mobile phone number, flight number, license plate number, face image, or similar information is found, the information processing flow of the original image is directly ended. If sensitive information is detected from the original image, for example, the information in the area shown by 501 is detected as an identification number and the information in the area shown by 502 is detected as a face image, then, since the identity card number and the face image are sensitive information, the information processing flow of the original image needs to be continued.
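The 5 × 5 block division mentioned above can be sketched as follows, computing each block's bounding coordinates for an image of arbitrary size; the 100 × 150 size and the 5 × 5 grid are illustrative choices, not values fixed by the patent.

```python
def split_into_blocks(height, width, rows, cols):
    """Divide an image of size height x width into a rows x cols grid,
    returning each block as (top, left, block_height, block_width)."""
    blocks = []
    for r in range(rows):
        for c in range(cols):
            top = r * height // rows
            left = c * width // cols
            bottom = (r + 1) * height // rows
            right = (c + 1) * width // cols
            blocks.append((top, left, bottom - top, right - left))
    return blocks

blocks = split_into_blocks(100, 150, rows=5, cols=5)
print(len(blocks))  # 25
print(blocks[0])    # (0, 0, 20, 30)
```

Each block could then be passed to the first training model individually, and any block flagged as sensitive already carries the coordinates needed later for the bounding box of the sensitive information area.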
S306: sending a consultation instruction, wherein the consultation instruction is used for confirming whether the sensitive information needs to be processed or not;
s307: if a confirmation instruction is received, continuing the information processing flow of the image to be processed according to the confirmation instruction;
s308: if a cancel instruction is received, ending the information processing flow of the image to be processed according to the cancel instruction;
after step S304, step S306 is executed; when it is determined that the sensitive information needs to be processed, step S307 is executed; when it is determined that the sensitive information does not need to be processed, step S308 is performed;
For example, taking the mobile terminal 200 shown in fig. 2 as an example, the memory 220 stores an application program of the first training model and an application program of the second training model in advance. In conjunction with the above example, still taking the original image of the identification card shown in fig. 4 as an example: when sensitive information is detected in the original image, the processor 270 may send a consultation instruction to the user, for example in the form of a pop-up dialog box, because some sensitive information may not be particularly sensitive, or the original image may never be disclosed to the public. When the user confirms that the sensitive information needs to be processed, a confirmation instruction can be sent to the mobile terminal 200 by clicking a "confirm" button, and the processor 270 continues the information processing flow of the original image according to the received confirmation instruction. When the user confirms that the sensitive information does not need to be processed, a cancel instruction can be sent to the mobile terminal 200 by clicking a "cancel" button, and the processor 270 directly ends the information processing flow of the original image according to the received cancel instruction. In this way, unnecessary hiding operations can be avoided and system resources are saved.
S309: generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information;
s310: determining a sensitive information area corresponding to the sensitive information in the image to be processed;
s311: and covering the replacement information on the sensitive information area to obtain a processed image.
For example, taking the mobile terminal 200 shown in fig. 2 as an example, the memory 220 stores an application program of the first training model and an application program of the second training model in advance. With reference to the foregoing example, still taking the original image of the identity card shown in fig. 4 as an example: when it is determined that the sensitive information in the original image needs to be processed, a second training model is selected according to the type of the sensitive information. For the identity card number, the second training model corresponding to the identity card type is selected to generate the replacement information for the identity card number; for the face image, the second training model corresponding to the face image type is selected to generate the replacement information for the face image. The generated replacement information closely approximates the real sensitive information. Then, according to the sensitive information area corresponding to the sensitive information in the image to be processed, the generated replacement information is overlaid on the sensitive information area, so that a processed image is obtained, for example, the processed image of the identity card shown in fig. 6. In this way, the user's real sensitive information is protected without degrading the aesthetic quality of the original image.
The foregoing describes the specific implementation of this embodiment in detail. It can be seen that, through the technical scheme of this embodiment, the user's real sensitive information is hidden and protected without degrading the aesthetic quality of the original image.
EXAMPLE III
Based on the same inventive concept of the foregoing embodiment, referring to fig. 7, which shows the composition of an information processing apparatus 70 provided by the embodiment of the present invention, the information processing apparatus 70 may include: a first acquisition section 701, a first detection section 702, a generation section 703, and a replacement section 704; wherein,
the first acquisition part 701 is configured to acquire an image to be processed;
the first detection part 702 is configured to detect sensitive information from the image to be processed and obtain the type of the sensitive information based on a first training model;
the generating part 703 is configured to generate replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information;
the replacing part 704 is configured to perform replacement processing on the sensitive information in the image to be processed according to the replacement information to obtain a processed image.
In the above scheme, the first obtaining part 701 is specifically configured to:
receiving a camera opening instruction to open the camera;
and receiving a photographing instruction, and acquiring the image to be processed according to the photographing instruction.
In the above solution, referring to fig. 8, the information processing apparatus 70 further includes a first training section 705 configured to:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
carrying out block division on each image in the image sample to obtain a divided image block set;
marking each image block in the divided image block set to obtain a marked image block set;
taking the marked image block set as a first training set, and training a mark corresponding to each image block in the first training set and the first training set to obtain a first training model; the first training model is used for detecting whether the image to be processed contains sensitive information and the type of the sensitive information.
In the above scheme, the first training part 705 is specifically configured to:
inputting the first training set into a convolutional neural network model, and performing image recognition based on the convolutional neural network model to determine the category of each image block in the first training set;
determining a value of a loss function based on the determined category of each image block and the mark corresponding to each image block;
based on the value of the loss function, performing parameter adjustment on the convolutional neural network model, re-determining the category of each image block in the first training set according to the convolutional neural network model after parameter adjustment, and re-determining the value of the loss function, until the value of the loss function is smaller than a preset threshold value, determining that the training of the convolutional neural network model is completed;
and obtaining a first training model based on the trained convolutional neural network model.
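The iterate-until-threshold training scheme above, in which parameters are adjusted and the loss re-evaluated until it falls below a preset threshold, can be sketched with a toy scalar loss in place of the convolutional network's classification loss. The quadratic loss and learning rate are illustrative assumptions only.

```python
def train_until_threshold(loss_fn, grad_fn, w, lr, threshold, max_iters=10_000):
    """Repeatedly adjust the parameter until the loss falls below `threshold`,
    mirroring the stop condition used for the first training model."""
    for _ in range(max_iters):
        if loss_fn(w) < threshold:
            return w  # training is considered complete
        w -= lr * grad_fn(w)  # parameter adjustment step
    raise RuntimeError("loss never fell below the preset threshold")

# Toy stand-in for the classification loss: L(w) = (w - 3)^2, minimized at w = 3.
loss = lambda w: (w - 3.0) ** 2
grad = lambda w: 2.0 * (w - 3.0)

w_final = train_until_threshold(loss, grad, w=0.0, lr=0.1, threshold=1e-4)
print(loss(w_final) < 1e-4)  # True
```

In the patent's setting, the loss would instead compare the category predicted for each image block with its mark, and the parameter adjustment would be back-propagation through the convolutional neural network.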
In the above solution, referring to fig. 9, the information processing apparatus 70 further includes a second training section 706 configured to:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
sensitive information detection is carried out on each image in the image sample based on a first training model;
acquiring the type of the sensitive information in each image based on the detection result;
grouping the image samples based on the acquired multiple types to obtain multiple groups of image sets;
taking the multiple groups of image sets as a second training set, and performing individual training on each group of image sets in the second training set to obtain multiple groups of second training models; wherein the plurality of sets of second training models have corresponding relationships with the plurality of types.
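The grouping step described above, which partitions the image sample into one image set per detected type, can be sketched as follows; the image identifiers and type labels are illustrative.

```python
from collections import defaultdict

def group_by_type(detections):
    """Group image samples by detected sensitive-information type,
    producing one image set per type for the second training set."""
    groups = defaultdict(list)
    for image_id, info_type in detections:
        groups[info_type].append(image_id)
    return dict(groups)

# (image, detected type) pairs as produced by the first training model.
detections = [("img1", "identity_card"), ("img2", "license_plate"),
              ("img3", "identity_card"), ("img4", "contact_phone")]
print(group_by_type(detections)["identity_card"])  # ['img1', 'img3']
```

Each resulting image set would then be trained independently, yielding the second training model for its type.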
In the above scheme, the generating part 703 is specifically configured to:
obtaining a second training model corresponding to the type of the sensitive information according to the corresponding relation between the plurality of groups of second training models and the plurality of types;
and performing generation on the sensitive information based on the corresponding second training model to obtain the replacement information corresponding to the sensitive information.
In the above scheme, the replacing part 704 is specifically configured to:
determining a sensitive information area corresponding to the sensitive information in the image to be processed;
and covering the replacement information on the sensitive information area to obtain a processed image.
In the above scheme, referring to fig. 10, the information processing apparatus 70 further includes a second detecting section 707 configured to:
and based on a first training model, if sensitive information is not detected from the image to be processed, ending the information processing flow of the image to be processed.
In the above scheme, referring to fig. 11, the information processing apparatus 70 further includes an advisory part 708 configured to:
sending a consultation instruction, wherein the consultation instruction is used for confirming whether the sensitive information needs to be processed or not;
if a confirmation instruction is received, continuing the information processing flow of the image to be processed according to the confirmation instruction;
and if a cancel instruction is received, ending the information processing flow of the image to be processed according to the cancel instruction.
It is understood that in this embodiment, a "part" may be part of a circuit, part of a processor, part of a program or software, etc.; it may also be a unit, and it may be modular or non-modular.
In addition, each component in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit. The integrated unit can be realized in a form of hardware or a form of a software functional module.
Based on this understanding, the technical solution of the present embodiment, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) or a processor to execute all or part of the steps of the method of the present embodiment. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Accordingly, the present embodiment provides a computer storage medium storing an information processing program that implements the steps of the method of information processing described in the first embodiment above when executed by at least one processor.
Based on the above-mentioned composition of the information processing apparatus 70 and the computer storage medium, referring to fig. 12, which shows a specific hardware structure of the information processing apparatus 70 provided by the embodiment of the present invention, it may include: a network interface 1201, a memory 1202, and a processor 1203; the various components are coupled together by a bus system 1204. It is understood that the bus system 1204 is used to enable connective communication between these components. The bus system 1204 includes a power bus, a control bus, and a status signal bus, in addition to a data bus. For clarity of illustration, however, the various buses are labeled as bus system 1204 in fig. 12. The network interface 1201 is used for receiving and sending signals in the process of receiving and sending information with other external network elements;
a memory 1202 for storing a computer program operable on the processor 1203;
a processor 1203, configured to execute, when executing the computer program:
acquiring an image to be processed;
detecting sensitive information from the image to be processed based on a first training model and obtaining the type of the sensitive information;
generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information;
and replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image.
It is to be understood that the memory 1202 in embodiments of the present invention may be either volatile memory or nonvolatile memory, or may include both volatile and nonvolatile memory. The non-volatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of example, but not limitation, many forms of RAM are available, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), and Direct Rambus RAM (DRRAM). The memory 1202 of the systems and methods described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
The processor 1203 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 1203. The processor 1203 may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or another programmable logic device, discrete gate or transistor logic, or discrete hardware components, and may implement or perform the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software module may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM or EEPROM, or registers. The storage medium is located in the memory 1202, and the processor 1203 reads the information in the memory 1202 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the Processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented with modules (e.g., procedures, functions, and so on) that perform the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
Optionally, as another embodiment, the processor 1203 is further configured to execute the steps of the information processing method according to the first embodiment when the computer program is executed.
Optionally, an embodiment of the present invention further provides a mobile terminal, where the mobile terminal includes the information processing apparatus 70 described in any of the foregoing embodiments.
It should be noted that: the technical schemes described in the embodiments of the present invention can be combined arbitrarily without conflict.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (11)

1. An information processing method, characterized in that the method comprises:
acquiring an image to be processed;
detecting sensitive information from the image to be processed based on a first training model and obtaining the type of the sensitive information;
generating replacement information corresponding to the sensitive information based on a second training model corresponding to the type of the sensitive information;
and replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image.
2. The method according to claim 1, wherein the acquiring the image to be processed specifically includes:
receiving a camera opening instruction to open the camera;
and receiving a photographing instruction, and acquiring the image to be processed according to the photographing instruction.
3. The method according to claim 1, wherein before the detecting sensitive information from the image to be processed based on the first training model and obtaining the type of the sensitive information, the method further comprises:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
dividing each image in the image sample into blocks to obtain a divided image block set;
marking each image block in the divided image block set to obtain a marked image block set;
taking the marked image block set as a first training set, and training the first training set together with the mark corresponding to each image block in the first training set to obtain a first training model; the first training model is used for detecting whether the image to be processed contains sensitive information and the type of the sensitive information.
4. The method according to claim 3, wherein the training the first training set together with the mark corresponding to each image block in the first training set to obtain a first training model specifically includes:
inputting the first training set into a convolutional neural network model, and performing image recognition based on the convolutional neural network model to determine the category of each image block in the first training set;
determining a value of a loss function based on the determined category of each image block and the mark corresponding to each image block;
based on the value of the loss function, performing parameter adjustment on the convolutional neural network model, re-determining the category of each image block in the first training set according to the convolutional neural network model after parameter adjustment, and re-determining the value of the loss function, until the value of the loss function is smaller than a preset threshold value, determining that the training of the convolutional neural network model is completed;
and obtaining a first training model based on the trained convolutional neural network model.
5. The method of claim 1, wherein before the generating the replacement information corresponding to the sensitive information based on the second training model corresponding to the type of the sensitive information, the method further comprises:
acquiring an image sample; wherein each image in the image sample contains sensitive information;
performing sensitive information detection on each image in the image sample based on the first training model;
acquiring the type of the sensitive information in each image based on the detection result;
grouping the image samples based on the acquired multiple types to obtain multiple groups of image sets;
taking the multiple groups of image sets as a second training set, and performing individual training on each group of image sets in the second training set to obtain multiple groups of second training models; wherein the plurality of sets of second training models have corresponding relationships with the plurality of types.
6. The method according to claim 5, wherein the generating, based on the second training model corresponding to the type of the sensitive information, the replacement information corresponding to the sensitive information specifically includes:
obtaining a second training model corresponding to the type of the sensitive information according to the corresponding relation between the plurality of groups of second training models and the plurality of types;
and performing generation training on the sensitive information based on the corresponding second training model to obtain the replacement information corresponding to the sensitive information.
7. The method according to claim 1, wherein the replacing the sensitive information in the image to be processed according to the replacement information to obtain a processed image specifically includes:
determining a sensitive information area corresponding to the sensitive information in the image to be processed;
and covering the replacement information on the sensitive information area to obtain a processed image.
8. The method of claim 1, further comprising:
and if, based on the first training model, no sensitive information is detected from the image to be processed, ending the information processing flow of the image to be processed.
9. The method according to claim 1, wherein after detecting sensitive information from the image to be processed based on the first training model and obtaining the type of the sensitive information, the method further comprises:
sending a consultation instruction, wherein the consultation instruction is used for confirming whether the sensitive information needs to be processed or not;
if a confirmation instruction is received, continuing the information processing flow of the image to be processed according to the confirmation instruction;
and if a cancel instruction is received, ending the information processing flow of the image to be processed according to the cancel instruction.
10. An information processing apparatus characterized by comprising: a network interface, a memory, and a processor; wherein,
the network interface is configured to receive and send signals in the process of exchanging information with other external network elements;
the memory for storing a computer program operable on the processor;
the processor, when executing the computer program, is configured to perform the steps of the information processing method according to any one of claims 1 to 9.
11. A computer storage medium, characterized in that it stores an information processing program that, when executed by at least one processor, implements the steps of the method of information processing according to any one of claims 1 to 9.
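Claim 3's block division step can be sketched as follows; the block size and the nested-list image representation are illustrative assumptions for demonstration, not details fixed by the claims:

```python
def divide_into_blocks(image, block_h, block_w):
    """Claim-3 style block division: split an H x W image (a nested list
    here) into a set of block_h x block_w image blocks, row-major."""
    h, w = len(image), len(image[0])
    blocks = []
    for top in range(0, h, block_h):
        for left in range(0, w, block_w):
            blocks.append([row[left:left + block_w]
                           for row in image[top:top + block_h]])
    return blocks

# a 4x4 "image" split into four 2x2 blocks
img = [[r * 4 + c for c in range(4)] for r in range(4)]
blocks = divide_into_blocks(img, 2, 2)
```

Each resulting block would then be marked (sensitive / not sensitive, plus a type) to form the first training set.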
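Claim 4's training procedure — adjust parameters from the loss, re-evaluate, and stop once the loss falls below a preset threshold — can be sketched with a stand-in model. The patent's model is a convolutional neural network; the simple logistic classifier, toy block features, labels, and learning rate below are illustrative assumptions only:

```python
import math

def train_until_threshold(blocks, labels, lr=0.5, threshold=0.1, max_iters=10_000):
    """Claim-4 style loop: compute the loss, adjust parameters, recompute,
    and declare training complete when the loss drops below a threshold.
    blocks : list of feature vectors (stand-ins for image blocks)
    labels : 1 = contains sensitive information, 0 = does not
    """
    w = [0.0] * len(blocks[0])
    b = 0.0
    loss = float("inf")
    for _ in range(max_iters):
        # forward pass: predicted probability per block
        preds = [1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))
                 for x in blocks]
        # cross-entropy loss over the training set
        loss = -sum(y * math.log(p) + (1 - y) * math.log(1 - p)
                    for y, p in zip(labels, preds)) / len(labels)
        if loss < threshold:          # training complete (claim 4)
            break
        # parameter adjustment by gradient descent
        for j in range(len(w)):
            grad = sum((p - y) * x[j]
                       for p, y, x in zip(preds, labels, blocks)) / len(labels)
            w[j] -= lr * grad
        b -= lr * sum(p - y for p, y in zip(preds, labels)) / len(labels)
    return w, b, loss

# linearly separable toy blocks: the loop stops once the loss is small enough
blocks = [[0.0, 0.1], [0.1, 0.0], [0.9, 1.0], [1.0, 0.9]]
labels = [0, 0, 1, 1]
w, b, final_loss = train_until_threshold(blocks, labels)
```

The control flow (loss check, parameter adjustment, re-evaluation) is what the claim specifies; a production implementation would swap the stand-in classifier for a CNN.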
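Claims 5 and 6 keep one generation model per sensitive-information type and look up the matching model at replacement time. The mapping below is a minimal sketch; the type names and the stub "generator" (a flat averaging patch) are hypothetical stand-ins for the trained second training models:

```python
def blur_stub(region):
    """Stand-in generator: replace the region with its flat average."""
    flat = sum(sum(row) for row in region) / (len(region) * len(region[0]))
    return [[flat] * len(region[0]) for _ in region]

# one replacement-generation model per sensitive-information type (claim 5);
# the type names are illustrative assumptions
SECOND_MODELS = {
    "face": blur_stub,
    "id_card": blur_stub,
}

def generate_replacement(sensitive_type, region):
    """Pick the second training model matching the detected type (claim 6)
    and use it to generate the replacement information."""
    model = SECOND_MODELS.get(sensitive_type)
    if model is None:
        raise KeyError(f"no generation model trained for type {sensitive_type!r}")
    return model(region)

patch = generate_replacement("face", [[0, 2], [4, 6]])
```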
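Claim 7's replacement step — determine the sensitive-information area and cover it with the generated content — reduces to an array overwrite. This NumPy sketch assumes an illustrative (top, left, height, width) region convention that the patent itself does not specify:

```python
import numpy as np

def replace_sensitive_region(image, region, replacement):
    """Claim-7 style replacement: cover the detected sensitive-information
    area of the image with the generated replacement content.

    image       : H x W x C uint8 array (the image to be processed)
    region      : (top, left, height, width) of the sensitive area
    replacement : height x width x C array generated for that area
    """
    top, left, h, w = region
    out = image.copy()                      # keep the original image intact
    out[top:top + h, left:left + w] = replacement
    return out

# toy example: cover the top-left 2x2 corner of a 4x4 single-channel image
img = np.arange(16, dtype=np.uint8).reshape(4, 4, 1)
patched = replace_sensitive_region(img, (0, 0, 2, 2),
                                   np.full((2, 2, 1), 99, np.uint8))
```

Copying before overwriting keeps the original image available, which matters if the user later cancels processing (claim 9).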
CN201810956986.8A 2018-08-21 2018-08-21 Information processing method and device and computer storage medium Expired - Fee Related CN109284684B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810956986.8A CN109284684B (en) 2018-08-21 2018-08-21 Information processing method and device and computer storage medium


Publications (2)

Publication Number Publication Date
CN109284684A true CN109284684A (en) 2019-01-29
CN109284684B CN109284684B (en) 2021-06-01

Family

ID=65182861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810956986.8A Expired - Fee Related CN109284684B (en) 2018-08-21 2018-08-21 Information processing method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN109284684B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447392A (en) * 2014-08-22 2016-03-30 国际商业机器公司 Method and system for protecting specific information
CN107169329A * 2017-05-24 2017-09-15 维沃移动通信有限公司 A privacy protection method, mobile terminal and computer-readable recording medium
CN107239666A * 2017-06-09 2017-10-10 孟群 A method and system for desensitizing medical imaging data
US20180004975A1 (en) * 2016-06-29 2018-01-04 Sophos Limited Content leakage protection
CN107590531A * 2017-08-14 2018-01-16 华南理工大学 A WGAN method based on text generation


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009018A * 2019-03-25 2019-07-12 腾讯科技(深圳)有限公司 An image generation method, device and related device
CN111768325A (en) * 2020-04-03 2020-10-13 南京信息工程大学 Security improvement method based on generation of countermeasure sample in big data privacy protection
CN111931148A (en) * 2020-07-31 2020-11-13 支付宝(杭州)信息技术有限公司 Image processing method and device and electronic equipment
CN112069820B (en) * 2020-09-10 2024-05-24 杭州中奥科技有限公司 Model training method, model training device and entity extraction method
CN112069820A (en) * 2020-09-10 2020-12-11 杭州中奥科技有限公司 Model training method, model training device and entity extraction method
CN112052347B (en) * 2020-10-09 2024-06-04 北京百度网讯科技有限公司 Image storage method and device and electronic equipment
CN112052347A (en) * 2020-10-09 2020-12-08 北京百度网讯科技有限公司 Image storage method and device and electronic equipment
CN114549951A (en) * 2020-11-26 2022-05-27 未岚大陆(北京)科技有限公司 Method for obtaining training data, related device, system and storage medium
CN114549951B (en) * 2020-11-26 2024-04-23 未岚大陆(北京)科技有限公司 Method for obtaining training data, related device, system and storage medium
CN112634382A (en) * 2020-11-27 2021-04-09 国家电网有限公司大数据中心 Image recognition and replacement method and device for unnatural object
CN112634129A (en) * 2020-11-27 2021-04-09 国家电网有限公司大数据中心 Image sensitive information desensitization method and device
CN112528318A (en) * 2020-11-27 2021-03-19 国家电网有限公司大数据中心 Image desensitization method and device and electronic equipment
CN112634382B (en) * 2020-11-27 2024-03-19 国家电网有限公司大数据中心 Method and device for identifying and replacing images of unnatural objects
CN112750072A (en) * 2020-12-30 2021-05-04 五八有限公司 Information processing method and device
CN114567797A (en) * 2021-03-23 2022-05-31 长城汽车股份有限公司 Image processing method and device and vehicle
CN113420322B (en) * 2021-05-24 2023-09-01 阿里巴巴新加坡控股有限公司 Model training and desensitizing method and device, electronic equipment and storage medium
CN113420322A (en) * 2021-05-24 2021-09-21 阿里巴巴新加坡控股有限公司 Model training and desensitizing method and device, electronic equipment and storage medium
CN113313215A (en) * 2021-07-30 2021-08-27 腾讯科技(深圳)有限公司 Image data processing method, image data processing device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN109284684B (en) 2021-06-01

Similar Documents

Publication Publication Date Title
CN109284684B (en) Information processing method and device and computer storage medium
CN111079576B (en) Living body detection method, living body detection device, living body detection equipment and storage medium
TWI754887B (en) Method, device and electronic equipment for living detection and storage medium thereof
WO2020224479A1 (en) Method and apparatus for acquiring positions of target, and computer device and storage medium
US20220309836A1 (en) Ai-based face recognition method and apparatus, device, and medium
CN108121952A Face key point positioning method, device, equipment and storage medium
JP7286208B2 (en) Biometric face detection method, biometric face detection device, electronic device, and computer program
CN112036331B (en) Living body detection model training method, device, equipment and storage medium
CN109543714A Method, device, electronic equipment and storage medium for acquiring data characteristics
CN107527053A (en) Object detection method and device
CN107833219A (en) Image-recognizing method and device
CN111897996A (en) Topic label recommendation method, device, equipment and storage medium
CN110162604B (en) Statement generation method, device, equipment and storage medium
CN106845398A Face key point positioning method and device
CN112733970B (en) Image classification model processing method, image classification method and device
CN107133354A Method and device for acquiring description information of an image
CN110147533A (en) Coding method, device, equipment and storage medium
CN111881813B (en) Data storage method and system of face recognition terminal
CN111062248A (en) Image detection method, device, electronic equipment and medium
CN109254661B (en) Image display method, image display device, storage medium and electronic equipment
CN111898561A (en) Face authentication method, device, equipment and medium
CN111353475A (en) Self-service transaction equipment abnormality identification method and self-service transaction equipment
Singh et al. LBP and CNN feature fusion for face anti-spoofing
CN112381064B (en) Face detection method and device based on space-time diagram convolutional network
CN113470653B (en) Voiceprint recognition method, electronic equipment and system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210601