CN110473135B - Image processing method, system, readable storage medium and intelligent device - Google Patents

Image processing method, system, readable storage medium and intelligent device

Info

Publication number
CN110473135B
CN110473135B
Authority
CN
China
Prior art keywords
image
target
error
marked
mark
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910701385.7A
Other languages
Chinese (zh)
Other versions
CN110473135A (en)
Inventor
廖清
丁烨
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201910701385.7A priority Critical patent/CN110473135B/en
Publication of CN110473135A publication Critical patent/CN110473135A/en
Application granted granted Critical
Publication of CN110473135B publication Critical patent/CN110473135B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking

Abstract

The invention discloses an image processing method, an image processing system, a readable storage medium and an intelligent device, wherein the method comprises the following steps: obtaining a target mark image, and interfering the target mark image through a countermeasure generation network model to obtain at least one interference image; converging the interference image through a convergence model, taking the interference image qualified in convergence as a countermeasure sample, and covering the countermeasure sample on the target mark image; carrying out image recognition on the target mark image covered by the countermeasure sample to obtain error mark images, wherein the error mark image obtained by each user is different; and determining the target mark image of the error mark image according to the matching degree of the error mark image and the target area in the current target mark image. The invention solves the problem that existing watermarks are easily identified and interfered with, improves user satisfaction with image privacy, and meets practical application requirements.

Description

Image processing method, system, readable storage medium and intelligent device
Technical Field
The invention relates to the technical field of digital information security, in particular to an image processing method, an image processing system, a readable storage medium and intelligent equipment.
Background
With the continuous development of photography and image processing technologies, augmented reality technology has gradually matured. It renders virtual objects onto a captured picture of the real world and enables interaction by calculating the position and angle of the camera image in real time and adding corresponding imagery. It is widely applied in daily life, and image assets are widely used along with it.
Unlike content-insensitive image assets (e.g., news photos), content-sensitive image assets, such as personal photos, caricatures and confidential documents, make the detection and identification of infringers a challenge: copyright holders are generally unwilling to disclose such images, or disclose them privately only to designated authorized persons for a fee, while infringers generally do not reveal their own identity when disclosing the images. In order to detect and identify such infringers, the prior art usually adds a different watermark to the picture for each authorized person, that is, for different authorized persons the watermarks attached to the picture are different. When an unauthorized infringing picture is found, the identity of the corresponding infringer can be determined by extracting the watermark on the picture, and litigation or a claim for compensation can be pursued.
However, the prior art generally faces the problems that the watermark is easily identified and easily disturbed. If the watermark can be identified by human eyes, it affects the user's viewing experience of the picture to a certain extent. For example, when a two-dimensional code is added at a key position of a picture, the user experience is seriously degraded because the two-dimensional code overlays that key position when the user views the picture. Furthermore, if the watermark is easily removed or disturbed, it cannot be extracted correctly, the process of detecting and identifying the infringer's identity fails, and no litigation or compensation can be pursued.
Disclosure of Invention
In order to solve the above problems, an object of the present invention is to provide an image processing method, system, and readable storage medium with tamper-resistant function, which can improve the security of image watermarking.
The image processing method provided by the invention comprises the following steps:
acquiring a target mark image, and interfering the target mark image through a countermeasure generation network model to obtain at least one interference image;
converging the interference image through a convergence model, taking the interference image qualified in convergence as a countermeasure sample, and covering the countermeasure sample on a target marked image;
carrying out image recognition on the target mark image covered by the countermeasure sample to obtain error mark images, wherein the error mark images obtained by each user are different;
and determining the target marking image of the error marking image according to the matching degree of the error marking image and the target area in the current target marking image.
According to the image processing method provided by the invention, a target mark image is first obtained and interfered through a countermeasure generation network model to obtain at least one interference image; the interference image is converged through a convergence model, the interference image qualified in convergence is taken as a countermeasure sample, and the countermeasure sample is covered on the target mark image; image recognition is carried out on the covered target mark image to obtain error mark images, wherein the error mark image obtained by each user is different; and the target mark image of the error mark image is determined according to the matching degree of the error mark image and the target area in the current target mark image. In this way, a different watermark is produced for each user, and the generated watermark cannot be identified by human eyes, so the user's impression of the image is not affected. Moreover, when the type of image recognition model being used is unknown, an algorithm cannot identify the watermark and therefore cannot locate the identifiable areas; even if an identifiable area is located, the added watermark is the result of adversarial training against that specific area, so an algorithm can hardly recover the additional information in it. In addition, owing to the characteristics of the countermeasure sample and the design of multiple identifiable areas, the watermark can still be identified and matched even if the picture is stained, and the infringing user can be accurately matched.
In addition, the image processing method according to the present invention may further include the following additional features:
Further, the step of obtaining a target mark image and interfering the target mark image through the countermeasure generation network model to obtain at least one interference image includes:
acquiring image information of an original image, and performing discretization processing on the original image according to the image information so as to divide the original image into a plurality of recognizable areas;
marking a plurality of identifiable regions of the original image through an image identification model to obtain a target marking image, wherein the target marking image carries correct labels of the identifiable regions;
and interfering the target mark image through the countermeasure generation network model to obtain a corresponding interference image, wherein the interference image carries an error label of an identifiable region.
Further, the step of marking a plurality of recognizable areas of the original image by an image recognition model to obtain the target marking image comprises:
extracting and decomposing a plurality of identifiable regions of the original image to form marking factors;
and (5) learning a machine algorithm according to the marking factors to obtain a correct label of the identifiable region.
Further, the calculation formula of the correct label of the identifiable region obtained by machine algorithm learning according to the marking factor is as follows:
y_k = φ( V_k · Σ_i W_i X_i + B_k )
where y_k is the correct-label output for the k-th identifiable region, φ(·) is an activation function, V_k is an adjusting coefficient, W_i is an initial weight, X_i is a marking factor, and B_k is the accumulated offset.
Further, the error marked image corresponds to an initial weight matrix, and the step of determining the target marked image of the error marked image according to the matching degree of the error marked image and the target area in the current target marked image includes:
calculating a correlation matrix between the error marked image and the current target marked image;
correcting the initial weight matrix according to the correlation matrix to obtain a target weight matrix, wherein the target weight matrix is used for replacing the initial weight matrix when the error marked image is matched with the current target marked image;
judging the matching degree between the corresponding error marked image and the current target marked image after the initial weight matrix is replaced;
and when the matching degree between the error mark image and the current target mark image is greater than a preset matching value, taking the current target mark image as the target mark image of the error mark image.
Further, the value range of the preset matching value is 93% -98%.
Further, after the step of determining the matching degree between the error marked image corresponding to the replaced initial weight matrix and the current target marked image, the method further includes:
when the matching degree between the error marked image and the current target marked image is smaller than a preset matching value, judging whether the error marked image carries the associated identification of the current target marked image;
if yes, inquiring an association matching library according to the association identifier, matching the association matching library with the error marked image to obtain association data, and determining a target marked image of the error marked image according to the association data;
if not, an alarm prompt is sent out.
Another embodiment of the present invention provides an image processing system, which solves the problem that the existing watermark is easily identified and easily interfered, and improves the satisfaction of the user image privacy experience.
An image processing system according to an embodiment of the present invention includes:
the acquisition module is used for acquiring a target mark image and interfering the target mark image through a countermeasure generation network model to obtain at least one interference image;
the convergence module is used for converging the interference image through a convergence model, taking the interference image qualified in convergence as a countermeasure sample, and covering the countermeasure sample on the target mark image;
the identification module is used for carrying out image recognition on the target mark image covered by the countermeasure sample to obtain error mark images, wherein the error mark images obtained by each user are different;
and the determining module is used for determining the target mark image of the error mark image according to the matching degree of the error mark image and the target area in the current target mark image.
Another embodiment of the invention also proposes a storage medium on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
Another embodiment of the present invention also proposes a smart device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the program.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present invention;
FIG. 2 is a detailed flowchart of step S101 in FIG. 1;
FIG. 3 is a detailed flowchart of step S104 in FIG. 1;
fig. 4 is a block diagram of an image processing system according to a second embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be obtained by a person skilled in the art without making any creative effort based on the embodiments in the present invention, belong to the protection scope of the present invention.
Referring to fig. 1, an image processing method according to a first embodiment of the present invention includes steps S101 to S104:
Step S101, a target mark image is obtained, and the target mark image is interfered through a countermeasure generation network model (i.e., a generative adversarial network) to obtain at least one interference image.
In the embodiment, an image processing apparatus is taken as an example for description, but it should be understood that the embodiment of the present invention is not limited thereto, and the method of the embodiment of the present invention may be applied to any intelligent apparatus, that is, any electronic apparatus capable of performing image processing. Specifically, in the prior art, various watermarks for different authorized persons are usually added to a picture, that is, for different authorized persons, the watermarks attached to the received picture are different. When an unauthorized infringement picture is found, the identity of the corresponding infringer can be detected by extracting the watermark on the picture, and lawsuits or compensation can be proposed for the infringement picture, but the watermark is easy to identify and is easy to interfere. Therefore, if the watermark can be identified by human eyes, the look and feel of the picture for the user can be influenced to a certain extent, and if the watermark is easy to remove or interfere, when an infringing picture is disclosed, because the watermark cannot be extracted correctly, the process of detecting and identifying the identity of the infringer is disabled, and thus lawsuits or compensation cannot be made for the infringer.
In this embodiment, when a target image acquisition instruction is received, the target mark image is obtained, and the target mark image is interfered through the countermeasure generation network model to obtain at least one interference image. Thus, when an unauthorized user discloses the image, the identity of the unauthorized user can be traced in order to pursue litigation or compensation; in addition, the unauthorized user cannot correctly identify the image information of the image, which improves the security of the image.
Referring to fig. 2, the method for obtaining at least one interference image by interfering the target mark image with the countermeasure generation network model includes the following steps:
in step S1011, image information of the original image is obtained, and discretization is performed on the original image according to the image information, so that the original image is divided into a plurality of identifiable regions.
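A minimal sketch of this discretization step, assuming a simple uniform grid; the function name, grid shape and region count are illustrative only (the patent also allows identifiable areas of unequal size):

```python
import numpy as np

def discretize_image(image, rows=3, cols=3):
    """Split an H x W (x C) image into a uniform grid of identifiable regions.

    Returns a list of (region_index, (y0, y1, x0, x1), patch) tuples; the
    uniform grid is only one possible discretization."""
    h, w = image.shape[:2]
    ys = np.linspace(0, h, rows + 1, dtype=int)
    xs = np.linspace(0, w, cols + 1, dtype=int)
    regions, idx = [], 1
    for r in range(rows):
        for c in range(cols):
            y0, y1, x0, x1 = ys[r], ys[r + 1], xs[c], xs[c + 1]
            regions.append((idx, (int(y0), int(y1), int(x0), int(x1)),
                            image[y0:y1, x0:x1]))
            idx += 1
    return regions

# Toy usage: a random 300x300 RGB "original image" split into 9 regions.
original = np.random.randint(0, 256, (300, 300, 3), dtype=np.uint8)
print(len(discretize_image(original)))  # 9
```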
Step S1012, marking a plurality of recognizable areas of the original image by using an image recognition model to obtain the target mark image, where the target mark image carries a correct label of the recognizable area.
Specifically, a plurality of identifiable regions of the original image are extracted and decomposed to form marking factors; and obtaining the correct label of the identifiable region after machine algorithm learning according to the marking factor.
The calculation formula of the correct label of the identifiable region obtained through machine algorithm learning according to the marking factors is as follows:
y_k = φ( V_k · Σ_i W_i X_i + B_k )
where y_k is the correct-label output for the k-th identifiable region, φ(·) is an activation function, V_k is an adjusting coefficient, W_i is an initial weight, X_i is a marking factor, and B_k is the accumulated offset.
As described above, the image information of the original image is input and, after data normalization, extraction and decomposition, the marking factors X_i of the original image are obtained; the corresponding initial weights W_i and accumulated offset B_k are configured, the result is adjusted by the coefficient V_k and passed through the activation function φ(·), and the correct label of the identifiable region is output.
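As a hedged illustration of this label computation, the sketch below evaluates the formula for one identifiable region with toy values. The symbol names mirror the description, but the sigmoid activation, the dimensions and the argmax selection over candidate labels are assumptions, not details specified by the patent:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def label_score(x, w, v_k, b_k, phi=sigmoid):
    """phi(V_k * sum_i(W_i * X_i) + B_k) for one candidate label k."""
    return phi(v_k * np.dot(w, x) + b_k)

def correct_label(x, W, V, B):
    """Pick the candidate label with the highest score for this region."""
    scores = [label_score(x, W[k], V[k], B[k]) for k in range(len(V))]
    return int(np.argmax(scores)), scores

# Toy values: 4 marking factors X_i, 3 candidate labels.
x = np.array([0.2, 0.5, 0.1, 0.9])    # marking factors X_i
W = np.random.rand(3, 4)               # initial weights W_i, one row per label
V = np.array([1.0, 0.8, 1.2])          # adjusting coefficients V_k
B = np.array([0.05, -0.1, 0.0])        # accumulated offsets B_k
print(correct_label(x, W, V, B)[0])
```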
Step S1013, the target mark image is interfered through the countermeasure generation network model to obtain a corresponding interference image, where the interference image carries an error label of an identifiable area.
As described above, the original image is divided into a plurality of identifiable regions by discretizing it. The identifiable regions are not randomly selected, fixed positions, entire images or electronic information, so the user can easily identify the corresponding target mark image according to the recognition result of the identifiable regions. Each image can support m^n marking combinations, where m is the number of identifiable areas in the picture and n is the number of marks the image recognition model can identify, and the sizes of the identifiable areas may or may not be equal. A plurality of identifiable areas of the original image are marked through an image recognition model (such as YOLOv3) to obtain the target mark image, which carries the correct label of each identifiable area, and the correct labels cover the whole original image; for example, identifiable area 1 is identified as label a, identifiable area 2 as label b, and identifiable area 3 as label c.
Further, the countermeasure generation network model interferes the target mark image to obtain a corresponding interference image, and the interference image carries an error label of an identifiable region. It can be understood that the error label is produced by a recognition error, and the recognition error is measured as a difference value (loss); by setting different training targets, the countermeasure generation network model can generate different interference images with different difference values, until convergence is reached and a stable recognition error is obtained. For example, when the countermeasure generation network converges through a prescribed convergence model (such as BP-Gradient), the interference image it generates can make the image recognition model accurately and stably recognize identifiable area 1 as label x, identifiable area 2 as label y, and identifiable area 3 as label z.
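The patent's interference images come from a countermeasure generation network converged with a convergence model such as BP-Gradient. As a simplified stand-in only, the sketch below uses a plain targeted gradient attack on a toy linear softmax classifier to show how a bounded perturbation can drive one identifiable region toward a chosen error label; the model, step size and bound are assumptions and not the patent's network:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def targeted_perturbation(x, W, b, target_label, steps=500, lr=0.5, eps=0.25):
    """Perturb a flattened identifiable region x so that a toy linear softmax
    classifier (weights W, bias b) reports target_label instead of the true
    label.  eps bounds the per-component perturbation (kept loose so the toy
    example converges); the loop stops once the wrong label is reported
    stably, mirroring the 'converge until a stable recognition error' idea."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = softmax(W @ (x + delta) + b)
        grad_logits = p.copy()
        grad_logits[target_label] -= 1.0          # d(cross-entropy)/d(logits)
        delta -= lr * (W.T @ grad_logits)          # step toward the target label
        delta = np.clip(delta, -eps, eps)          # keep the change bounded
        if p.argmax() == target_label and p[target_label] > 0.99:
            break
    return delta

rng = np.random.default_rng(0)
x = rng.random(48)                                  # one flattened region
W, b = rng.standard_normal((5, 48)), rng.standard_normal(5)
delta = targeted_perturbation(x, W, b, target_label=3)
print("label after perturbation:", int(softmax(W @ (x + delta) + b).argmax()))
```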
Step S102, converging the interference image through a convergence model, taking the interference image qualified in convergence as a countermeasure sample, and covering the countermeasure sample on the target mark image.
As described above, when the countermeasure generation network converges through a specified convergence model (such as BP-Gradient), the generated interference image is called a countermeasure sample, and the countermeasure sample is overlaid on the current target mark image so as to mark it; in this way, even if the image is stained, the watermark can still be identified and matched, and the infringing user can be accurately matched. Owing to the way it is generated, the countermeasure sample is usually a pixel perturbation that human eyes can hardly perceive, and different user identification information can be embedded through the order and positions of the countermeasure samples by means of different recognition errors. For example, the image obtained by user A is an image covered by countermeasure samples that erroneously identify identifiable areas 1, 2 and 3 as x, y and z respectively, while the image obtained by user B is an image covered by countermeasure samples that erroneously identify identifiable areas 1, 2 and 3 as i, j and k respectively. The pictures obtained by user A and user B therefore carry different information indicating their identities, so that the owner of a picture can be found.
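A minimal sketch of the covering step, assuming the per-region perturbations have already been generated and converged; the region boxes, array shapes and function name are illustrative:

```python
import numpy as np

def overlay_countermeasure_samples(image, region_boxes, perturbations):
    """Cover the target mark image with the converged countermeasure samples,
    one small perturbation per identifiable region."""
    marked = image.astype(np.float32)
    for idx, (y0, y1, x0, x1) in region_boxes.items():
        marked[y0:y1, x0:x1] += perturbations[idx]
    return np.clip(marked, 0, 255).astype(np.uint8)

# Toy usage: three side-by-side identifiable regions of a 100x300 image.
img = np.random.randint(0, 256, (100, 300, 3), dtype=np.uint8)
boxes = {1: (0, 100, 0, 100), 2: (0, 100, 100, 200), 3: (0, 100, 200, 300)}
perturbs = {i: np.random.uniform(-2.0, 2.0, (100, 100, 3)) for i in boxes}
covered = overlay_countermeasure_samples(img, boxes, perturbs)
print(covered.shape)  # (100, 300, 3)
```

Which error label each region is driven to (for example x, y, z for user A versus i, j, k for user B) is what encodes the recipient's identity.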
Step S103, carrying out image recognition on the target mark image covered by the countermeasure sample to obtain error mark images, wherein the error mark images obtained by each user are different.
Specifically, the image recognition model performs image recognition on the original image covered by the countermeasure sample, so as to obtain a target error label corresponding to the recognizable area of the image, where the recognizable area 1 is recognized as a label x, the recognizable area 2 is recognized as a label y, and the recognizable area 3 is recognized as a label z, and the "error label" refers to a label marked by the specified image recognition model and not matching with the label marked in the original image. Since the error marking picture obtained by each user is different, the owner of the picture can be traced.
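A sketch of collecting the per-region error labels from the covered image, assuming any callable region recognizer stands in for the designated image recognition model:

```python
import numpy as np

def extract_label_sequence(covered_image, region_boxes, recognize):
    """Run the recognition model over each identifiable region of the covered
    image and collect the (error) labels it reports; the sequence differs per
    user and therefore identifies the image's recipient."""
    return {idx: recognize(covered_image[y0:y1, x0:x1])
            for idx, (y0, y1, x0, x1) in region_boxes.items()}

def toy_recognize(patch):
    # stand-in for the designated image recognition model (patch -> label)
    return "xyz"[int(patch.mean()) % 3]

img = np.random.randint(0, 256, (100, 300, 3), dtype=np.uint8)
boxes = {1: (0, 100, 0, 100), 2: (0, 100, 100, 200), 3: (0, 100, 200, 300)}
print(extract_label_sequence(img, boxes, toy_recognize))
```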
Step S104, determining a target mark image of the error mark image according to the matching degree of the error mark image and the target area in the current target mark image.
Referring to fig. 3, the error marked image corresponds to an initial weight matrix, and the step of determining the target marked image of the error marked image according to the matching degree between the error marked image and the target area in the current target marked image includes:
step S1041, calculating a correlation matrix between the error marked image and the current target marked image.
Step S1042, correcting the initial weight matrix according to the correlation matrix to obtain a target weight matrix, wherein the target weight matrix is used for replacing the initial weight matrix when the error marked image is matched with the current target marked image.
Step S1043, determining a matching degree between the error marked image corresponding to the initial weight matrix after replacement and the current target marked image.
Step S1044 is to use the current target mark image as the target mark image of the error mark image when the matching degree between the error mark image and the current target mark image is greater than a preset matching value. The value range of the preset matching value is 93-98%.
As described above, when matching and comparing the error marked image with the current target marked image, the correlation matrix M (i, j) between the error marked image and the current target marked image is calculated, the initial weight matrix is corrected according to the obtained correlation matrix M (i, j) to obtain a target weight matrix K '(i, j), and after the target weight matrix K' (i, j) is determined, the matching degree between the error marked image corresponding to the initial weight matrix K (i, j) and the current target marked image is determined. Because the correlation values corresponding to the correlation matrix have different magnitudes and directly reflect the matching degree between the error marked image and the current target marked image, the initial weight matrix K (i, j) can be corrected according to the value of the correlation matrix M (i, j) to obtain the target weight matrix K' (i, j). When actual matching is carried out, the original initial weight matrix is replaced by the target weight matrix K '(i, j), so that the value of the target weight matrix K' (i, j) after replacement is more consistent with the value of the correlation matrix, and the matching accuracy corresponding to the mutual matching is improved. And when the matching degree between the error marked image and the current target marked image is greater than a preset matching value of 93% -98%, taking the current target marked image as the target marked image of the error marked image.
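A hedged sketch of this matching step; the element-wise correlation, the correction rule (a simple blend with factor alpha) and the scoring are assumptions chosen only to illustrate the flow of correlation matrix, corrected weight matrix and matching degree compared against a preset value:

```python
import numpy as np

def correlation_matrix(error_img, target_img):
    """Element-wise correlation M(i, j) between the error mark image and the
    current target mark image (both given as float arrays of equal shape)."""
    e = (error_img - error_img.mean()) / (error_img.std() + 1e-8)
    t = (target_img - target_img.mean()) / (target_img.std() + 1e-8)
    return e * t

def corrected_weight_matrix(K, M, alpha=0.5):
    """Correct the initial weight matrix K(i, j) with M(i, j) to obtain the
    target weight matrix K'(i, j); the blend factor alpha is an assumption."""
    return (1 - alpha) * K + alpha * M

def matching_degree(error_img, target_img, K):
    M = correlation_matrix(error_img, target_img)
    K_prime = corrected_weight_matrix(K, M)
    w = np.abs(K_prime)
    # weighted agreement between the two images, clipped into [0, 1]
    return float(np.clip(np.sum(w * M) / (w.sum() + 1e-8), 0.0, 1.0))

rng = np.random.default_rng(1)
target = rng.random((8, 8))
error = target + 0.05 * rng.standard_normal((8, 8))   # lightly perturbed copy
K0 = np.ones((8, 8))                                   # initial weight matrix
degree = matching_degree(error, target, K0)
print(round(degree, 3))   # compare against a preset value in the 93%-98% range
```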
It should be further noted that, after the step of determining the matching degree between the error marked image corresponding to the replaced initial weight matrix and the current target marked image, the method further includes: when the matching degree between the error marked image and the current target marked image is smaller than a preset matching value, judging whether the error marked image carries the associated identification of the current target marked image; if so, inquiring an association matching library according to the association identifier, matching the association matching library with the error marked image to obtain association data, and determining a target marked image of the error marked image according to the association data; if not, an alarm prompt is sent out.
As described above, by judging whether the error marked image carries the associated identifier of the current target marked image, when the error marked image carries the associated identifier of the current target marked image, the association matching library is queried according to the associated identifier, and the association matching library is matched with the error marked image to obtain associated data, so that the target marked image of the error marked image is determined according to the associated data, a user can conveniently verify image information through a multi-party platform, and in addition, the user can conveniently maintain own rights and interests in time through the prompt of alarm information, and a certain deterrence effect is achieved.
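A minimal sketch of this fallback, assuming a dictionary-shaped association matching library and metadata fields that the patent does not specify:

```python
def resolve_by_association(error_image_meta, current_target_id, association_library):
    """Fallback when the matching degree is below the preset matching value:
    use the associated identifier, if present, to query an association
    matching library; otherwise issue an alarm prompt.

    error_image_meta and association_library are hypothetical structures."""
    assoc_id = error_image_meta.get("associated_id")
    if assoc_id is not None and assoc_id == current_target_id:
        association_data = association_library.get(assoc_id, {})
        # the association data determines the target mark image
        return association_data.get("target_mark_image")
    print("ALARM: low matching degree and no usable associated identifier")
    return None

library = {"tgt-001": {"target_mark_image": "target_001.png"}}
print(resolve_by_association({"associated_id": "tgt-001"}, "tgt-001", library))
print(resolve_by_association({"associated_id": None}, "tgt-001", library))
```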
As a specific embodiment, different error marking information corresponds to different users. For example, image A, after the image recognition model performs image recognition on the original image covered by the countermeasure sample, yields labels x, y and z, while image B yields labels i, j and k. It can be determined through database matching that image A belongs to user A and image B belongs to user B. In addition, because the embedded information sequence has a certain anti-interference property, even if the image recognition model cannot correctly recognize the labels of some identifiable regions due to image contamination, the result can still be matched to the corresponding user. For example, even if image recognition on image A covered by the countermeasure sample yields only labels x and y, with identifiable region 3 not correctly recognized, the labels x, y are more similar to user A's labels x, y, z than to i, j, k, so it can be determined that picture A belongs to user A.
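A sketch of matching a possibly incomplete label sequence back to a user, assuming the per-user label plans are stored in a simple dictionary; the agreement count is an illustrative similarity measure:

```python
def identify_user(observed_labels, user_label_plans):
    """Match an observed (possibly partial) per-region error-label sequence to
    the user whose watermark plan agrees with it on the most regions."""
    def agreement(plan):
        return sum(1 for idx, lab in observed_labels.items() if plan.get(idx) == lab)
    return max(user_label_plans, key=lambda user: agreement(user_label_plans[user]))

# Hypothetical per-user plans: which error label each identifiable region
# was driven to when the picture was issued to that user.
user_label_plans = {
    "user_A": {1: "x", 2: "y", 3: "z"},
    "user_B": {1: "i", 2: "j", 3: "k"},
}
# Region 3 was stained and could not be recognised, yet user_A still wins.
print(identify_user({1: "x", 2: "y"}, user_label_plans))  # user_A
```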
According to the image processing method provided by the invention, a target mark image is first obtained and interfered through a countermeasure generation network model to obtain at least one interference image; the interference image is converged through a convergence model, the interference image qualified in convergence is taken as a countermeasure sample, and the countermeasure sample is covered on the target mark image; image recognition is carried out on the covered target mark image to obtain error mark images, wherein the error mark image obtained by each user is different; and the target mark image of the error mark image is determined according to the matching degree of the error mark image and the target area in the current target mark image. In this way, a different watermark is produced for each user, and the generated watermark cannot be identified by human eyes, so the user's impression of the image is not affected. Moreover, when the type of image recognition model being used is unknown, an algorithm cannot identify the watermark and therefore cannot locate the identifiable areas; even if an identifiable area is located, the added watermark is the result of adversarial training against that specific area, so an algorithm can hardly recover the additional information in it. In addition, owing to the characteristics of the countermeasure sample and the design of multiple identifiable areas, the watermark can still be identified and matched even if the picture is stained, and the infringing user can be accurately matched.
Referring to fig. 4, based on the same inventive concept, an image processing system according to a second embodiment of the present invention includes:
the acquisition module 10 is configured to acquire a target marker image, and interfere the target marker image with a resist generation network model to obtain at least one interference image.
In this embodiment, the obtaining module 10 includes:
the acquiring unit 11 is configured to acquire image information of an original image, and perform discretization processing on the original image according to the image information, so as to divide the original image into a plurality of identifiable regions.
A marking unit 12, configured to mark, through an image recognition model, a plurality of recognizable areas of the original image to obtain the target mark image, where the target mark image carries a correct label of the recognizable area.
Specifically, a plurality of identifiable regions of the original image are extracted and decomposed to form marking factors; and obtaining the correct label of the identifiable region after machine algorithm learning according to the marking factor.
The calculation formula of the correct label of the identifiable region obtained through machine algorithm learning according to the marking factors is as follows:
y_k = φ( V_k · Σ_i W_i X_i + B_k )
where y_k is the correct-label output for the k-th identifiable region, φ(·) is an activation function, V_k is an adjusting coefficient, W_i is an initial weight, X_i is a marking factor, and B_k is the accumulated offset.
And the interference unit 13 is configured to interfere the target mark image through the countermeasure generation network model to obtain a corresponding interference image, where the interference image carries an error label of an identifiable region.
And the convergence module 20 is configured to converge the interference image through a convergence model, use the interference image qualified in convergence as a countermeasure sample, and cover the countermeasure sample on the target mark image.
The identification module 30 is configured to perform image identification on the target mark image covered by the countermeasure sample to obtain error mark images, where the error mark images obtained by each user are different.
And the determining module 40 is configured to determine a target marker image of the error marker image according to a matching degree of the error marker image and a target region in the current target marker image.
In this embodiment, the determining module 40 includes:
a calculating unit 41, configured to calculate a correlation matrix between the error mark image and the current target mark image.
A correcting unit 42, configured to correct the initial weight matrix according to the correlation matrix to obtain a target weight matrix, where the target weight matrix is used to replace the initial weight matrix when the error marked image matches the current target marked image.
And a determining unit 43, configured to determine a matching degree between the error marked image corresponding to the initial weight matrix after being replaced and the current target marked image.
A determining unit 44, configured to, when a matching degree between the error mark image and the current target mark image is greater than a preset matching value, take the current target mark image as a target mark image of the error mark image. The value range of the preset matching value is 93-98%.
The determining unit 43 is further configured to determine whether the error marked image carries an associated identifier of the current target marked image when the matching degree between the error marked image and the current target marked image is smaller than a preset matching value; if yes, inquiring an association matching library according to the association identifier, matching the association matching library with the error marked image to obtain association data, and determining a target marked image of the error marked image according to the association data; if not, an alarm prompt is sent out.
According to the image processing system provided by the invention, a target mark image is first obtained and interfered through a countermeasure generation network model to obtain at least one interference image; the interference image is converged through a convergence model, the interference image qualified in convergence is taken as a countermeasure sample, and the countermeasure sample is covered on the target mark image; image recognition is carried out on the covered target mark image to obtain error mark images, wherein the error mark image obtained by each user is different; and the target mark image of the error mark image is determined according to the matching degree of the error mark image and the target area in the current target mark image. In this way, a different watermark is produced for each user, and the generated watermark cannot be identified by human eyes, so the user's impression of the image is not affected. Moreover, when the type of image recognition model being used is unknown, an algorithm cannot identify the watermark and therefore cannot locate the identifiable areas; even if an identifiable area is located, the added watermark is the result of adversarial training against that specific area, so an algorithm can hardly recover the additional information in it. In addition, owing to the characteristics of the countermeasure sample and the design of multiple identifiable areas, the watermark can still be identified and matched even if the picture is stained, and the infringing user can be accurately matched.
The technical features and technical effects of the image processing system provided by the embodiment of the present invention are the same as those of the method provided by the embodiment of the present invention, and are not described herein again.
Furthermore, an embodiment of the present invention also proposes a storage medium, on which a computer program is stored, which when executed by a processor implements the steps of the above-mentioned method.
Furthermore, an intelligent device is provided in an embodiment of the present invention, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method when executing the program.
The logic and/or steps represented in the flowcharts or otherwise described herein, such as an ordered listing of executable instructions that can be considered to implement logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, processor-containing system, or other system that can fetch the instructions from the instruction execution system, apparatus, or device and execute the instructions. For the purposes of this description, a "computer-readable medium" can be any means that can contain, store, communicate, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device.
More specific examples (a non-exhaustive list) of the computer-readable medium would include the following: an electrical connection (electronic device) having one or more wires, a portable computer diskette (magnetic device), a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber device, and a portable compact disc read-only memory (CDROM). Additionally, the computer-readable medium could even be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via for instance optical scanning of the paper or other medium, then compiled, interpreted or otherwise processed in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that portions of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, the various steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, any one or combination of the following techniques, which are known in the art, may be used: a discrete logic circuit having a logic gate circuit for implementing a logic function on a data signal, an application specific integrated circuit having an appropriate combinational logic gate circuit, a Programmable Gate Array (PGA), a Field Programmable Gate Array (FPGA), or the like.
In the description herein, references to the description of the term "one embodiment," "some embodiments," "an example," "a specific example," or "some examples," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the invention. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
While embodiments of the invention have been shown and described, it will be understood by those of ordinary skill in the art that: various changes, modifications, substitutions and alterations can be made to the embodiments without departing from the principles and spirit of the invention, the scope of which is defined by the claims and their equivalents.
Finally, it should be noted that: the above-mentioned embodiments are only specific embodiments of the present invention, which are used for illustrating the technical solutions of the present invention and not for limiting the same, and the protection scope of the present invention is not limited thereto, although the present invention is described in detail with reference to the foregoing embodiments, those skilled in the art should understand that: those skilled in the art can still make modifications or changes to the embodiments described in the foregoing embodiments, or make equivalent substitutions for some features, within the scope of the disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the embodiments of the present invention, and they should be construed as being included therein. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (10)

1. An image processing method, characterized in that it comprises the steps of:
obtaining a target mark image, and interfering the target mark image through a countermeasure generation network model to obtain at least one interference image;
converging the interference image through a convergence model, taking the interference image qualified in convergence as a countermeasure sample, and covering the countermeasure sample on a target mark image;
carrying out image recognition on the target marked image covered by the countermeasure sample to obtain error marked images, wherein the error marked images obtained by each user are different;
and determining the target marking image of the error marking image according to the matching degree of the error marking image and the target area in the current target marking image.
2. The image processing method of claim 1, wherein the step of obtaining a target mark image, and interfering the target mark image through the countermeasure generation network model to obtain at least one interference image comprises:
acquiring image information of an original image, and performing discretization processing on the original image according to the image information so as to divide the original image into a plurality of recognizable areas;
marking a plurality of identifiable regions of the original image through an image identification model to obtain a target marking image, wherein the target marking image carries correct labels of the identifiable regions;
and interfering the target mark image through the countermeasure generation network model to obtain a corresponding interference image, wherein the interference image carries an error label of an identifiable region.
3. The image processing method according to claim 2, wherein the step of marking a plurality of recognizable areas of the original image by an image recognition model to obtain the target mark image comprises:
extracting and decomposing a plurality of identifiable regions of the original image to form marking factors;
and obtaining the correct label of the identifiable region after machine algorithm learning according to the marking factor.
4. The image processing method according to claim 3, wherein the calculation formula of the correct label of the recognizable area obtained by machine algorithm learning according to the labeling factor is as follows:
y_k = φ( V_k · Σ_i W_i X_i + B_k )
where y_k is the correct-label output for the k-th identifiable region, φ(·) is an activation function, V_k is an adjusting coefficient, W_i is an initial weight, X_i is a marking factor, and B_k is the accumulated offset.
5. The image processing method according to claim 1, wherein the error marked image corresponds to an initial weight matrix, and the step of determining the target marked image of the error marked image according to the matching degree of the error marked image and the target area in the current target marked image comprises:
calculating a correlation matrix between the error marked image and the current target marked image;
correcting the initial weight matrix according to the correlation matrix to obtain a target weight matrix, wherein the target weight matrix is used for replacing the initial weight matrix when the error marked image is matched with the current target marked image;
judging the matching degree between the corresponding error marked image and the current target marked image after the initial weight matrix is replaced;
and when the matching degree between the error mark image and the current target mark image is greater than a preset matching value, taking the current target mark image as the target mark image of the error mark image.
6. The image processing method according to claim 5, wherein the preset matching value ranges from 93% to 98%.
7. The image processing method according to claim 5, wherein after the step of determining the matching degree between the error marked image corresponding to the replaced initial weight matrix and the current target marked image, the method further comprises:
when the matching degree between the error marked image and the current target marked image is smaller than a preset matching value, judging whether the error marked image carries an associated identifier of the current target marked image;
if yes, inquiring an association matching library according to the association identifier, matching the association matching library with the error marked image to obtain association data, and determining a target marked image of the error marked image according to the association data;
if not, an alarm prompt is sent out.
8. An image processing system, characterized in that the system comprises:
the acquisition module is used for acquiring a target mark image and interfering the target mark image through a countermeasure generation network model to obtain at least one interference image;
the convergence module is used for converging the interference image through a convergence model, taking the interference image qualified in convergence as a countermeasure sample, and covering the countermeasure sample on a target mark image;
the identification module is used for carrying out image identification on the target mark image covered by the countermeasure sample to obtain error mark images, and the error mark images obtained by each user are different;
and the determining module is used for determining the target mark image of the error mark image according to the matching degree of the error mark image and the target area in the current target mark image.
9. A readable storage medium on which a computer program is stored which, when being executed by a processor, carries out the image processing method of any one of claims 1 to 7.
10. An intelligent device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the image processing method of any of claims 1 to 7 when executing the program.
CN201910701385.7A 2019-07-31 2019-07-31 Image processing method, system, readable storage medium and intelligent device Active CN110473135B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910701385.7A CN110473135B (en) 2019-07-31 2019-07-31 Image processing method, system, readable storage medium and intelligent device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910701385.7A CN110473135B (en) 2019-07-31 2019-07-31 Image processing method, system, readable storage medium and intelligent device

Publications (2)

Publication Number Publication Date
CN110473135A CN110473135A (en) 2019-11-19
CN110473135B true CN110473135B (en) 2022-12-27

Family

ID=68508424

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910701385.7A Active CN110473135B (en) 2019-07-31 2019-07-31 Image processing method, system, readable storage medium and intelligent device

Country Status (1)

Country Link
CN (1) CN110473135B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111177757A (en) * 2019-12-27 2020-05-19 支付宝(杭州)信息技术有限公司 Processing method and device for protecting privacy information in picture
CN113378118B (en) * 2020-03-10 2023-08-22 百度在线网络技术(北京)有限公司 Method, apparatus, electronic device and computer storage medium for processing image data
CN113689318B (en) * 2021-07-30 2023-07-07 南京信息工程大学 Deep semi-fragile watermarking method for image authentication and anti-sample defense


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9965850B2 (en) * 2012-07-05 2018-05-08 Bernard Fryshman Object image recognition and instant active response with enhanced application and utility
US10937540B2 (en) * 2017-12-21 2021-03-02 International Business Machines Corporation Medical image classification based on a generative adversarial network trained discriminator

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2449171A1 (en) * 2001-06-06 2002-12-12 Spectra Systems Corporation Marking and authenticating articles
CN108921764A (en) * 2018-03-15 2018-11-30 中山大学 A kind of image latent writing method and system based on generation confrontation network
CN109447895A (en) * 2018-09-03 2019-03-08 腾讯科技(武汉)有限公司 Picture Generation Method and device, storage medium and electronic device
CN109615582A (en) * 2018-11-30 2019-04-12 北京工业大学 A kind of face image super-resolution reconstruction method generating confrontation network based on attribute description

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
数字图像隐写与隐写分析技术的研究 (Research on Digital Image Steganography and Steganalysis Techniques); 魏程程 (Wei Chengcheng); 《信息与电脑(理论版)》 (Information & Computer, Theoretical Edition); 2018-06-25 (No. 12); full text *

Also Published As

Publication number Publication date
CN110473135A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN110473135B (en) Image processing method, system, readable storage medium and intelligent device
JP5404644B2 (en) Computer program for temporal event-based image printing and apparatus for automatically detecting image piracy
Hsu et al. Detecting image splicing using geometry invariants and camera characteristics consistency
Singh et al. Video frame and region duplication forgery detection based on correlation coefficient and coefficient of variation
Gao et al. Geometric distortion insensitive image watermarking in affine covariant regions
Chierchia et al. On the influence of denoising in PRNU based forgery detection
GB2493514A (en) Using a measure of depth to detect if video data derives from a reference video
Thanh et al. Robust semi-blind video watermarking based on frame-patch matching
US11049350B2 (en) Method for authenticating and/or checking the integrity of a subject
Böhme et al. Media forensics
CN107529071A (en) A kind of video data handling procedure and device
CN109461110B (en) Method and device for determining traceability information of picture
CN108564520B (en) GIS vector data copyright authentication method based on Moran index
Sarma et al. A study on digital image forgery detection
Tralic et al. JPEG image tampering detection using blocking artifacts
Kaur et al. Review And Analysis of Image Forgery Detection Technique for Digital Images.
JP2009081827A (en) Restoring method of digital watermarking, and restoration processing program
Parasuraman et al. Reversible image watermarking using interpolation technique
Lu et al. A hierarchical protection scheme for intellectual property of semi-open source datasets based on double watermarking
JP2001136363A (en) Contents use acceptance managing method and its device
Priya et al. Robust feature based image watermarking process
Chaitra et al. Digital image forgery: taxonomy, techniques, and tools–a comprehensive study
Chen et al. Adaptive authentication schemes for 3D mesh models
Zeng et al. ISO Setting Estimation Based on Convolutional Neural Network and its Application in Image Forensics
Roshini et al. A novel approach to image forgery detection

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant