CN113469869B - Image management method and device - Google Patents

Image management method and device

Info

Publication number
CN113469869B
Authority
CN
China
Prior art keywords
image
target
blocks
image blocks
information
Prior art date
Legal status
Active
Application number
CN202111033870.5A
Other languages
Chinese (zh)
Other versions
CN113469869A
Inventor
廖巍
王同洋
韩敏
王慧强
徐胜文
Current Assignee
Wuhan Huagong Anding Information Technology Co ltd
Original Assignee
Wuhan Huagong Anding Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Wuhan Huagong Anding Information Technology Co ltd filed Critical Wuhan Huagong Anding Information Technology Co ltd
Priority to CN202111033870.5A
Publication of CN113469869A
Application granted
Publication of CN113469869B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/0021 Image watermarking
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 21/00 Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F 21/10 Protecting distributed programs or content, e.g. vending or licensing of copyrighted material; Digital rights management [DRM]
    • G06F 21/16 Program or content traceability, e.g. by watermarking

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Software Systems (AREA)
  • General Physics & Mathematics (AREA)
  • Technology Law (AREA)
  • Computer Hardware Design (AREA)
  • Computer Security & Cryptography (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image management method and device. In the method, a processing server first obtains the current display frame image and the frame image to be processed of each of a plurality of user terminals and segments them, obtaining a plurality of first image blocks, each carrying first steganographic information, and a plurality of second image blocks. It then determines the target second image blocks according to the similarity between each second image block and the corresponding first image block, calls an encoding model to embed second steganographic information into each target second image block to obtain third image blocks, and finally obtains a target display frame image from the first image blocks, the target second image blocks and the third image blocks and displays it on the target user terminal. A tracing server obtains a leaked image to be traced and segments it to obtain a fourth image block, then calls a decoding model to extract third steganographic information from the fourth image block, from which the target leakage terminal and target leakage time of the image to be traced are determined. The method and device achieve accurate tracing of the image to be traced with a small amount of computation.

Description

Image management method and device
Technical Field
The present application relates to the field of information security technologies, and in particular, to an image management method and apparatus.
Background
With the rapid development of internet information technology, information security has become a prominent public concern, and sensitive files are generally protected with various encryption schemes. However, if a sensitive file displayed on a terminal is photographed or captured as a screenshot, the leak cannot be traced and located, further information leakage cannot be prevented in time, and serious security risks result.
Therefore, the existing file management method has the technical problem that the source of the leaked files is difficult to trace, and needs to be improved.
Disclosure of Invention
The embodiments of the application provide an image management method and device, which alleviate the technical problem in conventional file management methods that the source of a leaked file is difficult to trace.
In order to solve the above technical problem, an embodiment of the present application provides the following technical solutions:
the application provides an image management method, which is applied to a steganography traceability system, wherein the steganography traceability system comprises a plurality of user terminals, a processing server and a traceability server, and when the steganography traceability system is applied to the processing server, the image management method comprises the following steps:
acquiring current display frame images and frame images to be processed of a plurality of user terminals;
dividing a current display frame image and a frame image to be processed of a target user terminal in the same division mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, wherein each first image block carries first steganographic information, and the first steganographic information comprises a first terminal identifier and a first time identifier;
determining target second image blocks with the similarity smaller than a threshold value according to the similarity between each second image block and the corresponding first image block;
calling a trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier;
obtaining a target display frame image according to the first image block, the target second image block and the third image block, and displaying the target display frame image on the target user terminal at a target moment;
when the image management method is applied to the tracing server, the image management method comprises the following steps:
acquiring an image to be traced leaked from one of a plurality of user terminals, wherein the image to be traced is obtained by photographing, screen capturing or printing one target display frame image on the corresponding user terminal;
dividing the image to be traced in a preset dividing mode to obtain at least one fourth image block;
calling a trained deep learning decoding model to acquire third steganographic information from the fourth image block, wherein the third steganographic information comprises a second terminal identifier and a third time identifier;
and determining a target leakage terminal and target leakage time of the image to be traced according to the third steganographic information.
Meanwhile, an embodiment of the present application further provides an image management apparatus, which is applied to a steganography traceability system, where the steganography traceability system includes a plurality of user terminals, a processing server, and a traceability server, and in the processing server, the image management apparatus includes:
the first acquisition module is used for acquiring current display frame images and frame images to be processed of a plurality of user terminals;
the device comprises a first segmentation module, a second segmentation module and a third segmentation module, wherein the first segmentation module is used for segmenting a current display frame image and a frame image to be processed of a target user terminal in the same segmentation mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, each first image block carries first steganographic information, and the first steganographic information comprises a first terminal identifier and a first time identifier;
the first determining module is used for determining a target second image block with the similarity smaller than a threshold value according to the similarity between each second image block and the corresponding first image block;
the embedding module is used for calling the trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier;
the display module is used for obtaining a target display frame image according to the first image block, the target second image block and the third image block and displaying the target display frame image on the target user terminal at a target moment;
in the tracing server, the image management apparatus includes: the second acquisition module is used for acquiring an image to be traced, which is leaked from one of the user terminals, wherein the image to be traced is obtained by photographing, screen-capturing or printing one of target display frame images on the corresponding user terminal;
the second segmentation module is used for segmenting the image to be traced in a preset segmentation mode to obtain at least one fourth image block;
a third obtaining module, configured to invoke the trained deep learning decoding model to obtain third steganographic information from the fourth image block, where the third steganographic information includes a terminal identifier and a third time identifier;
and the second determining module is used for determining the target leakage terminal and the target leakage time of the image to be traced according to the third steganographic information.
The application also provides an electronic device comprising a memory and a processor; the memory stores an application program, and the processor is configured to run the application program in the memory to perform any one of the operations in the image management method.
The embodiment of the present application provides a computer-readable storage medium, which stores a plurality of instructions, where the instructions are suitable for a processor to load, so as to execute the steps in the above method.
Advantageous effects: the application provides an image management method and device. In the method, the processing server first obtains the current display frame image and the frame image to be processed of each of a plurality of user terminals, and segments the two frame images of a target user terminal in the same segmentation manner to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, where each first image block carries first steganographic information comprising a terminal identifier and a first time identifier. It then determines the target second image blocks whose similarity to the corresponding first image blocks is smaller than a threshold, calls a trained deep learning encoding model to embed second steganographic information, comprising a first terminal identifier and a second time identifier, into each target second image block to obtain a corresponding third image block, and finally obtains a target display frame image from the first image blocks, the target second image blocks and the third image blocks and displays it on the target user terminal at the target moment. The tracing server first obtains an image to be traced that leaked from one of the user terminals and segments it in a preset segmentation manner to obtain at least one fourth image block, then calls a trained deep learning decoding model to extract third steganographic information, comprising a terminal identifier and a third time identifier, from the fourth image block, and finally determines the target leakage terminal and target leakage time of the image to be traced according to the third steganographic information. In this way, every frame image on every user terminal is segmented and each segmented image block is displayed after steganographic information has been embedded, so that once an image leaks from a user terminal, the leaking terminal and leak time can be determined in time from the steganographic information in any image block of the leaked image, achieving accurate localization and tracing of the leak source. In addition, because the frame image to be processed is divided into image blocks and steganographic information is embedded per block, the amount of computation is greatly reduced and the steganographic information can be embedded in real time; each frame image to be processed is compared with the current display frame image, only the blocks whose picture has changed (determined by block-wise similarity) receive new steganographic information, and unchanged blocks are left untouched, which further reduces computation. Finally, the same segmentation approach is used when identifying the image to be traced, so identification and tracing can be completed with a single image block; the requirements on the image to be traced are therefore low, and images leaked by screen capture or photographing can be traced accurately.
Drawings
The technical solution and other advantages of the present application will become apparent from the detailed description of the embodiments of the present application with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario of a steganographic traceability system provided in an embodiment of the present application.
Fig. 2 is a schematic flowchart of a first image management method according to an embodiment of the present application.
Fig. 3 is a schematic flowchart of a second image management method according to an embodiment of the present application.
Fig. 4 is a schematic diagram illustrating segmentation of a currently displayed frame image according to an embodiment of the present application.
Fig. 5 is a schematic diagram illustrating segmentation of a frame image to be processed in the embodiment of the present application.
Fig. 6 is a schematic diagram of a target display frame image in an embodiment of the present application.
Fig. 7 is a schematic diagram of an encoding and decoding process of steganographic information in an embodiment of the present application.
Fig. 8 is a schematic interface diagram of a traceability system in an embodiment of the present application.
Fig. 9 is a third flowchart illustrating an image management method according to an embodiment of the present application.
Fig. 10 is a schematic view of a first structure of an image management apparatus according to an embodiment of the present application.
Fig. 11 is a schematic diagram of a second structure of an image management apparatus according to an embodiment of the present application.
Fig. 12 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application. It is to be understood that the embodiments described are only a few embodiments of the present application and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application provides an image management method, an image management device, electronic equipment and a computer-readable storage medium, wherein the image management device can be integrated in the electronic equipment, and the electronic equipment can be a server or a terminal and other equipment.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of the steganographic traceability system provided in an embodiment of the present application. The system may include terminals and servers, which communicate with one another over a network formed by various gateways and the like. The application scenario includes a user terminal 11, a processing server 12, a tracing server 13, and an image to be traced 14, wherein:
the user terminal 11 includes but is not limited to a mobile terminal and a fixed terminal with a display function, such as a computer, a mobile phone, etc., and the system includes a plurality of user terminals 11;
the processing server 12 and the tracing server 13 comprise a local server and/or a remote server, etc.;
the image to be traced 14 is an image obtained by photographing or screen-capturing a current display interface of a certain user terminal 11.
The user terminal 11, the processing server 12, the tracing server 13 and the image to be traced 14 are located in a wireless network or a wired network to realize data interaction between the four, wherein:
each user terminal 11 includes a current display frame image and a frame image to be processed, the current display frame image is displayed in a current interface of the user terminal 11 after being spliced by a plurality of first image blocks, each first image block is embedded with first steganographic information, and the first steganographic information includes a first user terminal identifier and a first time identifier.
Taking each user terminal 11 in turn as the target user terminal, the processing server 12 first obtains the current display frame image and the frame image to be processed of the target user terminal and segments the two frame images in the same segmentation manner, obtaining a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed; the first and second image blocks have the same shape and correspond to each other one-to-one by position. It then determines, according to the similarity between each second image block and the corresponding first image block, the target second image blocks whose similarity is smaller than a threshold; a similarity below the threshold indicates that the content displayed in the target second image block has changed relative to the corresponding position of the current display frame. After the target second image blocks are determined, a trained deep learning encoding model is called to embed second steganographic information into each target second image block, yielding a third image block for each; the second steganographic information comprises a first terminal identifier and a second time identifier. Finally, if all second image blocks are target second image blocks, all third image blocks are combined into the target display frame image; if only some second image blocks are target second image blocks, the target first image blocks corresponding to them are determined, the remaining first image blocks are determined from the plurality of first image blocks, and the remaining first image blocks and the third image blocks are combined into the target display frame image; if there is no target second image block, the current display frame image is used as the target display frame image. The target display frame image is then displayed on the target user terminal at the target moment. After these steps, every frame displayed in the interface of each user terminal 11 carries multiple pieces of steganographic information, embedded in units of image blocks.
When the display interface of a user terminal 11 is leaked by photographing or screenshot, the leaked image is the image to be traced 14. The tracing server 13 obtains the image to be traced 14 and segments it in a preset segmentation manner to obtain at least one fourth image block, then calls a trained deep learning decoding model to extract third steganographic information, comprising a second terminal identifier and a third time identifier, from the fourth image block, and finally determines the target leakage terminal and target leakage time of the image to be traced according to the third steganographic information, thereby accurately locating and tracing the leak source.
It should be noted that the system scenario diagram shown in fig. 1 is only an example, and the server and the scenario described in the embodiment of the present application are for more clearly illustrating the technical solution of the embodiment of the present application, and do not form a limitation on the technical solution provided in the embodiment of the present application, and as a person having ordinary skill in the art knows, with the evolution of the system and the occurrence of a new service scenario, the technical solution provided in the embodiment of the present application is also applicable to similar technical problems. The following are detailed below. It should be noted that the following description of the embodiments is not intended to limit the preferred order of the embodiments.
Referring to fig. 2, fig. 2 is a first flowchart illustrating an image management method according to an embodiment of the present application, where the method is applied to a processing server, and specifically includes:
s201: acquiring current display frame images and frame images to be processed of a plurality of user terminals.
The image management method is applicable to the steganographic traceability system, which includes a plurality of user terminals; each user terminal includes, but is not limited to, mobile and fixed terminals with a display function, such as computers and mobile phones, and can display various digital media files such as images, videos and documents. A user terminal displays pictures frame by frame: the frame currently being displayed is the current display frame image, and the frame that currently needs to be processed and will be displayed at the target moment is the frame image to be processed, usually the next frame after the current display frame image.
S202: the method comprises the steps of dividing a current display frame image and a frame image to be processed of a target user terminal in the same dividing mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, wherein each first image block carries first steganographic information, and the first steganographic information comprises a first terminal identifier and a first time identifier.
The watermark system is a resident process: a group of programs installed on the user's operating system that starts with the system, never exits, and cannot be terminated by the user; it keeps running for the entire lifetime of the operating system and works under the control of the processing server. Based on the current display frame image of the user terminal, the watermark system cyclically captures the content of the host video memory buffer to obtain the frame image to be processed, then segments the frame image to be processed and the current display frame image into a plurality of image blocks, and embeds steganographic information into some of the image blocks as required. In the present application, steganographic information refers to an invisible watermark added to a target carrier; it carries information but causes no perceptible visual distortion after being embedded in the carrier.
Specifically, each user terminal is taken as a target user terminal. For each target user terminal, the current display frame image and the frame image to be processed are obtained and then segmented in the same segmentation manner, giving a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed; the first and second image blocks have the same shape and correspond to each other one-to-one by position.
Each first image block of the current display frame image is embedded with first steganographic information comprising a first terminal identifier and a first time identifier. The first terminal identifier identifies the target user terminal displaying the frame image, such as a MAC address, and the first time identifier is the generation time of the first steganographic information of that first image block; in addition, the first steganographic information may also carry information such as the work number and name of the user logged in to the watermark system on the target user terminal. In the initial state, the second image blocks of the frame image to be processed carry no steganographic information.
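As an illustration only, the following Python sketch shows one possible way to pack such a steganographic payload (terminal identifier, time identifier and optional logged-in user information) into a bit string before it is handed to the encoding model; the field layout, field lengths and function names are assumptions and are not specified by the present application.

import time

def pack_steganographic_payload(mac_address: str, user_id: str = "") -> bytes:
    """Pack a terminal identifier (MAC address), a time identifier and
    optional logged-in user information into a fixed-layout byte payload."""
    mac_bytes = bytes.fromhex(mac_address.replace(":", ""))       # 6-byte terminal identifier
    time_bytes = int(time.time()).to_bytes(8, "big")              # 8-byte time identifier
    user_bytes = user_id.encode("utf-8")[:16].ljust(16, b"\x00")  # fixed-width user field
    return mac_bytes + time_bytes + user_bytes

payload = pack_steganographic_payload("AA:BB:CC:DD:EE:FF", user_id="worker-0042")
bits = "".join(f"{byte:08b}" for byte in payload)  # bit string to be embedded as the watermark

Under these assumptions, the decoding side would simply reverse the same layout to recover the terminal identifier and time identifier.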
In one embodiment, S202 specifically includes: acquiring a display center, preset segmentation precision and resolution of a target user terminal; determining the position information of each divided image block according to the display center, and determining the size information and the total number of the divided image blocks according to the preset division precision and the resolution; and segmenting the current display frame image and the frame image to be processed according to the position information, the size information and the total number.
For the target user terminal, the display center, the preset segmentation precision and the resolution are obtained first. The display center is the center point of the display screen; the resolution is the number of pixels the display screen can show, such as 1600 × 1400 or 2048 × 1536; and the preset segmentation precision is the number of pixels contained in each image block, such as 400 × 400 or 200 × 200. Once the display center is determined, the image block containing the display center is taken as the first image block, and the other image blocks lie above, below, to the left and to the right of it, so the position of each segmented image block can be determined from the display center. Once the preset segmentation precision is determined, the size of each image block is known: each block contains the same number of pixels regardless of the size of the display screen. Knowing the number of pixels per image block and the total number of pixels of the display screen, the number of image blocks the whole image can be divided into can be calculated.
As shown in fig. 4, for the current display frame image, assuming a preset segmentation precision of 400 × 400, a coordinate system is established with the display center as the coordinate origin O and the whole image is segmented into 400 × 400 blocks, yielding a plurality of first image blocks. Starting from the origin O, four points are taken in clockwise order, with coordinates A(200, 200), B(-200, 200), C(-200, -200) and D(200, -200); these four points determine the first image block 30. Four further points can then be taken on any of the upper, lower, left or right sides of this first image block 30 to obtain a second first image block 30, and so on, taking four points each time to obtain the other first image blocks 30. With a resolution of Width × Height, let M = floor((Width/2 - 200)/400) and N = floor((Height/2 - 200)/400), where floor denotes rounding down; the total number of first image blocks 30 is then M × N × 4 + (M + N) × 2 + 1. In the edge area, where a complete 400 × 400 image block cannot be obtained, the remainder is treated as the first edge image and is not processed.
As shown in fig. 5, for the frame image to be processed, the same segmentation method is adopted to obtain a plurality of second image blocks 40 and second edge images.
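A minimal Python sketch of this centre-based segmentation, following the 400 × 400 example above; the data layout (a dictionary keyed by each block's offset from the central block) is an illustrative assumption.

import math
import numpy as np

def split_into_blocks(frame: np.ndarray, precision: int = 400) -> dict:
    """Split a frame into precision x precision blocks laid out around the display
    centre; incomplete edge regions are left out (treated as edge images)."""
    height, width = frame.shape[:2]
    cx, cy = width // 2, height // 2
    half = precision // 2
    # Whole blocks that fit to each side of the central block.
    m = math.floor((width / 2 - half) / precision)
    n = math.floor((height / 2 - half) / precision)
    total = m * n * 4 + (m + n) * 2 + 1        # total-count formula from the description
    blocks = {}
    for j in range(-n, n + 1):                 # vertical offset from the central block
        for i in range(-m, m + 1):             # horizontal offset from the central block
            x0 = cx + i * precision - half
            y0 = cy + j * precision - half
            blocks[(i, j)] = frame[y0:y0 + precision, x0:x0 + precision]
    assert len(blocks) == total
    return blocks

Applying the same function to the current display frame image and to the frame image to be processed yields first and second image blocks that correspond one-to-one by position.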
S203: and determining the target second image blocks with the similarity smaller than the threshold according to the similarity between each second image block and the corresponding first image block.
And comparing the similarity of each second image block 40 with the corresponding first image block 30, and judging whether a target second image block with the similarity smaller than a threshold exists according to the comparison result. The similarity is used for judging whether the display contents of the two image blocks are the same or not, and the similarity smaller than the threshold value indicates that the display contents of the two image blocks are different, namely the display contents of the frame image to be processed are changed at the position where the target second image block is located compared with the current display frame image. When the similarity comparison is performed, a structural similarity algorithm (SSIM) may be used, where the SSIM algorithm is used to measure image similarity from three aspects of brightness, contrast, and structure, the average value is used as an estimate of brightness, the standard deviation is used as an estimate of contrast, the covariance is used as a measure of structural similarity, the SSIM value range is 0 to 1, and a larger value indicates that the image distortion is smaller and more similar.
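A hedged sketch of this change-detection step, assuming the SSIM implementation in scikit-image (structural_similarity) and colour image blocks; the 0.95 threshold is purely illustrative.

from skimage.metrics import structural_similarity as ssim

def find_changed_blocks(first_blocks: dict, second_blocks: dict, threshold: float = 0.95) -> list:
    """Return the positions whose second image block differs from the co-located
    first image block, i.e. whose SSIM falls below the threshold."""
    changed = []
    for pos, second in second_blocks.items():
        score = ssim(first_blocks[pos], second, channel_axis=-1)  # 0..1, larger = more similar
        if score < threshold:
            changed.append(pos)        # this position becomes a target second image block
    return changed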
S204: and calling the trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier.
When target second image blocks exist, calling a trained deep learning coding model to embed second steganography information into each target second image block, wherein the second steganography information comprises a first terminal identifier and a second time identifier, the first terminal identifier is an identifier of a target user terminal of the frame image to be processed, such as an MAC (media access control) address, and the second time identifier is the generation time of the second steganography information of the second image block.
In the embodiment of the present application, a trained deep learning encoding model is used to embed the second steganographic information into the target second image blocks, so the deep learning encoding model must be trained beforehand. In the training stage, a generative adversarial network (GAN) model is used, comprising the deep learning encoding model and a discrimination model. Multiple groups of original image blocks and original steganographic information are used as first training samples and input to the deep learning encoding model, which encodes the original image blocks and produces generated image blocks with the original steganographic information embedded. The discrimination model learns to distinguish generated image blocks from original image blocks and outputs a probability between 0 and 1, where 0 denotes a generated image block and 1 an original image block: a value below 0.5 means the block is judged to be generated, and a value above 0.5 means it is judged to be original. If the discrimination model can still distinguish the two, the deep learning encoding model adjusts its neuron parameters and re-encodes to produce new generated image blocks; the discrimination model then continues to learn to distinguish, the encoding model continues to encode and generate, and this loop is repeated until the discrimination model reaches a Nash equilibrium in its recognition probabilities for generated and original image blocks, that is, it can no longer correctly tell them apart, at which point the generated image blocks are practically indistinguishable from the original image blocks. From this training result, the trained deep learning encoding model and the final generated image blocks embedded with the original steganographic information are obtained.
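A simplified PyTorch-style sketch of one step of the adversarial training described above; the Encoder and Discriminator networks, the optimizers and the loss weighting are assumptions standing in for whatever architecture is actually used.

import torch
import torch.nn.functional as F

def gan_train_step(encoder, discriminator, opt_enc, opt_disc, blocks, secrets):
    # The encoding model embeds the secret bits into the original image blocks.
    stego = encoder(blocks, secrets)

    # The discrimination model learns to label originals as 1 and generated blocks as 0.
    d_real = discriminator(blocks)
    d_fake = discriminator(stego.detach())
    loss_disc = F.binary_cross_entropy(d_real, torch.ones_like(d_real)) + \
                F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake))
    opt_disc.zero_grad(); loss_disc.backward(); opt_disc.step()

    # The encoding model is updated so its outputs are judged as original while staying
    # visually close to the input; equilibrium is approached when the discriminator
    # outputs roughly 0.5 for both kinds of block.
    d_fake = discriminator(stego)
    loss_enc = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake)) + F.mse_loss(stego, blocks)
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
    return loss_enc.item(), loss_disc.item()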
After the trained deep learning encoding model is called to embed the second steganographic information into each target second image block and a third image block is obtained, the similarity between the third image block and the target second image block is extremely high, and the user cannot distinguish the two by eye.
S205: and obtaining a target display frame image according to the first image block, the target second image block and the third image block, and displaying the target display frame image on a target user terminal at a target moment.
After the current display frame image has been displayed, the next frame image needs to be displayed. At this point a new target display frame image is obtained from the first image blocks, the target second image blocks and the third image blocks, and at the target moment it is rendered by the graphics card and displayed on the target user terminal; every image block of the target display frame image carries steganographic information.
In one embodiment, S205 specifically includes: when all the second image blocks are target second image blocks, combining all the third image blocks to obtain a target display frame image; and when part of the second image blocks are target second image blocks, determining target first image blocks corresponding to the target second image blocks according to the target second image blocks, determining residual first image blocks from the plurality of first image blocks according to the target first image blocks, and combining the residual first image blocks and the third image blocks to obtain a target display frame image.
And if the display contents of the current display frame image and the frame image to be processed are completely different, all second image blocks of the frame image to be processed are target second image blocks, all third image blocks obtained after encoding are combined to obtain a target display frame image, and all steganographic information in the target display frame image is second steganographic information.
If the display contents of the current display frame image and the frame image to be processed are partially the same and partially different, only part of second image blocks in the frame image to be processed are target second image blocks, first image blocks corresponding to the target second image blocks are used as target first image blocks, other first image blocks are used as residual first image blocks, all residual first image blocks and all third image blocks obtained after encoding are combined to obtain a target display frame image, one part of steganography information in the target display frame image is first steganography information, and the other part of steganography information is second steganography information. As shown in fig. 6, one part of the target display frame image is a first image block 30, and the other part is a third image block 50.
And if the display contents of the current display frame image and the frame image to be processed are completely the same, the target second image block does not exist, the current display frame image is directly used as the target display frame image and is displayed on the target user terminal at the target moment, and all the steganographic information in the target display frame image is the first steganographic information. Through the steps, each frame of picture displayed on each user terminal has a plurality of steganographic information.
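A sketch, under the same assumed block layout as the segmentation sketch above, of how the target display frame image could be assembled from the remaining first image blocks and the third image blocks; when no block has changed, the current display frame image is returned unchanged.

import numpy as np

def assemble_target_frame(current_frame: np.ndarray, third_blocks: dict,
                          precision: int = 400) -> np.ndarray:
    """Overwrite the changed positions of the current display frame with the newly
    encoded third image blocks; unchanged positions keep their first image blocks."""
    target = current_frame.copy()              # remaining first image blocks stay as they are
    height, width = target.shape[:2]
    cx, cy = width // 2, height // 2
    half = precision // 2
    for (i, j), block in third_blocks.items():
        x0 = cx + i * precision - half
        y0 = cy + j * precision - half
        target[y0:y0 + precision, x0:x0 + precision] = block
    return target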
In the current information security field, steganography is mainly used for information transfer: secret information is embedded into a picture so that it can be passed on without attracting the attention of third parties. In that setting, embedding is performed only for certain specific sensitive files, and the steganographic information is embedded directly into the whole picture, so the amount of computation involved is large. In the present application, for every user terminal on which the watermark system is installed, every frame image displayed carries steganographic information, so the scope of application is much wider. When embedding the steganographic information, the frame image to be processed is divided into a plurality of image blocks and the information is embedded block by block, which greatly reduces the computation involved, enables real-time embedding, and avoids incomplete or delayed embedding of the steganographic information.
In an actual usage scenario, the content displayed on a user terminal with the watermark system installed generally has high information security requirements, and the displayed picture is usually covered by other protective measures as well, such as manually locking the screen when leaving or automatically locking it after the picture has been still for a few minutes, so the image on the display screen is not left in an easily leaked state for long. In this method, each frame image to be processed on a user terminal is compared with the current display frame image; the image blocks whose picture has changed are determined from the block-wise similarity between the two, new steganographic information is embedded only into those blocks, and blocks whose picture has not changed are left untouched. The time identifier in the steganographic information therefore remains reasonably current while the amount of computation is kept to a minimum, balancing timeliness and computational cost.
In the above embodiment, the steganographic information is embedded by using a deep learning neural network model, but the embodiment of the present application is not limited thereto, and other image algorithms, such as DCT (Discrete Cosine Transform), SIFT (Scale Invariant Feature Transform), and the like, may also be used to implement steganographic processing of an image.
Referring to fig. 3, fig. 3 is a schematic flowchart illustrating a second flowchart of an image management method according to an embodiment of the present application, where the method is applied to a tracing server, and specifically includes:
s301: and acquiring the image to be traced leaked from one of the plurality of user terminals, wherein the image to be traced is obtained by photographing, screen capturing or printing one of the target display frame images on the corresponding user terminal.
The image to be traced may be an image leaked from a certain user terminal by photographing, screen capture, printing or similar means. After the image to be traced is obtained, the leaking user terminal, and the specific time of the leak, must be identified from among all the user terminals on which the watermark system is installed.
S302: and segmenting the image to be traced by a preset segmentation mode to obtain at least one fourth image block.
The image to be traced is obtained by photographing, screen-capturing or printing the image displayed on the screen of a user terminal, so its display content is the same as the content on the actual screen and it likewise contains multiple pieces of steganographic information; for tracing, only one piece of steganographic information is needed. The image to be traced is therefore first segmented in a preset segmentation manner to obtain at least one fourth image block. The preset segmentation manner is the same as the manner in which the current display frame image and the frame image to be processed were segmented in the previous embodiment: there, a coordinate system was set up with the display center as the origin, the image was segmented with a precision of 400 × 400 and the steganographic information was embedded; here, the same segmentation is performed in order to recover the steganographic information.
In an embodiment, S302 specifically includes: acquiring target display content from the image to be traced, and performing image rectification on the target display content; and dividing the target display content with preset dividing precision to obtain at least one fourth image block.
An image to be traced that was obtained by photographing often contains content unrelated to the screen, such as the surrounding environment outside the display; this content is not needed for tracing, so the image to be traced must first be processed so that only the target display content on the screen is extracted from it. In addition, photographs are rarely taken exactly face-on; there is usually some shooting angle, so the target display content may be skewed or perspective-distorted. Image rectification is therefore applied so that the target display content is presented as a frontal view, and the processed display content is then segmented with the preset segmentation precision, which is the same as in the embodiment above, e.g. 400 × 400. At least one fourth image block is obtained after segmentation; the fourth image block may be chosen from a region with higher definition or close to the center, for example, but this is not a limitation, and those skilled in the art may select the specific position of the fourth image block as needed.
It should be noted that in the present application only one fourth image block is needed to complete the identification of the steganographic information; however, to improve the accuracy of identification, several fourth image blocks may be segmented and identified, and those skilled in the art may choose the number of fourth image blocks as needed. A deep learning model may be used for extracting the display content from, and rectifying, the image to be traced.
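A sketch of this pre-processing on the tracing side, assuming OpenCV and that the four screen corners have already been located (here they are simply passed in); the output size and the choice of the central block are illustrative.

import cv2
import numpy as np

def rectify_and_cut(leaked: np.ndarray, corners: np.ndarray,
                    out_size=(1920, 1080), precision: int = 400) -> np.ndarray:
    """corners: 4x2 array of the screen corners in the photograph, ordered
    top-left, top-right, bottom-right, bottom-left."""
    w, h = out_size
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    matrix = cv2.getPerspectiveTransform(np.float32(corners), dst)
    frontal = cv2.warpPerspective(leaked, matrix, (w, h))      # frontal view of the screen content
    cx, cy, half = w // 2, h // 2, precision // 2
    return frontal[cy - half:cy + half, cx - half:cx + half]   # one 400x400 fourth image block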
S303: and calling the trained deep learning decoding model to acquire third steganographic information from the fourth image block, wherein the third steganographic information comprises a second terminal identifier and a third time identifier.
After the fourth image block is obtained, the trained deep learning decoding model is called to obtain third steganographic information from it. The third steganographic information comprises a second terminal identifier and a third time identifier: the second terminal identifier identifies the user terminal that leaked the image to be traced, such as a MAC address, and the third time identifier is the generation time of the third steganographic information in the fourth image block. In addition, the third steganographic information may also carry information such as the work number and name of the user logged in to the watermark system on that user terminal.
In the embodiment of the present application, a trained deep learning decoding model is used to obtain the third steganographic information from the fourth image block, so the decoding model must be trained beforehand. In the foregoing embodiment, the generative adversarial network model was trained, and after training the generated image blocks embedded with the original steganographic information were obtained. To train the deep learning decoding model, multiple groups of generated image blocks are processed, the processed generated image blocks and the corresponding original steganographic information are used as second training samples, the deep learning decoding model is trained on them, and the trained deep learning decoding model is obtained from the training result.
The generated image blocks are processed by screen-capturing, printing or photographing them, yielding image blocks in various different forms. The processed generated image blocks are then used as training input data and the original steganographic information as training output data to train the deep learning decoding model. During training, the decoding model automatically outputs decoded steganographic information for each input processed image block; as the number of training samples and iterations increases, the error between the decoded steganographic information and the original steganographic information gradually decreases, and once this error falls below a preset value the trained deep learning decoding model is obtained. When the trained deep learning decoding model is called to obtain the third steganographic information from the fourth image block, the third steganographic information is decoded from the fourth image block with only a very small error relative to the steganographic information originally embedded in it, ensuring the completeness and accuracy of the steganographic information.
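A minimal PyTorch-style sketch of one decoder training step as described above; the distortion function is a stand-in assumption for the screenshot, printing and photographing operations applied to the generated image blocks, and the decoder is assumed to output bit probabilities.

import torch
import torch.nn.functional as F

def distort(stego_blocks: torch.Tensor) -> torch.Tensor:
    """Crudely simulate screen capture / printing / photographing with noise."""
    noisy = stego_blocks + 0.02 * torch.randn_like(stego_blocks)
    return torch.clamp(noisy, 0.0, 1.0)

def decoder_train_step(decoder, optimizer, stego_blocks, secret_bits):
    predicted = decoder(distort(stego_blocks))               # recovered bit probabilities
    loss = F.binary_cross_entropy(predicted, secret_bits)    # error against the original bits
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()                                       # training stops once the error is small enough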
S304: and determining a target leakage terminal and target leakage time of the image to be traced according to the third steganography information.
In an embodiment, S304 specifically includes: querying the steganographic information generation data of each user terminal according to the third steganographic information, and determining the target leakage terminal and target leakage time of the image to be traced according to the query result. Because the third steganographic information carries the second terminal identifier and the third time identifier, the steganographic information generation data of all user terminals can be queried; this data records the specific content of the steganographic information generated by each user terminal at each moment. If, after the query, the steganographic information generated by a certain user terminal matches the third steganographic information, that user terminal is the target leakage terminal, and the third time identifier in the third steganographic information is taken as the target leakage time.
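A hedged sketch of this query; the structure of the steganographic information generation data (one record per generated watermark) is an assumption used only for illustration.

def locate_leak(third_info: bytes, generation_records: list) -> tuple:
    """generation_records: e.g. [{"terminal": "AA:BB:CC:DD:EE:FF",
    "time": "2021-09-06 10:15:00", "payload": b"..."}, ...]"""
    for record in generation_records:
        if record["payload"] == third_info:
            # Target leakage terminal and target leakage time of the image to be traced.
            return record["terminal"], record["time"]
    return None, None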
Through the above steps, the leak source of the image to be traced is accurately located: it can be traced which user terminal leaked the image and at what time, which deters disclosure. Because image segmentation is used in the steganographic embedding stage, the same segmentation approach can be used when identifying the image to be traced, and identification and tracing can be completed with only one image block. The computation required for identification is therefore low, the requirements on the image to be traced are modest, and accurate tracing is possible even if the obtained image to be traced corresponds to only part of the content on the display screen rather than a complete picture.
Fig. 7 is a schematic diagram of the encoding and decoding of the steganographic information, taking one target second image block as an example. The second steganographic information generated on a web page is represented as binary pixel gray-scale values and combined with the image data; the deep learning encoding model is then called to encode it, yielding a third image block, and the third image block is printed or photographed to produce an image to be traced. The image to be traced is segmented to obtain a fourth image block, and the deep learning decoding model is called to decode the fourth image block, producing decoded third steganographic information that is displayed on a web page. Assuming the fourth image block corresponds to the target second image block, the third steganographic information is identical to the second steganographic information, and the target leakage terminal and target leakage time of the image to be traced can be determined from its content.
As shown in fig. 8, in the actual operation process, only the image to be traced needs to be directly uploaded on the interface of the tracing system, and the tracing server can complete the subsequent image block segmentation, steganographic information decoding, and tracing operations.
As can be seen from the above embodiments, in the image management method provided by the present application, every frame image on every user terminal is segmented and each segmented image block is displayed after steganographic information has been embedded, so that once an image leaks from a user terminal, the leaking terminal and leak time can be determined in time from the steganographic information in any image block of the leaked image, achieving accurate localization and tracing of the leak source. In addition, because the frame image to be processed is divided into image blocks and steganographic information is embedded per block, the amount of computation involved is greatly reduced and the steganographic information can be embedded in real time; each frame image to be processed on the user terminal is compared with the current display frame image, the blocks whose picture has changed are determined from the block-wise similarity, new steganographic information is embedded only into those blocks, and unchanged blocks are left untouched, further reducing computation.
Referring to fig. 9, fig. 9 is a schematic view of a third flow of an image management method according to an embodiment of the present application, where the method is applied to a steganographic traceability system, and includes a plurality of user terminals, a processing server, and a traceability server, and the method specifically includes:
901: and each user terminal sends the current display frame image and the frame image to be processed to the processing server.
The steganographic traceability system comprises a plurality of user terminals, such as user terminal a, user terminal b, and other user terminals not shown. Each user terminal displays pictures frame by frame: the frame currently being displayed is the current display frame image, and the frame that needs to be processed and will be displayed at the target moment is the frame image to be processed.
902: and the processing server receives the current display frame image and the frame image to be processed of each user terminal.
903: the processing server divides the current display frame image to obtain a plurality of first image blocks, and divides the frame image to be processed to obtain a plurality of second image blocks.
The processing server divides the current display frame image and the frame image to be processed of the target user terminal in the same division mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, wherein the first image blocks and the second image blocks are the same in shape and are in one-to-one correspondence in position. Each first image block of the current display frame image is embedded with first steganographic information, the first steganographic information comprises a first terminal identifier and a first time identifier, and each second image block of the frame image to be processed has no steganographic information in an initial state.
904: the processing server determines a target second image block from the plurality of second image blocks.
And comparing the similarity of each second image block 40 with the corresponding first image block 30, and judging whether a target second image block with the similarity smaller than a threshold exists according to the comparison result.
905: and the processing server embeds second steganographic information into the target second image block.
And calling the trained deep learning coding model by the processing server to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier.
906: the processing server generates a target display frame image.
And if the display contents of the current display frame image and the frame image to be processed are completely different, all second image blocks of the frame image to be processed are target second image blocks, all third image blocks obtained after encoding are combined to obtain a target display frame image, and all steganographic information in the target display frame image is second steganographic information. If the display contents of the current display frame image and the frame image to be processed are partially the same and partially different, only part of second image blocks in the frame image to be processed are target second image blocks, first image blocks corresponding to the target second image blocks are used as target first image blocks, other first image blocks are used as residual first image blocks, all residual first image blocks and all third image blocks obtained after encoding are combined to obtain a target display frame image, one part of steganography information in the target display frame image is first steganography information, and the other part of steganography information is second steganography information. And if the display contents of the current display frame image and the frame image to be processed are completely the same, the target second image block does not exist, and the current display frame image is directly used as the target display frame image.
907: the processing server transmits the target display frame image to each user terminal.
908: each user terminal receives and displays the target display frame image.
Each user terminal receives its own target display frame image and displays it at the target moment, so that every displayed frame image carries a plurality of pieces of steganographic information.
909: the tracing server acquires an image to be traced.
The image to be traced may be an image leaked from a certain user terminal by photographing, screen capture, printing or similar means. The image to be traced is uploaded directly on an interface of the tracing system, from which the tracing server acquires it.
910: and the tracing server divides the image to be traced to obtain a fourth image block.
The tracing server divides the image to be traced in a preset division manner to obtain at least one fourth image block, wherein the preset division manner is the same as the division manner applied to the current display frame image and the frame image to be processed in the foregoing embodiment.
911: the tracing server decodes the fourth image block to obtain third steganographic information.
The tracing server calls the trained deep learning decoding model to acquire third steganographic information from the fourth image block; the third steganographic information comprises a second terminal identifier and a third time identifier.
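A minimal sketch of the decoding call, assuming the trained deep learning decoding model is available as a callable `decoder(block)` returning a bit vector, and reusing the 64-bit payload layout assumed in the encoding sketch above:

```python
import numpy as np

def extract_third_steganographic_info(decoder, fourth_block):
    """Recover (second terminal identifier, third time identifier) from a
    fourth image block.

    `decoder` is assumed to be the trained decoding model exposing
    decoder(block) -> bit vector of 0/1 values; the payload layout mirrors
    the encoding sketch above (two 32-bit unsigned integers).
    """
    bits = np.asarray(decoder(fourth_block), dtype=np.uint8)
    values = np.packbits(bits).view(np.uint32)
    terminal_id, time_id = int(values[0]), int(values[1])
    return terminal_id, time_id
```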
912: the tracing server acquires the steganographic information generation data of each user terminal from the processing server.
The steganographic information generation data comprises the specific content of the steganographic information generated for each user terminal at each moment. The data is generated by the processing server, and the tracing server obtains it from the processing server.
913: the tracing server determines the leakage terminal and the leakage time of the image to be traced.
After the tracing server obtains the third steganographic information, which carries the second terminal identifier and the third time identifier, it can query the steganographic information generation data of all user terminals. If, after the query, the steganographic information generated for a certain user terminal is the same as the third steganographic information, that user terminal is the target leakage terminal, and the third time identifier in the third steganographic information is taken as the target leakage time.
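A sketch of this query, with the steganographic information generation data modeled as a list of per-terminal, per-moment records; the record fields are an assumed format, not the actual data layout of the system:

```python
def locate_leakage(generation_data, third_info):
    """Match the decoded third steganographic information against the
    steganographic information generation data of all user terminals.

    `generation_data` is assumed to be a list of records such as
    {"terminal_id": 7, "time_id": 1630650000} produced by the processing
    server; `third_info` is the (terminal_id, time_id) pair recovered by
    the decoding model.
    """
    terminal_id, time_id = third_info
    for record in generation_data:
        if record["terminal_id"] == terminal_id and record["time_id"] == time_id:
            # The matching terminal is the target leakage terminal; the time
            # identifier gives the target leakage time.
            return record["terminal_id"], record["time_id"]
    return None  # no match: the image cannot be attributed to a registered terminal
```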
Through the above steps, each frame image on each user terminal is divided and each divided image block is embedded with steganographic information before being displayed. After an image on a certain user terminal is leaked, the leakage terminal and the leakage time can be determined in time from the steganographic information carried in any image block of the leaked image, so the leakage source can be accurately located and traced.
On the basis of the method in the foregoing embodiment, this embodiment is further described from the perspective of an image management apparatus. Referring to fig. 10, fig. 10 specifically describes an image management apparatus located in the processing server according to an embodiment of the present application, which may include:
a first obtaining module 101, configured to obtain current display frame images and frame images to be processed of multiple user terminals;
a first segmentation module 102, configured to segment a current display frame image and a frame image to be processed of a target user terminal in the same segmentation manner to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, where each first image block carries first steganography information, and the first steganography information includes a first terminal identifier and a first time identifier;
the first determining module 103 is configured to determine, according to the similarity between each second image block and the corresponding first image block, a target second image block of which the similarity is smaller than a threshold;
the embedding module 104 is configured to invoke the trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, where the second steganographic information includes a first terminal identifier and a second time identifier;
and the display module 105 is configured to obtain a target display frame image according to the first image block, the target second image block, and the third image block, and display the target display frame image on the target user terminal at a target moment.
In one embodiment, the first segmentation module 102 is further configured to: acquiring a display center, preset segmentation precision and resolution of a target user terminal; determining the position information of each divided image block according to the display center, and determining the size information and the total number of the divided image blocks according to the preset division precision and the resolution; and segmenting the current display frame image and the frame image to be processed according to the position information, the size information and the total number.
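The exact formulas relating the display center, the preset segmentation precision and the resolution to the block grid are not given; the sketch below assumes a uniform grid centered on the display center, with `precision` read as the number of blocks per axis, purely for illustration:

```python
def plan_block_grid(center, precision, resolution):
    """Plan block positions and sizes for one frame.

    `center` is the display center (cx, cy) in pixels, `precision` is the
    assumed number of blocks per axis, and `resolution` is (width, height).
    Position information follows the display center; size information and the
    total number of blocks follow the precision and resolution.
    """
    cx, cy = center
    width, height = resolution
    block_w, block_h = width // precision, height // precision
    origin_x = cx - (precision * block_w) // 2
    origin_y = cy - (precision * block_h) // 2
    blocks = []
    for row in range(precision):
        for col in range(precision):
            x = origin_x + col * block_w
            y = origin_y + row * block_h
            blocks.append((x, y, block_w, block_h))
    return blocks  # total number of blocks is precision * precision
```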
In one embodiment, the display module 105 is further configured to: when all the second image blocks are target second image blocks, combining all the third image blocks to obtain a target display frame image; and when part of the second image blocks are target second image blocks, determining target first image blocks corresponding to the target second image blocks according to the target second image blocks, determining residual first image blocks from the plurality of first image blocks according to the target first image blocks, and combining the residual first image blocks and the third image blocks to obtain a target display frame image.
In one embodiment, the image management apparatus further comprises a third display module, the third display module is configured to: and when the target second image block does not exist, taking the current display frame image as a target display frame image, and displaying the target display frame image on the target user terminal at the target moment.
In one embodiment, the image management apparatus further comprises a model training module, the model training module is configured to: train a deep learning coding model and a discrimination model of an adversarial neural network model by taking a plurality of groups of original image blocks and original steganography information as first training samples; obtain, according to the training result, a trained deep learning coding model and a plurality of groups of generated image blocks embedded with the original steganography information; process the plurality of groups of generated image blocks, take the processed groups of generated image blocks and the corresponding original steganography information as second training samples, and train a deep learning decoding model; and obtain a trained deep learning decoding model according to the training result.
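A compact PyTorch-style sketch of this two-stage training, with toy stand-in architectures for the coding, discrimination and decoding models (the real architectures, losses and hyperparameters are not disclosed); `payload_bits` is assumed to be a float tensor of 0/1 values shaped (batch, 64):

```python
import torch
import torch.nn as nn

PAYLOAD_BITS = 64  # two 32-bit identifiers, matching the earlier sketches

# Minimal stand-ins; the real model architectures are not disclosed in the text.
encoder = nn.Sequential(nn.Conv2d(3 + PAYLOAD_BITS, 16, 3, padding=1), nn.ReLU(),
                        nn.Conv2d(16, 3, 3, padding=1))
discriminator = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
decoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, PAYLOAD_BITS))

opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-4)
opt_dis = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
opt_dec = torch.optim.Adam(decoder.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def train_coding_step(blocks, payload_bits):
    """First stage: original image blocks and original steganographic
    information form the first training samples. The coding model hides the
    payload while the discrimination model tries to tell generated blocks
    from original ones."""
    b, _, h, w = blocks.shape
    payload_planes = payload_bits[:, :, None, None].expand(b, PAYLOAD_BITS, h, w)
    generated = encoder(torch.cat([blocks, payload_planes], dim=1))
    # Discrimination model update.
    d_loss = bce(discriminator(blocks), torch.ones(b, 1)) \
           + bce(discriminator(generated.detach()), torch.zeros(b, 1))
    opt_dis.zero_grad(); d_loss.backward(); opt_dis.step()
    # Coding model update: stay visually close to the original block and fool
    # the discrimination model.
    g_loss = nn.functional.mse_loss(generated, blocks) \
           + bce(discriminator(generated), torch.ones(b, 1))
    opt_enc.zero_grad(); g_loss.backward(); opt_enc.step()
    return generated.detach()  # generated image blocks carrying the payload

def train_decoding_step(processed_generated, payload_bits):
    """Second stage: processed generated image blocks (e.g. re-photographed or
    screenshotted) and the corresponding payloads form the second training
    samples for the decoding model."""
    loss = bce(decoder(processed_generated), payload_bits)
    opt_dec.zero_grad(); loss.backward(); opt_dec.step()
    return float(loss)
```

The split mirrors the text: the coding model is trained adversarially first, and the decoding model is trained afterwards on degraded versions of the generated blocks so it stays robust to the photographing, screen capture and printing channels mentioned later.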
Accordingly, referring to fig. 11, fig. 11 specifically describes an image management apparatus located in a tracing server according to an embodiment of the present application, which may include:
a second obtaining module 106, configured to obtain an image to be traced, which is leaked from one of the plurality of user terminals;
the second segmentation module 107 is configured to segment the image to be traced in a preset segmentation manner to obtain at least one fourth image block;
a third obtaining module 108, configured to invoke the trained deep learning decoding model to obtain third steganographic information from the fourth image block, where the third steganographic information includes a second terminal identifier and a third time identifier;
and a second determining module 109, configured to determine a target leakage terminal and a target leakage time of the image to be traced according to the third steganographic information.
In one embodiment, the second segmentation module 107 is further configured to: acquiring target display content from the image to be traced, and performing image rectification on the target display content; and dividing the target display content with preset dividing precision to obtain at least one fourth image block.
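A sketch of the rectification and re-division step using OpenCV's perspective transform; detecting the four corners of the displayed content in the leaked image is assumed to be handled elsewhere, and the parameter names are illustrative:

```python
import cv2
import numpy as np

def rectify_and_split(image, screen_corners, out_size, precision):
    """Rectify the target display content of a leaked image and divide it.

    `screen_corners` are the four corners of the displayed content inside the
    photographed/printed image (top-left, top-right, bottom-right, bottom-left),
    assumed to come from a separate detection step; `out_size` is the
    (width, height) of the original display frame and `precision` is the preset
    division precision (blocks per axis) used on the processing server.
    """
    width, height = out_size
    dst = np.float32([[0, 0], [width, 0], [width, height], [0, height]])
    matrix = cv2.getPerspectiveTransform(np.float32(screen_corners), dst)
    rectified = cv2.warpPerspective(image, matrix, (width, height))

    # Re-divide with the same preset precision so each fourth image block
    # lines up with the blocks produced on the processing server.
    block_w, block_h = width // precision, height // precision
    fourth_blocks = []
    for row in range(precision):
        for col in range(precision):
            y0, x0 = row * block_h, col * block_w
            fourth_blocks.append(rectified[y0:y0 + block_h, x0:x0 + block_w])
    return fourth_blocks
```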
In one embodiment, the second determining module 109 is further configured to: inquiring the steganographic information generation data of each user terminal according to the third steganographic information; and determining a target leakage terminal and target leakage time of the image to be traced according to the query result.
Different from the prior art, the image management apparatus provided by the present application divides each frame image on each user terminal, embeds steganographic information into each divided image block, and then displays the frame. After an image on a certain user terminal is leaked, the leakage terminal and the leakage time can be known in time from the steganographic information carried in any image block of the leaked image, so the leakage source can be accurately located and traced. In addition, because the frame image to be processed is divided into a plurality of image blocks and the steganographic information is embedded block by block, the amount of computation involved is greatly reduced, which makes real-time embedding of the steganographic information possible. Moreover, each frame image to be processed on a user terminal is compared with the current display frame image: the image blocks whose picture has changed are determined from the block-wise similarity and embedded with new steganographic information, while the image blocks whose picture is unchanged are left untouched, further reducing the amount of computation.
Accordingly, embodiments of the present application also provide an electronic device, as shown in fig. 12, which may include Radio Frequency (RF) circuitry 1201, a memory 1202 including one or more computer-readable storage media, an input unit 1203, a display unit 1204, a sensor 1205, audio circuitry 1206, a WiFi module 1207, a processor 1208 including one or more processing cores, and a power supply 1209. Those skilled in the art will appreciate that the configuration shown in fig. 12 does not constitute a limitation of the electronic device, which may include more or fewer components than those shown, combine some components, or adopt a different arrangement of components. Wherein:
the radio frequency circuit 1201 may be used for receiving and transmitting signals during information transmission and reception or during a call; in particular, it receives downlink information from a base station and forwards it to one or more processors 1208 for processing, and transmits uplink data to the base station. The memory 1202 may be used to store software programs and modules, and the processor 1208 performs various functional applications and data processing by running the software programs and modules stored in the memory 1202. The input unit 1203 may be used to receive input numeric or character information and to generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control.
The display unit 1204 may be used to display information input by or provided to the user and various graphical user interfaces of the server, which may be made up of graphics, text, icons, video, and any combination thereof.
The electronic device can also include at least one sensor 1205, such as a light sensor, motion sensor, and other sensors. The audio circuitry 1206 includes speakers, which may provide an audio interface between a user and the electronic device.
WiFi is a short-range wireless transmission technology; through the WiFi module 1207, the electronic device can help the user send and receive e-mail, browse web pages, access streaming media and the like, providing the user with wireless broadband internet access. Although fig. 12 shows the WiFi module 1207, it is understood that the module is not an essential part of the electronic device and may be omitted as needed without changing the essence of the application.
The processor 1208 is the control center of the electronic device; it connects the various parts of the entire device through various interfaces and lines, and performs the various functions of the electronic device and processes its data by running or executing the software programs and/or modules stored in the memory 1202 and calling the data stored in the memory 1202, thereby monitoring the device as a whole.
The electronic device also includes a power supply 1209 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 1208 via a power management system that may be used to manage charging, discharging, and power consumption.
Although not shown, the electronic device may further include a camera, a Bluetooth module, and the like, which are not described in detail herein. Specifically, in this embodiment, the processor 1208 loads the executable file corresponding to the process of one or more application programs into the memory 1202 according to the following instructions, and runs the application programs stored in the memory 1202 to implement the following functions:
acquiring current display frame images and frame images to be processed of a plurality of user terminals;
dividing a current display frame image and a frame image to be processed of a target user terminal in the same division mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, wherein each first image block carries first steganographic information, and the first steganographic information comprises a first terminal identifier and a first time identifier;
determining target second image blocks with the similarity smaller than a threshold value according to the similarity between each second image block and the corresponding first image block;
calling a trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier;
and obtaining a target display frame image according to the first image block, the target second image block and the third image block, and displaying the target display frame image on the target user terminal at a target moment.
Or to implement the following functions:
acquiring an image to be traced, which is leaked from one of a plurality of user terminals;
dividing the image to be traced in a preset dividing mode to obtain at least one fourth image block;
calling a trained deep learning decoding model to acquire third steganographic information from the fourth image block, wherein the third steganographic information comprises a second terminal identifier and a third time identifier;
and determining a target leakage terminal and target leakage time of the image to be traced according to the third steganographic information.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and parts that are not described in detail in a certain embodiment may refer to the above detailed description, and are not described herein again.
It will be understood by those skilled in the art that all or part of the steps of the methods of the above embodiments may be performed by instructions or by associated hardware controlled by the instructions, which may be stored in a computer readable storage medium and loaded and executed by a processor.
To this end, an embodiment of the present application provides a computer-readable storage medium, in which a plurality of instructions are stored, and the instructions can be loaded by a processor to implement the following functions:
acquiring current display frame images and frame images to be processed of a plurality of user terminals;
dividing a current display frame image and a frame image to be processed of a target user terminal in the same division mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, wherein each first image block carries first steganographic information, and the first steganographic information comprises a first terminal identifier and a first time identifier;
determining target second image blocks with the similarity smaller than a threshold value according to the similarity between each second image block and the corresponding first image block;
calling a trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier;
and obtaining a target display frame image according to the first image block, the target second image block and the third image block, and displaying the target display frame image on the target user terminal at a target moment.
Or to implement the following functions:
acquiring an image to be traced, which is leaked from one of a plurality of user terminals;
dividing the image to be traced in a preset dividing mode to obtain at least one fourth image block;
calling a trained deep learning decoding model to acquire third steganographic information from the fourth image block, wherein the third steganographic information comprises a second terminal identifier and a third time identifier;
and determining a target leakage terminal and target leakage time of the image to be traced according to the third steganographic information.
The image management method, the image management apparatus, the electronic device, and the computer-readable storage medium provided in the embodiments of the present application are described in detail above, and a specific example is applied in the description to explain the principles and implementations of the present application, and the description of the embodiments is only used to help understand the technical solutions and core ideas of the present application; those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications or substitutions do not depart from the spirit and scope of the present disclosure as defined by the appended claims.

Claims (7)

1. An image management method is applied to a steganographic traceability system, the steganographic traceability system comprises a plurality of user terminals, a processing server and a traceability server, and when the image management method is applied to the processing server, the image management method comprises the following steps:
acquiring current display frame images and frame images to be processed of a plurality of user terminals;
dividing a current display frame image and a frame image to be processed of a target user terminal in the same division mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, wherein each first image block carries first steganographic information, the first steganographic information comprises a first terminal identifier and a first time identifier, the first image blocks and the second image blocks are the same in shape and are in one-to-one correspondence in position;
determining target second image blocks with the similarity smaller than a threshold value according to the similarity between each second image block and the corresponding first image block;
calling a trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier;
when all the second image blocks are target second image blocks, combining all the third image blocks to obtain a target display frame image; when part of the second image blocks are target second image blocks, determining target first image blocks corresponding to the target second image blocks according to the target second image blocks, determining residual first image blocks from the plurality of first image blocks according to the target first image blocks, and combining the residual first image blocks and the third image blocks to obtain a target display frame image; and displaying the target display frame image on the target user terminal at a target moment;
when the image management method is applied to the tracing server, the image management method comprises the following steps:
acquiring an image to be traced leaked from one of a plurality of user terminals, wherein the image to be traced is obtained by photographing, screen capturing or printing one target display frame image on the corresponding user terminal;
dividing the image to be traced in a preset dividing mode to obtain at least one fourth image block;
calling a trained deep learning decoding model to acquire third steganographic information from the fourth image block, wherein the third steganographic information comprises a second terminal identifier and a third time identifier;
and determining a target leakage terminal and target leakage time of the image to be traced according to the third steganographic information.
2. The image management method of claim 1, wherein the step of dividing the currently displayed frame image and the frame image to be processed of the target user terminal in the same division manner comprises:
acquiring a display center, preset segmentation precision and resolution of a target user terminal;
determining the position information of each divided image block according to the display center, and determining the size information and the total number of the divided image blocks according to the preset division precision and the resolution;
and segmenting the current display frame image and the frame image to be processed according to the position information, the size information and the total number.
3. The image management method according to claim 1, further comprising:
and when the target second image block does not exist, taking the current display frame image as a target display frame image, and displaying the target display frame image on the target user terminal at the target moment.
4. The image management method according to claim 1, further comprising:
training a deep learning coding model and a discrimination model of an adversarial neural network model by taking a plurality of groups of original image blocks and original steganography information as first training samples;
obtaining a trained deep learning coding model and a plurality of groups of generated image blocks embedded with original steganography information according to a training result;
processing the multiple groups of generated image blocks, taking the processed multiple groups of generated image blocks and corresponding original steganography information as second training samples, and training a deep learning decoding model;
and obtaining a trained deep learning decoding model according to the training result.
5. The image management method according to claim 1, wherein the step of segmenting the image to be traced by a preset segmentation method to obtain at least one fourth image block comprises:
acquiring target display content from the image to be traced, and performing image rectification on the target display content;
and dividing the target display content with preset dividing precision to obtain at least one fourth image block.
6. The image management method according to claim 1, wherein the step of determining the target leakage terminal and the target leakage time of the image to be traced according to the third steganographic information comprises:
inquiring the steganographic information generation data of each user terminal according to the third steganographic information;
and determining a target leakage terminal and target leakage time of the image to be traced according to the query result.
7. An image management apparatus applied to a steganographic traceability system, the steganographic traceability system comprising a plurality of user terminals, a processing server and a tracing server, wherein, in the processing server, the image management apparatus comprises:
the first acquisition module is used for acquiring current display frame images and frame images to be processed of a plurality of user terminals;
the device comprises a first segmentation module, a second segmentation module and a third segmentation module, wherein the first segmentation module is used for segmenting a current display frame image and a frame image to be processed of a target user terminal in the same segmentation mode to obtain a plurality of first image blocks corresponding to the current display frame image and a plurality of second image blocks corresponding to the frame image to be processed, each first image block carries first steganographic information, the first steganographic information comprises a first terminal identifier and a first time identifier, and the first image blocks and the second image blocks are the same in shape and are in one-to-one correspondence in position;
the first determining module is used for determining a target second image block with the similarity smaller than a threshold value according to the similarity between each second image block and the corresponding first image block;
the embedding module is used for calling the trained deep learning coding model to respectively embed second steganographic information into each target second image block to obtain a third image block corresponding to each target second image block, wherein the second steganographic information comprises a first terminal identifier and a second time identifier;
the display module is used for combining all the third image blocks to obtain a target display frame image when all the second image blocks are target second image blocks; when part of the second image blocks are target second image blocks, determining target first image blocks corresponding to the target second image blocks according to the target second image blocks, determining residual first image blocks from the plurality of first image blocks according to the target first image blocks, and combining the residual first image blocks and the third image blocks to obtain a target display frame image; and displaying the target display frame image on the target user terminal at a target moment;
in the tracing server, the image management apparatus includes:
the second acquisition module is used for acquiring an image to be traced, which is leaked from one of the user terminals, wherein the image to be traced is obtained by photographing, screen-capturing or printing one of target display frame images on the corresponding user terminal;
the second segmentation module is used for segmenting the image to be traced in a preset segmentation mode to obtain at least one fourth image block;
a third obtaining module, configured to invoke the trained deep learning decoding model to obtain third steganographic information from the fourth image block, where the third steganographic information includes a second terminal identifier and a third time identifier;
and the second determining module is used for determining the target leakage terminal and the target leakage time of the image to be traced according to the third steganographic information.