CN117115823A - Tamper identification method and device, computer equipment and storage medium - Google Patents

Tamper identification method and device, computer equipment and storage medium

Info

Publication number
CN117115823A
CN117115823A (application CN202311231238.0A)
Authority
CN
China
Prior art keywords
image
text
tampered
target image
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311231238.0A
Other languages
Chinese (zh)
Inventor
余琦
丁拥科
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhongan Online P&c Insurance Co ltd
Original Assignee
Zhongan Online P&c Insurance Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhongan Online P&c Insurance Co ltd filed Critical Zhongan Online P&c Insurance Co ltd
Priority to CN202311231238.0A priority Critical patent/CN117115823A/en
Publication of CN117115823A publication Critical patent/CN117115823A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/14 Image acquisition
    • G06V30/1444 Selective acquisition, locating or processing of specific regions, e.g. highlighted text, fiducial marks or predetermined fields
    • G06V30/146 Aligning or centring of the image pick-up or image-field
    • G06V30/18 Extraction of features or characteristics of the image
    • G06V30/1801 Detecting partial patterns, e.g. edges or contours, or configurations, e.g. loops, corners, strokes or intersections
    • G06V30/19 Recognition using electronic means
    • G06V30/191 Design or setup of recognition systems or techniques; Extraction of features in feature space; Clustering techniques; Blind source separation
    • G06V30/19147 Obtaining sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a tamper identification method, a tamper identification device, a computer device and a storage medium. The method comprises the following steps: acquiring an image to be identified, wherein the image to be identified contains a target to be identified; performing corner regression correction on the target to be identified to obtain a target image conforming to the target contour; performing text recognition on the target image to obtain text positions; performing image restoration and image generation on the target image according to the text positions to obtain a plurality of tampered images, and labeling the position information of the tampered area in each tampered image; taking the target image and the plurality of tampered images respectively as samples, and training a tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model; and identifying the target image through the trained tamper identification model to obtain a tamper identification result. With this method, an unlimited number of training samples can be generated automatically, which avoids a large investment of labor cost and improves the accuracy of the tamper identification result.

Description

Tamper identification method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of artificial intelligence, and in particular, to a tamper identification method, apparatus, computer device, and storage medium.
Background
With the development of the internet and big data, more and more enterprises offer online business handling: a user only needs to upload photos of the relevant targets to be identified, such as certificates, driving licenses and vehicle licenses, and the business can be completed after verification. However, some malicious users tamper with the photos of the targets to be identified, that is, they use image editing software to modify the information content in the photographed target, so as to commit commercial fraud. Internet services involve large volumes and high concurrency, so manual auditing suffers from heavy workload, poor real-time performance and low accuracy. Moreover, the tampering in some samples is so subtle that it is difficult to detect by manual review. Therefore, how to identify tampering in the photographed information of the target to be identified is a key issue.
At present, the mainstream approach is to manually tamper with and label a large number of real pictures of targets to be identified, and then perform supervised training of a target detection or semantic segmentation model on this data set, so as to identify tampering of the target information. However, this approach has two difficulties: first, the types of targets to be identified are diverse, so the manual tampering task is demanding, labor-intensive and costly; second, the types of text tampering are also diverse, with dozens of fonts alone, so the possible tampering methods are difficult to exhaust.
Therefore, there is a need for a tamper identification method, a tamper identification device, a computer device and a storage medium that can automatically generate an unlimited number of training samples, avoid a large investment of labor cost, and improve the accuracy of tamper identification results.
Disclosure of Invention
In view of the above, it is necessary to provide a tamper identification method, a tamper identification device, a computer device and a storage medium that can automatically generate an unlimited number of training samples, avoid a large investment of labor cost, and improve the accuracy of tamper identification results.
In a first aspect, a tamper identification method is provided, the method comprising:
acquiring an image to be identified, wherein the image to be identified contains an object to be identified;
performing angular point regression correction on the target to be identified in the image to be identified to obtain a target image conforming to the contour of the target;
performing text recognition on the target image to obtain a text position in the target image;
performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
taking the target image and the plurality of tampered images respectively as samples, and training the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model;
and identifying the target image through the trained tamper identification model to obtain a tamper identification result.
In one embodiment, the tamper-identification result includes an identified tamper location, the method further comprising:
judging whether the target image is tampered or not according to the tampering identification result;
ending the tamper identification of the target image in response to the tamper identification result being that no tamper exists;
in response to the tamper identification result being that tampering exists, determining whether the target image is tampered by calculating the intersection ratio of the tampered position and the text position;
determining that the target image is tampered in response to the intersection ratio of the tampered position and the text position meeting a preset threshold condition;
and determining that the target image is not tampered in response to the intersection ratio of the tampered position and the text position not meeting the preset threshold condition.
In one embodiment, the performing corner regression correction on the object to be identified to obtain an object image conforming to the object contour includes:
detecting the position coordinates of each angular point of the target to be identified in the image to be identified through an algorithm model;
and correcting the target to be identified to a target image conforming to the target contour through perspective transformation according to the position coordinates of each angular point.
In one embodiment, the algorithm model comprises a coordinate value regression model, the method further comprising:
coding the image to be identified into a multidimensional vector through a coordinate value regression model to be trained, wherein the multidimensional vector sequentially represents the position coordinates of each angular point;
according to the multidimensional vector and the position coordinates of each corner point marked in advance, a corresponding loss function is calculated through an absolute value error or a mean square error;
and optimizing the internal parameters of the coordinate value regression model to be trained through gradient back propagation and an optimizer algorithm, so that the loss function is reduced to approach 0, and the trained coordinate value regression model is obtained.
In one embodiment, the performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and labeling the position information of the tampered area in the plurality of tampered images respectively includes:
according to the target image and the text position, obtaining a text-free target image with text contents removed through an image restoration model;
and generating different tampered images through an image generation model according to the text position and the non-text target image, and marking the position information of the tampered area in the tampered images.
In one embodiment, the obtaining, by the image restoration model, the text-free target image after removing the text content includes:
and filling the text region of the target image with background shading through an image restoration model according to the text position, and obtaining the text-free target image with text content removed.
In one embodiment, the generating different tampered images by the image generating model and marking the position information of the tampered area in the tampered images includes:
and according to the text position and the text-free target image, restoring different falsified entries generated through text rendering to corresponding text areas in the text-free target image, obtaining falsified images, and marking the position information of the falsified areas in the falsified images.
In a second aspect, there is provided a tamper identification device, the device comprising:
the acquisition module is used for acquiring an image to be identified, wherein the image to be identified contains an object to be identified;
the correction module is used for carrying out angular point regression correction on the target to be identified in the image to be identified to obtain a target image conforming to the target contour;
a falsified image generation module, which is used for carrying out text recognition on the target image to obtain the text position in the target image,
performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
the training module is used for taking the target image and the plurality of tampered images respectively as samples, and training the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model;
and the identification module is used for identifying the target image through the trained tamper identification model to obtain a tamper identification result.
In a third aspect, a computer device is provided, the computer device comprising one or more processors; and a memory associated with the one or more processors, the memory being configured to store program instructions that, when read and executed by the one or more processors, perform the steps of the tamper identification method of any of the first aspects above.
In a fourth aspect, there is provided a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the tamper identification method according to any of the first aspects above.
According to the tamper identification method, device, computer equipment and storage medium described above, corner regression correction is first performed on the target to be identified in the image to be identified, converting it into a target image conforming to the target contour for the subsequent training of the tamper identification model; secondly, the text region in the target image is obtained through text recognition, and the corresponding tampered images and the labeling information of the tampered areas are obtained through image restoration and image generation, so that an unlimited number of tampered samples can be generated automatically, which improves the accuracy of the tamper identification model; then, the target image and the plurality of tampered images are respectively taken as samples, and the tamper identification model to be trained is trained in combination with the labeling information of the tampered areas, so as to obtain a trained tamper identification model that can produce accurate tamper identification results; finally, the target image is identified by the trained tamper identification model to obtain a tamper identification result.
Drawings
FIG. 1 is a flow diagram of a tamper identification method in one embodiment;
FIG. 2 is a flow chart of a tamper identification model training method in one embodiment;
FIG. 3 is a block diagram of the tamper identification device in one embodiment;
fig. 4 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present application more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the application.
Example 1
In one embodiment, as shown in fig. 1, a tamper identification method is provided, the method comprising:
s10, acquiring an image to be identified, wherein the image to be identified contains an object to be identified;
s11, carrying out angular point regression correction on the target to be identified in the image to be identified to obtain a target image conforming to the contour of the target;
further, detecting the position coordinates of each angular point of the target to be identified in the image to be identified through an algorithm model;
still further, the algorithm model includes a coordinate value regression model, the method further comprising:
coding the image to be identified into a multidimensional vector through a coordinate value regression model to be trained, wherein the multidimensional vector sequentially represents the position coordinates of each angular point;
according to the multidimensional vector and the position coordinates of each corner point marked in advance, a corresponding loss function is calculated through an absolute value error or a mean square error;
and optimizing the internal parameters of the coordinate value regression model to be trained through gradient back propagation and an optimizer algorithm, so that the loss function is reduced to approach 0, and the trained coordinate value regression model is obtained.
And correcting the target to be identified to a target image conforming to the target contour through perspective transformation according to the position coordinates of each angular point.
Specifically, a correction module is designed to perform corner regression correction on the target to be identified, and an algorithm model is used to detect the position coordinates of the corner points of the target to be identified in the image to be identified. For example, if the image to be identified is a photo and the target to be identified is a certificate, the algorithm model detects the positions of the four corner points of the certificate in the photo, and the certificate region is then rectified into a rectangle by perspective transformation. In this way the background information outside the target to be identified is removed, and the target region is transformed into a target image conforming to the target contour, which facilitates the subsequent tamper identification task.
Neural-network-based corner detection methods include coordinate value regression, Gaussian heat map and image segmentation methods, among others. For example, with a photo as the image to be identified and a certificate as the target to be identified, the coordinate value regression method directly encodes the photo, through a neural network, into the coordinate values of the four corner points of the certificate, i.e. an 8-dimensional vector that represents the abscissa and ordinate of each corner point in turn. A loss function is then calculated from the output 8-dimensional vector and the label information of the four certificate corner points using the absolute value error or the mean square error. The corner label information consists of corner coordinates annotated manually in advance; the model is supervised by these labels so that its output is as consistent with the labels as possible, and for pictures that are not annotated the inference result of the model can then be used directly as the corner coordinates. The calculated loss value is propagated backwards through the gradients, and the parameters in the model are optimized with an optimizer algorithm so that the loss value decreases towards 0, which indicates that the fitting is complete and the model training is finished. Finally, operations such as image rotation and pixel interpolation are performed using a perspective transformation algorithm to obtain a target image conforming to the target contour, which eliminates background interference, unifies the format of the image content and reduces the fitting difficulty of the subsequent tamper identification model.
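By way of a non-limiting illustration, the following Python sketch shows the coordinate value regression training step and the perspective correction described above. The ResNet-18 backbone, the 8-dimensional output head, the choice of L1 loss and the use of OpenCV for the perspective transformation are assumptions made for this example only.

# Illustrative sketch of corner coordinate regression plus perspective correction.
# Backbone choice, image preprocessing and corner ordering (TL, TR, BR, BL) are assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn
import torchvision

class CornerRegressor(nn.Module):
    """Encodes an input photo into an 8-dimensional vector: (x, y) of four corners."""
    def __init__(self):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, 8)
        self.net = backbone

    def forward(self, x):
        return self.net(x)

def train_step(model, optimizer, images, corner_labels):
    """One optimization step with the absolute value (L1) error; MSE works equally."""
    optimizer.zero_grad()
    pred = model(images)                               # (B, 8) predicted corner coordinates
    loss = nn.functional.l1_loss(pred, corner_labels)  # compare with manually annotated corners
    loss.backward()                                    # gradient back-propagation
    optimizer.step()                                   # optimizer algorithm updates parameters
    return loss.item()

def rectify(photo_bgr, corners_xy, out_w=640, out_h=400):
    """Warp the quadrilateral bounded by the 4 detected corners into a rectangle."""
    src = np.asarray(corners_xy, dtype=np.float32)     # order assumed: TL, TR, BR, BL
    dst = np.float32([[0, 0], [out_w, 0], [out_w, out_h], [0, out_h]])
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(photo_bgr, M, (out_w, out_h))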
S12, carrying out text recognition on the target image to obtain a text position in the target image;
s13, performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
further, according to the target image and the text position, obtaining a text-free target image with text contents removed through an image restoration model;
and further, filling the text region of the target image with background shading through an image restoration model according to the text position to obtain a text-free target image with text contents removed.
And generating different tampered images through an image generation model according to the text position and the non-text target image, and marking the position information of the tampered area in the tampered images.
And restoring different falsified entries generated by text rendering to corresponding text areas in the text-free target image according to the text positions and the text-free target image to obtain falsified images, and marking the position information of the falsified areas in the falsified images.
Specifically, a tampered image generation module is designed, which comprises the following three parts: text recognition, image restoration and image generation. First, conventional OCR (Optical Character Recognition) technology is used for text recognition to obtain the positions of the text in the image. Then, the text regions and the certificate image are taken as input, and an image restoration model is used to obtain a certificate image with the corresponding text removed, which at the same time desensitizes the certificate information. Finally, text content and background shading are selected for fusion: tampered entries with different characteristics such as font, size and color are generated by text rendering and restored to the corresponding regions of the original image, and the labeling information of the tampered areas is generated.
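As a non-limiting illustration of the image generation step, the sketch below renders one tampered entry into a text region of the text-free certificate image and records the label of the tampered area. The use of the Pillow library, the (x, y, w, h) box format and the random choice of font, size and color are assumptions made for this example.

# Illustrative sketch of the text rendering step: draw a tampered entry into a text
# region of the text-free certificate image and record the region as a label.
import random
from PIL import Image, ImageDraw, ImageFont

def render_tampered_entry(no_text_img: Image.Image, box, new_text, font_paths):
    """box = (x, y, w, h) of an OCR text region on the rectified certificate."""
    x, y, w, h = box
    img = no_text_img.copy()
    draw = ImageDraw.Draw(img)
    # Vary font, size and color so that many tampering styles can be enumerated.
    font = ImageFont.truetype(random.choice(font_paths), size=max(12, int(h * 0.8)))
    color = tuple(random.randint(0, 80) for _ in range(3))   # dark, ink-like color
    draw.text((x, y), new_text, font=font, fill=color)
    label = {"bbox": box, "tampered": True}                  # labeling information of the tampered area
    return img, label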
For example, the certificate number and its position in the certificate image are obtained through OCR; a tampered image with the same background but different content is generated using image restoration and text rendering; the tampered content is then placed back at the original spatial position by perspective transformation and the corresponding position label is recorded, thereby achieving automatic tampering and automatic label generation for the image.
The main flow of the image restoration module is as follows: the pixel positions of the text on the certificate are marked using an OCR detection model, so as to obtain a binary mask that distinguishes the target regions; the mask and the original image are taken as input to the image restoration model, so that the text regions of the original image are filled with the background shading. At this point a certificate image with the text information removed is obtained, and this image is used for the subsequent text rendering.
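A minimal sketch of the mask-based text removal is given below; classical OpenCV inpainting is used here merely as a stand-in for the learned image restoration model described above, and the (x, y, w, h) box format is an assumption.

# Illustrative sketch of mask-based text removal (OpenCV inpainting as a stand-in
# for the learned image restoration model).
import cv2
import numpy as np

def remove_text(cert_bgr: np.ndarray, text_boxes) -> np.ndarray:
    """Fill the OCR-detected text regions with the surrounding background shading."""
    mask = np.zeros(cert_bgr.shape[:2], dtype=np.uint8)
    for (x, y, w, h) in text_boxes:
        mask[y:y + h, x:x + w] = 255      # binary mask distinguishing the text regions
    # Telea inpainting propagates the background texture into the masked regions.
    return cv2.inpaint(cert_bgr, mask, 5, cv2.INPAINT_TELEA)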
S14, taking the target image and the plurality of tampered images respectively as samples, and training the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model;
specifically, as shown in fig. 2, a tamper recognition model, mainly a semantic/instance segmentation or object detection model, is trained. The process is as follows: and taking the target image as training input, then generating a tampered image for the target image through a tampered image generation module, and generating labeling information corresponding to the tampered area, such as a pixel-level position label or a rectangular frame-level label, so that a tampered sample can be generated.
The target image and the tampered image are formed into positive and negative sample pairs. And adding a plurality of positive and negative sample pairs into the same training batch to achieve the effect of contrast learning, thereby improving the effect of model training.
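The following sketch illustrates how such positive and negative sample pairs may be assembled into one training batch for a segmentation-style tamper identification model; the per-pixel binary cross-entropy loss and the tensor shapes are assumptions made for this example.

# Illustrative sketch of batching positive/negative pairs and one training step.
import torch
import torch.nn as nn

def build_batch(pairs):
    """pairs: list of (original_img, tampered_img, tamper_mask) float tensors,
    where tamper_mask is a (1, H, W) map with 1 inside the tampered area."""
    images, masks = [], []
    for orig, tampered, mask in pairs:
        images += [orig, tampered]                      # keep each pair in the same batch
        masks += [torch.zeros_like(mask), mask]         # the original image has an empty mask
    return torch.stack(images), torch.stack(masks)

def train_batch(model, optimizer, pairs):
    images, masks = build_batch(pairs)
    optimizer.zero_grad()
    logits = model(images)                              # (2B, 1, H, W) predicted tamper map
    loss = nn.functional.binary_cross_entropy_with_logits(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()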
And S15, recognizing the target image through the trained tamper recognition model to obtain a tamper recognition result.
Judging whether the target image is tampered or not according to the tampering identification result;
ending the tamper identification of the target image in response to the tamper identification result being that no tamper exists;
in response to the tamper identification result being that tampering exists, determining whether the target image is tampered by calculating the intersection ratio of the tampered position and the text position;
determining that the target image is tampered in response to the intersection ratio of the tampered position and the text position meeting a preset threshold condition;
and determining that the target image is not tampered in response to the intersection ratio of the tampered position and the text position not meeting the preset threshold condition.
Specifically, in the tamper identification application for the target image, the tampered area A identified by the tamper identification model and the text area B identified by OCR are cross-checked by comparing the intersection ratio (IOU) of the two areas: if the IOU does not reach the threshold, the image is judged not to be tampered; if the IOU reaches or exceeds the threshold, the target image is considered to be tampered. This prevents the whole certificate from being misjudged because of extra marks, such as watermarks or remarks, in non-key field areas.
IOU=(A∩B)/(A∪B)
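A direct implementation of the above check might look as follows; the (x, y, w, h) box format and the 0.5 threshold are illustrative assumptions, not values prescribed by the method.

# Illustrative IOU check between the identified tampered box A and OCR text boxes B.
def iou(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix, iy = max(ax, bx), max(ay, by)
    iw = max(0, min(ax + aw, bx + bw) - ix)
    ih = max(0, min(ay + ah, by + bh) - iy)
    inter = iw * ih                                   # area of A ∩ B
    union = aw * ah + bw * bh - inter                 # area of A ∪ B
    return inter / union if union > 0 else 0.0

def is_tampered(tamper_box, text_boxes, threshold=0.5):
    """Tampering is confirmed only if the tampered box overlaps a key text field."""
    return any(iou(tamper_box, tb) >= threshold for tb in text_boxes)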
Specifically, the tamper identification model to be trained comprises a backbone module, a feature pyramid module and an upsampling module. The certificate image is fed into the model and encoded by these three modules in turn, and a segmentation feature map whose height and width are consistent with those of the original image is obtained, in which the tampered area is highlighted, thereby achieving automatic tamper identification.
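A minimal sketch of such a backbone + feature pyramid + upsampling structure is given below; the ResNet-18 backbone, the 64-channel pyramid and the single-channel output head are assumptions for this example and stand in for whichever concrete segmentation network is actually used.

# Illustrative sketch: backbone + feature pyramid + upsampling to a full-resolution tamper map.
import torch
import torch.nn as nn
import torchvision

class TamperSegmenter(nn.Module):
    def __init__(self):
        super().__init__()
        resnet = torchvision.models.resnet18(weights=None)
        self.stem = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool)
        self.c2, self.c3, self.c4, self.c5 = resnet.layer1, resnet.layer2, resnet.layer3, resnet.layer4
        # Feature pyramid: project each backbone stage to 64 channels and fuse top-down.
        self.lat = nn.ModuleList([nn.Conv2d(c, 64, 1) for c in (64, 128, 256, 512)])
        self.head = nn.Conv2d(64, 1, 1)                 # 1-channel tamper logits

    def forward(self, x):
        h, w = x.shape[-2:]
        f = self.stem(x)
        feats = []
        for stage in (self.c2, self.c3, self.c4, self.c5):
            f = stage(f)
            feats.append(f)
        p = self.lat[3](feats[3])
        for i in (2, 1, 0):                             # top-down fusion over the pyramid
            p = nn.functional.interpolate(p, size=feats[i].shape[-2:], mode="nearest")
            p = p + self.lat[i](feats[i])
        logits = self.head(p)
        # Upsample to the original height and width so the tampered area is highlighted per pixel.
        return nn.functional.interpolate(logits, size=(h, w), mode="bilinear", align_corners=False)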
It should be understood that, although the steps in the flowchart of fig. 1 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the execution of these steps is not strictly limited to this order, and the steps may be performed in other orders. Moreover, at least some of the steps in fig. 1 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed in sequence but may be performed in turn or alternately with at least some of the other steps or of the sub-steps or stages of other steps.
Example two
In one embodiment, as shown in FIG. 3, there is provided a tamper identification device, the device comprising:
the acquiring module 30 is configured to acquire an image to be identified, where the image to be identified includes an object to be identified;
the correction module 31 is configured to perform corner regression correction on a target to be identified in the image to be identified, so as to obtain a target image conforming to a target contour;
further, the correction module is further configured to:
detecting the position coordinates of each angular point of the target to be identified in the image to be identified through an algorithm model;
and correcting the target to be identified to a target image conforming to the target contour through perspective transformation according to the position coordinates of each angular point.
Still further, the algorithm model includes a coordinate value regression model, and the correction module is further configured to:
coding the image to be identified into a multidimensional vector through a coordinate value regression model to be trained, wherein the multidimensional vector sequentially represents the position coordinates of each angular point;
according to the multidimensional vector and the position coordinates of each corner point marked in advance, a corresponding loss function is calculated through an absolute value error or a mean square error;
and optimizing the internal parameters of the coordinate value regression model to be trained through gradient back propagation and an optimizer algorithm, so that the loss function is reduced to approach 0, and the trained coordinate value regression model is obtained.
A tampered image generation module 32 for performing text recognition on the target image, obtaining a text position in the target image,
performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
further, the tampered image generating module is further configured to:
according to the target image and the text position, obtaining a text-free target image with text contents removed through an image restoration model;
still further, the tamper generation module is further configured to:
and filling the text region of the target image with background shading through an image restoration model according to the text position, and obtaining the text-free target image with text content removed.
And generating different tampered images through an image generation model according to the text position and the non-text target image, and marking the position information of the tampered area in the tampered images.
Still further, the tampered image generation module is further configured to:
and according to the text position and the text-free target image, restoring different falsified entries generated through text rendering to corresponding text areas in the text-free target image, obtaining falsified images, and marking the position information of the falsified areas in the falsified images.
The training module 33 is configured to take the target image and the plurality of tampered images respectively as samples, and train the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model;
and the recognition module 34 is used for recognizing the target image through the trained tamper recognition model to obtain a tamper recognition result.
Further, the tamper-identification result includes the identified tamper location, the apparatus further comprising:
a judging module for judging whether the target image is tampered or not according to the tamper identification result,
and ending the tamper identification of the target image in response to the tamper identification result being that no tamper exists;
still further, the tamper identification module is further configured to:
responding to the falsification identification result as falsification, calculating the intersection ratio of the falsification position and the text position to determine whether the target image has falsification,
and determining that tampering exists with the target image in response to the intersection ratio of the tampered position and the text position meeting a preset threshold condition,
or determining that the target image is not tampered in response to the intersection ratio of the tampered position and the text position not meeting a preset threshold condition.
For specific limitations of the tamper identification device, reference may be made to the limitations of the tamper identification method above, which are not repeated here. The respective modules in the tamper identification device described above may be implemented in whole or in part by software, hardware, or combinations thereof. The above modules may be embedded in hardware, may be independent of the processor in the computer device, or may be stored as software in the memory of the computer device, so that the processor can call and execute the operations corresponding to the above modules.
Example III
In one embodiment, a computer device is provided comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, the processor implementing the steps of when executing the computer program:
acquiring an image to be identified, wherein the image to be identified contains an object to be identified;
performing angular point regression correction on the target to be identified in the image to be identified to obtain a target image conforming to the contour of the target;
performing text recognition on the target image to obtain a text position in the target image;
performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
respectively corresponding the target image and the plurality of tampered images to be used as samples, and training the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model;
and identifying the target image through the trained tamper identification model to obtain a tamper identification result.
In one embodiment, the processor when executing the computer program further performs the steps of:
judging whether the target image is tampered or not according to the tampering identification result;
ending the tamper identification of the target image in response to the tamper identification result being that no tamper exists;
responding to the falsification identification result as falsification, and determining whether the target image is falsified or not by calculating the intersection ratio of the falsified position and the text position;
determining that tampering exists in the target image in response to the intersection ratio of the tampered position and the text position meeting a preset threshold condition;
and determining that the target image is not tampered according to the fact that the intersection ratio of the tampered position and the text position does not meet a preset threshold condition.
In one embodiment, the processor when executing the computer program further performs the steps of:
detecting the position coordinates of each angular point of the target to be identified in the image to be identified through an algorithm model;
and correcting the target to be identified to a target image conforming to the target contour through perspective transformation according to the position coordinates of each angular point.
In one embodiment, the processor when executing the computer program further performs the steps of:
coding the image to be identified into a multidimensional vector through a coordinate value regression model to be trained, wherein the multidimensional vector sequentially represents the position coordinates of each angular point;
according to the multidimensional vector and the position coordinates of each corner point marked in advance, a corresponding loss function is calculated through an absolute value error or a mean square error;
and optimizing the internal parameters of the coordinate value regression model to be trained through gradient back propagation and an optimizer algorithm, so that the loss function is reduced to approach 0, and the trained coordinate value regression model is obtained.
In one embodiment, the processor when executing the computer program further performs the steps of:
according to the target image and the text position, obtaining a text-free target image with text contents removed through an image restoration model;
and generating different tampered images through an image generation model according to the text position and the non-text target image, and marking the position information of the tampered area in the tampered images.
In one embodiment, the processor when executing the computer program further performs the steps of:
and filling the text region of the target image with background shading through an image restoration model according to the text position, and obtaining the text-free target image with text content removed.
In one embodiment, the processor when executing the computer program further performs the steps of:
and according to the text position and the text-free target image, restoring different falsified entries generated through text rendering to corresponding text areas in the text-free target image, obtaining falsified images, and marking the position information of the falsified areas in the falsified images.
When read and executed by the one or more processors, the program instructions may further perform the operations corresponding to the steps in the foregoing method embodiments; reference may be made to the foregoing description, which is not repeated herein. Referring to FIG. 4, which illustrates the architecture of a computer device, the computer device may include a processor 410, a video display adapter 411, a disk drive 412, an input/output interface 413, a network interface 414, and a memory 420. The processor 410, the video display adapter 411, the disk drive 412, the input/output interface 413, the network interface 414 and the memory 420 may be communicatively coupled via a communication bus 430.
The processor 410 may be implemented by a general-purpose central processing unit (Central Processing Unit, CPU), a microprocessor, an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), or one or more integrated circuits, etc., for executing related programs to implement the technical scheme provided by the present application.
The Memory 420 may be implemented in the form of Read Only Memory (ROM), random access Memory (Random Access Memory, RAM), static storage devices, dynamic storage devices, etc. The memory 420 may store an operating system 421 for controlling the operation of the computer device 400, and a Basic Input Output System (BIOS) 422 for controlling the low-level operation of the computer device 400. In addition, a web browser 423, data storage management 424, and an icon font processing system 425, etc. may also be stored. The icon font processing system 425 may be an application program that implements the operations of the foregoing steps in embodiments of the present application. In general, when the technical solution provided by the present application is implemented by software or firmware, relevant program codes are stored in the memory 420 and invoked by the processor 410 for execution.
The input/output interface 413 is used to connect to an input/output module to realize information input and output. The input/output module may be configured as a component in a device (not shown) or may be external to the device to provide corresponding functionality. Wherein the input devices may include a keyboard, mouse, touch screen, microphone, various types of sensors, etc., and the output devices may include a display, speaker, vibrator, indicator lights, etc.
The network interface 414 is used to connect communication modules (not shown) to enable communication interactions of the device with other devices. The communication module may implement communication through a wired manner (such as USB, network cable, etc.), or may implement communication through a wireless manner (such as mobile network, WIFI, bluetooth, etc.).
Bus 430 includes a path to transfer information between various components of the device (e.g., processor 410, video display adapter 411, disk drive 412, input/output interface 413, network interface 414, and memory 420).
In addition, the computer apparatus 400 may also obtain information of specific acquisition conditions from the virtual resource object acquisition condition information database 441 for making condition judgment, and so on.
It should be noted that although the above-described computer device 400 illustrates only a processor 410, a video display adapter 411, a disk drive 412, an input/output interface 413, a network interface 414, a memory 420, a bus 430, etc., the computer device may include other components necessary to achieve proper operation in a particular implementation. Furthermore, it will be appreciated by those skilled in the art that the apparatus may include only the components necessary to implement the present application, and not all of the components shown in the drawings.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a cloud server, or a network device, etc.) to execute the method described in the embodiments or some parts of the embodiments of the present application.
Example IV
In one embodiment, a computer readable storage medium is provided having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring an image to be identified, wherein the image to be identified contains an object to be identified;
performing angular point regression correction on the target to be identified in the image to be identified to obtain a target image conforming to the contour of the target;
performing text recognition on the target image to obtain a text position in the target image;
performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
respectively corresponding the target image and the plurality of tampered images to be used as samples, and training the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model;
and identifying the target image through the trained tamper identification model to obtain a tamper identification result.
In one embodiment, the computer program when executed by the processor further performs the steps of:
judging whether the target image is tampered or not according to the tampering identification result;
ending the tamper identification of the target image in response to the tamper identification result being that no tamper exists;
responding to the falsification identification result as falsification, and determining whether the target image is falsified or not by calculating the intersection ratio of the falsified position and the text position;
determining that tampering exists in the target image in response to the intersection ratio of the tampered position and the text position meeting a preset threshold condition;
and determining that the target image is not tampered according to the fact that the intersection ratio of the tampered position and the text position does not meet a preset threshold condition.
In one embodiment, the computer program when executed by the processor further performs the steps of:
detecting the position coordinates of each angular point of the target to be identified in the image to be identified through an algorithm model;
and correcting the target to be identified to a target image conforming to the target contour through perspective transformation according to the position coordinates of each angular point.
In one embodiment, the computer program when executed by the processor further performs the steps of:
coding the image to be identified into a multidimensional vector through a coordinate value regression model to be trained, wherein the multidimensional vector sequentially represents the position coordinates of each angular point;
according to the multidimensional vector and the position coordinates of each corner point marked in advance, a corresponding loss function is calculated through an absolute value error or a mean square error;
and optimizing the internal parameters of the coordinate value regression model to be trained through gradient back propagation and an optimizer algorithm, so that the loss function is reduced to approach 0, and the trained coordinate value regression model is obtained.
In one embodiment, the computer program when executed by the processor further performs the steps of:
according to the target image and the text position, obtaining a text-free target image with text contents removed through an image restoration model;
and generating different tampered images through an image generation model according to the text position and the non-text target image, and marking the position information of the tampered area in the tampered images.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and filling the text region of the target image with background shading through an image restoration model according to the text position, and obtaining the text-free target image with text content removed.
In one embodiment, the computer program when executed by the processor further performs the steps of:
and according to the text position and the text-free target image, restoring different falsified entries generated through text rendering to corresponding text areas in the text-free target image, obtaining falsified images, and marking the position information of the falsified areas in the falsified images.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium which, when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. The non-volatile memory can include Read Only Memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM), among others.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples illustrate only a few embodiments of the application, which are described in detail and are not to be construed as limiting the scope of the application. It should be noted that it will be apparent to those skilled in the art that several variations and modifications can be made without departing from the spirit of the application, which are all within the scope of the application. Accordingly, the scope of protection of the present application is to be determined by the appended claims.

Claims (10)

1. A tamper identification method, the method comprising:
acquiring an image to be identified, wherein the image to be identified contains an object to be identified;
performing angular point regression correction on the target to be identified in the image to be identified to obtain a target image conforming to the contour of the target;
performing text recognition on the target image to obtain a text position in the target image;
performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
respectively corresponding the target image and the plurality of tampered images to be used as samples, and training the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas to obtain a trained tamper identification model;
and identifying the target image through the trained tamper identification model to obtain a tamper identification result.
2. The method of claim 1, wherein the tamper-identification result includes an identified tamper location, the method further comprising:
judging whether the target image is tampered or not according to the tampering identification result;
ending the tamper identification of the target image in response to the tamper identification result being that no tamper exists;
responding to the falsification identification result as falsification, and determining whether the target image is falsified or not by calculating the intersection ratio of the falsified position and the text position;
determining that tampering exists in the target image in response to the intersection ratio of the tampered position and the text position meeting a preset threshold condition;
and determining that the target image is not tampered according to the fact that the intersection ratio of the tampered position and the text position does not meet a preset threshold condition.
3. The method according to claim 1, wherein performing corner regression correction on the object to be identified to obtain an object image conforming to an object contour comprises:
detecting the position coordinates of each angular point of the target to be identified in the image to be identified through an algorithm model;
and correcting the target to be identified to a target image conforming to the target contour through perspective transformation according to the position coordinates of each angular point.
4. The method of claim 3, wherein the algorithmic model comprises a coordinate value regression model, the method further comprising:
coding the image to be identified into a multidimensional vector through a coordinate value regression model to be trained, wherein the multidimensional vector sequentially represents the position coordinates of each angular point;
according to the multidimensional vector and the position coordinates of each corner point marked in advance, a corresponding loss function is calculated through an absolute value error or a mean square error;
and optimizing the internal parameters of the coordinate value regression model to be trained through gradient back propagation and an optimizer algorithm, so that the loss function is reduced to approach 0, and the trained coordinate value regression model is obtained.
5. The method according to claim 1, wherein performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and labeling position information of tampered areas in the plurality of tampered images respectively, includes:
according to the target image and the text position, obtaining a text-free target image with text contents removed through an image restoration model;
and generating different tampered images through an image generation model according to the text position and the non-text target image, and marking the position information of the tampered area in the tampered images.
6. The method of claim 5, wherein obtaining a text-free target image with text content removed by an image restoration model comprises:
and filling the text region of the target image with background shading through an image restoration model according to the text position, and obtaining the text-free target image with text content removed.
7. The method according to claim 5, wherein generating different tampered images by the image generation model and labeling the location information of the tampered region in the tampered images includes:
and according to the text position and the text-free target image, restoring different falsified entries generated through text rendering to corresponding text areas in the text-free target image, obtaining falsified images, and marking the position information of the falsified areas in the falsified images.
8. A tamper identification device, the device comprising:
the acquisition module is used for acquiring an image to be identified, wherein the image to be identified contains an object to be identified;
the correction module is used for carrying out angular point regression correction on the target to be identified in the image to be identified to obtain a target image conforming to the target contour;
a falsified image generation module, which is used for carrying out text recognition on the target image to obtain the text position in the target image,
performing image restoration and image generation on the target image according to the text position to obtain a plurality of tampered images, and respectively marking the position information of tampered areas in the tampered images;
the training module is used for respectively corresponding the target image and the plurality of tampered images to be used as samples, training the tamper identification model to be trained according to the samples and the position information of the corresponding tampered areas, and obtaining a trained tamper identification model;
and the identification module is used for identifying the target image through the trained tamper identification model to obtain a tamper identification result.
9. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the tamper identification method according to any one of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium having stored thereon a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the tamper identification method of any of claims 1 to 7.
CN202311231238.0A 2023-09-22 2023-09-22 Tamper identification method and device, computer equipment and storage medium Pending CN117115823A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311231238.0A CN117115823A (en) 2023-09-22 2023-09-22 Tamper identification method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311231238.0A CN117115823A (en) 2023-09-22 2023-09-22 Tamper identification method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117115823A

Family

ID=88807677

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311231238.0A Pending CN117115823A (en) 2023-09-22 2023-09-22 Tamper identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117115823A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117407562A (en) * 2023-12-13 2024-01-16 杭州海康威视数字技术股份有限公司 Image recognition method, system and electronic equipment
CN117407562B (en) * 2023-12-13 2024-04-05 杭州海康威视数字技术股份有限公司 Image recognition method, system and electronic equipment


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination