CN111260757A - Image processing method and device and terminal equipment - Google Patents

Image processing method and device and terminal equipment

Info

Publication number
CN111260757A
CN111260757A (application CN201811465054.XA)
Authority
CN
China
Prior art keywords
image
mark
eliminated
area
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811465054.XA
Other languages
Chinese (zh)
Inventor
李威
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mashang Xiaofei Finance Co Ltd
Mashang Consumer Finance Co Ltd
Original Assignee
Mashang Xiaofei Finance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mashang Xiaofei Finance Co Ltd filed Critical Mashang Xiaofei Finance Co Ltd
Priority to CN201811465054.XA
Publication of CN111260757A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/60 Editing figures and text; Combining figures or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/70 Determining position or orientation of objects or cameras

Abstract

The invention provides an image processing method, an image processing apparatus, and a terminal device. The image processing method includes: acquiring an image to be processed; detecting a mark to be eliminated in the image to be processed to obtain position information of the area where the mark to be eliminated is located; intercepting, from the image to be processed, a first area image containing the mark to be eliminated according to that position information; inputting the first area image into a pre-trained mark elimination model and performing elimination processing on the mark to obtain a second area image from which the mark to be eliminated has been removed; and replacing the first area image in the image to be processed with the second area image to obtain a target image. Embodiments of the invention can simply and conveniently eliminate different specific marks in an image, simplifying the operation.

Description

Image processing method and device and terminal equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a terminal device.
Background
With the continuous development of science and technology, images have become an indispensable part of daily life. From recording everyday moments, to editing content for news reports, and even to online credit review, images are an intimate participant. However, not all of the content in an image is essential in every scene; sometimes one only wants to retain interesting or non-interfering image content. Applications and tools for image editing have therefore emerged to remove uninteresting content, such as watermarks or specific symbols and elements.
Specifically, the currently common image mark removal method is as follows: the object or area to be eliminated is separated from the image subject and filled with appropriate image content, thereby eliminating the specific mark. However, this method often needs a different scheme for each kind of mark to be eliminated, making the operation cumbersome.
Disclosure of Invention
Embodiments of the invention provide an image processing method, an image processing apparatus, and a terminal device, aiming to solve the cumbersome operation of existing image mark elimination methods.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an image to be processed;
detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located;
intercepting a first area image containing the mark to be eliminated from the image to be processed according to the position information of the area where the mark to be eliminated is located;
inputting the first area image into a pre-trained mark elimination model, and performing elimination processing on the mark to be eliminated in the first area image to obtain a second area image from which the mark to be eliminated has been removed;
and replacing the first area image in the image to be processed with the second area image to obtain a target image.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including:
the first acquisition module is used for acquiring an image to be processed;
the detection module is used for detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located;
the intercepting module is used for intercepting a first area image containing the mark to be eliminated from the image to be processed according to the position information of the area where the mark to be eliminated is located;
the first processing module is used for inputting the first area image into a pre-trained mark elimination model, and performing elimination processing on the mark to be eliminated in the first area image to obtain a second area image from which the mark to be eliminated has been removed;
and the replacing module is used for replacing the first area image in the image to be processed by using the second area image to obtain a target image.
In a third aspect, an embodiment of the present invention further provides a terminal device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the computer program, when executed by the processor, may implement the steps of the image processing method.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement the steps of the above-mentioned image processing method.
According to the image processing method provided by the embodiments of the invention, the intercepted image containing the mark to be eliminated, namely the first area image, is input into a pre-trained mark elimination model, and the mark to be eliminated is removed from the first area image to obtain a second area image; the second area image then replaces the first area image in the image to be processed to obtain the target image.
In addition, because the mark is eliminated from the intercepted image rather than the whole image, the computational complexity of the corresponding mark elimination model can be greatly reduced, the precision requirements on the model are relaxed to a certain extent, and data processing efficiency is improved.
Furthermore, the mark to be eliminated in embodiments of the invention may include a plurality of target marks, so at least one first area image may be intercepted from the image to be processed, each containing at least one target mark. A plurality of specific marks in one image can thus be eliminated together, simplifying the image processing operation and improving image processing efficiency.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a model training process according to an embodiment of the present invention;
FIG. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to fig. 1, an embodiment of the present invention provides an image processing method, including the steps of:
step 101: and acquiring an image to be processed.
The image to be processed may be any commonly used image, such as a photograph or a drawing.
Step 102: detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located.
The mark to be eliminated may be a watermark, a name mark, and/or a specific symbol mark, and the like, which is not limited in the embodiments of the present invention.
Further, the mark to be eliminated may include at least one target mark indicating a specific mark to be eliminated. In a specific implementation, the mark to be eliminated may be a combination of a plurality of identical or different target marks.
Step 103: intercepting, from the image to be processed, a first area image containing the mark to be eliminated according to the position information of the area where the mark to be eliminated is located.
It is understood that, when step 103 is executed, the first area image containing the to-be-eliminated mark may be intercepted from the to-be-processed image by using an existing image intercepting manner.
Since the mark to be eliminated may include at least one target mark, the first area image including the mark to be eliminated may be understood as at least one first area image including the mark to be eliminated, and each first area image may include at least one target mark. Therefore, a plurality of marks in one image can be eliminated together, the image processing operation is simplified, and the image processing efficiency is improved.
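As a minimal illustrative sketch (not code from the patent), the interception of one first area image per detected mark can be expressed as plain list slicing. The (top, left, height, width) box format and all names here are assumptions:

```python
def crop_regions(image, boxes):
    """Intercept one "first area image" per detected bounding box.

    image: 2-D list of pixel values (the image to be processed).
    boxes: list of (top, left, height, width) tuples, as a mark detection
           model might report them (this box format is an assumption).
    """
    regions = []
    for top, left, height, width in boxes:
        # Slice out the rows and columns covered by the box.
        region = [row[left:left + width] for row in image[top:top + height]]
        regions.append(region)
    return regions
```

With two detected boxes this yields two first area images, each containing at least one target mark, matching the "at least one first area image" reading above.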
Step 104: inputting the first area image into a pre-trained mark elimination model, and performing elimination processing on the mark to be eliminated in the first area image to obtain a second area image from which the mark to be eliminated has been removed.
In embodiments of the invention, the mark elimination model may be obtained by pre-training a neural network. To ensure the elimination effect of the mark elimination model, the first area image may be preprocessed before being input into the pre-trained model.
Specifically, after step 103 and before step 104, the method may further include:
and preprocessing the first area image to obtain a preprocessed image meeting the input requirement of the mark elimination model.
Correspondingly, step 104 includes: inputting the preprocessed image into the pre-trained mark elimination model and performing elimination processing on the mark to be eliminated in the preprocessed image to obtain a third area image from which the mark to be eliminated has been removed; and processing the third area image to obtain the second area image.
For example, if the input requirement of the pre-trained mark elimination model is that the resolution of the input image be greater than or equal to X, and the resolution of the intercepted first area image is less than X, then before being input into the mark elimination model the intercepted first area image needs to be scaled to meet the model's input-resolution requirement; after the scaled image has been processed by the mark elimination model, the model's output image needs to be scaled back correspondingly so that the replacement fits the original image. X may be preset according to actual conditions.
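The scaling step above can be sketched with a nearest-neighbour resize. This is a hedged illustration only (a real implementation would use an image library); the function name and the list-of-rows pixel representation are assumptions:

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbour resize of a 2-D list of pixels.

    Sketches both directions of the preprocessing: scaling the intercepted
    first area image to meet the model's input resolution, and scaling the
    model's output back to the original crop size before replacement.
    """
    old_h, old_w = len(image), len(image[0])
    return [[image[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]
```

One would scale the crop up to the model's required resolution before inference, then call the same function with the crop's original height and width on the model's output.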
Step 105: replacing the first area image in the image to be processed with the second area image to obtain a target image.
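Step 105 can be sketched as pasting the mark-free second area image back at the coordinates from which the first area image was intercepted. Names and the list-of-rows representation are illustrative assumptions, not the patent's implementation:

```python
def replace_region(image, region, top, left):
    """Return a target image in which the area at (top, left) is replaced
    by `region` (the second area image); the input image is copied rather
    than modified in place."""
    target = [row[:] for row in image]
    for r, region_row in enumerate(region):
        target[top + r][left:left + len(region_row)] = region_row
    return target
```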
According to the image processing method provided by the embodiments of the invention, the intercepted image containing the mark to be eliminated, namely the first area image, is input into a pre-trained mark elimination model, and the mark to be eliminated is removed from the first area image to obtain a second area image; the second area image then replaces the first area image in the image to be processed to obtain the target image.
In addition, because the mark is eliminated from the intercepted image rather than the whole image, the computational complexity of the corresponding mark elimination model can be greatly reduced, the precision requirements on the model are relaxed to a certain extent, and data processing efficiency is improved.
Furthermore, the mark to be eliminated in embodiments of the invention may include a plurality of target marks, so at least one first area image may be intercepted from the image to be processed, each containing at least one target mark. A plurality of specific marks in one image can thus be eliminated together, simplifying the image processing operation and improving image processing efficiency.
In this embodiment of the present invention, optionally, a pre-trained mark detection model may be used in step 102 to detect the mark to be eliminated in the image to be processed. Step 102 may then include:
and inputting the image to be processed into a pre-trained mark detection model, and detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located.
In this way, using a pre-trained mark detection model simplifies the detection process and improves its performance while ensuring detection precision.
Further, before step 102, the method may further include:
acquiring a mark detection training set, where the image samples in the mark detection training set contain marks to be eliminated; besides the image samples, the mark detection training set may also include position information, category label information, and the like corresponding to the marks to be eliminated;
and training to obtain a mark detection model based on the convolutional neural network and the mark detection training set.
The convolutional neural network may be chosen as a residual convolutional neural network to optimize model performance, and the corresponding loss function may be chosen as the focal loss function, for example FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t), where p_t is the class probability obtained during training and the other parameters are model parameters. Training may be determined to be finished when the trained mark detection model is detected to be saturated, that is, when the corresponding loss value is smaller than a preset threshold. Since convolutional neural networks have strong fitting capability, training the mark detection model with a convolutional neural network can improve the precision of subsequent detection.
The following describes a training process of the marker detection model in an embodiment of the present invention with reference to fig. 2.
In this embodiment, the convolutional neural network is exemplified by a residual convolutional neural network, and the loss function is the focal loss function FL(p_t) = -α_t · (1 - p_t)^γ · log(p_t). The corresponding training process may be as follows:
s1: randomly extracting N image samples from the mark detection training set, inputting the N image samples into a residual convolutional neural network module (namely a mark detection model) to obtain the characteristics of the image samples, and inputting the characteristics of the obtained image samples into an FPN (feature pyramid network) module to form multi-scale characteristics and perform enhanced utilization so as to obtain a characteristic diagram set which has stronger expression and contains multi-scale target area information; the FPN module is used for generating multi-scale features;
s2: based on S1 forward propagation, performing category and position information regression on the feature map set output by the FPN module respectively to obtain the marker category probability and position information predicted by the network, namely obtaining the predicted candidate frame position information of the marker to be eliminated and the category probability of the marker to be eliminated;
s3: calculating loss values between the position information corresponding to the training image samples and the prediction result of the corresponding network in the current iteration (namely the obtained position information in S2) by utilizing a focal loss function based on S2 forward propagation;
s4: judging whether the loss value calculated in the step S3 is smaller than a preset threshold (for example, 0.01), that is, judging whether the residual convolutional neural network module is saturated; if the calculated loss value is greater than or equal to a preset threshold (not saturated), performing S1 based on the back propagation of S4, and continuing to randomly extract training data for training; and if the calculated loss value is smaller than a preset threshold value (saturation), finishing the training and storing the trained mark detection model.
In this embodiment of the present invention, optionally, before step 102, the method may further include:
acquiring a mark elimination training set; wherein the mark elimination training set comprises a first image sample set containing a mark to be eliminated and a second image sample set not containing the mark to be eliminated, which are paired;
and performing iterative training on the generator network and the discriminator network in a preset adversarial neural network model based on the first image sample set and the second image sample set, until the loss functions of the generator network and the discriminator network are smaller than a preset threshold, to obtain the mark elimination model.
Training on image samples that appear in pairs lets the trained mark elimination model learn the image feature distribution, so the mark-free image produced by the trained model better fits the natural distribution of the original image in visual effect.
It will be appreciated that this loss function may be chosen according to the actual situation, but it must at least satisfy the following conditions: 1) the discriminator must pass all original images of the corresponding category, i.e. output 1 for them; 2) the discriminator must reject all generated images that attempt to fool it into passing, i.e. output 0 for them; 3) the generator must make the discriminator pass all of its generated images, implementing the fooling operation; 4) the image generated by the generator must retain the characteristics of the original image; for example, if generator A2B is used to generate a fake image, the other generator B2A must be able to recover the original image, and this process must satisfy cycle consistency.
For example, the loss function may be chosen as:

L_GAN(G, D_Y, X, Y) = E_{y~p(y)}[log D_Y(y)] + E_{x~p(x)}[log(1 - D_Y(G(x)))]

where G denotes the generator, D_Y denotes the discriminator, X denotes the model input, Y denotes the mark information output by the model, the first expectation term E_{y~p(y)}[log D_Y(y)] represents the loss value of the discriminator network, and the second term E_{x~p(x)}[log(1 - D_Y(G(x)))] represents the loss value of the generator network.
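As a hedged sketch of the adversarial objective described above: the discriminator term rewards outputs near 1 on real data and near 0 on generated data, while the generator term (written here in its common non-saturating form, an assumption rather than the patent's formula) pushes the discriminator's output on generated data toward 1:

```python
import math

def gan_losses(d_real, d_fake):
    """Toy per-sample adversarial losses.

    d_real: discriminator output D_Y(y) on a real sample.
    d_fake: discriminator output D_Y(G(x)) on a generated sample.
    Returns (discriminator loss, generator loss): minimizing the first
    drives d_real toward 1 and d_fake toward 0; minimizing the second
    drives d_fake toward 1 (the fooling operation).
    """
    d_loss = -(math.log(d_real) + math.log(1.0 - d_fake))
    g_loss = -math.log(d_fake)  # non-saturating generator loss
    return d_loss, g_loss
```

A discriminator that is fooled (d_fake near 1) yields a low generator loss and a high discriminator loss, matching conditions 1) to 3) above.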
The following describes a training process of the marker-elimination model in the embodiment of the present invention with reference to fig. 2.
In this embodiment, Real_A denotes the image samples containing the mark to be eliminated and Real_B denotes the image samples not containing the mark to be eliminated; the corresponding training process may be as follows:
s1: randomly extracting a data set of a real A and a data set of a real B from the mark elimination training set, and respectively carrying out forward propagation, namely training a generator network and a discriminator network in a preset antagonistic neural network model;
s2: the forward propagation process of true a is: the true A is generated into a generation _ B through a generator A2B, and the generation _ B is generated into a Cycle _ A through a generator B2A, namely the true A- > generator A2B- > generates _ B- > generator B2A- > Cycle _ A;
meanwhile, the forward propagation process of the real B is: the real B obtains a generation _ A through a generator B2A, and the generation _ A obtains a Cycle _ B through a generator A2B, namely the real B- > generator B2A- > generates _ A- > generator A2B- > Cycle _ B;
s3: the discriminator A discriminates the true A and the generated _ A and outputs a value of a [0, 1] interval; meanwhile, the discriminator B discriminates the real B and the generated B, and outputs a value of a [0, 1] interval; wherein, the more the output value approaches to 1, the input is real data, otherwise, the input is false data;
s4: and continuously iterating until the loss functions of the generator network and the discriminator network are smaller than a preset threshold (such as 0.01), obtaining a mark elimination model saturated by training, and storing the trained mark elimination model.
The above embodiments describe the image processing method of the present invention, and the image processing apparatus of the present invention will be described with reference to the embodiments and the drawings.
Referring to fig. 3, an embodiment of the present invention further provides an image processing apparatus, including:
a first obtaining module 31, configured to obtain an image to be processed;
the detection module 32 is configured to detect a to-be-eliminated mark in the to-be-processed image, so as to obtain position information of an area where the to-be-eliminated mark is located;
an intercepting module 33, configured to intercept, according to the position information of the area where the to-be-eliminated mark is located, a first area image including the to-be-eliminated mark from the to-be-processed image;
a first processing module 34, configured to input the first area image into a pre-trained marker elimination model, and perform elimination processing on a to-be-eliminated marker in the first area image to obtain a second area image in which the to-be-eliminated marker is eliminated;
and a replacing module 35, configured to replace the first area image in the image to be processed with the second area image to obtain a target image.
According to the image processing apparatus provided by the embodiments of the invention, the intercepted image containing the mark to be eliminated, namely the first area image, is input into a pre-trained mark elimination model, and the mark to be eliminated is removed from the first area image to obtain a second area image; the second area image then replaces the first area image in the image to be processed to obtain the target image.
In this embodiment of the present invention, optionally, the to-be-eliminated mark includes at least one target mark.
Optionally, the detection module 32 is specifically configured to:
inputting the image to be processed into a pre-trained mark detection model, and detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located.
Optionally, the apparatus further comprises:
the second acquisition module is used for acquiring a mark elimination training set; wherein the mark elimination training set comprises a first image sample set containing the mark to be eliminated and a second image sample set not containing the mark to be eliminated, which are paired;
and the first training module is used for performing iterative training on the generator network and the discriminator network in a preset adversarial neural network model based on the first image sample set and the second image sample set, until the loss functions of the generator network and the discriminator network are smaller than a preset threshold, to obtain the mark elimination model.
Optionally, the apparatus further comprises:
the third acquisition module is used for acquiring a mark detection training set; wherein the image samples in the marker detection training set contain the marker to be eliminated;
and the second training module is used for training to obtain the mark detection model based on the convolutional neural network and the mark detection training set.
Optionally, the apparatus further comprises:
the second processing module is used for preprocessing the first area image to obtain a preprocessed image meeting the input requirement of the mark elimination model;
the first processing module 34 is specifically configured to: input the preprocessed image into the pre-trained mark elimination model and perform elimination processing on the mark to be eliminated in the preprocessed image to obtain a third area image from which the mark to be eliminated has been removed; and process the third area image to obtain the second area image.
In addition, an embodiment of the present invention further provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, can implement each process of the above-mentioned image processing method embodiment, and can achieve the same technical effect, and is not described herein again to avoid repetition.
Specifically, referring to fig. 4, the embodiment of the present invention further provides a terminal device, which includes a bus 41, a transceiver 42, an antenna 43, a bus interface 44, a processor 45, and a memory 46.
In this embodiment of the present invention, the terminal device further includes: a computer program stored on the memory 46 and executable on the processor 45. The computer program can implement the processes of the above-mentioned embodiments of the image processing method when being executed by the processor 45, and can achieve the same technical effects, and is not described herein again to avoid repetition.
Fig. 4 shows a bus architecture (represented by bus 41). Bus 41 may include any number of interconnected buses and bridges, and links together various circuits, including one or more processors represented by processor 45 and memory represented by memory 46. The bus 41 may also link together various other circuits such as peripherals, voltage regulators, and power management circuits, which are well known in the art and therefore are not described further herein. A bus interface 44 provides an interface between the bus 41 and the transceiver 42. The transceiver 42 may be one element or multiple elements, such as multiple receivers and transmitters, providing a means for communicating with various other apparatus over a transmission medium. Data processed by the processor 45 is transmitted over a wireless medium via the antenna 43; the antenna 43 also receives data and transmits it to the processor 45.
The processor 45 is responsible for managing the bus 41 and general processing and may also provide various functions including timing, peripheral interfaces, voltage regulation, power management, and other control functions. And the memory 46 may be used to store data used by the processor 45 in performing operations.
Alternatively, the processor 45 may be a CPU, ASIC, FPGA or CPLD.
The embodiment of the present invention further provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, can implement each process of the above-mentioned image processing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
Computer-readable media include volatile and nonvolatile, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media such as modulated data signals and carrier waves.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
The foregoing is only a preferred embodiment of the present invention. It should be noted that, for those skilled in the art, various modifications and improvements can be made without departing from the principle of the present invention, and these modifications and improvements should also fall within the protection scope of the present invention.

Claims (10)

1. An image processing method, comprising:
acquiring an image to be processed;
detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located;
intercepting a first area image containing the mark to be eliminated from the image to be processed according to the position information of the area where the mark to be eliminated is located;
inputting the first area image into a pre-trained mark elimination model, and eliminating the to-be-eliminated mark in the first area image to obtain a second area image with the to-be-eliminated mark eliminated;
and replacing the first area image in the image to be processed with the second area image to obtain a target image.
2. The method of claim 1, wherein the to-be-eliminated mark comprises at least one target mark.
3. The method according to claim 1, wherein the detecting the to-be-eliminated mark in the to-be-processed image to obtain the position information of the area where the to-be-eliminated mark is located comprises:
inputting the image to be processed into a pre-trained mark detection model, and detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located.
4. The method of claim 3, wherein prior to said obtaining the image to be processed, the method further comprises:
acquiring a mark detection training set; wherein the image samples in the marker detection training set contain the marker to be eliminated;
and training to obtain the mark detection model based on the convolutional neural network and the mark detection training set.
5. The method of claim 1, wherein prior to said obtaining the image to be processed, the method further comprises:
acquiring a mark elimination training set; wherein the mark elimination training set comprises a first image sample set containing the mark to be eliminated and a second image sample set not containing the mark to be eliminated, which are paired;
and performing iterative training on a generator network and a discriminator network in a preset adversarial neural network model based on the first image sample set and the second image sample set until the loss functions of the generator network and the discriminator network are smaller than a preset threshold value, to obtain the mark elimination model.
6. The method according to claim 1, wherein after the first region image containing the to-be-eliminated mark is cut from the to-be-processed image and before the first region image is input into a pre-trained mark elimination model, the method further comprises:
preprocessing the first area image to obtain a preprocessed image meeting the input requirement of the mark elimination model;
the inputting the first area image into a pre-trained marker elimination model, and eliminating the to-be-eliminated marker in the first area image to obtain a second area image with the to-be-eliminated marker eliminated, includes:
inputting the preprocessed image into a pre-trained mark elimination model, and eliminating a mark to be eliminated in the preprocessed image to obtain a third area image with the mark to be eliminated;
and processing the third area image to obtain the second area image.
7. An image processing apparatus characterized by comprising:
the first acquisition module is used for acquiring an image to be processed;
the detection module is used for detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located;
the intercepting module is used for intercepting a first area image containing the mark to be eliminated from the image to be processed according to the position information of the area where the mark to be eliminated is located;
the first processing module is used for inputting the first area image into a pre-trained mark elimination model, and eliminating the to-be-eliminated mark in the first area image to obtain a second area image with the to-be-eliminated mark eliminated;
and the replacing module is used for replacing the first area image in the image to be processed by using the second area image to obtain a target image.
8. The apparatus of claim 7, wherein the detection module is specifically configured to:
inputting the image to be processed into a pre-trained mark detection model, and detecting the mark to be eliminated in the image to be processed to obtain the position information of the area where the mark to be eliminated is located.
9. A terminal device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the computer program, when executed by the processor, implements the steps of the image processing method according to any one of claims 1 to 6.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the image processing method according to any one of claims 1 to 6.
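The pipeline of claim 1 (detect the mark's position, crop the first area image, run the elimination model, paste the result back) can be sketched as follows. This is not part of the patent: the detector and eliminator below are stand-in stubs (the patent uses a pre-trained detection model and an adversarially trained elimination model), used only to show the region crop-and-replace flow.

```python
import numpy as np

def detect_mark(image):
    # Stand-in for the claimed pre-trained mark detection model: here it
    # simply returns a fixed bounding box (x, y, w, h) for the mark's area.
    return (2, 2, 4, 4)

def eliminate_mark(region):
    # Stand-in for the claimed pre-trained mark elimination model (the
    # patent trains an adversarial network); here we just flood-fill the
    # cropped region with its median value.
    return np.full_like(region, np.median(region))

def process_image(image):
    """Claim 1 pipeline: detect -> crop -> eliminate -> replace."""
    x, y, w, h = detect_mark(image)               # position information
    first_region = image[y:y + h, x:x + w]        # first area image
    second_region = eliminate_mark(first_region)  # second area image
    target = image.copy()
    target[y:y + h, x:x + w] = second_region      # replace to obtain target image
    return target

img = np.zeros((8, 8), dtype=np.uint8)
img[3:5, 3:5] = 255              # a bright "mark" on a dark background
print(process_image(img).max())  # the mark is gone: prints 0
```

Because the target image is assembled from a copy, the image to be processed is left untouched; only the cropped region is regenerated.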
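The pre- and post-processing of claim 6 can likewise be sketched: the cropped first area image is resized to the elimination model's fixed input size, the model produces the third area image, and resizing back to the original crop size yields the second area image. The 16×16 input size and the zero-filling stand-in model below are illustrative assumptions, not taken from the patent.

```python
import numpy as np

MODEL_INPUT = (16, 16)  # assumed fixed input size of the elimination model

def resize_nn(img, size):
    # Nearest-neighbour resize; enough to illustrate the pre/post step.
    h, w = img.shape
    th, tw = size
    rows = np.arange(th) * h // th
    cols = np.arange(tw) * w // tw
    return img[rows][:, cols]

def eliminate_with_preprocessing(first_region, model):
    pre = resize_nn(first_region, MODEL_INPUT)     # preprocessed image
    third = model(pre)                             # third area image
    second = resize_nn(third, first_region.shape)  # second area image
    return second

# A stand-in model that simply blanks its input:
blank = lambda x: np.zeros_like(x)
region = np.arange(100, dtype=np.uint8).reshape(10, 10)
print(eliminate_with_preprocessing(region, blank).shape)  # prints (10, 10)
```

The point of the round trip is that the model can keep a fixed input shape while the detected mark region may be any size.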
CN201811465054.XA 2018-12-03 2018-12-03 Image processing method and device and terminal equipment Pending CN111260757A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811465054.XA CN111260757A (en) 2018-12-03 2018-12-03 Image processing method and device and terminal equipment

Publications (1)

Publication Number Publication Date
CN111260757A true CN111260757A (en) 2020-06-09

Family

ID=70951989

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811465054.XA Pending CN111260757A (en) 2018-12-03 2018-12-03 Image processing method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN111260757A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023202570A1 (en) * 2022-04-21 2023-10-26 维沃移动通信有限公司 Image processing method and processing apparatus, electronic device and readable storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1658586A1 (en) * 2003-08-19 2006-05-24 Koninklijke Philips Electronics N.V. Detecting a watermark using a subset of available detection methods
CN104867095A (en) * 2014-02-21 2015-08-26 腾讯科技(深圳)有限公司 Image processing method and device
CN106934780A (en) * 2017-03-15 2017-07-07 中山大学 A kind of automatic watermark minimizing technology based on image repair
CN107862673A (en) * 2017-10-31 2018-03-30 北京小米移动软件有限公司 Image processing method and device
CN107993190A (en) * 2017-11-14 2018-05-04 中国科学院自动化研究所 Image watermark removal device
CN108335256A (en) * 2017-12-13 2018-07-27 深圳大学 Three-dimensional blind watermatking under local spherical coordinate system is embedded and extracts detection method and device
CN108734677A (en) * 2018-05-21 2018-11-02 南京大学 A kind of blind deblurring method and system based on deep learning
CN108765349A (en) * 2018-05-31 2018-11-06 四川斐讯信息技术有限公司 A kind of image repair method and system with watermark
CN108805789A (en) * 2018-05-29 2018-11-13 厦门市美亚柏科信息股份有限公司 A kind of method, apparatus, equipment and readable medium removing watermark based on confrontation neural network

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
TALI DEKEL et al.: "On the Effectiveness of Visible Watermarks", 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) *

Similar Documents

Publication Publication Date Title
CN108108731B (en) Text detection method and device based on synthetic data
WO2020248866A1 (en) Method and system for image search and cropping
CN111881707B (en) Image reproduction detection method, identity verification method, model training method and device
CN111144215B (en) Image processing method, device, electronic equipment and storage medium
CN112381092A (en) Tracking method, device and computer readable storage medium
CN114359533B (en) Page number identification method based on page text and computer equipment
CN111597845A (en) Two-dimensional code detection method, device and equipment and readable storage medium
CN111260757A (en) Image processing method and device and terminal equipment
CN111967449A (en) Text detection method, electronic device and computer readable medium
CN114638996A (en) Model training method, device, equipment and storage medium based on counterstudy
CN114708582B (en) AI and RPA-based electric power data intelligent inspection method and device
CN109886865A (en) Method, apparatus, computer equipment and the storage medium of automatic shield flame
CN115035347A (en) Picture identification method and device and electronic equipment
CN115223022A (en) Image processing method, device, storage medium and equipment
CN114329030A (en) Information processing method and device, computer equipment and storage medium
CN113609966A (en) Method and device for generating training sample of face recognition system
CN113807407A (en) Target detection model training method, model performance detection method and device
CN112825145A (en) Human body orientation detection method and device, electronic equipment and computer storage medium
CN115393868B (en) Text detection method, device, electronic equipment and storage medium
CN113420844B (en) Object defect detection method and device, electronic equipment and storage medium
CN112750065B (en) Carrier object processing and watermark embedding method, device and electronic equipment
CN111079624B (en) Sample information acquisition method and device, electronic equipment and medium
CN116958674A (en) Image recognition method and device, electronic equipment and storage medium
CN114418130A (en) Model training method, data processing method and related equipment
CN116977816A (en) Simple drawing image generation model training method, device, equipment and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination