CN112801857A - Image data processing method and device - Google Patents

Image data processing method and device

Info

Publication number
CN112801857A
Authority
CN
China
Prior art keywords
deleted
image
original image
area
information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011381293.4A
Other languages
Chinese (zh)
Inventor
沈程秀
马文伟
刘设伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Taikang Insurance Group Co Ltd
Taikang Online Property Insurance Co Ltd
Original Assignee
Taikang Insurance Group Co Ltd
Taikang Online Property Insurance Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Taikang Insurance Group Co Ltd, Taikang Online Property Insurance Co Ltd filed Critical Taikang Insurance Group Co Ltd
Priority to CN202011381293.4A priority Critical patent/CN112801857A/en
Publication of CN112801857A publication Critical patent/CN112801857A/en
Pending legal-status Critical Current

Classifications

    • G06T3/04
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06T5/00 Image enhancement or restoration
    • G06T7/11 Region-based segmentation
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20132 Image cropping
    • G06T2207/20172 Image enhancement details

Abstract

The invention provides an image data processing method and apparatus, a computer device, and a computer-readable storage medium. The method includes: acquiring an original image and a first retention ratio; dividing the original image into a plurality of regions to be deleted, all of the same shape; and deleting information from some or all of the regions to be deleted according to a second retention ratio whose difference from the first retention ratio lies within a preset range, to obtain an enhanced image. Because the original image requiring data enhancement is first divided into identically shaped regions to be deleted, and information is then deleted according to the second retention ratio, the deleted portions are uniformly dispersed across the original image, avoiding the possibility that only background information is deleted. Meanwhile, the amount of deleted information can be controlled by changing the retention ratio, ensuring the accuracy of the predictions of an image classification model trained on the original image together with the enhanced image.

Description

Image data processing method and device
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image data processing method, an image data processing apparatus, a computer device, and a computer-readable storage medium.
Background
In recent years, convolutional neural networks have been widely used in many computer vision scenes, such as image classification, object detection, semantic segmentation, and so on.
In practice, sufficient training data often cannot be obtained through data collection alone; instead, a smaller amount of training data is collected and additional training data is generated through data augmentation. Existing augmentation methods mainly comprise spatial transformation, color distortion, and information deletion. In the information-deletion approach, part of the information is deleted from the original image by randomly cropping out regions of it, and the cropped image is then used, together with the original image, as training data for the neural network model.
However, when regions are deleted randomly, if only background information is deleted, or too little information is deleted, no real augmentation takes place and the neural network model may overfit; if too much information is deleted, the image loses its original meaning and the model may underfit. Either way, the accuracy of the neural network model's predictions suffers.
Disclosure of Invention
In view of the above, the present invention provides an image data processing method and apparatus, a computer device, and a computer-readable storage medium, which solve the problem that, when image data is processed by information deletion, random deletion of information leads to poor prediction accuracy in the trained neural network model.
According to a first aspect of the present invention, there is provided an image data processing method, comprising:
acquiring an original image and a first retention ratio corresponding to the original image, wherein the retention ratio is the ratio of the information content of an enhanced image produced by data enhancement to the information content of the original image;
dividing the original image into a plurality of regions to be deleted, all of the same shape;
deleting information from some or all of the regions to be deleted according to a second retention ratio whose difference from the first retention ratio lies within a preset range, to obtain the enhanced image;
training a neural network model with the original image and the enhanced image to obtain an image classification model;
and acquiring an image to be classified and inputting it into the image classification model to obtain its classification result.
According to a second aspect of the present invention, there is provided an image data processing apparatus, which may include:
a first acquisition module, configured to acquire an original image and a first retention ratio corresponding to the original image, wherein the retention ratio is the ratio of the information content of an enhanced image produced by data enhancement to the information content of the original image;
a dividing module, configured to divide the original image into a plurality of regions to be deleted, all of the same shape;
a cropping module, configured to delete information from some or all of the regions to be deleted according to a second retention ratio whose difference from the first retention ratio lies within a preset range, to obtain the enhanced image;
a first generation module, configured to train a neural network model with the original image and the enhanced image to obtain an image classification model;
and a second generation module, configured to acquire an image to be classified and input it into the image classification model to obtain its classification result.
In a third aspect, an embodiment of the present invention provides a computer device, comprising:
a memory for storing program instructions;
a processor for calling the program instructions stored in the memory and, in accordance with the obtained program instructions, executing the steps of the image data processing method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of the image data processing method according to the first aspect.
Compared with the prior art, the present invention has the following advantages:
The invention provides an image data processing method comprising: acquiring an original image and a first retention ratio corresponding to the original image, wherein the retention ratio is the ratio of the information content of an enhanced image produced by data enhancement to the information content of the original image; dividing the original image into a plurality of regions to be deleted, all of the same shape; and deleting information from some or all of the regions to be deleted according to a second retention ratio whose difference from the first retention ratio lies within a preset range, to obtain the enhanced image. The original image requiring data enhancement is first divided into identically shaped regions to be deleted, and information is deleted from different regions according to a second retention ratio whose difference from the first retention ratio lies within a preset range. This ensures that the deleted regions are uniformly dispersed across the original image, avoiding the possibility that only background information is deleted. Meanwhile, changing the retention ratio adjusts the areas of the deleted and remaining portions within each region to be deleted, controlling the amount of deleted information and thus ensuring the accuracy of the predictions of an image classification model trained on the original image together with the enhanced image.
The foregoing description is only an overview of the technical solutions of the present invention, and the embodiments of the present invention are described below in order to make the technical means of the present invention more clearly understood and to make the above and other objects, features, and advantages of the present invention more clearly understandable.
Drawings
Various other advantages and benefits will become apparent to those of ordinary skill in the art upon reading the following detailed description of the preferred embodiments. The drawings are only for purposes of illustrating the preferred embodiments and are not to be construed as limiting the invention. Also, like reference numerals are used to refer to like parts throughout the drawings. In the drawings:
FIG. 1 is a flowchart illustrating steps of a method for processing image data according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a region to be deleted according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of an enhanced image provided by an embodiment of the present invention;
FIG. 4 is a flow chart illustrating steps of another method for processing image data according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of another enhanced image provided by an embodiment of the present invention;
FIG. 6 is a block diagram of an image data processing apparatus according to an embodiment of the present invention.
Detailed Description
Exemplary embodiments of the present invention will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the invention are shown in the drawings, it should be understood that the invention can be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art.
Fig. 1 is a flowchart of steps of a data processing method for an image according to an embodiment of the present invention, and as shown in fig. 1, the method may include:
Step 101: an original image and a first retention ratio corresponding to the original image are obtained, where the retention ratio is the ratio of the information content of an enhanced image produced by data enhancement to the information content of the original image.
In this step, the original image that needs data enhancement is obtained, together with the first retention ratio to be used when enhancing it.
In the embodiment of the present invention, the original image may be an image used to train a neural network model. The neural network model may be based on an open-source, scalable, and efficient object detection algorithm and used for image classification, recognition, object detection, semantic segmentation, and so on; the original image may therefore be any image relevant to classification, detection, or semantic segmentation, such as a medical invoice image, an expense list image, an identity card image, or a bank card image.
Because a neural network model often has millions of parameters, a large amount of data is needed to train it so that the resulting model is highly accurate. The training data set strongly influences the accuracy of the model's predictions: the larger the training set, the better the resulting model's detection performance. In many cases, however, enough training data cannot be obtained, so new data must be synthesized from the existing training data, or the existing data augmented through data enhancement.
Existing data enhancement methods fall mainly into three categories: spatial transformation, color distortion, and information deletion. Specifically, flipping the input original image left-right is a spatial-transformation enhancement; color distortion mainly changes the transparency, brightness, and similar properties of the original image; and information deletion, as the name implies, removes part of the information in the image, for example by randomly deleting the information contained in some regions of the original image and using the remainder as a new training image.
Correspondingly, when image data is processed by information deletion, the retention ratio is the ratio of the information content of the enhanced image to that of the original image. Where the information is the pixel values of the image, the pixels of the deleted regions of the enhanced image can be set to 0, deleting the information they contain, while the pixels of the remaining regions keep their values, retaining their information; the retention ratio can therefore also be expressed as the ratio of the area of the remaining regions to the area of the original image. If the retention ratio is too small, that is, too much information is deleted, the enhanced image becomes noise and degrades the classification performance of the trained model; if too much information is retained, or the deleted information belongs to the background of the image (such as blank areas), too much of the original information survives, which impairs the generalization ability of the trained model.
Therefore, a range for the retention ratio, for example 50%-70% or 40%-60%, may be set in advance so that the ratio of the information content of the enhanced image to that of the original image falls within a suitable range, ensuring that neither too much nor too little of the original image is deleted.
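The retention-ratio bookkeeping described above can be sketched as follows. This is a minimal illustration assuming deleted pixels are zeroed (as the mask-based deletion later in the description does) and that the original image contains no zero-valued pixels; the function names and the use of NumPy are assumptions of this sketch, not taken from the patent.

```python
import numpy as np

def retention_ratio(original: np.ndarray, enhanced: np.ndarray) -> float:
    """Ratio of retained information, approximated here as the fraction of
    pixels whose value survives deletion (deleted pixels are zeroed)."""
    retained = np.count_nonzero(enhanced)  # pixels still carrying a value
    return retained / original.size

def within_preset_range(ratio: float, low: float = 0.5, high: float = 0.7) -> bool:
    """Check the ratio against a preset range such as 50%-70%."""
    return low <= ratio <= high
```

For instance, zeroing 40 of 100 pixels yields a retention ratio of 0.6, which lies inside a 50%-70% preset range.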
Step 102: the original image is divided into a plurality of identically shaped regions to be deleted.
In this step, the original image may be divided into a plurality of identically shaped regions to be deleted, so that data enhancement is performed within those regions according to the first retention ratio. This ensures that the deleted information is uniformly dispersed across the original image and avoids the possibility that only background information is deleted.
FIG. 2 is a schematic diagram of regions to be deleted according to an embodiment of the present invention. As shown in FIG. 2, an original image 10 may be divided into four identically shaped regions to be deleted 20. When the original image 10 cannot be divided evenly into identically shaped regions to be deleted 20, the positions of the regions 20 within the original image 10 may be shifted by adjusting a first length A and a second length B, the distances between the regions 20 and the sides of the original image 10.
In the embodiment of the present invention, the shape and size of the regions to be deleted may be parameters preset according to the shape and size of the original image. For example, when the original image is a square with a side of 10 pixels, the region to be deleted may be preset as a square with a side of 5 pixels, so that the original image divides evenly into 4 regions to be deleted; or as a square with a side of 2 pixels, so that it divides evenly into 25 regions to be deleted.
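The grid division in this step can be sketched as below, under the assumption (per the example) that the image side is an exact multiple of the cell side; the function name and the use of NumPy are illustrative, not from the patent.

```python
import numpy as np

def split_into_cells(image: np.ndarray, d: int):
    """Divide an image into non-overlapping d x d cells (the regions to be
    deleted). Each cell is recorded as (top, left, side)."""
    h, w = image.shape[:2]
    cells = []
    for top in range(0, h - d + 1, d):
        for left in range(0, w - d + 1, d):
            cells.append((top, left, d))
    return cells
```

On a 10 x 10 image this yields 4 cells for d = 5 and 25 cells for d = 2, matching the example above.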
Step 103: information is deleted from some or all of the regions to be deleted according to a second retention ratio whose difference from the first retention ratio lies within a preset range, to obtain the enhanced image.
In this step, information may be deleted from some or all of the regions to be deleted in the original image according to the second retention ratio, yielding an enhanced image that is the data-enhanced version of the original image.
Specifically, information can be deleted from all of the regions to be deleted, or only from some of them, for example from a preset fraction of the regions. This keeps the deleted information uniformly dispersed across the original image while preserving a degree of randomness in the deletion process, increasing the diversity of the enhancement applied to the original image and yielding more enhanced images.
In addition, the second retention ratio used when deleting information from each region to be deleted is a retention ratio whose difference from the first retention ratio lies within a preset range. The first retention ratio may be the ratio of the information content of the enhanced image to that of the original image; for example, it may be preset to 60%, meaning the information contained in 40% of the area of the original image is to be deleted. Concretely, 40% of the area of each region to be deleted may have its information removed while the information in the remaining 60% is retained, so that the information content of the final enhanced image is 60% of that of the original image. The retention ratios used for the individual regions may all equal the first retention ratio, i.e., each region loses the information in 40% of its area and keeps the other 60%. Alternatively, with a preset range of ±5%, the second retention ratio derived from the first retention ratio may lie anywhere in 55%-65%, and each region to be deleted is then pruned according to its own second retention ratio within that range. The regions' retention ratios thus differ from one another, which again preserves a degree of randomness in the deletion process, increases the diversity of the enhancement, and yields more enhanced images.
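Drawing a per-region second retention ratio whose difference from the first retention ratio stays within the preset range (±5% in the example above) might look like this sketch; the function name is hypothetical.

```python
import random

def second_retention_ratio(first_ratio: float, delta: float = 0.05) -> float:
    """Draw a per-region retention ratio whose difference from the first
    retention ratio is within the preset range (here +/- 5%)."""
    return random.uniform(first_ratio - delta, first_ratio + delta)
```

With a first retention ratio of 60%, repeated draws all fall in 55%-65%, one per region to be deleted.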
FIG. 3 is a schematic diagram of an enhanced image according to an embodiment of the present invention. As shown in FIG. 3, for each of the regions to be deleted 20 obtained by the division in FIG. 2, information is deleted according to the same second retention ratio of 75%: each region to be deleted 20 contains a deleted area 21 and a retained area 22, where the deleted area 21 occupies 25% of the area of the region 20 and the retained area 22 occupies 75%.
In the embodiment of the present invention, a mask may be used to delete information from the original image. Specifically, a mask matched to the original image may be constructed: in the mask, the value at each pixel belonging to the deleted area of a region to be deleted is set to 0, and the value at each pixel belonging to a retained area is set to 1, giving a matrix M that represents the values of all pixels of the mask. A matrix x representing the pixel values of the original image may likewise be obtained, and the element-wise product of M and x taken as the matrix x̃ = x × M corresponding to the enhanced image, representing the values of all its pixels.
Because the mask pixels with value 0 correspond to the pixels of the deleted areas of the original image, the corresponding pixels of x̃ are all 0; that is, the information contained in the deleted areas of the original image is absent from the enhanced image. Because the mask pixels with value 1 correspond to the pixels of the retained areas of the original image, the corresponding pixels of x̃ keep the same values as in the original image; that is, the information contained in the retained areas of the original image is preserved in the enhanced image.
For example, suppose the original image corresponds to the matrix x = [1 2 3; 4 5 6; 7 8 9]^T and the mask comprises three pixel rows with matrix M = [1 1 1; 0 0 0; 1 1 1]^T, meaning the information in the second row of the original image is to be deleted and the information in the first and third rows retained. The matrix corresponding to the enhanced image after deletion is then x̃ = x × M = [1 2 3; 0 0 0; 7 8 9]^T.
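The worked example above can be reproduced directly with element-wise multiplication in NumPy; the variable names mirror the description, and this is only an illustration of the mask operation, not the patent's implementation.

```python
import numpy as np

# Original image: the example's three rows of pixel values.
x = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# Mask M: 1 keeps a pixel, 0 deletes it; here the second row is deleted.
M = np.array([[1, 1, 1],
              [0, 0, 0],
              [1, 1, 1]])

# Element-wise product gives the enhanced image x~ = x * M,
# i.e. [[1, 2, 3], [0, 0, 0], [7, 8, 9]].
x_tilde = x * M
```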
It should be noted that, referring to FIG. 2, if the original image 10 cannot be divided completely and evenly into regions to be deleted 20, the first length A and the second length B may be adjusted so that the regions 20 divide evenly within the target region of the original image 10, i.e., the part excluding the areas corresponding to A and B. While the regions 20 are pruned according to the second retention ratio, information may also be deleted from the areas corresponding to A and B according to the same second retention ratio, ensuring that information is deleted over the full extent of the original image 10.
Step 104: a neural network model is trained with the original image and the enhanced image to obtain an image classification model.
In this step, after the enhanced image is obtained, the original image and the enhanced image are used together as a training set to train a neural network model, yielding an image classification model. The image classification model can recognize the types of multiple input images, sort them into a classification order, generate a corresponding Portable Document Format (PDF) document, and transmit it to the relevant people for viewing.
In addition, after the enhanced image is obtained with the image data enhancement method of this embodiment, the neural network model can be trained according to the specific application requirements, producing models suited to image classification and recognition, object detection, semantic segmentation, and other applications.
Step 105: an image to be classified is acquired and input into the image classification model to obtain its classification result.
In this step, after the image classification model has been trained on the original and enhanced images, an image to be classified, collected and uploaded by a user, may be obtained and input into the image classification model to obtain its classification result.
In summary, the image data processing method provided in the embodiment of the present invention includes: acquiring an original image and a first retention ratio corresponding to the original image, wherein the retention ratio is the ratio of the information content of an enhanced image produced by data enhancement to the information content of the original image; dividing the original image into a plurality of regions to be deleted, all of the same shape; and deleting information from some or all of the regions to be deleted according to a second retention ratio whose difference from the first retention ratio lies within a preset range, to obtain the enhanced image. The original image requiring data enhancement is first divided into identically shaped regions to be deleted, and information is deleted from different regions according to a second retention ratio whose difference from the first retention ratio lies within a preset range. This ensures that the deleted regions are uniformly dispersed across the original image and avoids the possibility that only background information is deleted; meanwhile, changing the retention ratio adjusts the areas of the deleted and remaining portions within each region, controlling the amount of deleted information and thus ensuring the accuracy of the predictions of the image classification model trained on the original and enhanced images.
Fig. 4 is a flowchart of steps of another image data processing method according to an embodiment of the present invention, and as shown in fig. 4, the method may include:
step 201, an original image and a first retention proportion corresponding to the original image are obtained, wherein the retention proportion is a ratio of an information amount of an enhanced image subjected to data enhancement to an information amount of the original image.
This step may specifically refer to step 101, which is not described herein again.
Step 202: when the original image and the regions to be deleted are rectangular, a first length and a second length preset for the original image are obtained.
In this step, when the original image and the regions to be deleted are rectangular and the original image cannot be divided completely and evenly into regions to be deleted, then, referring to FIG. 2, a first length A and a second length B preset for the original image may be obtained, and A and B adjusted so that the regions to be deleted 20 divide evenly within the target region of the original image 10, i.e., the part excluding the areas corresponding to A and B.
The first length A and the second length B may be length values drawn at random from a preset range, whose minimum may be 0 and whose maximum may be the side length d of the region to be deleted, i.e., A/B = random(0, d).
Step 203: a target region is determined in the original image according to the first length and the second length, where the distance from the target region to one side of the original image is greater than the first length and the distance to another side adjacent to that side is greater than the second length.
In this step, a target region in which to divide the regions to be deleted may be determined in the original image according to the first and second lengths. Referring to FIG. 2, the first length A may be the distance between the target region and the left side of the original image 10, and the second length B the distance to its upper side; alternatively, A may be the distance to the right side, with B the distance to the upper or lower side. In other words, the target region is the area whose distance from one side of the original image 10 exceeds the first length A and whose distance from an adjacent side exceeds the second length B. The position of the target region, and hence of the regions to be deleted, can therefore be changed by adjusting A and B, giving the information deletion process a degree of randomness, increasing the diversity of the enhancement applied to the original image, and yielding more enhanced images.
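Selecting the target region for one of the placements described (offset A from the left side, offset B from the top side) can be sketched as a simple array slice; the helper name and NumPy usage are assumptions of this sketch.

```python
import numpy as np

def target_region(image: np.ndarray, a: int, b: int) -> np.ndarray:
    """Crop the target region: the part of the image at horizontal distance
    a from the left side and vertical distance b from the top side."""
    return image[b:, a:]
```

On a 10 x 10 image with A = 3 and B = 2, the target region is the 8 x 7 sub-image starting at row 2, column 3.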
And 204, acquiring the side length of the area to be deleted, wherein the side length of the area to be deleted is any length within a preset length range.
In this step, before dividing the plurality of regions to be deleted in the target region, the side length of the region to be deleted may be acquired.
The side length of the area to be deleted can be any length within a preset length range, which ensures that the areas to be deleted used for the information deletion operation are of moderate size: a side length that is too small would require an excessive number of deletion operations, while a side length that is too large would make the deleted information unevenly distributed.
Specifically, a fixed range (d_min, d_max) may be preset. If the number of pixels is taken as the unit of measurement, the preset fixed range may be, for example, (1, 5) or (2, 7). When the original image is divided into a plurality of areas to be deleted, a value is randomly selected from this range as the side length d of the area to be deleted, that is, d = random(d_min, d_max).
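The random parameter selection described in steps 202 to 204 can be sketched as follows. This is a minimal illustration only; the function and variable names are assumptions, not taken from the patent.

```python
import random

def sample_grid_parameters(d_min=2, d_max=7, rng=random):
    """Sample the cell side length d and the offsets a, B for one enhancement.

    d is drawn from the preset range (d_min, d_max), in pixels, and the
    first/second lengths a and B are drawn from (0, d), as in the text.
    """
    d = rng.randint(d_min, d_max)   # d = random(d_min, d_max)
    a = rng.uniform(0, d)           # first length: offset from one side
    b = rng.uniform(0, d)           # second length: offset from the adjacent side
    return d, a, b
```

Using a seeded `random.Random` instance for `rng` makes a given enhancement reproducible while still varying across epochs.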
Step 205, dividing the target area into a plurality of areas to be deleted with the same shape.
In this step, the target area in the original image may be divided into a plurality of areas to be deleted having the same shape, within which the subsequent data enhancement is performed, so as to ensure that the deleted portions are uniformly dispersed in the original image, thereby avoiding the possibility of deleting only background information in the image.
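Step 205's division of the target area into equally shaped cells can be illustrated by enumerating the top-left corners of the d x d areas to be deleted. A sketch with assumed names; integer offsets are assumed for simplicity:

```python
def grid_cells(width, height, d, a=0, b=0):
    """Return (x0, y0) top-left corners of the d x d to-be-deleted cells
    that fit entirely inside a width x height image, starting at the
    offsets (a, b) from the left and top sides."""
    cells = []
    y0 = b
    while y0 + d <= height:     # only whole cells, so deletions stay uniform
        x0 = a
        while x0 + d <= width:
            cells.append((x0, y0))
            x0 += d
        y0 += d
    return cells
```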
And step 206, under the condition that the deleting frame and the area to be deleted are in a square structure, determining the side length of the deleting frame according to the second retention ratio and the side length of the area to be deleted.
In this step, a deletion border may be set in the to-be-deleted region, where a region corresponding to the deletion border in the to-be-deleted region is a deletion region in the to-be-deleted region, and regions corresponding to the plurality of deletion borders in the original image are regions to which information that needs to be deleted when data enhancement is performed on the original image belongs, that is, the to-be-deleted region includes a deletion region and a reserved region, and a border of the deletion region is the deletion border.
Optionally, step 206 may specifically include the following sub-steps:
substep 2061, determining a ratio between the area of the deleted region corresponding to the deleted border in the original image and the area of the original image according to the second retention ratio.
In this step, a ratio between an area of a region to which information to be deleted belongs when data enhancement is performed in the original image and an area of the original image may be determined according to a second retention ratio for determining data enhancement, that is, a deletion operation, to be performed on the original image.
Because the retention ratio is the ratio between the information amount of the enhanced image after data enhancement and the information amount of the original image, and the information here is the pixel values of the pixel points in the image, the original image consists of a retention area and an area that needs to be deleted. The second retention ratio K may thus be taken as the ratio between the area of the retention area in the original image and the area of the original image. Therefore, when data enhancement is performed, the ratio C between the area of the region that needs to be deleted and the area of the original image satisfies C + K = 1, so C may be determined as the difference between the value 1 and the second retention ratio K, that is, C = 1 - K.
And a substep 2062, determining the side length of the deleted frame according to the ratio and the side length of the region to be deleted.
In this step, when data enhancement is performed, the ratio between the area of the regions to be deleted in the original image and the area of the original image is equal to the ratio between the area of the deletion region in each region to be deleted and the area of that region to be deleted. If the deletion border and the region to be deleted are both of square structure and the side length of the deletion border is L, the area of the deletion region is S_C = L^2 and the area of the region to be deleted is S_d = d^2, so that C = S_C / S_d = L^2 / d^2 = 1 - K.
Therefore, the side length of the deletion border in the region to be deleted can be determined as L = d · √(1 - K).
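The relation L = d · √(1 - K) from sub-step 2062 can be checked numerically; a minimal sketch with assumed names:

```python
import math

def deletion_border_side(d, keep_ratio):
    # From C = L^2 / d^2 = 1 - K, the square deletion border has side
    # L = d * sqrt(1 - K), where K is the second retention ratio.
    return d * math.sqrt(1.0 - keep_ratio)
```

For example, with cell side d = 10 and retention ratio K = 0.75, the deletion border side is 5, so a quarter of each cell's area is deleted.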
And step 207, determining a deleted area in the to-be-deleted area according to the side length of the deleted border, and deleting information contained in the plurality of deleted areas from the original image to obtain the enhanced image.
In this step, after the side length of the deletion border is determined, a deletion area corresponding to the deletion border is determined in each to-be-deleted area, and the information contained in the deletion area is deleted, that is, the pixel values of the pixel points contained in the deletion area are set to 0. This completes the operation of deleting the plurality of deletion areas in the original image and yields the enhanced image obtained by performing data enhancement on the original image.
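Steps 205 to 207 (without border rotation) can be sketched end to end: divide the image into d x d cells and zero out a centred L x L square in each. A minimal NumPy illustration under the assumption, stated in the text, that information deletion means setting pixel values to 0; the function name and centring choice are assumptions.

```python
import numpy as np

def erase_grid(img, d, keep_ratio, a=0, b=0):
    """Return a copy of img with a centred L x L square zeroed in every
    d x d cell, where L = d * sqrt(1 - keep_ratio)."""
    out = img.copy()
    h, w = out.shape[:2]
    L = int(round(d * (1.0 - keep_ratio) ** 0.5))
    off = (d - L) // 2                       # centre the deletion square
    for y0 in range(int(b), h - d + 1, d):
        for x0 in range(int(a), w - d + 1, d):
            out[y0 + off:y0 + off + L, x0 + off:x0 + off + L] = 0
    return out
```

On a 20 x 20 all-ones image with d = 10 and K = 0.75, each of the four cells loses a 5 x 5 square, i.e. exactly 1 - K = 25% of the pixels are zeroed.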
Optionally, the step of deleting a plurality of deleted regions from the original image in step 207 may specifically include the following sub-steps:
substep 2071, in the region to be deleted, rotating the deletion border by a preset angle.
In this step, after determining the deletion border in the region to be deleted, the deletion border may be rotated by a preset angle, so as to change the position of the region to be deleted for data enhancement in the original image.
Fig. 5 is a schematic diagram of another enhanced image according to an embodiment of the present invention, as shown in fig. 5, after the original image 10 is divided into a plurality of areas to be deleted 20, a deletion border 23 may be determined in each area to be deleted 20, and the deletion border 23 is rotated by a preset angle, so as to finally determine a deletion area 21 to be deleted in the area to be deleted 20.
The preset angle can also be an angle randomly determined from a preset angle value range, and the angle range can be (0, 360), so that the cut information is uniformly dispersed in the original image, the information deleting process can be ensured to have certain randomness, the diversity of data enhancement on the original image is increased, and more enhanced images are obtained.
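Sub-steps 2071 and 2072 can be sketched by rasterising a rotated deletion border as a boolean mask over one cell. The rasterisation rule here is an assumption (a pixel belongs to the deletion region if its centre falls inside the rotated square), as are the names:

```python
import numpy as np

def rotated_square_mask(d, L, angle_deg):
    """Boolean d x d mask that is True inside an L x L square centred in
    the cell and rotated by angle_deg."""
    theta = np.deg2rad(angle_deg)
    c, s = np.cos(theta), np.sin(theta)
    yy, xx = np.mgrid[0:d, 0:d]
    x = xx + 0.5 - d / 2.0          # pixel centres relative to cell centre
    y = yy + 0.5 - d / 2.0
    u = c * x + s * y               # rotate coordinates back by -angle_deg
    v = -s * x + c * y
    return (np.abs(u) <= L / 2.0) & (np.abs(v) <= L / 2.0)
```

Applying `img[cell][mask] = 0` for each cell then deletes the information in the rotated regions; the angle can be drawn with `random.uniform(0, 360)` per the text.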
Substep 2072 deletes information included in the region corresponding to the rotated deletion frame from the original image.
In this step, after the deletion frame is rotated by a preset angle, information included in an area corresponding to the rotated deletion frame may be deleted from the original image, thereby completing an information deletion operation on the original image, implementing data enhancement on the original image, and obtaining an enhanced image.
And 208, training a neural network model by using the original image and the enhanced image to obtain an image classification model.
This step may specifically refer to step 104, which is not described herein again.
And 209, acquiring an image to be classified, and inputting the image to be classified into the image classification model to obtain a classification result of the image to be classified.
In this step, after the image classification model is obtained by training the original image and the enhanced image, the image to be classified collected and uploaded by the user may be further obtained, and the image to be classified is input into the image classification model, so that the classification result corresponding to the image to be classified may be obtained.
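Inference can be sketched as a simple argmax over per-class scores. The class names and the model's output convention below are assumptions for illustration, not taken from the patent:

```python
# Hypothetical class names mirroring the business scenario in the text.
CLASSES = ["identity card", "expense list", "medical invoice", "bank card"]

def classify(model, image):
    """Run the trained image classification model on one image to be
    classified and return the predicted category name.

    `model` is assumed to be a callable returning one score per class.
    """
    scores = model(image)
    best = max(range(len(scores)), key=lambda i: scores[i])
    return CLASSES[best]
```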
Optionally, the image to be classified includes: any one or more of a medical invoice image, an expense list image, an identification card image, and a bank card image. Meanwhile, the type of the original image may be determined according to the specific service application scenario, that is, according to the images to be classified. For example, for a health insurance underwriting or claim settlement scenario, the original image may be a medical invoice image, an expense list image, an identity card image or a bank card image related to health insurance underwriting or claim settlement, obtained by a web crawler or in any other manner; the obtained original image, together with the enhanced image obtained by data enhancement of the original image, is used as the training set to train the image classification model, which then performs type recognition on images to be classified related to health insurance underwriting or claim settlement, so as to obtain the classification results of the images to be classified.
Specifically, after the image classification model is obtained by training the original image and the enhanced image, the image to be classified collected and uploaded by the user can be further obtained, the image to be classified can be a related image generated by the user in the health insurance underwriting or claim settlement process, and the image to be classified is input into the image classification model, so that a classification result corresponding to the image to be classified can be obtained.
Optionally, the classification result of the images to be classified may be the images sorted according to a category order. For example, the images input to the image classification model may include a medical invoice image, an expense list image, an identification card image, and a bank card image. After being input into the image classification model, the images can be sorted in the order of identity card, expense list, medical invoice, and bank card. It should be understood that the category order can be set according to actual needs, and the present invention is not limited thereto. The sorted images may be assembled into a document in any suitable form, such as a PDF, for viewing by the relevant personnel.
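Sorting the classified images by a configurable category order, as described above, might look like the following sketch (names assumed):

```python
# Configurable category order; this default mirrors the example in the text.
CATEGORY_ORDER = ["identity card", "expense list", "medical invoice", "bank card"]

def sort_by_category(labelled_images, order=CATEGORY_ORDER):
    """Sort (category, image) pairs by the given category order, e.g.
    before assembling them into a single PDF for review."""
    rank = {name: i for i, name in enumerate(order)}
    return sorted(labelled_images, key=lambda pair: rank[pair[0]])
```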
For example, when the image to be classified input into the image classification model is a medical invoice image, the classification result obtained is the medical invoice category; specifically, when the image to be classified is a medical invoice image generated by a user in a health insurance underwriting or claim settlement process, it is input into the image classification model, which outputs the medical invoice category as the classification result. Likewise, when the image to be classified is an expense list image generated by a user in a health insurance underwriting or claim settlement process, the image classification model outputs the expense list category; when the image to be classified is an identity card image generated by a user in a health insurance underwriting or claim settlement process, the image classification model outputs the identity card category; and when the image to be classified is a bank card image generated by a user in a health insurance underwriting or claim settlement process, the image classification model outputs the bank card category.
In summary, the image data processing method provided in the embodiment of the present invention includes: acquiring an original image and a first retention proportion corresponding to the original image, wherein the retention proportion is the ratio of the information quantity of an enhanced image subjected to data enhancement to the information quantity of the original image; dividing an original image into a plurality of regions to be deleted with the same shape; and deleting information of part or all of the area to be deleted according to a second retention ratio of the difference value with the first retention ratio within a preset range to obtain an enhanced image. The method comprises the steps of firstly dividing an original image needing data enhancement into a plurality of regions to be deleted with the same shape, enabling the subsequent deletion operation of information according to a second retention ratio of a difference value with the first retention ratio within a preset range to be in different regions to be deleted, and accordingly ensuring that when the information of the original image is deleted, the deleted regions are uniformly dispersed in the original image, the possibility of deleting only background region information in the image is avoided, and meanwhile, the areas of the deleted regions and the rest regions in each region to be deleted can be adjusted by changing the retention ratio, so that the deleted information amount is controlled, and the accuracy of a prediction result of an image classification model obtained after training by using the original image and the enhanced image is ensured.
In addition, the position and the size of the region needing to be deleted in the original image during data enhancement can be flexibly changed by adjusting the first length and the second length in the original image, or adjusting the side length of the region to be deleted, the preset angle of rotation of a deletion frame in the region to be deleted, and the like, so that a certain randomness is ensured in the data enhancement process, the diversity of data enhancement on the original image is increased, more enhanced images are obtained for training a neural network model, and the accuracy of a prediction result of an image classification model obtained after the original image and the enhanced images are trained is further ensured.
Fig. 6 is a block diagram of an image data processing apparatus according to an embodiment of the present invention, and as shown in fig. 6, the apparatus may include:
a first obtaining module 301, configured to obtain an original image and a first retention ratio corresponding to the original image, where the retention ratio is a ratio of an information amount of an enhanced image after data enhancement to an information amount of the original image;
a dividing module 302, configured to divide the original image into a plurality of regions to be deleted, which have the same shape;
a cropping module 303, configured to delete information of a part or all of the region to be deleted according to a second retention ratio, where a difference between the first retention ratio and the second retention ratio is within a preset range, to obtain the enhanced image;
a first generating module 304, configured to train a neural network model using the original image and the enhanced image to obtain an image classification model;
the second generating module 305 is configured to obtain an image to be classified, and input the image to be classified into the image classification model to obtain a classification result of the image to be classified.
Optionally, the area to be deleted includes a deletion border, and the area corresponding to the deletion border in the area to be deleted is the deletion area; the cropping module 303 includes:
the first determining submodule is used for determining the side length of the deleted frame according to the second retention ratio and the side length of the to-be-deleted area under the condition that the deleted frame and the to-be-deleted area are of a square structure;
and the second determining submodule is used for determining a deleted area in the area to be deleted according to the side length of the deleted frame, and deleting information contained in a plurality of deleted areas from the original image to obtain the enhanced image.
Optionally, the second determining sub-module includes:
the rotating unit is used for rotating the deleting frame by a preset angle in the area to be deleted;
and a deleting unit configured to delete information included in an area corresponding to the rotated deletion border from the original image.
Optionally, the apparatus further comprises:
and the second acquisition module is used for acquiring the side length of the area to be deleted, wherein the side length of the area to be deleted is any length within a preset length range.
Optionally, the first determining sub-module includes:
a first determining unit, configured to determine, according to the second retention ratio, a ratio between an area of a deleted region corresponding to the deleted border in the original image and an area of the original image;
and the second determining unit is used for determining the side length of the deleted frame according to the ratio and the side length of the area to be deleted.
Optionally, the dividing module 302 includes:
the obtaining submodule is used for obtaining a first length and a second length which are preset aiming at the original image under the condition that the original image and the area to be deleted are in a rectangular structure;
a third determining sub-module, configured to determine a target region in the original image according to the first length and the second length, where a distance between the target region and one side of the original image is greater than the first length, and a distance between another side adjacent to the one side is greater than the second length;
and the dividing submodule is used for dividing the target area into a plurality of areas to be deleted with the same shape.
Optionally, the image to be classified includes: any one or more of a medical invoice image, an identification card image and a bank card image;
and the classification result of the images to be classified is the images sorted according to the category sequence.
In summary, an embodiment of the present invention provides an image data processing apparatus, including: acquiring an original image and a first retention proportion corresponding to the original image, wherein the retention proportion is the ratio of the information quantity of an enhanced image subjected to data enhancement to the information quantity of the original image; dividing an original image into a plurality of regions to be deleted with the same shape; and deleting information of part or all of the area to be deleted according to a second retention ratio of the difference value with the first retention ratio within a preset range to obtain an enhanced image. The method comprises the steps of firstly dividing an original image needing data enhancement into a plurality of regions to be deleted with the same shape, enabling information to be deleted in different regions to be deleted according to a second retention ratio of a difference value with a first retention ratio within a preset range, and accordingly ensuring that when the information is deleted from the original image, the deleted regions are uniformly dispersed in the original image, the possibility of deleting only background region information in the image is avoided, and meanwhile, adjusting the areas of the deleted regions and the rest regions in each region to be deleted by changing the retention ratio, so that the deleted information amount is controlled, and the accuracy of a prediction result of an image classification model obtained after training by using the original image and the enhanced image is ensured.
For the above device embodiment, since it is basically similar to the method embodiment, the description is relatively simple, and for the relevant points, refer to the partial description of the method embodiment.
Preferably, an embodiment of the present invention further provides a computer device, which includes a processor, a memory, and a computer program stored in the memory and capable of running on the processor, where the computer program, when executed by the processor, implements each process of the data processing method for an image, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
The embodiment of the present invention further provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the computer program implements each process of the data processing method for an image, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here. The computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As is readily imaginable to the person skilled in the art: any combination of the above embodiments is possible, and thus any combination between the above embodiments is an embodiment of the present invention, but the present disclosure is not necessarily detailed herein for reasons of space.
The data processing methods for images provided herein are not inherently related to any particular computer, virtual system, or other apparatus. Various general purpose systems may also be used with the teachings herein. The structure required to construct a system incorporating aspects of the present invention will be apparent from the description above. Moreover, the present invention is not directed to any particular programming language. It is appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any descriptions of specific languages are provided above to disclose the best mode of the invention.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the invention and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that: that the invention as claimed requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device in an embodiment may be adaptively changed and disposed in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore they may be divided into a plurality of sub-modules or sub-units or sub-components. All of the features disclosed in this specification (including any accompanying claims, abstract and drawings), and all of the processes or elements of any method or apparatus so disclosed, may be combined in any combination, except combinations where at least some of such features and/or processes or elements are mutually exclusive. Each feature disclosed in this specification (including any accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Furthermore, those skilled in the art will appreciate that while some embodiments described herein include some features included in other embodiments, rather than other features, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or Digital Signal Processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the data processing method of an image according to an embodiment of the present invention. The present invention may also be embodied as apparatus or device programs (e.g., computer programs and computer program products) for performing a portion or all of the methods described herein. Such programs implementing the present invention may be stored on computer-readable media or may be in the form of one or more signals. Such a signal may be downloaded from an internet website or provided on a carrier signal or in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the unit claims enumerating several means, several of these means may be embodied by one and the same item of hardware. The usage of the words first, second and third, etcetera do not indicate any ordering. These words may be interpreted as names.

Claims (10)

1. A method of data processing of an image, the method comprising:
acquiring an original image and a first retention proportion corresponding to the original image, wherein the retention proportion is the ratio of the information content of an enhanced image subjected to data enhancement to the information content of the original image;
dividing the original image into a plurality of regions to be deleted with the same shape;
deleting information of part or all of the area to be deleted according to a second retention ratio of the difference value with the first retention ratio within a preset range to obtain the enhanced image;
training a neural network model by using the original image and the enhanced image to obtain an image classification model;
and acquiring an image to be classified, and inputting the image to be classified into the image classification model to obtain a classification result of the image to be classified.
2. The method according to claim 1, wherein the area to be deleted comprises a deletion border, and a corresponding area of the deletion border in the area to be deleted is a deletion area;
the step of deleting information of part or all of the area to be deleted according to a second retention ratio of the difference value with the first retention ratio within a preset range to obtain the enhanced image includes:
under the condition that the deleting frame and the area to be deleted are of a square structure, determining the side length of the deleting frame according to the second retention ratio and the side length of the area to be deleted;
and determining a deleted area in the area to be deleted according to the side length of the deleted frame, and deleting information contained in a plurality of deleted areas from the original image to obtain the enhanced image.
3. The method according to claim 2, wherein the step of deleting the information contained in the deleted area from the original image comprises:
in the area to be deleted, rotating the deletion frame by a preset angle;
and deleting information contained in the area corresponding to the rotated deletion border from the original image.
4. The method according to claim 2, wherein before the step of determining the side length of the deletion border according to the second retention ratio and the side length of the area to be deleted in a case where the deletion border and the area to be deleted have a square structure, the method further comprises:
and acquiring the side length of the area to be deleted, wherein the side length of the area to be deleted is any length in a preset length range.
5. The method according to claim 2, wherein in a case that the deletion border and the to-be-deleted area are in a square structure, the step of determining the side length of the deletion border according to the second retention ratio and the side length of the to-be-deleted area includes:
determining the ratio of the area of a deleted region corresponding to the deleted border in the original image to the area of the original image according to the second retention ratio;
and determining the side length of the deleted frame according to the ratio and the side length of the region to be deleted.
6. The method according to claim 1, wherein the step of dividing the original image into a plurality of regions to be deleted having the same shape comprises:
under the condition that the original image and the area to be deleted are in a rectangular structure, acquiring a first length and a second length which are preset aiming at the original image;
determining a target area in the original image according to the first length and the second length, wherein the distance between the target area and one side edge of the original image is larger than the first length, and the distance between the other side edge adjacent to the one side edge is larger than the second length;
and dividing the target area into a plurality of areas to be deleted with the same shape.
7. The method according to any one of claims 1 to 6, wherein the image to be classified comprises: any one or more of a medical invoice image, an identification card image and a bank card image;
and the classification result of the images to be classified is the images sorted according to the category sequence.
8. An image data processing apparatus, characterized in that the apparatus comprises:
a first acquisition module, configured to acquire an original image and a first retention ratio corresponding to the original image, the retention ratio being the ratio of the information content of an enhanced image obtained by data enhancement to the information content of the original image;
a dividing module, configured to divide the original image into a plurality of regions to be deleted having the same shape;
a cropping module, configured to delete information from some or all of the regions to be deleted according to a second retention ratio whose difference from the first retention ratio lies within a preset range, so as to obtain the enhanced image;
a first generation module, configured to train a neural network model with the original image and the enhanced image to obtain an image classification model;
and a second generation module, configured to acquire an image to be classified and input the image to be classified into the image classification model to obtain a classification result of the image to be classified.
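Putting the dividing and cropping modules together, the augmentation can be sketched end to end as a grid deletion. This is a simplified sketch under stated assumptions, not the patented method: it uses a fixed square grid, a zero fill value, and a square deletion box per cell sized from the retention ratio; the function name, grid size, and fill value are all illustrative choices not taken from the claims.

```python
import numpy as np

def grid_delete(image, retention, nx=4, ny=4, fill=0):
    """Delete a square inside each of nx * ny grid cells so that roughly
    `retention` of the image's information survives."""
    out = image.copy()
    h, w = image.shape[:2]
    ch, cw = h // ny, w // nx
    # Side of the deletion box inside each cell, sized so the box removes
    # (1 - retention) of the cell's area.
    side = int(min(ch, cw) * (1.0 - retention) ** 0.5)
    for i in range(ny):
        for j in range(nx):
            out[i * ch:i * ch + side, j * cw:j * cw + side] = fill
    return out
```

Both the original image and the enhanced copies produced this way would then be fed to the neural network training step described for the first generation module.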
9. A computer device, characterized in that the computer device comprises:
a memory for storing program instructions;
a processor, configured to call the program instructions stored in the memory and, according to the obtained program instructions, execute the steps of the image data processing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored on the computer-readable storage medium, and the computer program, when executed by a processor, implements the image data processing method according to any one of claims 1 to 7.
CN202011381293.4A 2020-11-30 2020-11-30 Image data processing method and device Pending CN112801857A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011381293.4A CN112801857A (en) 2020-11-30 2020-11-30 Image data processing method and device

Publications (1)

Publication Number Publication Date
CN112801857A true CN112801857A (en) 2021-05-14

Family

ID=75806127

Country Status (1)

Country Link
CN (1) CN112801857A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102902956A (en) * 2012-09-10 2013-01-30 中国人民解放军理工大学气象学院 Ground-based visible cloud image recognition processing method
CN108460764A (en) * 2018-03-31 2018-08-28 华南理工大学 The ultrasonoscopy intelligent scissor method enhanced based on automatic context and data
CN109753978A (en) * 2017-11-01 2019-05-14 腾讯科技(深圳)有限公司 Image classification method, device and computer readable storage medium
US20200184200A1 (en) * 2018-12-07 2020-06-11 Karl L. Wang Classification system
CN111275080A (en) * 2020-01-14 2020-06-12 腾讯科技(深圳)有限公司 Artificial intelligence-based image classification model training method, classification method and device
CN111368889A (en) * 2020-02-26 2020-07-03 腾讯科技(深圳)有限公司 Image processing method and device
CN111738316A (en) * 2020-06-10 2020-10-02 北京字节跳动网络技术有限公司 Image classification method and device for zero sample learning and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113744161A (en) * 2021-09-16 2021-12-03 北京顺势兄弟科技有限公司 Enhanced data acquisition method and device, data enhancement method and electronic equipment
CN113744161B (en) * 2021-09-16 2024-03-29 北京顺势兄弟科技有限公司 Enhanced data acquisition method and device, data enhancement method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination