CN115601283B - Image enhancement method and device, computer equipment and computer readable storage medium - Google Patents

Image enhancement method and device, computer equipment and computer readable storage medium

Info

Publication number
CN115601283B
CN115601283B CN202211603412.5A CN202211603412A
Authority
CN
China
Prior art keywords
object instance
image
area
instance
enhanced
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211603412.5A
Other languages
Chinese (zh)
Other versions
CN115601283A (en)
Inventor
田倬韬
林一
易振彧
刘枢
吕江波
沈小勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Smartmore Technology Co Ltd
Original Assignee
Shenzhen Smartmore Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Smartmore Technology Co Ltd filed Critical Shenzhen Smartmore Technology Co Ltd
Priority to CN202211603412.5A priority Critical patent/CN115601283B/en
Publication of CN115601283A publication Critical patent/CN115601283A/en
Application granted granted Critical
Publication of CN115601283B publication Critical patent/CN115601283B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/60Analysis of geometric attributes
    • G06T7/62Analysis of geometric attributes of area, perimeter, diameter or volume
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image enhancement method, an image enhancement device, computer equipment and a computer readable storage medium. The method comprises the following steps: acquiring an image to be enhanced and an object instance set; respectively carrying out area difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set to obtain area difference items of each instance; carrying out shape difference calculation according to the reference position point of the original object instance, the reference position point of each target object instance, the area of the original object instance and the area of each target object instance to obtain shape difference items of each instance; determining a fusion object instance corresponding to the original object instance based on each instance area difference item and each instance shape difference item; and fusing the image to be enhanced and the fused object instance to obtain a target enhanced image. By adopting the method, the quality of the enhanced image can be effectively improved.

Description

Image enhancement method and device, computer equipment and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image enhancement method and apparatus, a computer device, and a computer-readable storage medium.
Background
With the development of computer technology, image processing technology based on intelligent algorithms has become increasingly mature. However, owing to the cost limits of time and money, situations of a small number of images, poor image quality and unbalanced categories may occur. Data enhancement technology can generate incremental data on the basis of the existing data, so that the value of the data is improved.
In the conventional technology, image enhancement is performed only by simple operations such as rotation, translation and intensity perturbation, and the quality of the generated enhanced image is poor.
Disclosure of Invention
In view of the foregoing, it is desirable to provide an image enhancement method, an image enhancement apparatus, a computer device, and a computer-readable storage medium, which can effectively improve the quality of an enhanced image.
In a first aspect, the present application provides an image enhancement method, including:
acquiring an image to be enhanced and an object instance set;
respectively carrying out area difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set to obtain area difference items of each instance;
performing shape difference calculation according to the reference position point of the original object instance, the reference position point of each target object instance, the area of the original object instance and the area of each target object instance to obtain shape difference items of each instance;
determining a fusion object instance corresponding to the original object instance based on each instance area difference item and each instance shape difference item;
and fusing the image to be enhanced and the fused object instance to obtain a target enhanced image.
In a second aspect, the present application provides an image enhancement apparatus comprising:
the acquisition module is used for acquiring an image to be enhanced and an object instance set;
the first calculation module is used for respectively carrying out area difference calculation according to the area of an original object instance of the image to be enhanced and the area of each target object instance in the object instance set to obtain each instance area difference item;
the second calculation module is used for carrying out shape difference calculation according to the reference position point of the original object instance, the reference position point of each target object instance, the area of the original object instance and the area of each target object instance to obtain shape difference items of each instance;
the determining module is used for determining a fusion object instance corresponding to the original object instance based on each instance area difference item and each instance shape difference item;
and the fusion module is used for fusing the image to be enhanced and the fusion object instance to obtain the target enhanced image.
In a third aspect, the present application provides a computer device, which includes a memory and a processor, wherein the memory stores a computer program, and the processor implements the steps of the image enhancement method when executing the computer program.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps in the image enhancement method described above.
According to the image enhancement method, the image enhancement device, the computer equipment and the computer readable storage medium, the fusion instance corresponding to the original object instance of the image to be enhanced is selected according to the morphological constraint method, and then the fusion instance and the image to be enhanced are fused to generate the target enhanced image, so that the generated enhanced image can effectively reserve the original semantic information between the image and the object instance, and the quality of the enhanced image is effectively improved.
Drawings
Fig. 1 is an application environment diagram of an image enhancement method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of an image enhancement method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of an example area difference item generation step according to an embodiment of the present disclosure;
FIG. 4 is a flowchart illustrating an example shape difference item generation step according to an embodiment of the present application;
FIG. 5 is a schematic flowchart of a process for generating an enhanced image according to an embodiment of the present application;
FIG. 6 is a schematic flowchart of a process for generating a target enhanced image according to an embodiment of the present disclosure;
fig. 7 is a schematic flowchart of a process for constructing a generative adversarial network according to an embodiment of the present application;
FIG. 8 is a schematic flowchart of determining a target enhanced image according to an embodiment of the present disclosure;
fig. 9 is a block diagram of an image enhancement apparatus according to an embodiment of the present application;
FIG. 10 is a schematic diagram of an embodiment of the present application for generating an enhanced image;
fig. 11 is a schematic diagram of the operation of a generative adversarial network according to an embodiment of the present application;
fig. 12 is an internal structural diagram of a computer device according to an embodiment of the present application;
fig. 13 is an internal structural diagram of a computer-readable storage medium according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The image enhancement method provided by the embodiment of the application can be applied to the application environment shown in fig. 1. As shown in fig. 1, a computer device 102 acquires an image to be enhanced and an object instance set; respectively carrying out area difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set to obtain area difference items of each instance; and performing shape difference calculation according to the reference position point of the original object example, the reference position point of each target object example, the area of the original object example and the area of each target object example to obtain each example shape difference item, determining a fusion object example corresponding to the original object example based on each example area difference item and each example shape difference item, and finally fusing the image to be enhanced and the fusion object example to obtain a target enhanced image. The computer device 102 may be, but not limited to, various personal computers, servers, notebook computers, smart phones, tablet computers, internet of things devices, and portable wearable devices, and the internet of things devices may be smart speakers, smart televisions, smart air conditioners, smart car-mounted devices, and the like.
In some embodiments, as shown in fig. 2, an image enhancement method is provided, which is illustrated by way of example as applied to the computer device 102 in fig. 1, and includes the following steps:
step S202, acquiring an image to be enhanced and an object instance set.
The image to be enhanced may or may not include an object instance, and may also include a plurality of object instances; the object instance set is all object instances contained in the whole image sample data, and the object instance is a pattern area with characteristics of a specific area, shape, saturation, depth value, texture feature and the like in the image sample data.
And step S204, area difference calculation is respectively carried out according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set, so as to obtain area difference items of each instance.
The image to be enhanced may include one or more object instances, the object instance set also includes an original object instance, the target object instance is another object instance in the object instance set except the original object instance, and the instance area difference term is used to represent a difference degree of the original object instance and the target object instance in area.
Specifically, the computer equipment obtains a position area of an original object example of the image to be enhanced in the image to be enhanced by identifying pattern information in the image to be enhanced; obtaining contour data of the original object instance by obtaining the position area, and further calculating according to the contour data to obtain an area value of the original object instance; and calculating the area values of all object instances in the object instance set in the same way, then performing area difference calculation on the area of each original object instance in the image to be enhanced and the area of each target object instance in the object instance set to obtain area difference items of each instance, and calculating the area difference items of the instances by adopting addition, subtraction, multiplication, division and other modes.
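As an illustration of this step, a minimal sketch of the area computation is given below, assuming each object instance is available as a binary mask and using OpenCV/NumPy as an example toolset; the function name `instance_area` and the choice of libraries are illustrative assumptions, not part of the claimed method.

```python
import cv2
import numpy as np

def instance_area(mask: np.ndarray) -> float:
    """Area of an object instance given its binary mask (illustrative sketch)."""
    # Contour-based route described above: extract contour data from the
    # instance region, then compute the enclosed area (OpenCV 4.x signature).
    contours, _ = cv2.findContours(
        mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE
    )
    if contours:
        return float(sum(cv2.contourArea(c) for c in contours))
    # Fallback: plain pixel count of the binary mask.
    return float(np.count_nonzero(mask))
```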
Step S206, carrying out shape difference calculation according to the reference position point of the original object instance, the reference position point of each target object instance, the area of the original object instance and the area of each target object instance to obtain each instance shape difference item.
The reference position point of the original object instance is positioned in the internal area of the original object instance and is used for representing the position information of the corresponding original object instance; similarly, the reference position point of the target object instance is positioned in the internal area of the target object instance and is used for representing the position information of the target object instance; the reference position point of the object instance may be a centroid point position or a geometric center point of the corresponding object instance, or may be a specific point artificially specified in the object instance as a reference position point according to needs, and a specific manner is not limited thereto, and the instance shape difference term is used to represent a difference degree of different object instances in shape.
Specifically, the computer device obtains reference position points of each original object instance and reference position points of each target object instance, aligns the reference position points of each original object instance with the reference position points of each target object instance in a preset mode, calculates the area of a non-overlapping area of the original object instance and the target object instance, and fuses the area of the non-overlapping area, the area of the original object instance and the area of the target object instance to obtain each instance shape difference item; the alignment method of the reference position point of the original object instance and the reference position point of the target object instance includes determining a corresponding circular ring area by taking the reference position point of the original object instance as a center of a circle and taking a preset threshold as a radius, and marking as an aligned state when the reference position point of the target object instance is at any point on the circular ring.
Step S208, determining a fusion object instance corresponding to the original object instance based on each instance area difference item and each instance shape difference item.
And the fusion object instance is an object instance to be copied and pasted in the image to be enhanced.
Specifically, the computer device obtains an area constraint threshold and a shape constraint threshold of the object instance, and when the value of the instance area difference item is smaller than the area constraint threshold, it indicates that the original object instance and the target object instance corresponding to the current instance area difference item satisfy the area constraint condition; similarly, when the value of the instance shape difference item is smaller than the shape constraint threshold, it indicates that the original object instance and the target object instance corresponding to the current instance shape difference item satisfy the shape constraint condition, and when the original object instance and the target object instance satisfy the area constraint condition and the shape constraint condition at the same time, the current corresponding target object instance is taken as the fusion object instance corresponding to the current original object instance.
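One way to realize this selection is sketched below, with the area and shape difference terms precomputed and passed in as nested lists; the function and parameter names are placeholders for the thresholds described above, not the definitive implementation.

```python
def select_fusion_instances(area_diff, shape_diff, area_threshold, shape_threshold):
    """Select fusion object instances from the candidate set (sketch).

    area_diff[i][j]  - instance area difference term between original instance i
                       and candidate target instance j (formula 1 below).
    shape_diff[i][j] - instance shape difference term between them (formula 2 below).
    Returns indices of candidates satisfying both constraints for at least one
    original instance.
    """
    num_orig = len(area_diff)
    num_cand = len(area_diff[0]) if num_orig else 0
    selected = []
    for j in range(num_cand):
        for i in range(num_orig):
            if area_diff[i][j] < area_threshold and shape_diff[i][j] < shape_threshold:
                selected.append(j)
                break  # this candidate already satisfies both constraints
    return selected
```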
And step S210, fusing the image to be enhanced and the fusion object instance to obtain a target enhanced image.
The target enhanced image is image data containing original object instances and a preset number of fusion object instances.
Specifically, the computer equipment copies and pastes the fusion object instances into the image to be enhanced, and when the number of the fusion object instances is larger than a preset threshold value, randomly selects and pastes a preset number of the fusion object instances into the image to be enhanced in each fusion object instance, so as to generate a target enhanced image; and when the number of the fusion object instances is smaller than or equal to a preset threshold value, copying and pasting each fusion object instance into the corresponding image to be enhanced, and further generating a target enhanced image.
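A minimal sketch of this copy-paste fusion is given below, assuming each fusion object instance is given as an image patch, a binary mask and a paste position; these input conventions and the NumPy-based implementation are assumptions for illustration.

```python
import numpy as np

def paste_instances(image, instance_patches, max_count):
    """Paste fusion object instances into the image to be enhanced (sketch).

    Each element of `instance_patches` is assumed to be a (patch, mask, (y, x))
    tuple giving the instance pixels, its binary mask and its paste position.
    """
    rng = np.random.default_rng()
    if len(instance_patches) > max_count:  # preset threshold on the instance count
        idx = rng.choice(len(instance_patches), size=max_count, replace=False)
        instance_patches = [instance_patches[i] for i in idx]

    enhanced = image.copy()
    for patch, mask, (y, x) in instance_patches:
        h, w = mask.shape
        region = enhanced[y:y + h, x:x + w]
        sel = mask.astype(bool)
        region[sel] = patch[sel]  # copy-paste the instance pixels under the mask
    return enhanced
```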
According to the image enhancement method, the fusion instance corresponding to the original object instance of the image to be enhanced is selected according to the morphological constraint method, and then the fusion instance and the image to be enhanced are fused to generate the target enhanced image, so that the generated enhanced image can effectively retain the original semantic information between the image and the object instance, and the quality of the enhanced image is effectively improved.
In some embodiments, as shown in fig. 3, the performing area difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set respectively to obtain each instance area difference item includes:
step S302, the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set are respectively formed into an area pair.
Specifically, the computer device sequentially matches each original object instance with each target object instance in order, and combines the respective area values into an area pair.
And step S304, respectively carrying out proportional calculation on the maximum value and the minimum value of each area pair to obtain area difference terms of each example.
Specifically, the computer device calculates a maximum value and a minimum value in the area pairs corresponding to each original object instance and each target object instance, and calculates an actual area difference term by taking a ratio of the maximum value to the minimum value as a corresponding instance area difference term, specifically according to a manner shown in the following formula 1:
s_area(a, b) = max(|M_a|, |M_b|) / min(|M_a|, |M_b|)   (formula 1)

wherein M_a and M_b denote the binary masks of the original object instance a and the target object instance b respectively, |M| denotes the area of the mask M, s_area(a, b) denotes the instance area difference term, max(|M_a|, |M_b|) denotes the area of the larger of the two object instances, and min(|M_a|, |M_b|) denotes the area of the smaller of the two object instances.
In this embodiment, the area of the original object instance and the area of each target object instance form an area pair, and the maximum value and the minimum value of the area pair are respectively subjected to proportional calculation to obtain each instance area difference term, so that the size of the constructed instance area difference term can directly reflect the area difference between the original object instance and the target object instance, when the ratio is larger, the difference between the areas of the original object instance and the target object instance is larger, and when the ratio is closer to 1, the difference between the areas of the original object instance and the target object instance is smaller, thereby effectively improving the efficiency of judging the area difference between the original object instance and the target object instance.
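For illustration, the instance area difference term of formula 1 can be computed directly from two binary masks, for example as in the following NumPy sketch (assuming non-empty masks; the function name is an illustrative assumption):

```python
import numpy as np

def area_difference_term(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Instance area difference term of formula 1: max(|Ma|, |Mb|) / min(|Ma|, |Mb|)."""
    area_a = float(np.count_nonzero(mask_a))  # |Ma|, area of the original object instance
    area_b = float(np.count_nonzero(mask_b))  # |Mb|, area of the target object instance
    return max(area_a, area_b) / min(area_a, area_b)
```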
In some embodiments, as shown in fig. 4, the reference position point is a centroid position, and the shape difference calculation is performed according to the reference position point of the original object instance, the reference position point of each target object instance, the area of the original object instance, and the area of each target object instance to obtain each instance shape difference term, including:
step S402, aiming at each target object instance, the centroid position of the original object instance and the centroid position of the target object instance are subjected to coincidence processing, and the area of the non-overlapping area of the original object instance and the target object instance is calculated.
Step S404, a first instance area is obtained, where the first instance area is a maximum value of the area of the original object instance and the area of the target object instance.
Step S406, a proportion calculation is carried out according to the area of the non-overlapping area and the area of the first example, and an example shape difference item is obtained.
Specifically, the computer device may calculate the example shape difference term in the manner shown in equation 2 below:
s_shape(a, b) = |M_a △ M_b| / max(|M_a|, |M_b|)   (equation 2)

wherein s_shape(a, b) denotes the instance shape difference term, a denotes the original object instance, b denotes the target object instance, |M_a △ M_b| denotes the area of the non-overlapping region (symmetric difference) of the original object instance and the target object instance after their centroid points are aligned, M_a and M_b denote the binary masks of the original object instance and the target object instance respectively, and max(|M_a|, |M_b|) denotes the area of the object instance having the larger area of the two object instances (i.e., the first instance area).
In this embodiment, the computer device overlaps the centroid position of the original object instance with the centroid position of the target object instance, calculates the area of the non-overlapping region between the original object instance and the target object instance, acquires the area of the first instance, and finally uses the ratio of the area of the non-overlapping region to the area of the first instance as the instance shape difference item, so that the size of the instance shape difference item constructed by the method can directly reflect the shape difference between the original object instance and the target object instance, the larger the ratio is, the larger the difference between the shapes of the original object instance and the target object instance is, the smaller the ratio is, the smaller the difference between the shapes of the original object instance and the target object instance is, and the efficiency of judging the shape difference between the original object instance and the target object instance is effectively improved.
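A sketch of this shape difference computation is given below; it assumes both binary masks live in arrays of the same spatial size and aligns the centroids by an integer pixel shift, which are simplifying assumptions for illustration.

```python
import numpy as np

def shape_difference_term(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Instance shape difference term of formula 2 (sketch)."""
    def centroid(mask):
        ys, xs = np.nonzero(mask)
        return ys.mean(), xs.mean()

    cy_a, cx_a = centroid(mask_a)
    cy_b, cx_b = centroid(mask_b)

    # Shift mask_b so that its centroid coincides with the centroid of mask_a.
    dy, dx = int(round(cy_a - cy_b)), int(round(cx_a - cx_b))
    shifted_b = np.zeros_like(mask_a, dtype=bool)
    ys, xs = np.nonzero(mask_b)
    ys, xs = ys + dy, xs + dx
    keep = (ys >= 0) & (ys < mask_a.shape[0]) & (xs >= 0) & (xs < mask_a.shape[1])
    shifted_b[ys[keep], xs[keep]] = True

    a = mask_a.astype(bool)
    non_overlap = np.logical_xor(a, shifted_b).sum()                 # non-overlapping area
    first_instance_area = max(a.sum(), mask_b.astype(bool).sum())    # larger of the two areas
    return float(non_overlap) / float(first_instance_area)
```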
In some embodiments, as shown in fig. 5, fusing the image to be enhanced and the fusion object instance to obtain a target enhanced image includes:
step S502, obtaining a target fusion distance.
The target fusion distance is used for representing the relative position relation between the fusion object instance and the original object instance in the target enhanced image, and the target fusion distance comprises a minimum preset distance threshold value and a maximum preset distance threshold value.
Specifically, the computer device obtains central point position information of a corresponding original object instance, and then determines a circular ring area corresponding to the current original object instance by taking the central point position information as a circle center and taking a minimum preset distance threshold and a maximum preset distance threshold in the target fusion distance as a radius, that is, a range of the target fusion distance is satisfied between any point in the circular ring area and the central point of the original object instance.
And step S504, determining the target fusion position of the fusion object instance according to the centroid position and the target fusion distance of the original object instance.
Specifically, after determining the circular ring region corresponding to the target fusion distance according to the method in step S502, the computer device uses the intersection region of the circular ring regions corresponding to each original object instance in the image to be enhanced as the target fusion position, and any point in the intersection region may be used as the target fusion position of the center point of the fusion object instance; when only a single original object instance exists in the image to be enhanced, a circular ring area corresponding to the single original object instance is used as a target fusion position, and any point in the circular ring area can be used as the target fusion position of the center point of the fusion object instance.
And S506, fusing the fusion object instance to the image to be enhanced according to the target fusion position to obtain a target enhanced image.
Specifically, the computer device may determine the target fusion location according to equation 3 below:
d_min ≤ dist(a, b) ≤ d_max   (equation 3)

wherein dist(a, b) denotes the distance between the centroid point of the original object instance a and the centroid point of the target object instance b, and d_min and d_max denote the minimum preset distance threshold and the maximum preset distance threshold respectively; the constraint therefore defines the circular ring region in which the center point of the fusion object instance may be placed.
In the embodiment, the distance constraint between the center point of the original object instance and the center point of the target object instance is set to determine the region in which the center points of the fusion object instances can be fused, so that the target fusion position is determined, and the fusion object instance is fused to the image to be enhanced according to the target fusion position to obtain the target enhanced image, so that the reasonability of the position layout of each original object instance and each fusion object instance in the enhanced image is effectively improved.
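One simple way to realize this position selection is rejection sampling over the image plane, as sketched below; the sampling strategy and the helper name are assumptions, and any method that returns a point inside the ring-region intersection of equation 3 would serve the same purpose.

```python
import numpy as np

def sample_fusion_position(original_centroids, d_min, d_max, image_shape,
                           n_trials: int = 1000, rng=None):
    """Sample a center-point position for a fusion object instance (sketch).

    The sampled point must lie in the ring region d_min <= dist <= d_max
    around every original instance centroid, following equation 3.
    """
    rng = rng or np.random.default_rng()
    h, w = image_shape[:2]
    for _ in range(n_trials):  # rejection sampling over candidate positions
        y, x = rng.uniform(0, h), rng.uniform(0, w)
        dists = [np.hypot(y - cy, x - cx) for cy, cx in original_centroids]
        if all(d_min <= d <= d_max for d in dists):
            return y, x
    return None  # no position satisfying the distance constraint was found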
In some embodiments, as shown in fig. 6, fusing an image to be enhanced and a fusion object instance to obtain a target enhanced image includes:
step S602, the image to be enhanced and the fusion object instance are fused to obtain a first enhanced image.
The first enhanced image is an enhanced image comprising an original object instance and a fusion object instance, but the characteristics of texture, color, saturation, depth value and the like of the fused fusion object instance are not optimized according to the corresponding original object instance.
Specifically, the computer device fuses the fusion object instance into the image to be enhanced according to the target fusion position determined in the previous step, and generates a first enhanced image.
Step S604, training with the first enhanced image as the input of the generator of the generative adversarial network to generate the target enhanced image.
The target enhanced image is an enhanced image comprising an original object instance and a fusion object instance, and the characteristics such as texture characteristics, color, saturation, depth value and the like of the fused fusion object instance are optimized according to the corresponding original object instance, so that the similarity between the characteristics such as texture characteristics, color, saturation, depth value and the like of the fusion object instance and the original object instance is greater than a preset threshold value.
Specifically, the computer device takes the first enhanced image as the input image data of the generator of the generative adversarial network, continuously generates new enhanced images through the generator, continuously performs true-or-false discrimination on the newly generated enhanced images through the discriminator of the generative adversarial network, and finally, through this iterative process, generates a target enhanced image meeting the preset image quality.
In this embodiment, the generative adversarial network is used to optimize the first enhanced image to generate the target enhanced image, so that the similarity between the texture features, color, saturation, depth value and the like of the fusion object instance and those of the original object instance is greater than the preset threshold, and the quality of the target enhanced image is effectively improved.
In some embodiments, as shown in fig. 7, before training with the first enhanced image as an input of the generator of the generative adversarial network to generate the target enhanced image, the method further includes:
in step S702, a preset number of standard image data are acquired.
The standard image data is image data with the same attribute characteristics of all object instances in the image.
Step S704, respectively inputting the preset number of standard image data into the discriminator of the generative adversarial network for processing, and outputting corresponding first discrimination results.
The first discrimination result is the output of the discriminator of the generative adversarial network for the standard image data, namely a score value measuring the authenticity of the corresponding image or a probability that the standard image data is a real image.
For example, the computer device randomly selects two standard image data from the training set as an anchor point x_a and a positive sample x_p, and inputs the anchor point x_a and the positive sample x_p respectively into the discriminator of the generative adversarial network to generate the first discrimination results D(x_a) and D(x_p).
Step S706, a second authentication result is obtained according to the generated image of the generator for generating the countermeasure network.
The second discrimination result is the output of the discriminator of the generative adversarial network for the image generated by the generator, namely a score value measuring the authenticity of the corresponding image or a probability that the generated image is a real image.
For example, the computer device obtains the first enhanced image K determined in the foregoing step, inputs the first enhanced image K into the generator of the generative adversarial network to obtain a generated image G(K), and then obtains an instance index matrix Y; the instance index matrix Y is used for representing the position information of the fusion object instance in the enhanced image. An optimized result K̂ = Y ⊙ G(K) + (1 − Y) ⊙ K is obtained by fusing the first enhanced image K, the generated image G(K) and the instance index matrix Y, and the optimized result K̂ is then used as the input of the discriminator to obtain the second discrimination result D(K̂).
Step S708, a discriminator loss function corresponding to the discriminator is constructed based on the first discrimination result and the second discrimination result.
Specifically, the computer device performs difference calculation according to the first discrimination results to obtain a first difference term, performs difference calculation according to the first discrimination result and the second discrimination result to obtain a second difference term, and generates the discriminator loss function by fusing the first difference term and the second difference term. The computer device may construct the discriminator loss function L_D in the manner shown in the following formula 4:

L_D = max(0, ||D(x_a) − D(x_p)||² − ||D(x_a) − D(K̂)||² + m)   (formula 4)

wherein D and G denote the discriminator and the generator respectively; Y denotes the binary mask of the fusion object instance (i.e., the instance index matrix, in which Y_ij = 1 indicates that the pixel at coordinate (i, j) in the enhanced image belongs to the fusion object instance); ⊙ denotes element-by-element multiplication; m is an edge-loss (margin) custom parameter; K denotes the enhanced image of the previous step; K̂ = Y ⊙ G(K) + (1 − Y) ⊙ K is the optimized result; D(x_a) and D(x_p) are the first discrimination results corresponding to the anchor point x_a and the positive sample x_p respectively; and D(K̂) is the second discrimination result.
Step S710, performing difference calculation based on the input image of the generator and the generated image of the generator to obtain a reconstruction loss function.
Wherein the reconstruction loss function is used to characterize a degree of difference between the first enhanced image and the corresponding generated image.
Specifically, the computer device obtains the input image of the generator (i.e., the first enhanced image K in the above step) and the generated image G(K) of the generator, and performs difference calculation on the input image of the generator and the generated image of the generator to obtain the reconstruction loss function. The reconstruction loss function L_rec may be constructed in the manner shown in the following formula 5:

L_rec = ||K − G(K)||   (formula 5)

wherein K is the first enhanced image, G(K) is the generated image corresponding to K, and ||·|| denotes a pixel-wise distance (for example, an L1 norm).
Step S712, a generator loss function corresponding to the generator is constructed and obtained according to the first identification result, the second identification result, and the reconstruction loss function.
Specifically, the computer device may construct the generator loss function L_G in the manner shown in equation 6 below:

L_G = ||D(x_a) − D(K̂)||² + λ · L_rec   (equation 6)

wherein the adversarial term ||D(x_a) − D(K̂)||² is constructed from the first discrimination result D(x_a) and the second discrimination result D(K̂), L_rec is the reconstruction loss function, and λ is a balance parameter for controlling the adversarial loss function and the reconstruction loss function.
Step S714, the generative adversarial network is constructed according to the discriminator loss function and the generator loss function.
In this embodiment, the first discrimination results and the second discrimination result are obtained from the standard image data and from the generated image of the generator respectively, the discriminator loss function is generated by fusing the difference terms calculated from the first and second discrimination results, the reconstruction loss function is obtained by difference calculation between the first enhanced image and the generated image of the generator, the generator loss function is generated by fusing the reconstruction loss function with the first and second discrimination results, and the generative adversarial network is constructed on the basis of the discriminator loss function and the generator loss function. As the network iterates, new enhanced images are continuously generated; after a newly generated enhanced image passes the discriminator's check, the current enhanced image is determined to be the optimized enhanced image. In this way, artifacts of the fusion object instance in the enhanced image can be effectively eliminated, the attribute features of the fusion object instance become closer to the attribute features of the original object instances in the enhanced image, and the quality of the enhanced image is effectively improved.
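For illustration, the two loss functions can be written as follows in PyTorch, assuming the triplet-style forms reconstructed in formulas 4 to 6 above; the framework choice, tensor conventions and the exact adversarial term are assumptions rather than the definitive implementation.

```python
import torch
import torch.nn.functional as F

def discriminator_loss(d, g, x_anchor, x_pos, k, y, margin: float = 1.0):
    """Triplet-style discriminator loss (formula 4, reconstructed form)."""
    with torch.no_grad():
        # Optimized result: paste G(K) only inside the fusion-instance mask Y.
        k_hat = y * g(k) + (1.0 - y) * k
    d_a, d_p, d_n = d(x_anchor), d(x_pos), d(k_hat)
    # Pull the two real samples together, push the generated sample away, with margin m.
    return F.relu((d_a - d_p).pow(2).mean() - (d_a - d_n).pow(2).mean() + margin)

def generator_loss(d, g, x_anchor, k, y, lam: float = 10.0):
    """Generator loss (formulas 5 and 6, reconstructed form)."""
    g_k = g(k)
    k_hat = y * g_k + (1.0 - y) * k
    adv = (d(x_anchor) - d(k_hat)).pow(2).mean()   # adversarial term from discriminator outputs
    rec = F.l1_loss(g_k, k)                        # reconstruction loss between G(K) and K
    return adv + lam * rec
```

In an actual training loop the discriminator and the generator would be updated alternately with these two losses until the generated enhanced image satisfies the preset image quality.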
In some embodiments, as shown in fig. 8, training with the first enhanced image as an input of the generator of the generative adversarial network to generate the target enhanced image includes:
step S802, calculating to obtain cosine similarity according to the feature map of the original object instance and the feature map of the fusion object instance, wherein the feature map is used for representing texture feature information of the original object instance or the fusion object instance.
Specifically, the computer device extracts attribute feature information (including texture features, colors, saturation, brightness, depth values and the like) of an original object instance in the enhanced image by adopting a 3x3 convolution operation to generate an original object instance feature map, similarly acquires attribute information (including texture features, colors, saturation, brightness, depth values and the like) of a fusion object instance in the enhanced image, generates a fusion object instance feature map, and calculates cosine similarity between the original object instance feature map and the fusion object instance feature map.
For example, the computer device may calculate the cosine similarity in a manner as shown in the following equation 7:
c_ij = ⟨F_i, E_j⟩ / (||F_i|| · ||E_j||)   (equation 7)

wherein c_ij denotes the cosine similarity between the i-th original object instance feature map and the j-th fusion object instance feature map in the enhanced image, and F_i and E_j denote the feature map of the i-th original object instance and the feature map of the j-th fusion object instance in the enhanced image respectively.
And step S804, fusing the feature graph of the original object example based on the cosine similarity to obtain a similarity feature.
Specifically, the computer device may calculate the similarity feature Ê_j in the manner shown in the following equation 8:

Ê_j = ( Σ_{i=1}^{N} c_ij · F_i ) / ( Σ_{i=1}^{N} c_ij )   (equation 8)

wherein N is the number of original object instances in the image to be enhanced, F_i is the feature map of the i-th original object instance, and c_ij is the cosine similarity between the i-th original object instance feature map and the j-th fusion object instance feature map in the enhanced image.
And step S806, fusing the similarity characteristic with the characteristic graph of the fusion object instance to obtain fusion characteristic information.
Step S808, inputting the fusion feature information into the decoder of the generator of the generative adversarial network for processing, and outputting the generated image.
Step S810, inputting the generated image into a discriminator for discrimination to generate a discrimination result.
In step S812, based on the discrimination result, the target enhanced image is determined.
Specifically, the computer device judges the identification result, judges that the current generated image is the target enhanced image when the truth degree corresponding to the generated image is greater than or equal to a preset threshold value, stops training for generating the countermeasure network, and continues to generate the countermeasure network through training to iteratively generate a new enhanced image if the truth degree of the generated image is less than the preset threshold value.
In this embodiment, an auxiliary encoder is arranged in the generator of the generative adversarial network: the cosine similarity between the original object instance feature map and the fusion object instance feature map is computed, fusion feature information is obtained by fusing the cosine similarity with the fusion object instance feature map, and finally the target enhanced image is determined based on the fusion feature information. In this way, the texture features of the original object instances in the image to be enhanced are encoded and then fused into the fusion object instance to complete its style conversion, so that artifacts in the original enhanced image can be effectively eliminated and the image quality of the enhanced image is improved.
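A compact sketch of this instance similarity encoder fusion (equations 7 and 8) is given below, using pooled per-instance feature vectors and a simple additive fusion; both simplifications, and the PyTorch implementation, are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def fuse_similarity_features(orig_feats, paste_feat, eps: float = 1e-8):
    """Instance similarity encoder fusion (equations 7 and 8, sketch).

    orig_feats: tensor of shape (N, C) - one feature vector per original instance
                (e.g., pooled from the 3x3-conv feature maps).
    paste_feat: tensor of shape (C,)   - feature vector of the fusion (pasted) instance.
    Returns the similarity feature fused with the paste-instance feature.
    """
    # Cosine similarity between each original-instance feature and the paste-instance feature.
    sims = F.cosine_similarity(orig_feats, paste_feat.unsqueeze(0), dim=1)   # shape (N,)
    # Similarity-weighted, normalized combination of the original-instance features (equation 8).
    weights = sims / (sims.sum() + eps)
    similarity_feature = (weights.unsqueeze(1) * orig_feats).sum(dim=0)      # shape (C,)
    # Fuse with the paste-instance feature, here by simple addition (an assumption).
    return similarity_feature + paste_feat
```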
The application also provides an application scenario in which the image enhancement method is applied, and the method is applied to a data enhancement scenario in surface defect segmentation detection, and specifically, the application of the image enhancement method to the application scenario is as follows:
the computer device collects all foreground instances in the training set (all image data samples) as a foreground instance repository. For a real image
Figure 175957DEST_PATH_IMAGE102
And selecting a certain number of foreground example samples from the foreground example warehouse to perform data enhancement on the image. For any selected foreground instance
Figure 699343DEST_PATH_IMAGE104
And original foreground instance
Figure 801291DEST_PATH_IMAGE106
The following morphological constraint method was used for the constraint:
max(|M_a|, |M_b|) / min(|M_a|, |M_b|) ≤ τ_area   (equation 9)

|M_a △ M_b| / max(|M_a|, |M_b|) ≤ τ_shape   (equation 10)

d_min ≤ dist(a, b) ≤ d_max   (equation 11)

wherein M_a and M_b denote the binary masks of the original foreground instance in the image and the foreground instance to be pasted respectively, |·| denotes the area of a mask, dist(a, b) denotes the distance between the center points of the two instances, max(|M_a|, |M_b|) denotes the area of the larger of the two instances, min(|M_a|, |M_b|) denotes the area of the smaller of the two instances (the division of the two is their area ratio), |M_a △ M_b| denotes the area of the non-overlapping region of the two instances after center-point alignment, and the thresholds τ_area, τ_shape, d_min and d_max are statistics obtained from the training set; the specific parameter settings of different data sets are different.
The preset number of instances to be pasted determined in the above steps are pasted into the image to be enhanced to generate an enhanced image, as shown in fig. 10. Because imaging conditions (e.g., light, environment, imaging equipment) differ between images, this method may produce obvious artifacts. Therefore, the enhanced image is input into the generative adversarial network for training: as shown in fig. 11, the generator performs style conversion on the instance to be pasted by learning the style information in the image to be enhanced, so as to generate a more realistic enhanced image, while the discriminator distinguishes the unrealistic enhanced images. Unlike existing methods based on generative adversarial networks, the optimized enhanced image has no real sample; therefore, a ternary (triplet) loss function is designed for training the discriminator. Specifically, two real images are randomly selected from the training set as an anchor point x_a and a positive sample x_p, and the sample K̂ produced by the generator serves as the negative sample. The discriminator ternary loss function is:

L_D = max(0, ||D(x_a) − D(x_p)||² − ||D(x_a) − D(K̂)||² + m)   (equation 12)

where D and G denote the discriminator and the generator respectively. Y denotes the binary mask of the foreground instance to be pasted, in which Y_ij = 1 indicates that the pixel at coordinate (i, j) belongs to the instance to be pasted, and ⊙ denotes element-by-element multiplication. The optimized result K̂ = Y ⊙ G(K) + (1 − Y) ⊙ K places the generated paste instance G(K) in the image to be pasted while keeping the other areas unchanged. m is an edge-loss (margin) custom parameter.
For the generator, the form of an auto-encoder is used, and the generator loss function is:

L_G = ||D(x_a) − D(K̂)||² + λ · ||K − G(K)||   (equation 13)

wherein λ is a balance parameter for controlling the adversarial loss function and the reconstruction loss function.
The above method can produce a more realistic enhanced image, but the optimized image may only learn the style of the background region, which occupies the larger part of the image area. To overcome this problem, an auxiliary encoder, named the instance similarity encoder, is designed in the generator; its function is to encode the texture features of the instances in the image to be enhanced and then fuse them into the deep latent space of the instance to be pasted. As shown in fig. 11, a 3 × 3 convolution operation is first adopted to extract instance information of the image to be enhanced, and then, for an instance to be pasted, the cosine similarity between the instance to be pasted and each instance in the image to be enhanced is calculated in a convolutional manner:

c_ij = ⟨F_i, E_j⟩ / (||F_i|| · ||E_j||)   (equation 14)

wherein F_i and E_j are the feature map of the i-th instance in the image to be enhanced and the feature map of the j-th instance to be pasted respectively. The features of the instance to be pasted are then replaced with the features from the instances in the original image, normalized by the similarity as follows:
Ê_j = ( Σ_{i=1}^{N} c_ij · F_i ) / ( Σ_{i=1}^{N} c_ij )   (equation 15)

wherein N is the number of original instances in the image to be enhanced. The encoded similarity features Ê_j are then fused with the features of the instance to be pasted, and the fused result is used as the decoder input of the generator to generate the final optimized image, so as to complete the data enhancement of the image.
The image enhancement method acquires an image to be enhanced and a foreground instance repository, performs area difference calculation according to the area of each original instance of the image to be enhanced and the area of each foreground instance in the foreground instance repository to obtain the area difference term of each instance, and performs shape difference calculation according to the center point of the original instance, the center point of the foreground instance, the area of the original instance and the area of the foreground instance to obtain the shape difference term of each instance. Whether a foreground instance meets the standard of an instance to be pasted is judged according to the differences in area and shape between the foreground instance and the original instance, so that the instances to be pasted are determined according to the morphological characteristics of the original instance. The image to be enhanced and the instances to be pasted are then fused to obtain an enhanced image; this enhanced image, which still contains artifacts, is used as the input for training the generative adversarial network model to obtain an optimized enhanced image. By arranging an auxiliary encoder, the texture features of the original instances are fused with the texture features of the instances to be pasted, thereby completing the style conversion of the pasted instances and effectively eliminating the artifacts in the enhanced image, so that the generated enhanced image effectively retains the original semantic information between the image and the object instances and the quality of the enhanced image is effectively improved.
It should be understood that, although the steps in the flowcharts related to the embodiments as described above are sequentially shown as indicated by arrows, the steps are not necessarily performed sequentially in the order indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least a part of the steps in the flowcharts according to the embodiments described above may include multiple steps or multiple stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of performing the steps or stages is not necessarily sequential, but may be performed alternately or alternately with other steps or at least a part of the steps or stages in other steps.
In some embodiments, as shown in fig. 9, there is provided an image enhancement apparatus, which may be a part of a computer device using a software module or a hardware module, or a combination of the two, the apparatus specifically includes:
an obtaining module 902, configured to obtain an image to be enhanced and an object instance set;
a first calculating module 904, configured to perform area difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set, to obtain an area difference item of each instance;
a second calculating module 906, configured to perform shape difference calculation according to the reference position point of the original object instance, the reference position point of each target object instance, the area of the original object instance, and the area of each target object instance, so as to obtain each instance shape difference item;
a determining module 908, configured to determine, based on each instance area difference item and each instance shape difference item, a fusion object instance corresponding to the original object instance;
and the fusion module 910 is configured to fuse the image to be enhanced and the fusion object instance to obtain a target enhanced image.
In some embodiments, in terms of performing difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set to obtain each instance area difference item, the first calculating module 904 is specifically configured to:
respectively combining the area of an original object instance of the image to be enhanced with the area of each target object instance in the object instance set to form an area pair;
and respectively carrying out proportional calculation on the maximum value and the minimum value of each area pair to obtain area difference terms of each example.
In some embodiments, the reference position point is a centroid position, and in terms of obtaining each instance shape difference term by performing shape difference calculation according to the reference position point of the original object instance, the reference position point of each target object instance, the area of the original object instance, and the area of each target object instance, the second calculation module 906 is specifically configured to:
for each target object instance, carrying out coincidence processing on the centroid position of the original object instance and the centroid position of the target object instance, and calculating the area of a non-overlapping region of the original object instance and the target object instance;
obtaining a first instance area, wherein the first instance area is the maximum value of the area of the original object instance and the area of the target object instance;
and carrying out proportional calculation according to the area of the non-overlapping region and the area of the first example to obtain an example shape difference item.
In some embodiments, in terms of fusing an image to be enhanced and a fusion object instance to obtain a target enhanced image, the fusion module 910 is specifically configured to:
obtaining a target fusion distance;
determining a target fusion position of the fusion object instance according to the centroid position and the target fusion distance of the original object instance;
and fusing the fusion object instance to the image to be enhanced according to the target fusion position to obtain a target enhanced image.
In some embodiments, in terms of fusing an image to be enhanced and a fusion object instance to obtain a target enhanced image, the fusion module 910 is specifically configured to:
fusing an image to be enhanced and a fusion object instance to obtain a first enhanced image;
the first enhanced image is used as an input of a generator for generating the countermeasure network, and a target enhanced image is generated through training.
In some embodiments, the fusion module 910 is further configured to:
acquiring a preset number of standard image data; respectively inputting a preset number of standard image data into a discriminator for generating a countermeasure network for processing, and outputting corresponding first discrimination results; obtaining a second authentication result according to a generated image of a generator generating the countermeasure network; constructing a discriminator loss function corresponding to the discriminator based on the first discrimination result and the second discrimination result; performing difference calculation based on the input image of the generator and the generated image of the generator to obtain a reconstruction loss function; constructing a generator loss function corresponding to the generator according to the first identification result, the second identification result and the reconstruction loss function; and constructing the generation countermeasure network according to the discriminator loss function and the generator loss function.
In some embodiments, in terms of using the first enhanced image as an input of the generator of the generative adversarial network and generating the target enhanced image through training, the fusion module 910 is specifically configured to:
calculating a cosine similarity according to the feature map of the original object instance and the feature map of the fusion object instance, wherein a feature map is used for representing texture feature information of the original object instance or of the fusion object instance;
fusing the cosine similarity with the feature map of the original object instance to obtain similarity features;
fusing the similarity features with the feature map of the fusion object instance to obtain fusion feature information;
inputting the fusion feature information into the decoder of the generator of the generative adversarial network for processing, and outputting a generated image;
inputting the generated image into the discriminator for discrimination, and generating a discrimination result;
and determining the target enhanced image based on the discrimination result.
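One possible reading of the cosine-similarity-guided feature fusion is sketched below. The exact fusion operators are not specified by the patent (it only states that the features are "fused"), so the element-wise product followed by an addition is an assumption:

```python
import torch
import torch.nn.functional as F

def fuse_features(original_feat: torch.Tensor, fusion_feat: torch.Tensor) -> torch.Tensor:
    """original_feat / fusion_feat: (B, C, H, W) feature maps carrying texture
    information of the original and fusion object instances."""
    # Per-position cosine similarity between the two feature maps along the channel axis.
    cos_sim = F.cosine_similarity(original_feat, fusion_feat, dim=1, eps=1e-8)  # (B, H, W)
    # Fuse the similarity with the original instance's feature map -> similarity features.
    similarity_feat = cos_sim.unsqueeze(1) * original_feat                      # (B, C, H, W)
    # Fuse the similarity features with the fusion instance's feature map.
    fused_feature_info = similarity_feat + fusion_feat
    return fused_feature_info  # passed on to the generator's decoder
```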
For specific limitations of the image enhancement device, reference may be made to the above limitations of the image enhancement method, which are not repeated here. The modules in the image enhancement device can be realized wholly or partially by software, by hardware, or by a combination thereof. The modules can be embedded in hardware form in, or be independent of, a processor in the computer device, or can be stored in a memory in the computer device in software form, so that the processor can call and execute the operations corresponding to the modules.
In some embodiments, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 12. The computer device includes a processor, a memory, an input/output interface (I/O for short), a communication interface, a display unit, and an input device. The processor, the memory and the input/output interface are connected through a system bus, and the communication interface, the input device and the display unit are connected to the system bus through the input/output interface. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The communication interface of the computer device is used for communicating with an external terminal in a wired or wireless manner; the wireless manner can be realized through WIFI, a mobile cellular network, NFC (near field communication) or other technologies. The computer program is executed by the processor to implement the steps in the image enhancement method described above. The display unit of the computer device can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer device can be a touch layer covering the display screen, a key, a track ball or a touch pad arranged on the housing of the computer device, or an external keyboard, touch pad or mouse.
Those skilled in the art will appreciate that the architecture shown in fig. 12 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computer devices to which the disclosed aspects apply; a particular computer device may include more or fewer components than those shown, combine certain components, or have a different arrangement of components.
In some embodiments, there is also provided a computer device comprising a memory storing a computer program and a processor that implements the steps of the above method embodiments when executing the computer program.
In some embodiments, a computer-readable storage medium 1300 is provided, on which a computer program 1302 is stored; the computer program 1302, when executed by a processor, implements the steps in the above-described method embodiments. An internal structure diagram of the storage medium may be as shown in fig. 13.
In some embodiments, a computer program product is provided, comprising computer instructions which, when executed by a processor, implement the steps of the above-described method embodiments.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium, and when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, databases, or other media used in the embodiments provided herein can include at least one of non-volatile and volatile memory. The non-volatile memory may include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetic random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, and the like. Volatile memory can include random access memory (RAM), external cache memory, and the like. By way of illustration and not limitation, RAM can take many forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The databases involved in the embodiments provided herein may include at least one of relational and non-relational databases. Non-relational databases may include, but are not limited to, blockchain-based distributed databases. The processors referred to in the embodiments provided herein may be general-purpose processors, central processing units, graphics processors, digital signal processors, programmable logic devices, quantum-computing-based data processing logic devices, and the like, without limitation.
For the sake of brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as there is no contradiction between them, such combinations should be considered as falling within the scope of the present disclosure.
The above examples express only several embodiments of the present application; their description is specific and detailed, but is not to be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and such variations and modifications fall within the scope of protection of the present application. Therefore, the protection scope of the present application shall be subject to the appended claims.

Claims (10)

1. An image enhancement method, comprising:
acquiring an image to be enhanced and an object instance set;
respectively carrying out area difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set to obtain each instance area difference item;
for each target object instance, aligning the centroid position of the original object instance with the centroid position of the target object instance, and calculating the area of the non-overlapping region of the original object instance and the target object instance;
obtaining a first instance area, wherein the first instance area is the maximum value of the area of the original object instance and the area of the target object instance;
calculating the ratio of the area of the non-overlapping region to the first instance area to obtain an instance shape difference item;
determining a fusion object instance corresponding to the original object instance based on the instance area difference items and the instance shape difference items;
and fusing the image to be enhanced and the fused object instance to obtain a target enhanced image.
2. The method according to claim 1, wherein the performing area difference calculation according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set respectively to obtain each instance area difference item comprises:
respectively combining the area of the original object instance of the image to be enhanced with the area of each target object instance in the object instance set to form an area pair;
and respectively calculating the ratio of the maximum value to the minimum value of each area pair to obtain each instance area difference item.
3. The method according to claim 1, wherein the fusing the image to be enhanced and the fused object instance to obtain a target enhanced image comprises:
obtaining a target fusion distance;
determining a target fusion position of the fusion object instance according to the centroid position of the original object instance and the target fusion distance;
and fusing the fusion object instance to the image to be enhanced according to the target fusion position to obtain a target enhanced image.
4. The method of claim 3, wherein the target fusion distance comprises a minimum preset distance threshold and a maximum preset distance threshold.
5. The method according to claim 1 or 2, wherein the fusing the image to be enhanced and the fused object instance to obtain a target enhanced image comprises:
fusing the image to be enhanced and the fused object instance to obtain a first enhanced image;
and taking the first enhanced image as an input of a generator of a generative adversarial network, and generating a target enhanced image through training.
6. The method of claim 5, wherein before the first enhanced image is used as the input of the generator of the generative adversarial network for training, the method further comprises:
acquiring a preset number of standard image data;
respectively inputting the preset number of standard image data into a discriminator of the generative adversarial network for processing, and outputting corresponding first discrimination results;
obtaining a second discrimination result according to a generated image of the generator of the generative adversarial network;
constructing a discriminator loss function corresponding to the discriminator based on the first discrimination results and the second discrimination result;
performing difference calculation based on the input image of the generator and the generated image of the generator to obtain a reconstruction loss function;
constructing a generator loss function corresponding to the generator according to the first discrimination results, the second discrimination result and the reconstruction loss function;
and constructing the generative adversarial network according to the discriminator loss function and the generator loss function.
7. The method of claim 6, wherein using the first enhanced image as the input of the generator of the generative adversarial network and generating the target enhanced image through training comprises:
calculating a cosine similarity according to the feature map of the original object instance and the feature map of the fusion object instance, wherein a feature map is used for representing texture feature information of the original object instance or of the fusion object instance;
fusing the cosine similarity with the feature map of the original object instance to obtain similarity features;
fusing the similarity features with the feature map of the fusion object instance to obtain fusion feature information;
inputting the fusion feature information into a decoder of the generator of the generative adversarial network for processing, and outputting the generated image;
inputting the generated image into the discriminator for discrimination, and generating a discrimination result;
and determining a target enhanced image based on the discrimination result.
8. An image enhancement apparatus, comprising:
the acquisition module is used for acquiring an image to be enhanced and an object instance set;
the first calculation module is used for respectively calculating the area difference according to the area of the original object instance of the image to be enhanced and the area of each target object instance in the object instance set to obtain each instance area difference item;
a second calculation module, configured to, for each target object instance, align the centroid position of the original object instance with the centroid position of the target object instance, and calculate the area of the non-overlapping region of the original object instance and the target object instance; obtain a first instance area, wherein the first instance area is the maximum value of the area of the original object instance and the area of the target object instance; and calculate the ratio of the area of the non-overlapping region to the first instance area to obtain an instance shape difference item;
a determining module, configured to determine, based on the instance area difference items and the instance shape difference items, a fusion object instance corresponding to the original object instance;
and the fusion module is used for fusing the image to be enhanced and the fusion object instance to obtain a target enhanced image.
9. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method of any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 7.
CN202211603412.5A 2022-12-14 2022-12-14 Image enhancement method and device, computer equipment and computer readable storage medium Active CN115601283B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211603412.5A CN115601283B (en) 2022-12-14 2022-12-14 Image enhancement method and device, computer equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211603412.5A CN115601283B (en) 2022-12-14 2022-12-14 Image enhancement method and device, computer equipment and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN115601283A (en) 2023-01-13
CN115601283B (en) 2023-04-14

Family

ID=84854206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211603412.5A Active CN115601283B (en) 2022-12-14 2022-12-14 Image enhancement method and device, computer equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN115601283B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116523775B (en) * 2023-04-14 2023-11-07 海的电子科技(苏州)有限公司 Enhancement optimization method and apparatus for high-speed image signal, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020014294A1 (en) * 2018-07-11 2020-01-16 Google Llc Learning to segment via cut-and-paste
WO2022121213A1 (en) * 2020-12-10 2022-06-16 深圳先进技术研究院 Gan-based contrast-agent-free medical image enhancement modeling method

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102190527B1 (en) * 2019-02-28 2020-12-14 현대모비스 주식회사 Apparatus and method for automatic synthesizing images
CN113298913A (en) * 2021-06-07 2021-08-24 Oppo广东移动通信有限公司 Data enhancement method and device, electronic equipment and readable storage medium
CN113486944A (en) * 2021-07-01 2021-10-08 深圳市英威诺科技有限公司 Face fusion method, device, equipment and storage medium
CN114863573B (en) * 2022-07-08 2022-09-23 东南大学 Category-level 6D attitude estimation method based on monocular RGB-D image


Also Published As

Publication number Publication date
CN115601283A (en) 2023-01-13

Similar Documents

Publication Publication Date Title
CN111814794A (en) Text detection method and device, electronic equipment and storage medium
CN114511576B (en) Image segmentation method and system of scale self-adaptive feature enhanced deep neural network
CN115601283B (en) Image enhancement method and device, computer equipment and computer readable storage medium
CN114898357B (en) Defect identification method and device, electronic equipment and computer readable storage medium
CN114780768A (en) Visual question-answering task processing method and system, electronic equipment and storage medium
CN116630630B (en) Semantic segmentation method, semantic segmentation device, computer equipment and computer readable storage medium
CN115953330B (en) Texture optimization method, device, equipment and storage medium for virtual scene image
CN116012626B (en) Material matching method, device, equipment and storage medium for building elevation image
CN116030466A (en) Image text information identification and processing method and device and computer equipment
US20220092448A1 (en) Method and system for providing annotation information for target data through hint-based machine learning model
Huang et al. DeeptransMap: a considerably deep transmission estimation network for single image dehazing
Wu et al. Salient object detection via reliable boundary seeds and saliency refinement
CN113989671A (en) Remote sensing scene classification method and system based on semantic perception and dynamic graph convolution
CN112667864A (en) Graph alignment method and device, electronic equipment and storage medium
CN115965856B (en) Image detection model construction method, device, computer equipment and storage medium
Wang et al. Boundary detection using unbiased sparseness‐constrained colour‐opponent response and superpixel contrast
CN116665157B (en) Road image processing method, device, computer equipment and storage medium
CN115761239B (en) Semantic segmentation method and related device
KR102569976B1 (en) Method for processing medical image
Yamada et al. Generative approaches for solving tangram puzzles
Yuan et al. Salient object contour extraction based on pixel scales and hierarchical convolutional network
Hu et al. Self-Supervised Segmentation for Terracotta Warrior Point Cloud (EGG-Net)
CN116881122A (en) Test case generation method, device, equipment, storage medium and program product
CN117975473A (en) Bill text detection model training and detection method, device, equipment and medium
CN116977394A (en) Video generation method, apparatus, device, storage medium, and computer program product

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant