CN113139924A - Image enhancement method, electronic device and storage medium - Google Patents

Image enhancement method, electronic device and storage medium

Info

Publication number
CN113139924A
Authority
CN
China
Prior art keywords
picture
generator
discriminator
image
enhancement method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110041231.7A
Other languages
Chinese (zh)
Inventor
李承政
秦豪
赵明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yogo Robot Co Ltd
Original Assignee
Shanghai Yogo Robot Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yogo Robot Co Ltd filed Critical Shanghai Yogo Robot Co Ltd
Priority to CN202110041231.7A
Publication of CN113139924A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • H04N23/951Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/646Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Abstract

The application relates to an image enhancement method, an electronic device and a storage medium. A generative adversarial network is used to generate usable infrared night vision pictures directly from color images, and the generated infrared night vision pictures are then mixed with the existing, actually acquired night vision data to serve as training data for an environment perception model. Because the generator's results become highly realistic and the discriminator's judgments become highly accurate, night vision data can be acquired automatically rather than by staging collection scenes, which solves the problem that the acquisition of infrared night vision images is difficult under the constraints of acquisition requirements and practical conditions.

Description

Image enhancement method, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image enhancement method, an electronic device, and a storage medium.
Background
During driving, a robot often needs a certain understanding of its surroundings, for example pedestrian detection and road segmentation on the collected images, to assist normal driving: when the robot detects a pedestrian, it can effectively avoid the obstacle, keep out of untraversable areas, and so on, and human-computer interaction functions such as voice prompts can be added on top of the pedestrian information. In daytime scenes, color image data can be collected at any time, which benefits the training of the environment perception model. At night, however, the surrounding environment can only be captured with a night vision camera, and night vision pictures are difficult to acquire: a completely dark or dimly lit scene is required, and pedestrians must be present. If such scenes are simply staged, the resulting data cannot meet the requirements of actual services, while large-scale on-site acquisition of infrared data is constrained by real environmental conditions. Although a color image can be converted directly into a gray-scale image, the resulting gray-scale image lacks the light supplementation and black-white contrast of an infrared night vision image, and such data may even be counterproductive. An efficient method for converting existing color images into night vision images is therefore needed.
Disclosure of Invention
In order to overcome the problems in the related art, the application provides an image enhancement method, an electronic device and a storage medium, solving the problem that infrared night vision images are difficult to acquire.
The technical solution adopted to solve the above technical problem is as follows: an image enhancement method comprising the steps of: step 1, acquiring an image through a first shooting device to obtain a database of the first picture; step 2, inputting the database of the first picture into a generative adversarial network; and step 3, converting the first picture into a corresponding second picture through the generative adversarial network.
Preferably, a second shooting device is used for image acquisition to obtain a plurality of third pictures; the generative adversarial network includes a generator for generating the second picture and a discriminator for discriminating whether a picture input to the generative adversarial network is the second picture or the third picture.
Preferably, before the first picture is converted into the corresponding second picture through the generative adversarial network, the method further includes the following steps: S301, mixing the second picture and the third picture together to obtain training data for an environment perception model; S302, fixing the generator unchanged and performing model training on the discriminator; S303, training the discriminator with the training data of the environment perception model so that the discriminator distinguishes the second picture from the third picture.
Preferably, after K training iterations of the discriminator, the method further comprises the following steps: S304, fixing the discriminator unchanged and performing model training on the generator; S305, updating the parameters of the generator at a fixed learning rate so that the generator generates the second picture; S306, comparing the second picture with the third picture to obtain comparison data; and S307, when the comparison data reaches a preset value, obtaining a trained generator.
Preferably, converting the first picture into the corresponding second picture through the generative adversarial network specifically includes the steps of: S308, inputting the database of the first picture into the trained generator; S309, generating the second picture through the generator to obtain enhancement data for the actual service data.
Preferably, the optimization function of the generative adversarial network is as follows:
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
wherein the generator has a model G(z), the discriminator has a model D(x), z is randomly input noise, and x represents a picture; p_data(x) represents the probability distribution of real images, and p_z(z) represents the prior distribution of the input noise from which images are generated. The generator model G(z) converts the randomly input noise z into a picture x, and D(x) outputs a probability value between 0 and 1 representing the likelihood that picture x is a real picture.
Preferably, the first picture is a color picture, and the second picture is a generated infrared night vision picture.
Preferably, the third picture is a real infrared night vision picture.
A second aspect of an embodiment of the present application provides an electronic device, including:
a memory; one or more processors; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for performing the methods described above.
A third aspect of the application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements a method as described above.
The application provides an image enhancement method, an electronic device and a storage medium. A generative adversarial network is used to generate usable infrared night vision pictures directly from color images, and the generated infrared night vision pictures are then mixed with the existing, actually acquired night vision data to serve as training data for an environment perception model. The generator's results become highly realistic while the discriminator's judgments become highly accurate, so night vision data can be acquired automatically rather than by staging collection scenes, solving the problem that the acquisition of infrared night vision images is difficult under the constraints of acquisition requirements and practical conditions.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The foregoing and other objects, features and advantages of the application will be apparent from the following more particular descriptions of exemplary embodiments of the application, as illustrated in the accompanying drawings wherein like reference numbers generally represent like parts throughout the exemplary embodiments of the application.
Fig. 1 is a schematic flowchart of an image enhancement method according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a model for generating a countermeasure network according to an embodiment of the present application;
FIG. 3 is another schematic flow chart diagram illustrating an image enhancement method according to an embodiment of the present application;
FIG. 4 is another schematic flow chart diagram illustrating an image enhancement method according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Detailed Description
Preferred embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in this application and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It should be understood that although the terms "first," "second," "third," etc. may be used herein to describe various information, these information should not be limited to these terms. These terms are only used to distinguish one type of information from another. For example, the first information may also be referred to as second information, and similarly, the second information may also be referred to as first information, without departing from the scope of the present application. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present application, "a plurality" means two or more unless specifically limited otherwise.
At present, during daytime driving the robot can acquire images through the first shooting device to realize detection and recognition functions, but effective data cannot be acquired in night scenes, so a night vision camera must be used for image capture; constrained by acquisition requirements and practical conditions, night vision images are more difficult to acquire than color images.
In view of the above problems, an embodiment of the application provides an image enhancement method, making the acquisition of infrared night vision images more reasonable and effective.
The technical solutions of the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a flowchart illustrating an image enhancement method according to a first embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
step S1, a first camera is used to capture an image, and a database of first pictures is obtained.
Specifically, the first photographing device is a color camera (in other embodiments, the first photographing device may be a camera, a video camera, or another image capturing device capable of capturing color pictures), and the first picture is a color picture. The color camera can capture images during the daytime, yielding a database of many color pictures, that is, the database of the first picture.
Step S2, inputting the database of the first picture into a generative adversarial network.
Referring to fig. 2, fig. 2 is a schematic diagram of the generative adversarial network model;
as shown in fig. 2, the generative adversarial network includes a generator and a discriminator. The generator needs to generate pictures that are as realistic as possible, while the discriminator aims to determine whether a picture is a real picture or a generated one. The two compete with, and thereby promote, each other in a game, until finally the generator's results are highly realistic and the discriminator's judgments are highly accurate.
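As an illustration of this setup, the following is a minimal PyTorch sketch of a generator/discriminator pair for the color-to-infrared conversion described here; the layer shapes, channel counts, and class names are assumptions for illustration, not the patent's actual architecture:

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Maps a 3-channel color picture to a 1-channel night-vision-style picture."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Outputs a probability in (0, 1) that the input picture is real."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)
```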
Step S3, converting the first picture into a corresponding second picture through the generative adversarial network.
Specifically, the first picture is a color picture. In this embodiment, the color picture is used as the input of the generative adversarial network, and the style-conversion capability of the generative adversarial network is used to convert it into a corresponding infrared night vision picture; that is, the second picture is the infrared night vision picture generated from the color picture by the generator.
In this embodiment, usable infrared night vision pictures are generated directly from color images with the generative adversarial network, so night vision data can be acquired automatically rather than by staging collection scenes, solving the problem that the acquisition of infrared night vision pictures is difficult under the constraints of acquisition requirements and practical conditions.
Please refer to fig. 3 and fig. 4, which are further flowcharts of an image enhancement method according to a second embodiment of the present application. The method comprises the following steps:
referring to fig. 3, before the first picture is converted into the corresponding second picture through the generative countermeasure network, the method further includes the following steps:
s301, mixing the second picture and the third picture together to obtain training data of the environment perception model.
Specifically, image acquisition is performed through a second shooting device to obtain a plurality of third pictures. In this embodiment, the second shooting device is an infrared night vision camera used for image acquisition in night scenes, yielding a plurality of infrared night vision pictures; that is, the third pictures are real infrared night vision pictures.
The generator is used to generate the second picture, and the discriminator is used to determine whether a picture input to the generative adversarial network is the second picture or the third picture, i.e., whether it is a real infrared night vision picture or a generated one.
S302, fixing the generator unchanged, and performing model training on the discriminator;
s303, training the discriminator through the training data of the environment perception model, so that the discriminator distinguishes the second picture from the third picture.
During model training of the generative adversarial network, the generator and the discriminator are optimized iteratively. First, the generator is fixed; a number of real infrared night vision pictures and a number of generated infrared pictures are sampled and mixed to form the training data of the environment perception model, and the discriminator is trained on this data so that it better distinguishes real pictures from generated ones.
Specifically, the second picture is a new picture generated by the generative adversarial network based on the first picture. The second picture must be judged by the discriminator to determine whether it reaches the level of a real image. It should be noted that the discriminator actually outputs the probability that the second picture is a real image, which is then measured against a preset probability threshold (e.g., 70% or 80%, determined according to actual needs and experience). When the probability that the second picture is a real image is greater than the threshold, the second picture is determined to be a real image, indicating that the second picture generated by the generator is valid, and it is determined to be the intermediate image corresponding to the first picture.
Conversely, when the discriminator determines that the second picture is not a real image, the second picture generated by the generator is invalid and the generator needs to iterate on it. Specifically: the second picture is input into the generator's generation network again to produce a new second picture, and the discriminator judges whether the new second picture is a real image; when it is determined to be real, the iteration ends and the intermediate image is obtained, and when it is determined not to be real, the newly generated second picture is input into the generation network again and the iteration continues.
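A hedged sketch of one such discriminator update (steps S302/S303, with the mixed batch of S301), reusing the Generator and Discriminator sketched above; the binary cross-entropy loss and the optimizer are assumed choices, not specified by the patent:

```python
import torch
import torch.nn.functional as F

def train_discriminator_step(G, D, opt_D, color_batch, real_ir_batch):
    """One discriminator update: the generator is fixed (S302) and the
    discriminator learns to separate generated second pictures from
    real third pictures (S303)."""
    with torch.no_grad():            # generator fixed, no gradients into G
        fake_ir = G(color_batch)     # generated (second) pictures
    d_real = D(real_ir_batch)        # real (third) pictures
    d_fake = D(fake_ir)
    # The mixed real/generated batch plays the role of the training data in S301
    loss_D = (F.binary_cross_entropy(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy(d_fake, torch.zeros_like(d_fake)))
    opt_D.zero_grad()
    loss_D.backward()
    opt_D.step()
    return loss_D.item()
```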
Referring to fig. 4, after K training iterations of the discriminator, the method further includes the following steps:
s304, fixing the discriminator to be unchanged, and carrying out model training on the generator;
s305, updating the parameters of the generator through a fixed learning frequency, so that the generator generates the second picture;
s306, comparing the second picture with the third picture to obtain comparison data;
s307, when the comparison data reach a preset value, a trained generator is obtained.
Specifically, after the K iteration steps, the discriminator is fixed and the parameters of the generator are updated at a certain learning rate, so that the generator produces infrared night vision images as close to real images as possible, with the aim of deceiving the discriminator. After the above steps are repeated a number of times and the training loss gradually decreases, a generator whose outputs are distributed like real pictures is obtained.
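A matching sketch of one generator update (S304/S305); again the loss is an assumed choice, and the comparison of generated and real pictures against a preset value (S306/S307) is left to the surrounding training loop:

```python
import torch
import torch.nn.functional as F

def train_generator_step(G, D, opt_G, color_batch):
    """One generator update: the discriminator is fixed (S304) and the
    generator is pushed to produce pictures that D scores as real,
    i.e. to 'deceive the discriminator' (S305)."""
    fake_ir = G(color_batch)
    d_fake = D(fake_ir)              # D participates, but only G is updated
    loss_G = F.binary_cross_entropy(d_fake, torch.ones_like(d_fake))
    opt_G.zero_grad()                # opt_G holds only G's parameters
    loss_G.backward()
    opt_G.step()
    return loss_G.item()
```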
In one embodiment, converting the first picture into the corresponding second picture through the generative adversarial network specifically includes the following steps:
s308, inputting the database of the first picture into a trained generator;
s309, generating the second picture through the generator to obtain the enhanced data of the actual service data.
After the trained generator is obtained, it is used to generate, from the existing color pictures, infrared night vision pictures whose distribution is similar to that of real pictures, and these serve as enhancement data for the actual service data.
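In use, this amounts to a single forward pass over the color-picture database; a minimal sketch, where the data loader and device handling are assumed details:

```python
import torch

@torch.no_grad()
def generate_enhancement_data(G, color_loader):
    """Run the trained generator over the color-picture database to
    produce generated infrared night vision pictures as enhancement data."""
    G.eval()
    generated = [G(color_batch).cpu() for color_batch in color_loader]
    return torch.cat(generated)
```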
In one embodiment, the optimization function of the generative adversarial network is:
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
the model of the generator is G (z), the model of the discriminator is D (x), z is noise input randomly, x represents a picture and represents the probability distribution of a real image to generate the probability distribution of the image, and the model G (z) of the generator converts the noise z input randomly into the picture x and outputs a probability value between 0 and 1 to represent the possibility of the picture x being the real picture.
The optimization function of the generative adversarial network requires minimizing the generation error of the generator G while maximizing the discrimination probability of the discriminator D. In this embodiment, the picture x represents a color picture; the generator must convert the color picture into an infrared night vision picture as faithfully as possible, and the discriminator must distinguish real infrared night vision pictures from generated ones.
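For illustration, the value V(D, G) above can be estimated on a mini-batch as in the following sketch; the small epsilon is an added numerical-stability assumption:

```python
import torch

def gan_value(D, G, real_x, z):
    """Monte-Carlo estimate of
    V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))],
    which the discriminator maximizes and the generator minimizes.
    In this application, z is the input color picture."""
    eps = 1e-8  # avoid log(0)
    return (torch.log(D(real_x) + eps).mean()
            + torch.log(1.0 - D(G(z)) + eps).mean())
```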
In this embodiment, the optimization function of the generative adversarial network is established, and the discriminator and the generator are each trained so as to minimize the generation error of the generator G and maximize the discrimination probability of the discriminator D, yielding an optimized generative adversarial network. A number of color pictures are then used as the network's input, and its style-conversion capability converts them into corresponding infrared night vision pictures. The resulting generated infrared night vision pictures serve as enhancement data for the actual service data, so night vision data can be acquired automatically rather than by staging collection scenes, solving the problem that, constrained by acquisition requirements and actual conditions, infrared night vision images are difficult to acquire.
Fig. 5 is a schematic structural diagram of an electronic device shown in an embodiment of the present application.
Referring to fig. 5, the electronic device 400 includes a memory 410 and a processor 420.
The processor 420 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 410 may include various types of storage units, such as system memory, read-only memory (ROM), and permanent storage. The ROM may store static data or instructions needed by the processor 420 or other modules of the computer. The permanent storage device may be a readable and writable storage device, and may be a non-volatile storage device that does not lose stored instructions and data even after the computer is powered off. In some embodiments, a large-capacity storage device (e.g., a magnetic or optical disk, or flash memory) is employed as the permanent storage; in other embodiments, the permanent storage may be a removable storage device (e.g., a floppy disk or optical drive). The system memory may be a readable and writable memory device or a volatile readable and writable memory device, such as dynamic random access memory, and may store instructions and data that some or all of the processors require at runtime. Furthermore, the memory 410 may comprise any combination of computer-readable storage media, including various types of semiconductor memory chips (DRAM, SRAM, SDRAM, flash, programmable read-only memory) and magnetic and/or optical disks. In some embodiments, the memory 410 may include a readable and/or writable removable storage device, such as a compact disc (CD), a read-only digital versatile disc (e.g., DVD-ROM, dual-layer DVD-ROM), a read-only Blu-ray disc, an ultra-high-density optical disc, a flash memory card (e.g., SD card, mini SD card, Micro-SD card, etc.), a magnetic floppy disk, or the like. Computer-readable storage media do not contain carrier waves or transitory electronic signals transmitted by wireless or wired means.
The memory 410 has stored thereon executable code that, when processed by the processor 420, may cause the processor 420 to perform some or all of the methods described above.
The aspects of the present application have been described in detail hereinabove with reference to the accompanying drawings. In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments. Those skilled in the art should also appreciate that acts and modules referred to in the specification are not necessarily required in this application. In addition, it can be understood that the steps in the method according to the embodiment of the present application may be sequentially adjusted, combined, and deleted according to actual needs, and the modules in the device according to the embodiment of the present application may be combined, divided, and deleted according to actual needs.
Furthermore, the method according to the present application may also be implemented as a computer program or computer program product comprising computer program code instructions for carrying out part or all of the steps of the above-described method of the present application.
Alternatively, the present application may also be embodied as a non-transitory machine-readable storage medium (or computer-readable storage medium, or machine-readable storage medium) having stored thereon executable code (or a computer program, or computer instruction code) which, when executed by a processor of an electronic device (or electronic device, server, etc.), causes the processor to perform part or all of the various steps of the above-described method according to the present application.
Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the applications disclosed herein may be implemented as electronic hardware, computer software, or combinations of both.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems and methods according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Having described embodiments of the present application, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (10)

1. An image enhancement method, characterized by comprising the steps of:
step 1, acquiring an image through a first shooting device to obtain a database of a first picture;
step 2, inputting the database of the first picture into a generative adversarial network;
and step 3, converting the first picture into a corresponding second picture through the generative adversarial network.
2. The image enhancement method according to claim 1, further comprising:
acquiring images through a second shooting device to obtain a plurality of third pictures;
the generative adversarial network includes a generator for generating the second picture and a discriminator for discriminating whether a picture input to the generative adversarial network is the second picture or the third picture.
3. The image enhancement method according to claim 2, further comprising, before converting the first picture into a corresponding second picture through the generative adversarial network, the steps of:
s301, mixing the second picture and the third picture together to obtain training data of an environment perception model;
s302, fixing the generator unchanged, and performing model training on the discriminator;
s303, training the discriminator through the training data of the environment perception model, so that the discriminator distinguishes the second picture from the third picture.
4. The image enhancement method of claim 3, further comprising, after K steps of training iterations of the discriminator, the steps of:
s304, fixing the discriminator to be unchanged, and carrying out model training on the generator;
s305, updating the parameters of the generator through a fixed learning frequency to enable the generator to generate the second picture;
s306, comparing the second picture with the third picture to obtain comparison data;
s307, when the comparison data reach a preset value, a trained generator is obtained.
5. The image enhancement method according to claim 4, wherein converting the first picture into the corresponding second picture through the generative adversarial network specifically comprises the steps of:
s308, inputting the database of the first picture into a trained generator;
s309, generating the second picture through the generator to obtain the enhanced data of the actual service data.
6. The image enhancement method according to claim 2, wherein the optimization function of the generative adversarial network is:
\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{data}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]
wherein the generator has a model G(z), the discriminator has a model D(x), z is randomly input noise, and x represents a picture; p_data(x) represents the probability distribution of real images, and p_z(z) represents the prior distribution of the input noise from which images are generated; the generator model G(z) converts the randomly input noise z into a picture x, and D(x) outputs a probability value between 0 and 1 representing the likelihood that picture x is a real picture.
7. The image enhancement method according to claim 1, wherein the first picture is a color picture and the second picture is a generated infrared night vision picture.
8. The image enhancement method of claim 2 wherein the third picture is a real infrared night vision picture.
9. An electronic device, comprising: a memory; one or more processors; one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs comprising instructions for performing any of the methods of claims 1-8.
10. A storage medium storing a computer program which, when executed by a processor, implements the image enhancement method of any one of claims 1 to 8.
CN202110041231.7A 2021-01-13 2021-01-13 Image enhancement method, electronic device and storage medium Pending CN113139924A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110041231.7A CN113139924A (en) 2021-01-13 2021-01-13 Image enhancement method, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110041231.7A CN113139924A (en) 2021-01-13 2021-01-13 Image enhancement method, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN113139924A (en) 2021-07-20

Family

ID=76810276

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110041231.7A Pending CN113139924A (en) 2021-01-13 2021-01-13 Image enhancement method, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN113139924A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399440A (en) * 2022-01-13 2022-04-26 马上消费金融股份有限公司 Image processing method, image processing network training method and device and electronic equipment
CN114532919A (en) * 2022-01-26 2022-05-27 深圳市杉川机器人有限公司 Multi-mode target detection method and device, sweeper and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109614996A (en) * 2018-11-28 2019-04-12 桂林电子科技大学 The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image
CN111047546A (en) * 2019-11-28 2020-04-21 中国船舶重工集团公司第七一七研究所 Infrared image super-resolution reconstruction method and system and electronic equipment
US20200151508A1 (en) * 2018-11-09 2020-05-14 Adobe Inc. Digital Image Layout Training using Wireframe Rendering within a Generative Adversarial Network (GAN) System

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200151508A1 (en) * 2018-11-09 2020-05-14 Adobe Inc. Digital Image Layout Training using Wireframe Rendering within a Generative Adversarial Network (GAN) System
CN109614996A (en) * 2018-11-28 2019-04-12 桂林电子科技大学 The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image
CN111047546A (en) * 2019-11-28 2020-04-21 中国船舶重工集团公司第七一七研究所 Infrared image super-resolution reconstruction method and system and electronic equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114399440A (en) * 2022-01-13 2022-04-26 马上消费金融股份有限公司 Image processing method, image processing network training method and device and electronic equipment
CN114532919A (en) * 2022-01-26 2022-05-27 深圳市杉川机器人有限公司 Multi-mode target detection method and device, sweeper and storage medium

Similar Documents

Publication Publication Date Title
WO2020259118A1 (en) Method and device for image processing, method and device for training object detection model
Villalba et al. Smartphone image clustering
CN113643189A (en) Image denoising method, device and storage medium
CN111444744A (en) Living body detection method, living body detection device, and storage medium
WO2020082382A1 (en) Method and system of neural network object recognition for image processing
CN109977832B (en) Image processing method, device and storage medium
CN113139924A (en) Image enhancement method, electronic device and storage medium
CN110189354B (en) Image processing method, image processor, image processing apparatus, and medium
US20120189193A1 (en) Detection of objects represented in images
CN116452469B (en) Image defogging processing method and device based on deep learning
CN112580581A (en) Target detection method and device and electronic equipment
CN115358962B (en) End-to-end visual odometer method and device
CN115358952B (en) Image enhancement method, system, equipment and storage medium based on meta-learning
CN113705666B (en) Split network training method, use method, device, equipment and storage medium
TWI803243B (en) Method for expanding images, computer device and storage medium
CN114419102B (en) Multi-target tracking detection method based on frame difference time sequence motion information
US20220318954A1 (en) Real time machine learning-based privacy filter for removing reflective features from images and video
CN112733754A (en) Infrared night vision image pedestrian detection method, electronic device and storage medium
CN112036342A (en) Document snapshot method, device and computer storage medium
CN116057937A (en) Method and electronic device for detecting and removing artifacts/degradation in media
CN113139517B (en) Face living body model training method, face living body model detection method, storage medium and face living body model detection system
CN116597527B (en) Living body detection method, living body detection device, electronic equipment and computer readable storage medium
CN117671472B (en) Underwater multi-target group identification method based on dynamic visual sensor
CN116452962A (en) Underwater target detection method and device, training method and device and electronic equipment
CN114359059A (en) Method for eliminating video data distortion of automobile data recorder and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination