CN114373204A - Image processing method and device, electronic device and storage medium - Google Patents

Image processing method and device, electronic device and storage medium

Info

Publication number
CN114373204A
CN114373204A (application CN202111564680.6A)
Authority
CN
China
Prior art keywords
image
infrared
images
visible light
loss value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111564680.6A
Other languages
Chinese (zh)
Inventor
周婷
刘威
袁淮
吕晋
王普佳
曹斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Neusoft Reach Automotive Technology Shenyang Co Ltd
Original Assignee
Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Neusoft Reach Automotive Technology Shenyang Co Ltd filed Critical Neusoft Reach Automotive Technology Shenyang Co Ltd
Priority to CN202111564680.6A
Publication of CN114373204A
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium. The method comprises the following steps: acquiring a plurality of reference infrared images; and converting a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model. By acquiring a plurality of reference infrared images and using them to process the visible light image, the visible light image is converted into an infrared image. Because images collected in different environments and by different infrared cameras differ, converting the same visible light image with different single reference infrared images would produce infrared images that differ greatly from one another, which is not the expected result; converting the visible light image with a plurality of reference infrared images together reduces the influence of environmental factors and infrared camera factors on the conversion effect, so that the converted infrared image is closer to a real infrared image.

Description

Image processing method and device, electronic device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium.
Background
At present, face recognition systems such as those in vehicles and access control must also work at night, so an infrared camera is needed to capture face images for face recognition. Training a face recognition model requires a large amount of varied image data, but infrared face data is difficult to obtain, and most of the face datasets currently available contain visible light face data. It is therefore necessary to obtain infrared face data from visible light face data.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a non-volatile computer readable storage medium.
The image processing method of the embodiments of the application comprises the steps of acquiring a plurality of reference infrared images; and converting a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
The image processing apparatus of the embodiments of the application comprises an acquisition module and a conversion module. The acquisition module is used for acquiring a plurality of reference infrared images; the conversion module is used for converting a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
The electronic device of the embodiments of the application comprises a processor, and the processor is used for acquiring a plurality of reference infrared images; and converting a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
The non-volatile computer-readable storage medium of the embodiments of the application contains a computer program that, when executed by one or more processors, causes the processors to perform the following image processing method: acquiring a plurality of reference infrared images; and converting a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
In the image processing method, the image processing apparatus, the electronic device and the non-volatile computer-readable storage medium of the embodiments of the application, a plurality of reference infrared images are acquired, and the visible light image is then processed with the plurality of reference infrared images based on the preset conversion model, so that the visible light image is converted into an infrared image.
Additional aspects and advantages of embodiments of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the present application.
Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart diagram of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic plan view of an electronic device of some embodiments of the present application;
FIGS. 4 and 5 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIGS. 6 and 7 are schematic views of a scene of an image processing method according to some embodiments of the present application;
FIGS. 8-11 are schematic flow diagrams of image processing methods according to certain embodiments of the present application;
FIG. 12 is a schematic diagram of a connection state of a non-volatile computer readable storage medium and a processor of some embodiments of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below by referring to the drawings are exemplary only for the purpose of explaining the embodiments of the present application, and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1, an embodiment of the present application provides an image processing method. The image processing method includes the steps of:
011: acquiring a plurality of reference infrared images; and
012: and converting the visible light image into the infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
Referring to fig. 2, an image processing apparatus 10 is provided in the present embodiment. The image processing apparatus 10 includes an acquisition module 11 and a conversion module 12. The obtaining module 11 and the converting module 12 are configured to perform step 011 and step 012, respectively. Namely, the obtaining module 11 is configured to obtain a plurality of reference infrared images; the conversion module 12 is configured to convert the visible light image into an infrared image according to the multiple reference infrared images and the visible light image based on a preset conversion model.
Referring to fig. 3, an electronic device 100 according to an embodiment of the present disclosure is further provided, where the electronic device 100 includes a processor 20, and the processor 20 is configured to obtain a plurality of reference infrared images; and converting the visible light image into the infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model. That is, step 011 and step 012 can be executed by processor 20.
The electronic device 100 may be a mobile phone, a tablet computer, a display device, a notebook computer, a desktop computer, or the like.
Specifically, when only a small number of infrared images can be acquired, visible light images can be converted into infrared images to make up for the shortage of infrared images, which prevents a deep model from being trained on an insufficient number of infrared images and ending up with poor detection performance.
In the image processing method, a plurality of reference infrared images are acquired first. The reference infrared images are all infrared images captured in real scenes, and the capture scenes of different reference infrared images, as well as the shooting parameters of the infrared cameras that captured them, may differ. This ensures that the infrared image obtained by subsequently processing the visible light image with the plurality of reference infrared images is less affected by environmental factors and by the characteristics of any single infrared camera, so that the converted infrared image is closer to a real infrared image.
Then, based on the preset conversion model, the processor 20 processes the visible light image according to the plurality of reference infrared images, thereby converting the visible light image into an infrared image. For example, using the conversion algorithm learned by the conversion model, the visible light image is converted once with each reference infrared image to obtain a plurality of intermediate infrared images, and the intermediate infrared images are then fused to generate the final infrared image, for example by averaging the pixel values at the same position in the intermediate infrared images.
In this way, the final infrared image is generated from the intermediate infrared images produced with the multiple reference infrared images, and because the reference infrared images can be captured by different infrared cameras in different scenes, the final infrared image is less affected by environmental factors and infrared camera factors.
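As an illustration of this inference flow, the following minimal sketch (not part of the patent; the `generator` callable and its interface are assumptions) converts one visible light image with each reference infrared image and fuses the intermediate results by averaging pixels at the same position:

```python
# Minimal sketch, assuming a trained conversion network exposed as
# `generator(visible_image, reference_infrared)` that returns an intermediate
# infrared image as a NumPy array of the same shape as its inputs.
import numpy as np

def convert_to_infrared(generator, visible_image, reference_infrared_images):
    # One intermediate infrared image per reference infrared image.
    intermediates = [generator(visible_image, ref) for ref in reference_infrared_images]
    # Fuse by averaging the pixel values at the same position in all intermediates.
    return np.mean(np.stack(intermediates, axis=0), axis=0)
```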
In the image processing method, the image processing apparatus 10 and the electronic device 100 of the embodiments of the application, a plurality of reference infrared images are acquired, and the plurality of reference infrared images are used to process the visible light image based on the preset conversion model, so that the visible light image is converted into an infrared image. Because images collected in different environments and by different infrared cameras differ, converting the same visible light image with different single reference infrared images would produce infrared images that differ greatly from one another, which is not the expected result. By referring to a plurality of reference infrared images together when converting the visible light image into the infrared image, the influence of environmental factors and infrared camera factors on the conversion effect is reduced, and the converted infrared image is closer to a real infrared image.
Referring to fig. 2, 3, 4 and 5, in some embodiments, the conversion model includes a generator and a discriminator, the training set includes a plurality of visible light sample images, and the image processing method further includes:
013: and training the conversion model according to a preset training set so as to make the conversion model converge.
Step 013 comprises:
0131: inputting a visible light sample image to a generator;
0132: outputting an infrared sample image according to the plurality of reference infrared images and the visible light sample image;
0133: inputting the infrared sample image to a discriminator to calculate a loss value; and
0134: the generator and the arbiter are adjusted according to the loss value so that the conversion model converges.
In some embodiments, the image processing apparatus 10 further comprises a training module 13, and the training module 13 is configured to perform steps 013, 0131, 0132, 0133, and 0134. That is, the training module 13 is configured to train the conversion model according to a preset training set, so that the conversion model converges. The training module 13 is further configured to input the visible light sample image to the generator; output an infrared sample image according to the plurality of reference infrared images and the visible light sample image; input the infrared sample image to the discriminator to calculate a loss value; and adjust the generator and the discriminator according to the loss value so as to make the conversion model converge.
In certain embodiments, the processor 20 is also configured to perform step 013, step 0131, step 0132, step 0133, and step 0134. That is, the processor 20 is configured to train the conversion model according to a preset training set, so that the conversion model converges. The processor 20 is also configured to input the visible light sample image to the generator; output an infrared sample image according to the plurality of reference infrared images and the visible light sample image; input the infrared sample image to the discriminator to calculate a loss value; and adjust the generator and the discriminator according to the loss value so as to make the conversion model converge.
Specifically, referring to fig. 6, the conversion model may be trained according to a preset training set until it converges. The training is as follows:
First, the conversion model is divided into two parts: a generator (G in fig. 6), which converts the visible light sample image (P1 in fig. 6) into an infrared sample image (P2 in fig. 6) based on the reference infrared images (P3 in fig. 6), and a discriminator (D in fig. 6), which judges whether the converted infrared sample image is a real infrared image or a pseudo infrared image. It can be understood that, because the infrared sample image is generated by image processing from the visible light sample image rather than captured directly, the discriminator judges its authenticity. When the converted infrared sample image is judged to be a real infrared image, the generator's conversion of the visible light sample image is good, while the judgment accuracy of the discriminator is low. Therefore, the generator and the discriminator can be trained simultaneously according to the judgment result of the discriminator, so that the image quality of the converted infrared sample image keeps improving and approaches a real infrared image, while the judgment accuracy of the discriminator also keeps improving, forming an adversarial network model.
The processor 20 calculates the loss value according to the discrimination result of the discriminator. For example, when the first probability that the infrared sample image is a real infrared image is 0.8 and the second probability that it is a pseudo infrared image is 0.2, the loss value can be calculated from a preset loss function, the first probability, the second probability and a preset probability. It can be understood that, since the infrared sample image is generated by the generator, the preset probability is (0, 1), whose two terms correspond to the first probability and the second probability, that is, the probability of being a real infrared image and the probability of being a pseudo infrared image, respectively.
After calculating the loss value, the processor 20 adjusts the generator and the discriminator according to the loss value, thereby causing the generator and the discriminator to converge. Therefore, the generator and the discriminator are jointly trained according to the discrimination result of the discriminator, and the training effect is good.
Of course, the discriminator (D in fig. 7) can also be trained separately on a preset infrared sample set until it converges. For example, an infrared sample image (P4 in fig. 7) from the infrared sample set is input to the discriminator, which outputs a first probability that the image is a real infrared image and a second probability that it is a pseudo infrared image. Whether each infrared sample image is real or pseudo is known in advance: the preset probability of a real infrared image is (1, 0) and that of a pseudo infrared image is (0, 1), where the first term corresponds to the first probability and the second term to the second probability. The discriminator is then adjusted according to a preset loss function, the first probability, the second probability and the preset probability, so that the discriminator converges.
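A minimal PyTorch sketch of such adversarial training is shown below. It is illustrative only: the `generator(visible, references)` and `discriminator(image)` modules and their interfaces are assumptions, class 0 stands for "real infrared image" and class 1 for "pseudo infrared image", and the standard alternating generator/discriminator split used here differs in detail from the patent's joint adjustment by a single loss value.

```python
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt,
               visible_sample, reference_infrareds, real_infrared):
    # Discriminator step: real infrared images are labelled class 0 (real),
    # generated infrared sample images are labelled class 1 (pseudo).
    fake = generator(visible_sample, reference_infrareds).detach()
    logits_real = discriminator(real_infrared)
    logits_fake = discriminator(fake)
    d_loss = (F.cross_entropy(logits_real,
                              torch.zeros(logits_real.size(0), dtype=torch.long,
                                          device=logits_real.device)) +
              F.cross_entropy(logits_fake,
                              torch.ones(logits_fake.size(0), dtype=torch.long,
                                         device=logits_fake.device)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator step: push the discriminator to judge the generated image as
    # "real" (class 0), so that over training its output tends toward 0.5/0.5.
    fake = generator(visible_sample, reference_infrareds)
    logits = discriminator(fake)
    g_loss = F.cross_entropy(logits,
                             torch.zeros(logits.size(0), dtype=torch.long,
                                         device=logits.device))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return d_loss.item(), g_loss.item()
```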
Referring to fig. 2, 3 and 8, in some embodiments, the loss value comprises a first loss value, and step 0133 further comprises:
01331: inputting the infrared sample image into a discriminator to output the image type of the infrared sample image and the probability corresponding to the image type;
01332: calculating a first loss value according to the image type of the infrared sample image, the probability corresponding to the image type and a preset probability;
step 0134 includes:
01341: the generator and the discriminator are adjusted according to the first loss value so that the conversion model converges.
In certain embodiments, training module 13 is further configured to perform steps 01331, 01332, and 01341. That is, the training module 13 is configured to input the infrared sample image to the discriminator to output an image type of the infrared sample image and a probability corresponding to the image type; calculating a first loss value according to the image type of the infrared sample image, the probability corresponding to the image type and a preset probability; the generator and the discriminator are adjusted according to the first loss value so that the conversion model converges.
In certain embodiments, processor 20 is further configured to perform steps 01331, 01332, and 01341. That is, the processor 20 is configured to input the infrared sample image to the discriminator to output an image type of the infrared sample image and a probability corresponding to the image type; calculating a first loss value according to the image type of the infrared sample image, the probability corresponding to the image type and a preset probability; the generator and the discriminator are adjusted according to the first loss value so that the conversion model converges.
Specifically, in calculating the loss value, the infrared sample image is first input to the discriminator, and the discriminator is able to output the image type of the infrared sample image and the probability corresponding to the image type.
The image type comprises a real infrared image and a pseudo infrared image, and the discriminator outputs the probability that the infrared sample image is a real infrared image and the probability that it is a pseudo infrared image; the two probabilities sum to 1. For example, for a given infrared sample image the discriminator may output a probability of 0.7 that it is a real infrared image and a probability of 0.3 that it is a pseudo infrared image.
Then, the processor 20 calculates the first loss value according to the image type, the probability corresponding to the image type, the preset probability and a preset loss function (e.g., a cross-entropy function). Since the infrared sample image is generated by the generator, it is necessarily a pseudo infrared image, so the preset probability is (0, 1), where the first term represents the probability that the infrared sample image is a real infrared image and the second term the probability that it is a pseudo infrared image.
The first loss value is used to adjust the generator and the discriminator, so that after convergence the infrared sample images generated by the generator are closer to real infrared images, and the accuracy with which the discriminator distinguishes real infrared images from pseudo infrared images keeps improving. It can be understood that, as the infrared sample image approaches a real infrared image, the probability with which the discriminator identifies it as a pseudo infrared image gradually decreases, until the discriminator can no longer tell whether it is pseudo or real, i.e. the probability it assigns to either type is 0.5. In this manner, the generator and the discriminator are trained against each other until convergence, making the infrared sample images generated by the generator more closely resemble real infrared images.
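As a concrete numerical illustration (assuming the preset loss function is the cross-entropy mentioned above; the 0.7/0.3 figures come from the example in the preceding paragraph):

```python
import math

p_real, p_pseudo = 0.7, 0.3   # probabilities output by the discriminator
target = (0.0, 1.0)           # preset probability: the sample is generated, hence pseudo
first_loss = -(target[0] * math.log(p_real) + target[1] * math.log(p_pseudo))
print(round(first_loss, 3))   # 1.204; tends toward -log(0.5) ~= 0.693 as p_pseudo -> 0.5
```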
Referring to fig. 2, 3 and 9, in some embodiments, step 01341 includes:
01342: and adjusting the weights of different reference infrared images in the generator and preset parameter values of the generator according to the first loss value.
In certain embodiments, training module 13 is also configured to perform step 01342. That is, the training module 13 is configured to adjust the weights of the different reference infrared images in the generator and the preset parameter values of the generator according to the first loss value.
In certain embodiments, processor 20 is also configured to perform step 01342. That is, the processor 20 is configured to adjust the weights of the different reference infrared images in the generator and the preset parameter values of the generator according to the first loss value.
Specifically, when the generator is adjusted according to the first loss value, the weights of the reference infrared images can be adjusted according to the first loss value. In the initial state, the weights of the plurality of reference infrared images (e.g. five of them) are equal, say 0.2 each. After the first loss value is calculated, the larger the first loss value, the larger the difference between the infrared sample image generated with the current weights and a real infrared image, and the more strongly the weights need to be adjusted; for example, the weights of two reference infrared images are changed to 0.1, the weights of another two are changed to 0.3, and the weight of the remaining reference infrared image is left unchanged. The infrared sample image is then generated again with the adjusted weights, and the weights are adjusted again according to the new first loss value, so that the accuracy of the generated infrared sample image keeps improving and the first loss value keeps decreasing, until the probability with which the discriminator judges a succession of infrared sample images to be real (or pseudo) infrared images stays at 0.5. In this way, the infrared sample images generated by the generator more closely approximate real infrared images.
In addition, at the beginning (that is, before the conversion model has been trained), the generator may generate the infrared sample image according to preset parameter values and the initial weights. When the generator is adjusted according to the first loss value, the preset parameter values and the weight of each reference infrared image are adjusted continuously, the infrared sample image is regenerated with the adjusted parameter values and weights, and the first loss value keeps decreasing, until the probability with which the discriminator judges a succession of infrared sample images to be real (or pseudo) infrared images is 0.5.
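One way to realize this, sketched below under the assumption that the framework is PyTorch and that a per-reference conversion backbone exists (neither is specified by the patent), is to keep the fusion weights of the reference infrared images as learnable parameters, so that back-propagating the first loss value adjusts them together with the generator's other parameter values:

```python
import torch
import torch.nn as nn

class WeightedFusionGenerator(nn.Module):
    """Hypothetical generator: one backbone pass per reference infrared image,
    fused with learnable per-reference weights (initially equal)."""

    def __init__(self, backbone, num_references):
        super().__init__()
        self.backbone = backbone  # backbone(visible, reference) -> tensor (C, H, W)
        self.weights = nn.Parameter(torch.full((num_references,), 1.0 / num_references))

    def forward(self, visible, references):
        intermediates = torch.stack(
            [self.backbone(visible, ref) for ref in references], dim=0)  # (R, C, H, W)
        # Softmax is one possible way to keep the weights positive and summing to 1.
        w = torch.softmax(self.weights, dim=0)
        return (w.view(-1, 1, 1, 1) * intermediates).sum(dim=0)
```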
Referring to fig. 2, 3 and 10, in some embodiments, the loss value further includes a second loss value, and step 0133 includes:
01335: performing face recognition on the infrared sample image, and calculating the similarity between the recognized face and a preset face of the visible light sample image;
01336: determining a second loss value according to the similarity;
01337: the generator is adjusted according to the second loss value such that the conversion model converges.
In certain embodiments, training module 13 is further configured to perform step 01335, step 01336, and step 01337. That is, the training module 13 is configured to perform face recognition on the infrared sample image, and calculate similarity between a recognized face and a preset face of the visible light sample image; determining a second loss value according to the similarity; the generator is adjusted according to the second loss value such that the conversion model converges.
In certain embodiments, processor 20 is also configured to perform step 01335, step 01336, and step 01337. That is, the processor 20 is configured to perform face recognition on the infrared sample image, and calculate a similarity between a recognized face and a preset face of the visible light sample image; determining a second loss value according to the similarity; the generator is adjusted according to the second loss value such that the conversion model converges.
Specifically, to ensure that the generated infrared sample image and the visible light sample image depict the same person, and thus to further ensure the accuracy of the generated infrared sample image, the processor 20 may perform face recognition on the infrared sample image, calculate the similarity between the recognized face and the preset face of the visible light sample image, and then calculate a second loss value from this similarity: the greater the similarity, the smaller the second loss value.
After the second loss value is determined, the generator may be adjusted again according to it. For example, the processor 20 may adjust the weights of the reference infrared images according to the second loss value. The larger the second loss value, the larger the difference between the face in the infrared sample image generated with the current weights and the preset face of the visible light sample image, and the more strongly the weights need to be adjusted; for example, the weights of two reference infrared images are changed to 0.1, the weights of another two are changed to 0.25, and the weight of the remaining reference infrared image is changed to 0.3. The infrared sample image is then generated again with the adjusted weights, and the weights are adjusted again according to the new second loss value, so that the face accuracy of the generated infrared sample image keeps improving and the second loss value keeps decreasing, until the similarity between the face in the infrared sample image and the face in the visible light image is sufficiently high (e.g. 90%). In this way, the generator is trained with the second loss value to ensure that the generated infrared sample image and the visible light sample image are images of the same person.
In other embodiments, after determining the first loss value and the second loss value, the conversion model is simultaneously adjusted according to the first loss value and the second loss value, so that the conversion model converges, thereby improving training efficiency.
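A sketch of this face-consistency term is given below, assuming a hypothetical `face_embedder` network that maps a face image to an embedding vector (the patent does not specify the recognizer). The second loss value falls as the similarity rises, and the two loss values can then be combined:

```python
import torch
import torch.nn.functional as F

def second_loss_value(face_embedder, infrared_sample, visible_sample):
    emb_ir = face_embedder(infrared_sample)    # embedding of the generated infrared face
    emb_vis = face_embedder(visible_sample)    # embedding of the preset visible light face
    similarity = F.cosine_similarity(emb_ir, emb_vis, dim=-1).mean()
    return 1.0 - similarity                    # greater similarity -> smaller loss

def total_loss(first_loss, second_loss, alpha=1.0):
    # Adjusting the conversion model with both loss values at once; the
    # weighting factor alpha is an assumption, not taken from the patent.
    return first_loss + alpha * second_loss
```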
Referring to fig. 2, 3 and 11, in some embodiments, step 012 includes:
0121: processing the visible light image according to each reference infrared image to generate an intermediate infrared image;
0122: and generating an infrared image according to the preset weight of each reference infrared image and the intermediate infrared image corresponding to each reference infrared image.
In certain embodiments, the conversion module 12 is further configured to perform step 0121 and step 0122. That is, the conversion module 12 is configured to process the visible light image according to each reference infrared image to generate an intermediate infrared image; and to generate the infrared image according to the preset weight of each reference infrared image and the intermediate infrared image corresponding to each reference infrared image.
In certain embodiments, the processor 20 is also configured to perform step 0121 and step 0122. That is, the processor 20 is configured to process the visible light image according to each reference infrared image to generate an intermediate infrared image; and to generate the infrared image according to the preset weight of each reference infrared image and the intermediate infrared image corresponding to each reference infrared image.
Specifically, when the visible light image is processed according to the plurality of reference infrared images to generate the infrared image, the processor 20 may process the visible light image according to each reference infrared image to generate a plurality of intermediate infrared images, and then fuse the plurality of intermediate infrared images according to the preset weight of each reference infrared image.
For example, suppose there are three reference infrared images: a first, a second and a third reference infrared image, with weights of 0.4, 0.3 and 0.3, respectively. The processor 20 processes the visible light image according to the first reference infrared image to generate a first intermediate infrared image, processes it according to the second reference infrared image to generate a second intermediate infrared image, and processes it according to the third reference infrared image to generate a third intermediate infrared image.
The processor 20 then combines the first, second and third intermediate infrared images according to the weights of the first, second and third reference infrared images to generate the infrared image. That is, infrared image = first intermediate infrared image × weight of the first reference infrared image + second intermediate infrared image × weight of the second reference infrared image + third intermediate infrared image × weight of the third reference infrared image, i.e. infrared image = first intermediate infrared image × 0.4 + second intermediate infrared image × 0.3 + third intermediate infrared image × 0.3.
More specifically, the pixel values of the pixels at the same position in the first, second and third intermediate infrared images are weighted and summed to obtain the pixel value of the pixel at that position in the infrared image. In this way, the visible light image is processed jointly with the plurality of reference infrared images to generate the infrared image, which reduces the influence of environmental factors and infrared camera factors on the conversion effect and makes the converted infrared image closer to a real infrared image.
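A minimal sketch of this weighted, per-pixel fusion for the three-image example above (assuming single-channel images stored as 2-D NumPy arrays; for multi-channel images the weight reshape would change accordingly):

```python
import numpy as np

def fuse_intermediates(intermediates, weights=(0.4, 0.3, 0.3)):
    # intermediates: three 2-D arrays of identical shape (H, W).
    stacked = np.stack(intermediates, axis=0).astype(np.float64)   # (3, H, W)
    w = np.asarray(weights, dtype=np.float64).reshape(-1, 1, 1)    # (3, 1, 1)
    # Pixel value of the fused image = weighted sum of the pixel values at the
    # same position in the intermediate infrared images.
    return (w * stacked).sum(axis=0)
```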
Referring to fig. 12, the present application also provides a non-volatile computer-readable storage medium 400 containing a computer program 401. The computer program 401, when executed by the one or more processors 500, causes the one or more processors 500 to perform the image processing method of any of the embodiments described above.
Referring to fig. 1, for example, the computer program 401, when executed by the one or more processors 500, causes the processors 500 to perform the following image processing method:
011: acquiring a plurality of reference infrared images; and
012: and converting the visible light image into the infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
Referring to fig. 4 and 5, for another example, the computer program 401, when executed by the one or more processors 500, causes the processors 500 to perform the following image processing method:
013: and training the conversion model according to a preset training set so as to make the conversion model converge.
Step 013 comprises:
0131: inputting a visible light sample image to a generator;
0132: outputting an infrared sample image according to the plurality of reference infrared images and the visible light sample image;
0133: inputting the infrared sample image to a discriminator to calculate a loss value; and
0134: the generator and the arbiter are adjusted according to the loss value so that the conversion model converges.
In the description herein, references to the description of "certain embodiments," "in one example," "exemplary," etc., mean that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and the scope of the preferred embodiments of the present application includes other implementations in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
Although embodiments of the present application have been shown and described above, it is to be understood that the above embodiments are exemplary and not to be construed as limiting the present application, and that changes, modifications, substitutions and alterations can be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (10)

1. An image processing method, comprising:
acquiring a plurality of reference infrared images; and
and converting a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
2. The image processing method according to claim 1, further comprising:
and training the conversion model according to a preset training set so as to make the conversion model converge.
3. The image processing method of claim 2, wherein the conversion model comprises a generator and a discriminator, the training set comprises a plurality of visible light sample images, and the training the conversion model according to a preset training set so that the conversion model converges comprises:
inputting the visible light sample image to the generator;
outputting an infrared sample image according to the plurality of reference infrared images and the visible light sample image;
inputting the infrared sample image to a discriminator to calculate a loss value; and
adjusting the generator and the discriminator according to the loss value so that the conversion model converges.
4. The image processing method according to claim 3, wherein the loss value includes a first loss value, and the inputting the infrared sample image to a discriminator to calculate a loss value includes:
inputting the infrared sample image into the discriminator to output the image type of the infrared sample image and the probability corresponding to the image type;
calculating the first loss value according to the image type of the infrared sample image, the probability corresponding to the image type and a preset probability;
the adjusting the generator and the discriminator according to the loss value to make the conversion model converge comprises:
adjusting the generator and the arbiter according to the first loss value to converge the conversion model.
5. The image processing method of claim 4, wherein said adjusting the generator according to the first loss value comprises:
and adjusting the weights of different reference infrared images in the generator and the preset parameter values of the generator according to the first loss value.
6. The image processing method according to claim 4, wherein the loss value further includes a second loss value, and the inputting the infrared sample image to a discriminator to calculate a loss value includes:
performing face recognition on the infrared sample image, and calculating the similarity between the recognized face and a preset face of the visible light sample image;
determining the second loss value according to the similarity;
adjusting the generator according to the second loss value to cause the conversion model to converge.
7. The method according to claim 4, wherein said converting the visible light image into an infrared image according to the plurality of reference infrared images and the visible light image comprises:
processing the visible light image according to each reference infrared image to generate an intermediate infrared image;
and generating the infrared image according to the preset weight of each reference infrared image and the intermediate infrared image corresponding to each reference infrared image.
8. An image processing apparatus characterized by comprising:
the acquisition module is used for acquiring a plurality of reference infrared images;
and the generating module is used for converting a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
9. An electronic device, comprising a processor configured to acquire a plurality of reference infrared images; and convert a visible light image into an infrared image according to the plurality of reference infrared images and the visible light image based on a preset conversion model.
10. A non-transitory computer-readable storage medium comprising a computer program which, when executed by a processor, causes the processor to perform the image processing method of any one of claims 1 to 7.
CN202111564680.6A 2021-12-20 2021-12-20 Image processing method and device, electronic device and storage medium Pending CN114373204A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111564680.6A CN114373204A (en) 2021-12-20 2021-12-20 Image processing method and device, electronic device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111564680.6A CN114373204A (en) 2021-12-20 2021-12-20 Image processing method and device, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN114373204A true CN114373204A (en) 2022-04-19

Family

ID=81141149

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111564680.6A Pending CN114373204A (en) 2021-12-20 2021-12-20 Image processing method and device, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN114373204A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination