CN112308094B - Image processing method and device, electronic equipment and storage medium - Google Patents

Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN112308094B
Authority
CN
China
Prior art keywords
channel data
illumination condition
image
training image
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011343355.2A
Other languages
Chinese (zh)
Other versions
CN112308094A (en)
Inventor
汤寅航
李锴莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ainnovation Chongqing Technology Co ltd
Original Assignee
Ainnovation Chongqing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ainnovation Chongqing Technology Co ltd filed Critical Ainnovation Chongqing Technology Co ltd
Priority to CN202011343355.2A priority Critical patent/CN112308094B/en
Publication of CN112308094A publication Critical patent/CN112308094A/en
Application granted granted Critical
Publication of CN112308094B publication Critical patent/CN112308094B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The application provides an image processing method, an image processing device, an electronic device and a storage medium. The method comprises the following steps: acquiring an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition; extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data; inputting the Y channel data into a neural network model to obtain a residual error result; obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition. According to the method and the device, the Y channel data of the image to be processed is corrected by using the pre-trained neural network model, and the image under the second illumination condition is obtained according to the corrected Y channel data, the Cb channel data and the Cr channel data, so that the appearance characteristics of SKUs (stock keeping units) in an abnormally illuminated image can be effectively recovered.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
For an intelligent container, an image acquisition device is usually installed inside the container to capture images of the goods and obtain a corresponding commodity distribution map. However, due to factors such as the lamp tube, the camera hardware and sunlight, the image acquired inside the container may be overexposed or over-dim, which reduces the accuracy of the subsequent detection and recognition algorithms and therefore lowers the order recognition accuracy of the intelligent container system.
Existing illumination processing algorithms have poor processing effects on overexposed or over-dim commodity distribution maps and are prone to altering the appearance of the commodities, so that different commodity distribution maps show similar colors.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an electronic device and a storage medium, so as to solve the problem in the prior art that images captured under overexposed or over-dim conditions are processed poorly.
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition; extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data; inputting the Y channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model; obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition.
According to the embodiment of the application, the Y channel data of the image to be processed is corrected by using the pre-trained neural network model, and the image under the second illumination condition is obtained according to the corrected Y channel data, the Cb channel data and the Cr channel data, so that the appearance characteristics of SKUs in an abnormally illuminated image can be effectively recovered.
Further, the obtaining, according to the Y channel data, the residual result, the Cb channel data, and the Cr channel data, an image under a second illumination condition corresponding to the image to be processed includes: adding Y channel data corresponding to the image to be processed and the residual error result according to the pixel point position to obtain corrected Y channel data; and converting the corrected Y channel data, the Cb channel data and the Cr channel data to obtain an image under the second illumination condition.
According to the embodiment of the application, the corrected Y channel data, the Cb channel data and the Cr channel data are converted to generate the image under the second illumination condition. Since the Y channel data represents the luminance component, correcting the Y channel data effectively solves the problem of low image quality caused by an overexposed or over-dim environment.
Further, the method further comprises: acquiring a training sample, wherein the training sample comprises a training image under a first illumination condition and a training image under a second illumination condition; extracting Y-channel data of the training image under the first illumination condition and Y-channel data of the training image under the second illumination condition; inputting Y-channel data of the training image under the first illumination condition into a neural network model to be trained, and obtaining a prediction residual error output by the neural network model to be trained; and optimizing parameters in the neural network model to be trained according to the prediction residual error of the Y-channel data of the training image under the first illumination condition and the Y-channel data of the training image under the second illumination condition to obtain the trained neural network model.
According to the embodiment of the application, the neural network model is trained by utilizing the Y-channel data under the first illumination condition and the Y-channel data under the second illumination condition, and the obtained trained neural network model is used for correcting the Y-channel data of the image to be processed well.
Further, the optimizing the parameters in the neural network model to be trained according to the prediction residual and the Y-channel data of the training image under the first illumination condition includes: performing linear addition on the Y-channel data of the training image under the first illumination condition and the prediction residual error to obtain corrected Y-channel data of the training image; calculating an L1 distance, structural similarity and a peak signal-to-noise ratio according to the corrected Y-channel data of the training image and the Y-channel data of the training image under the second illumination condition; calculating a loss function according to the corrected Y-channel data of the training image, the Y-channel data of the training image under the second illumination condition, the L1 distance, the structural similarity and the peak signal-to-noise ratio; and optimizing parameters in the neural network model to be trained by utilizing the loss function.
According to the embodiment of the application, the loss function is determined according to the L1 distance, the structural similarity and the peak signal-to-noise ratio between the Y-channel data under the first illumination condition and the Y-channel data under the second illumination condition, and the parameters in the neural network model are optimized according to the loss function, so that the obtained trained neural network model can be suitable for images acquired under various environments.
Further, the calculating a loss function according to the corrected Y-channel data of the training image, the Y-channel data of the training image under the second illumination condition, the L1 distance, the structural similarity, and the peak signal-to-noise ratio includes: calculating the loss function according to the formula loss = α·L1(I, G_Y) + β·SSIM(I, G_Y) + γ·PSNR(I, G_Y); wherein α, β and γ are hyper-parameters, I is the corrected Y-channel data of the training image, G_Y is the Y-channel data of the training image under the second illumination condition, L1(I, G_Y) is the L1 distance between the corrected Y-channel data of the training image and the Y-channel data of the training image under the second illumination condition, SSIM(I, G_Y) is the structural similarity between the two, and PSNR(I, G_Y) is the peak signal-to-noise ratio between the two.
Further, the obtaining training samples includes: obtaining a training image under a second illumination condition, performing gamma conversion on the training image under the second illumination condition to obtain a corresponding training image under a first illumination condition, and forming a pair of training images from the training image under the first illumination condition and the training image under the second illumination condition.
According to the embodiment of the application, the training image under the first illumination condition is obtained by performing gamma conversion on the training image under the second illumination condition, so the first illumination condition does not need to be physically created, which improves the efficiency of acquiring training images.
Further, the image to be processed is a cargo distribution map obtained by image acquisition of the cargo in the intelligent container.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: an image acquisition module, configured to acquire an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition; a data extraction module, configured to extract channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data; a data correction module, configured to input the Y channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model; and a data conversion module, configured to obtain, according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data, an image under a second illumination condition corresponding to the image to be processed; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a bus, wherein the processor and the memory communicate with each other through the bus; the memory stores program instructions executable by the processor, and the processor, by calling the program instructions, is capable of performing the method of the first aspect.
In a fourth aspect, an embodiment of the present application provides a non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of the first aspect.
Additional features and advantages of the present application will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the present application. The objectives and other advantages of the application may be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings that are required to be used in the embodiments of the present application will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present application and therefore should not be considered as limiting the scope, and that those skilled in the art can also obtain other related drawings based on the drawings without inventive efforts.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
FIG. 2 is a schematic flow chart illustrating a neural network model training method according to an embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
It can be understood that the model training method and the image processing method provided by the embodiment of the present application can be applied to a terminal device (also referred to as an electronic device) and a server; the terminal device may be a smart phone, a tablet computer, a Personal Digital Assistant (PDA), or the like; the server may specifically be an application server, and may also be a Web server. In addition, both the model training method and the image processing method can be executed by the same terminal device, and can also be executed by different terminal devices.
For convenience of understanding, in the technical solution provided in the embodiment of the present application, an electronic device is taken as an example to describe an application scenario of the model training method and the image processing method provided in the embodiment of the present application.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application, and as shown in fig. 1, the method includes:
step 101: acquiring an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition.
The image to be processed may be acquired by a camera in an intelligent container or may be an image obtained from a network; the source of the image to be processed is not specifically limited here. The first illumination condition refers to abnormal illumination, including overexposure, over-dimness and the like.
Step 102: Extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data.
YCbCr is a color space commonly used in video processing for motion pictures and in digital photography systems. Y is the luminance component, which represents the intensity of light; it is nonlinear and is encoded using gamma correction. Cb is the blue-difference chrominance component and Cr is the red-difference chrominance component. After the electronic equipment acquires the image to be processed, it extracts the channel data of the image to obtain the Y channel data, the Cb channel data and the Cr channel data. It is understood that the extraction of the channel data may adopt the prior art, and this is not specifically limited in this embodiment of the present application.
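For illustration only, the channel extraction of step 102 can be sketched in Python with OpenCV; the file name is a hypothetical placeholder, and note that OpenCV orders the planes as Y, Cr, Cb:

    import cv2

    # Hypothetical input: an 8-bit BGR image captured inside the container.
    img_bgr = cv2.imread("container_shelf.jpg")

    # OpenCV's YCrCb conversion puts Cr before Cb in the split order.
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)  # three H x W uint8 planes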
Step 103: Inputting the Y channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model.
After the Y channel data is obtained, it is input into a pre-trained neural network model. The neural network model may include a down-sampling module and an up-sampling module; for example, it may be an autoencoder or a U-Net model. Down-sampling enlarges the receptive field of the network and reduces the computational cost; because the resolution of the residual error result output by the neural network model must be the same as the resolution of the input image to be processed, the down-sampling is followed by a corresponding up-sampling. It will be appreciated that the residual error result is a feature map used to correct the Y channel data.
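The patent fixes no architecture beyond the down-sampling and up-sampling modules; the following PyTorch sketch is one possible minimal encoder-decoder (an assumption for illustration, not the patented model) that maps a single-channel Y plane to a residual map of the same resolution:

    import torch
    import torch.nn as nn

    class ResidualNet(nn.Module):
        """Minimal encoder-decoder: one stride-2 down-sampling stage to
        enlarge the receptive field, then up-sampling back to the input
        resolution (even H and W assumed for exact restoration)."""
        def __init__(self):
            super().__init__()
            self.down = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            )
            self.up = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),  # single-channel residual map
            )

        def forward(self, y):  # y: N x 1 x H x W, scaled to [0, 1]
            return self.up(self.down(y))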
Step 104: obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition.
In a specific implementation process, after the residual error result corresponding to the image to be processed is obtained, the Y channel data is corrected using the residual error result and then combined with the Cb channel data and the Cr channel data to obtain the image under the second illumination condition. Specifically: because the resolution of the residual map output by the neural network model is the same as that of the image to be processed, the Y channel data corresponding to the image to be processed and the residual error result can be added pixel by pixel to obtain the corrected Y channel data. That is, the Y channel value of the pixel in the first row and first column of the image is added to the value in the first row and first column of the residual error result, the Y channel value of the pixel in the first row and second column is added to the value in the first row and second column of the residual error result, and so on, until the Y channel data and the residual error result have been added for all pixels. The corrected Y channel data, the Cb channel data and the Cr channel data are then converted to obtain the image under the second illumination condition.
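Continuing the earlier sketch (reusing the y, cr and cb planes, and assuming the predicted residual has been brought to the same H x W size and to the 0-255 value range), the pixel-wise correction and the conversion back to a displayable image could be:

    import numpy as np

    # residual: H x W float32 map predicted by the network for this image.
    y_corrected = np.clip(y.astype(np.float32) + residual, 0, 255).astype(np.uint8)

    # Recombine the corrected luminance with the untouched chrominance planes,
    # then convert back to BGR: the image under the second illumination condition.
    out_bgr = cv2.cvtColor(cv2.merge([y_corrected, cr, cb]), cv2.COLOR_YCrCb2BGR)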
It is understood that the second lighting condition refers to a normal lighting condition, and thus, the lighting quality of the second lighting condition is better than the lighting quality of the first lighting condition.
According to the embodiment of the application, the Y channel data of the image to be processed is corrected by using the pre-trained neural network model, and the image under the second illumination condition is obtained according to the corrected Y channel data, the Cb channel data and the Cr channel data, so that the appearance characteristics of the SKUs (stock keeping units) in an abnormally illuminated commodity distribution map can be effectively recovered. It should be noted that the commodity distribution map may be an image of the goods obtained by imaging the interior of the intelligent container.
On the basis of the foregoing embodiments, an embodiment of the present application provides a neural network model training method, as shown in fig. 2, the method includes:
step 201: training samples are obtained, wherein the training samples comprise training images under a first lighting condition and training images under a second lighting condition.
In a specific implementation process, one training sample comprises two images: a training image under a first illumination condition and a training image under a second illumination condition. The Y channel data of the training image under the first illumination condition serves as the input of the neural network model to be trained, and the Y channel data of the training image under the second illumination condition serves as the ground-truth label. The training image under the first illumination condition can be obtained by setting up an overexposed or over-dim environment and capturing the goods in a container with an image acquisition device. The training image under the second illumination condition can be obtained by setting up a normal illumination environment and capturing the goods in the same container with the image acquisition device; the distribution of the goods is therefore the same in both training images. Alternatively, the training image under the first illumination condition can be obtained by applying a gamma conversion to the training image under the second illumination condition to simulate an overexposed or over-dim image.
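Such a gamma conversion is straightforward to sketch; the gamma values below are illustrative assumptions, not taken from the patent (gamma greater than 1 darkens a normally lit image toward an over-dim sample, gamma less than 1 brightens it toward an overexposed one):

    import numpy as np

    def gamma_transform(img, gamma):
        """Apply I_out = I_in ** gamma to a normalised 8-bit image."""
        norm = img.astype(np.float32) / 255.0
        return np.uint8(np.clip(norm ** gamma, 0.0, 1.0) * 255.0)

    # normal_img: a training image captured under the second (normal) condition.
    dark_sample = gamma_transform(normal_img, 2.2)     # simulated over-dim image
    bright_sample = gamma_transform(normal_img, 0.45)  # simulated overexposed image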
Step 202: Extracting Y-channel data of the training image under the first illumination condition and Y-channel data of the training image under the second illumination condition.
In a specific implementation process, the Y-channel data of the training image under the first illumination condition and the Y-channel data of the training image under the second illumination condition are extracted respectively. It may be understood that the method for extracting the Y-channel data may use the prior art, and this is not specifically limited in this embodiment of the present application.
Step 203: Inputting Y-channel data of the training image under the first illumination condition into a neural network model to be trained to obtain a prediction residual output by the neural network model to be trained.
The neural network model processes input Y-channel data of the training image under the first illumination condition, and outputs a prediction residual, where it can be understood that the resolution of the output prediction residual is the same as the resolution of the training image under the first illumination condition.
Step 204: Optimizing parameters in the neural network model to be trained according to the prediction residual of the Y-channel data of the training image under the first illumination condition and the Y-channel data of the training image under the second illumination condition to obtain the trained neural network model.
A loss value is calculated according to the prediction residual of the Y-channel data of the training image under the first illumination condition and the Y-channel data of the training image under the second illumination condition, and the loss value is then propagated back to the neural network model to be trained for parameter optimization, thereby obtaining the trained neural network model. It should be noted that there are multiple training samples, which may be divided into batches so that the neural network model is trained batch by batch: a batch of training samples is input into the model and the parameters are optimized once; another batch is then input into the optimized model and the parameters are optimized again; this iterates until the accuracy of the optimized neural network model meets the requirement.
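Put together, one training iteration per batch might look like the sketch below; train_loader and combined_loss (defined after the loss formula further down) are assumed helpers for illustration:

    import torch.optim as optim

    model = ResidualNet()  # the encoder-decoder sketch from step 103
    optimizer = optim.Adam(model.parameters(), lr=1e-4)

    for a_y, g_y in train_loader:             # N x 1 x H x W Y planes: degraded, normal
        prediction = model(a_y)               # predicted residual
        corrected = a_y + prediction          # I = A_Y + a
        loss = combined_loss(corrected, g_y)  # weighted L1 / SSIM / PSNR objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()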
On the basis of the above embodiment, the optimizing the parameters in the neural network model to be trained according to the prediction residual and the Y-channel data of the training image under the first illumination condition includes:
performing linear addition on the Y-channel data of the training image under the first illumination condition and the prediction residual error to obtain corrected Y-channel data of the training image;
calculating an L1 distance, structural similarity and a peak signal-to-noise ratio according to the corrected Y-channel data of the training image and the Y-channel data of the training image under the second illumination condition;
calculating a loss function according to the corrected Y-channel data of the training image, the Y-channel data of the training image under the second illumination condition, the L1 distance, the structural similarity and the peak signal-to-noise ratio;
and optimizing parameters in the neural network model to be trained by utilizing the loss function.
In a specific implementation process, after the prediction residual is obtained, since the resolution of the prediction residual is the same as the resolution of the training image under the first illumination condition, the prediction residual and the Y-channel data of the training image under the first illumination condition may be added according to the positions of the pixel points to obtain the corrected Y-channel data of the training image. The formula can be: I = A_Y + a, where I is the corrected Y-channel data of the training image, A_Y is the Y-channel data of the training image under the first illumination condition, and a is the prediction residual.
The loss function is calculated according to the formula loss = α·L1(I, G_Y) + β·SSIM(I, G_Y) + γ·PSNR(I, G_Y); wherein α, β and γ are hyper-parameters, I is the corrected Y-channel data of the training image, G_Y is the Y-channel data of the training image under the second illumination condition, L1(I, G_Y) is the L1 distance between the corrected Y-channel data of the training image and the Y-channel data of the training image under the second illumination condition, SSIM(I, G_Y) is the structural similarity between the two, and PSNR(I, G_Y) is the peak signal-to-noise ratio between the two.
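A sketch of this objective in PyTorch follows, with two stated assumptions: SSIM is computed with a single global window rather than the usual sliding Gaussian window, and β and γ default to negative values so that minimising the loss drives SSIM and PSNR upward (the patent leaves the hyper-parameter values open):

    import torch
    import torch.nn.functional as F

    def ssim_global(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        """Simplified single-window SSIM for tensors scaled to [0, 1]."""
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        num = (2 * mx * my + c1) * (2 * cov + c2)
        den = (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
        return num / den

    def psnr(x, y, max_val=1.0):
        return 10.0 * torch.log10(max_val ** 2 / F.mse_loss(x, y))

    def combined_loss(i, g_y, alpha=1.0, beta=-1.0, gamma=-0.01):
        # Patent formula: loss = alpha*L1(I, G_Y) + beta*SSIM(I, G_Y) + gamma*PSNR(I, G_Y)
        return (alpha * F.l1_loss(i, g_y)
                + beta * ssim_global(i, g_y)
                + gamma * psnr(i, g_y))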
According to the embodiment of the application, the neural network model is trained by utilizing the Y-channel data under the first illumination condition and the Y-channel data under the second illumination condition, and the obtained trained neural network model is used for correcting the Y-channel data of the image to be processed well.
Fig. 3 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, where the apparatus may be a module, a program segment, or code on an electronic device. It should be understood that the apparatus corresponds to the above-mentioned embodiment of the method of fig. 1, and can perform various steps related to the embodiment of the method of fig. 1, and the specific functions of the apparatus can be referred to the description above, and the detailed description is appropriately omitted here to avoid redundancy. The device comprises: an image acquisition module 301, a data extraction module 302, a data correction module 303, and a data conversion module 304, wherein:
the image acquisition module 301 is configured to acquire an image to be processed, where the image to be processed is an image acquired under a first illumination condition; the data extraction module 302 is configured to extract channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data; the data correction module 303 is configured to input the Y channel data into a pre-trained neural network model and obtain a residual error result output by the neural network model; the data conversion module 304 is configured to obtain, according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data, an image under a second illumination condition corresponding to the image to be processed; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition.
On the basis of the foregoing embodiment, the data conversion module 304 is specifically configured to:
adding Y channel data corresponding to the image to be processed and the residual error result according to the pixel point position to obtain corrected Y channel data;
and converting the corrected Y channel data, the Cb channel data and the Cr channel data to obtain an image under the second illumination condition.
On the basis of the above embodiment, the apparatus further includes a training module configured to:
acquiring a training sample, wherein the training sample comprises a training image under a first illumination condition and a training image under a second illumination condition;
extracting Y-channel data of the training image under the first illumination condition and Y-channel data of the training image under the second illumination condition;
inputting Y-channel data of the training image under the first illumination condition into a neural network model to be trained to obtain a prediction residual error output by the neural network model to be trained;
and optimizing parameters in the neural network model to be trained according to the prediction residual error of the Y-channel data of the training image under the first illumination condition and the Y-channel data of the training image under the second illumination condition to obtain the trained neural network model.
On the basis of the above embodiment, the training module is specifically configured to:
performing linear addition on the Y-channel data of the training image under the first illumination condition and the prediction residual error to obtain corrected Y-channel data of the training image;
calculating the L1 distance, the structural similarity and the peak signal-to-noise ratio according to the corrected Y-channel data of the training image and the Y-channel data of the training image under the second illumination condition;
calculating a loss function according to the corrected Y-channel data of the training image, the Y-channel data of the training image under the second illumination condition, the L1 distance, the structural similarity and the peak signal-to-noise ratio;
and optimizing parameters in the neural network model to be trained by utilizing the loss function.
On the basis of the above embodiment, the training module is specifically configured to:
calculating the loss function according to the formula loss = α·L1(I, G_Y) + β·SSIM(I, G_Y) + γ·PSNR(I, G_Y);
wherein α, β and γ are hyper-parameters, I is the corrected Y channel data of the training image, G_Y is the Y channel data of the training image under the second illumination condition, L1(I, G_Y) is the L1 distance between the corrected Y channel data of the training image and the Y channel data of the training image under the second illumination condition, SSIM(I, G_Y) is the structural similarity between the two, and PSNR(I, G_Y) is the peak signal-to-noise ratio between the two.
On the basis of the above embodiment, the training module is specifically configured to:
obtain a training image under a second illumination condition, perform gamma conversion on the training image under the second illumination condition to obtain a corresponding training image under a first illumination condition, and form a pair of training images from the training image under the first illumination condition and the training image under the second illumination condition.
On the basis of the above embodiment, the image to be processed is a distribution map of the cargo obtained by image acquisition of the cargo in the intelligent container.
Fig. 4 is a schematic structural diagram of an entity of an electronic device provided in an embodiment of the present application, and as shown in fig. 4, the electronic device includes: a processor (processor) 401, a memory (memory) 402, and a bus 403; wherein,
the processor 401 and the memory 402 complete communication with each other through the bus 403;
the processor 401 is configured to call the program instructions in the memory 402 to execute the methods provided by the above method embodiments, for example, including: acquiring an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition; extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data; inputting the Y-channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model; obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition.
The processor 401 may be an integrated circuit chip having signal processing capabilities. The processor 401 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. It may implement or perform the various methods, steps and logic blocks disclosed in the embodiments of the present application. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 402 may include, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The present embodiment discloses a computer program product comprising a computer program stored on a non-transitory computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the method provided by the above method embodiments, for example, including: acquiring an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition; extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data; inputting the Y-channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model; obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the above method embodiments, for example, including: acquiring an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition; extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data; inputting the Y-channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model; obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions when actually implemented, and for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of devices or units through some communication interfaces, and may be in an electrical, mechanical or other form.
In addition, units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. An image processing method, comprising:
acquiring an image to be processed, wherein the image to be processed is an image acquired under a first illumination condition;
extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data;
inputting the Y-channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model;
obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition;
obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data, and the Cr channel data, including:
adding Y channel data corresponding to the image to be processed and the residual error result according to the pixel point position to obtain corrected Y channel data;
converting the corrected Y channel data, the Cb channel data and the Cr channel data to obtain an image under the second illumination condition;
and the resolution of the residual error result is the same as the resolution of the image to be processed.
2. The method of claim 1, further comprising:
acquiring a training sample, wherein the training sample comprises a training image under a first illumination condition and a training image under a second illumination condition;
extracting Y-channel data of the training image under the first illumination condition and Y-channel data of the training image under the second illumination condition;
inputting Y-channel data of the training image under the first illumination condition into a neural network model to be trained, and obtaining a prediction residual error output by the neural network model to be trained;
and optimizing parameters in the neural network model to be trained according to the prediction residual error of the Y-channel data of the training image under the first illumination condition and the Y-channel data of the training image under the second illumination condition to obtain the trained neural network model.
3. The method of claim 2, wherein the optimizing parameters in the neural network model to be trained according to the prediction residual and the Y-channel data of the training image under the first lighting condition comprises:
performing linear addition on the Y-channel data of the training image under the first illumination condition and the prediction residual error to obtain corrected Y-channel data of the training image;
calculating an L1 distance, structural similarity and a peak signal-to-noise ratio according to the corrected Y-channel data of the training image and the Y-channel data of the training image under the second illumination condition;
calculating a loss function according to the corrected Y-channel data of the training image, the Y-channel data of the training image under the second illumination condition, the L1 distance, the structural similarity and the peak signal-to-noise ratio;
and optimizing parameters in the neural network model to be trained by utilizing the loss function.
4. The method of claim 3, wherein calculating a loss function from the corrected Y-channel data for the training image, the Y-channel data for the training image under the second illumination condition, the L1 distance, the structural similarity, and the peak signal-to-noise ratio comprises:
calculating the loss function according to the formula loss = α·L1(I, G_Y) + β·SSIM(I, G_Y) + γ·PSNR(I, G_Y);
wherein α, β and γ are hyper-parameters, I is the corrected Y-channel data of the training image, G_Y is the Y-channel data of the training image under the second illumination condition, L1(I, G_Y) is the L1 distance between the corrected Y-channel data of the training image and the Y-channel data of the training image under the second illumination condition, SSIM(I, G_Y) is the structural similarity between the two, and PSNR(I, G_Y) is the peak signal-to-noise ratio between the two.
5. The method of claim 2, wherein the obtaining training samples comprises:
the method comprises the steps of obtaining a training image under a second illumination condition, carrying out gamma conversion on the training image under the second illumination condition, obtaining a corresponding training image under a first illumination condition, and forming a pair of training images by the training image under the first illumination condition and the training image under the second illumination condition.
6. The method according to any one of claims 1 to 5, wherein the image to be processed is a distribution map of the cargo obtained by image acquisition of the cargo in the intelligent container.
7. An image processing apparatus characterized by comprising:
the device comprises an image acquisition module, a processing module and a processing module, wherein the image acquisition module is used for acquiring an image to be processed, and the image to be processed is an image acquired under a first illumination condition;
the data extraction module is used for extracting channel data of the image to be processed to obtain Y channel data, Cb channel data and Cr channel data;
the data correction module is used for inputting the Y-channel data into a pre-trained neural network model to obtain a residual error result output by the neural network model;
the data conversion module is used for obtaining an image under a second illumination condition corresponding to the image to be processed according to the Y channel data, the residual error result, the Cb channel data and the Cr channel data; wherein the illumination quality of the first illumination condition is lower than the illumination quality of the second illumination condition;
the data conversion module is specifically configured to: adding Y channel data corresponding to the image to be processed and the residual error result according to the pixel point position to obtain corrected Y channel data; converting the corrected Y channel data, the Cb channel data and the Cr channel data to obtain an image under the second illumination condition; and the resolution of the residual error result is the same as the resolution of the image to be processed.
8. An electronic device, comprising: a processor, a memory, and a bus, wherein,
the processor and the memory are communicated with each other through the bus;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1-6.
9. A non-transitory computer-readable storage medium storing computer instructions which, when executed by a computer, cause the computer to perform the method of any one of claims 1-6.
CN202011343355.2A 2020-11-25 2020-11-25 Image processing method and device, electronic equipment and storage medium Active CN112308094B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011343355.2A CN112308094B (en) 2020-11-25 2020-11-25 Image processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011343355.2A CN112308094B (en) 2020-11-25 2020-11-25 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112308094A CN112308094A (en) 2021-02-02
CN112308094B (en) 2023-04-18

Family

ID=74486966

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011343355.2A Active CN112308094B (en) 2020-11-25 2020-11-25 Image processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112308094B (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107767408B (en) * 2017-11-09 2021-03-12 京东方科技集团股份有限公司 Image processing method, processing device and processing equipment
CN110363830B (en) * 2018-04-10 2023-05-02 阿里巴巴集团控股有限公司 Element image generation method, device and system
CN110458754B (en) * 2018-05-07 2021-12-03 Tcl科技集团股份有限公司 Image generation method and terminal equipment
CN108921786B (en) * 2018-06-14 2022-06-28 天津大学 Image super-resolution reconstruction method based on residual convolutional neural network
CN111680750B (en) * 2020-06-09 2022-12-06 创新奇智(合肥)科技有限公司 Image recognition method, device and equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Video super-resolution reconstruction based on attention residual convolutional network; Dong Meng et al.; Journal of Changchun University of Science and Technology (Natural Science Edition), No. 01; full text *

Also Published As

Publication number Publication date
CN112308094A (en) 2021-02-02

Similar Documents

Publication Publication Date Title
CN108805103B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107527044B (en) Method and device for clearing multiple license plates based on search
CN111079764B (en) Low-illumination license plate image recognition method and device based on deep learning
CN108805198A (en) Image processing method, device, computer readable storage medium and electronic equipment
CN108897786A (en) Recommended method, device, storage medium and the mobile terminal of application program
CN105827897A (en) Adjustment card manufacturing method, system, color correction matrix debugging method and device
CN112132216B (en) Vehicle type recognition method and device, electronic equipment and storage medium
CN111612000A (en) Commodity classification method and device, electronic equipment and storage medium
CN108763580A (en) Image processing method and device, electronic equipment, computer storage media
CN114998122A (en) Low-illumination image enhancement method
CN110717864A (en) Image enhancement method and device, terminal equipment and computer readable medium
CN112308094B (en) Image processing method and device, electronic equipment and storage medium
CN116744125B (en) Image color data processing method, device, equipment and storage medium
CN111539975B (en) Method, device, equipment and storage medium for detecting moving object
CN113824894A (en) Exposure control method, device, equipment and storage medium
CN113408380A (en) Video image adjusting method, device and storage medium
CN117152182A (en) Ultralow-illumination network camera image processing method and device and electronic equipment
CN116311290A (en) Handwriting and printing text detection method and device based on deep learning
CN112581001B (en) Evaluation method and device of equipment, electronic equipment and readable storage medium
CN112418279A (en) Image fusion method and device, electronic equipment and readable storage medium
CN115147752A (en) Video analysis method and device and computer equipment
CN113191376A (en) Image processing method, image processing device, electronic equipment and readable storage medium
CN113674169A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111222468A (en) People stream detection method and system based on deep learning
CN110633740A (en) Image semantic matching method, terminal and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant