WO2022068584A1 - Image processing method and apparatus, computer device, and storage medium - Google Patents
Image processing method and apparatus, computer device, and storage medium
- Publication number: WO2022068584A1 (application PCT/CN2021/118437)
- Authority: WO (WIPO (PCT))
- Prior art keywords: color image, light intensity, real-time, fusion, reference color
Classifications
- G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06N 3/045: Computing arrangements based on biological models; neural networks; combinations of networks
- G06N 3/084: Neural network learning methods; backpropagation, e.g. using gradient descent
- G06T 7/90: Image analysis; determination of colour characteristics
- G06T 2207/20221: Image combination; image fusion; image merging
Definitions
- Embodiments of the present invention relate to image processing technologies, and in particular to an image processing method and apparatus, a computer device, and a storage medium.
- Embodiments of the present invention provide an image processing method and apparatus, a computer device, and a storage medium, offering a new dual-modal image fusion technique that improves the image precision of fused color images.
- An embodiment of the present invention provides an image processing method. The method includes: acquiring a reference color image and at least one real-time light intensity variation corresponding to the reference color image; and generating a fused color image according to the reference color image and each real-time light intensity variation.
- An embodiment of the present invention further provides an image processing method. The method includes: acquiring a plurality of color images through a color image sensing circuit in a dual-modal vision sensor; acquiring real-time light intensity variations through a light intensity variation sensing circuit in the dual-modal vision sensor; and generating, according to each color image and each real-time light intensity variation, at least one fused color image for insertion between two consecutive color images by using the image processing method of any embodiment of the present invention.
- An embodiment of the present invention further provides an image processing apparatus. The apparatus includes: a fusion feature acquisition module, configured to acquire a reference color image and at least one real-time light intensity variation corresponding to the reference color image; and a fused image generation module, configured to generate a fused color image according to the reference color image and each real-time light intensity variation.
- An embodiment of the present invention further provides an image processing apparatus. The apparatus includes: a color image acquisition module, configured to acquire a plurality of color images through the color image sensing circuit in a dual-modal vision sensor;
- a light intensity variation acquisition module, configured to acquire real-time light intensity variations through the light intensity variation sensing circuit in the dual-modal vision sensor; and
- a fused color image generation module, configured to generate, according to each color image and each real-time light intensity variation, at least one fused color image for insertion between two consecutive color images by using the image processing method of any embodiment of the present invention.
- An embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the image processing method of any embodiment of the present invention.
- An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored, where the program, when executed by a processor, implements the image processing method of any embodiment of the present invention.
- The technical solution of the embodiments of the present invention acquires a reference color image and at least one real-time light intensity variation corresponding to the reference color image, and generates a fused color image according to the reference color image and each real-time light intensity variation. This proposes a new dual-modal image fusion technique that fuses low-speed color images with high-speed light intensity variations: the features of the high-speed real-time light intensity variations are added into the fused color image, improving the image precision and image quality of the fused color image.
- Fig. 1 is a flowchart of an image processing method in an embodiment of the present invention;
- Fig. 2 is a flowchart of another image processing method in an embodiment of the present invention;
- Fig. 3a is a flowchart of another image processing method in an embodiment of the present invention;
- Fig. 3b is a schematic diagram of a fused color image generation process to which an embodiment of the present invention is applicable;
- Fig. 4a is a flowchart of another image processing method in an embodiment of the present invention;
- Fig. 4b is a schematic structural diagram of a dual-modal fusion model to which an embodiment of the present invention is applicable;
- Fig. 5a is a flowchart of another image processing method in an embodiment of the present invention;
- Fig. 5b is a schematic diagram of a process of inserting fused color images between consecutive color images, to which an embodiment of the present invention is applicable;
- Fig. 6 is a structural diagram of an image processing apparatus in an embodiment of the present invention;
- Fig. 7 is a structural diagram of another image processing apparatus in an embodiment of the present invention;
- Fig. 8 is a structural diagram of a computer device in an embodiment of the present invention;
- Fig. 9 is a structural diagram of a computer-readable storage medium in an embodiment of the present invention.
- One or more interpolated images may be inserted between two consecutively acquired color images through a set interpolation algorithm, so as to reduce the time interval between the finally obtained adjacent color images.
- FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present invention.
- This embodiment is applicable to the case where image signals of two different modalities are fused to obtain a fused color image. It can be executed by an image processing apparatus, which can be implemented in software and/or hardware and can generally be integrated in a terminal with image processing capability, or directly integrated in a dual-modal vision sensor that acquires the image signals of the two modalities.
- the method of the embodiment of the present invention specifically includes the following steps.
- The reference color image refers to the base image from which the fused color image is obtained by fusion.
- The reference color image is a matrix of pixels arranged in rows and columns.
- Each pixel has a set pixel value, and the pixel value can reflect the color information of the location of the pixel.
- the color information of the above-mentioned pixel points may be characterized by using RGB color space, YUV color space, or YCbCr color space.
- The real-time light intensity variation refers to the change in light intensity of a certain pixel in the reference color image over a certain period of time, which can also be expressed as a change in relative gray value (brightness value); it represents the difference between the pixel's current brightness value and its historical brightness value at a previous moment.
- In view of this, the inventor proposes to fuse the reference color image with the real-time light intensity variations of its pixels, which are acquired at high speed in real time, to obtain a fused color image.
- The at least one real-time light intensity variation corresponding to the reference color image refers to the one or more real-time light intensity variations collected, for the pixels of the reference color image, at one or more time points after the acquisition time of the reference color image.
- When the pixel value of a certain pixel in the reference color image changes, the change can be captured at high speed in real time, generating a real-time light intensity variation.
- The acquisition time of the reference color image may be recorded, and then, for the same shooting area, the real-time light intensity variations of one or more pixels may be obtained within a period of time (for example, 5 ms or 10 ms) after that acquisition time.
- a static image sensor and a dynamic vision sensor may be used to obtain a reference color image and at least one real-time light intensity variation corresponding to the reference color image, respectively.
- A static image sensor and a dynamic vision sensor can be used to collect images of the same shooting area, ensuring that the pixels of the image collected by the static image sensor are in one-to-one correspondence with the pixels collected by the dynamic vision sensor.
- During the time gap between two frames captured by the static image sensor, one or more real-time light intensity variations acquired at high speed in real time by the dynamic vision sensor can be obtained, and high-quality fused color images within that gap can then be obtained by accurate fusion.
- the static image sensor includes an APS sensor, a CCD sensor, and the like.
- the above-mentioned reference color image and at least one real-time light intensity variation corresponding to the reference color image may be acquired simultaneously by using a dual-modality vision sensor.
- The dual-modal vision sensor can simultaneously acquire, for the same shooting area, a reference color image and real-time light intensity variations; the one or more real-time light intensity variations collected after the acquisition time of the reference color image are then taken as the real-time light intensity variations corresponding to the reference color image.
- the fused color image is a color image that simultaneously carries the information of the reference color image and the information of each of the real-time light intensity changes.
- One way to generate the fused color image is as follows: in order of acquisition time from earliest to latest, take one real-time light intensity variation at a time and adjust the pixel value (for example, the brightness value, or the R, G, and B values) of the pixel at the corresponding position in the fused color image, or of each pixel in a pixel region determined by that position. After the pixel value of at least one pixel of the reference color image has been adjusted according to each real-time light intensity variation, the fused color image is finally obtained.
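- As an illustration, the following is a minimal sketch (not taken from the patent) of this sequential adjustment, assuming each variation arrives as an (X, Y, P, T) tuple as described later; the single-pixel variant is shown here, and region-based variants appear below:

```python
import numpy as np

def fuse_events_into_image(reference_rgb, events):
    """Sequentially apply real-time light intensity variations ("events")
    to a copy of the reference color image, earliest acquisition time first.

    reference_rgb : HxWx3 uint8 array (the reference color image)
    events        : iterable of (x, y, p, t) tuples, where (x, y) is the
                    pixel position, p the signed intensity variation, and
                    t the generation time of the event.
    """
    fused = reference_rgb.astype(np.float64)             # avoid uint8 wrap-around
    for x, y, p, _t in sorted(events, key=lambda e: e[3]):
        fused[x, y, :] += p                              # adjust the matching pixel
    return np.clip(fused, 0, 255).astype(np.uint8)
```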
- The technical solution of the embodiments of the present invention acquires a reference color image and at least one real-time light intensity variation corresponding to the reference color image, and generates a fused color image according to the reference color image and each real-time light intensity variation, thereby proposing a dual-modal image fusion technique that fuses low-speed color images with high-speed light intensity variations.
- The features of the high-speed real-time light intensity variations are added into the fused color image, improving the image precision and image quality of the fused color image.
- Generating a fused color image according to the reference color image and each real-time light intensity variation may be implemented as: in the reference color image, adjusting the pixel values of the fusion regions corresponding to the pixel positions of the real-time light intensity variations, to generate the fused color image.
- A real-time light intensity variation reflects the brightness change of a single pixel in the reference color image. After a real-time light intensity variation is obtained, one could therefore adjust only the brightness value of the matching pixel; however, such an adjustment causes an abrupt brightness difference between that pixel and the surrounding pixels, which makes the fused color image look unnatural and the fusion effect unsatisfactory.
- Therefore, the inventor proposes to determine, based on the pixel position of each real-time light intensity variation, a fusion region (a set containing a plurality of pixels) in the reference color image, and to adjust the pixel values of all pixels in that region as a whole, further improving the image display effect of the fused color image.
- Specifically, a pixel region of a set shape (for example, a rectangle or a circle) and a set size (for example, 10 pixels by 10 pixels, 100 pixels by 100 pixels, or a region whose radius is a set number of pixels), centered at the pixel position matching the real-time light intensity variation, may be used as the fusion region.
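- A minimal sketch of such a region, assuming a rectangular window clipped at the image borders (the shape, size, and clipping policy are illustrative assumptions):

```python
def fusion_region(x, y, height, width, half_size=5):
    """Bounds of a rectangular fusion region of set size centered at the
    event position (x, y), clipped to the image borders. half_size=5 gives
    an 11x11-pixel window (roughly the "10 pixels * 10 pixels" example)."""
    r0, r1 = max(0, x - half_size), min(height, x + half_size + 1)
    c0, c1 = max(0, y - half_size), min(width, y + half_size + 1)
    return r0, r1, c0, c1
```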
- FIG. 2 shows a flowchart of another image processing method in the embodiment of the present invention.
- This embodiment is a refinement of the above embodiment. Adjusting the pixel values of the fusion regions corresponding to the pixel positions of the real-time light intensity variations is embodied as: determining the target pixel position corresponding to the currently processed target real-time light intensity variation; obtaining, in the reference color image, the target fusion region matching the target pixel position; and adjusting the pixel values of the target fusion region according to the target real-time light intensity variation.
- the method of this embodiment may include the following steps.
- S220: Sequentially acquire one of the at least one real-time light intensity variation corresponding to the reference color image as the currently processed target real-time light intensity variation.
- Each real-time light intensity variation has a generation time. Taking the generation time of the reference color image as the starting point, the at least one real-time light intensity variation corresponding to the reference color image can be sorted in order of generation time from nearest to farthest; the sorting result reflects the order in which the brightness of the pixels in the reference color image changed. According to the sorting result, the real-time light intensity variations can then be acquired one by one, and the pixel values of one or more pixels in the reference color image adjusted accordingly.
- The pixels in the reference color image are in one-to-one correspondence with the pixels associated with the real-time light intensity variations; that is, each real-time light intensity variation corresponds to one pixel position (the position of one pixel) in the reference color image.
- The real-time light intensity variation obtained by the dynamic vision sensor or the dual-modal vision sensor may take the form (X, Y, P, T), where (X, Y) is the event address, P is the 4-valued event output (including a leading sign bit), and T is the time at which the event was generated. The event address corresponds to a pixel position in the reference color image: X and Y may be the row and column positions in the reference color image, respectively, P is the specific value of the real-time light intensity variation, and T is its generation time.
- The target pixel position corresponding to the target real-time light intensity variation can thus be obtained.
- The pixel value of each pixel in the target fusion region matching the target pixel position may then be adjusted.
- Obtaining a target fusion region matching the target pixel position may be implemented as: determining the target fusion region in the reference color image according to the pixel position and a preset extended fusion range.
- The extended fusion range may be a preset area (for example, a rectangle or a circle) with the target pixel position as its center, for example a region extending 100 pixels around the target pixel position.
- The target real-time light intensity variation may be a change in relative brightness value (also referred to as gray value).
- The adjustment of the pixel values in the target fusion region may be an adjustment of the brightness value (gray value), or of the color value of each color channel (for example, the R, G, or B value), which can also be understood as adjusting the brightness value or gray value of each color component.
- The pixel values of the target fusion region may be adjusted as follows: obtain the brightness value of each pixel in the target fusion region, and adjust the brightness value of each pixel according to the target real-time light intensity variation.
- When the pixel values of the reference color image are represented in the RGB color space, the pixel value of each pixel in the target fusion region may first be converted to the YUV color space (or the YCbCr color space); the target real-time light intensity variation is then used to adjust the pixel value of the Y channel (the luminance component channel); finally, the inverse of the color space transformation is applied to convert the pixel values of the target fusion region back to the RGB color space.
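- A minimal sketch of this round trip, assuming one common BT.601-style RGB/YUV convention (the patent does not fix a particular transformation matrix):

```python
import numpy as np

# BT.601-style RGB<->YUV matrices (an illustrative convention).
RGB2YUV = np.array([[ 0.299,  0.587,  0.114],
                    [-0.147, -0.289,  0.436],
                    [ 0.615, -0.515, -0.100]])
YUV2RGB = np.linalg.inv(RGB2YUV)

def adjust_region_luma(fused, region, delta_y):
    """Convert the target fusion region to YUV, add the signed variation
    delta_y to the Y (luminance) channel only, and convert back to RGB."""
    r0, r1, c0, c1 = region
    rgb = fused[r0:r1, c0:c1, :].astype(np.float64)
    yuv = rgb @ RGB2YUV.T
    yuv[..., 0] += delta_y        # positive: brighter; negative: darker
    out = yuv @ YUV2RGB.T
    fused[r0:r1, c0:c1, :] = np.clip(out, 0, 255).astype(fused.dtype)
```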
- The target real-time light intensity variation may be a positive value (representing an increase in a pixel's brightness value) or a negative value (representing a decrease). Directly superimposing the target real-time light intensity variation onto a pixel's Y-channel value yields the pixel's new Y-channel value.
- The target real-time light intensity variation can therefore be used to directly adjust the Y-channel value of each YUV (or YCbCr) pixel in the target fusion region, so as to obtain the new pixel value of each pixel in the target fusion region.
- The target real-time light intensity variation may be used to adjust the brightness value of every pixel in the target fusion region uniformly, or the brightness values of the pixels in the target fusion region may be adjusted by different amounts (for example, according to position-dependent weights, as described below).
- In summary, the target pixel position corresponding to the currently processed target real-time light intensity variation is determined; the target fusion region matching the target pixel position is obtained in the reference color image; and the pixel values of the target fusion region are adjusted according to the target real-time light intensity variation. This adds the image information contained in the target real-time light intensity variation to the reference color image while ensuring a natural transition throughout the fusion process, making the pixel values of the fused color image transition more smoothly and further improving its display effect.
- The pixel values of the target fusion region may also be adjusted as follows: calculate the average illumination change according to the target real-time light intensity variation, and adjust the R, G, and B values of each pixel in the target fusion region according to the average illumination change.
- If the color channels of a pixel in the target fusion region were adjusted by different amounts, color shifts would be introduced. It can therefore be considered to adjust the R, G, and B values of each pixel simultaneously by the same amount of light intensity change, ensuring the display effect of the fused color image while reducing the amount of computation as much as possible.
- Since the target real-time light intensity variation is used to adjust the R, G, and B values of each pixel simultaneously, it must be divided equally among the color channels; therefore, the average illumination change is first calculated from the target real-time light intensity variation.
- For example, if the target real-time light intensity variation is A, then A/3 may be used as the average illumination change, or K1*(A/3) may be used, where K1 is a preset adjustment scale factor.
- other methods may also be adopted to calculate the average value of the illumination change, which is not limited in this embodiment.
- The average illumination change may be used to adjust the R, G, and B values of every pixel in the target fusion region uniformly; alternatively, a weight value may be set for each pixel in the target fusion region, for example assigning larger weights to pixels closer to the target pixel position.
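- A minimal sketch of this channel-equal adjustment with an optional distance weight (the linear falloff is an illustrative assumption; the text only requires that nearer pixels can be weighted differently):

```python
import numpy as np

def adjust_region_rgb(fused, region, a, k1=1.0, weighted=True):
    """Split the target variation A equally over the three color channels
    (average illumination change K1*(A/3)) and apply it to every pixel in
    the target fusion region, optionally weighting pixels nearer the
    region center more heavily."""
    r0, r1, c0, c1 = region
    avg = k1 * (a / 3.0)
    patch = fused[r0:r1, c0:c1, :].astype(np.float64)
    if weighted:
        h, w = r1 - r0, c1 - c0
        yy, xx = np.mgrid[0:h, 0:w]
        dist = np.hypot(yy - (h - 1) / 2.0, xx - (w - 1) / 2.0)
        weight = 1.0 - dist / (dist.max() + 1e-9)   # 1 at center, ~0 at corners
        patch += (avg * weight)[..., None]
    else:
        patch += avg
    fused[r0:r1, c0:c1, :] = np.clip(patch, 0, 255).astype(fused.dtype)
```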
- Fig. 3a shows a flowchart of another image processing method in the embodiment of the present invention.
- This embodiment is a refinement of the above embodiment. The operation of acquiring at least one real-time light intensity variation corresponding to the reference color image is embodied as: acquiring the image generation time of the reference color image; determining a light intensity variation collection time period according to the image generation time and a light intensity variation accumulation duration; and determining the real-time light intensity variations detected within the collection time period as the at least one real-time light intensity variation corresponding to the reference color image.
- the method of the embodiment of the present invention may include the following steps.
- The reference color image obtained in S310 may be a color image collected by a static image sensor or by a dual-modal vision sensor; this reference collected image serves as the base image for generating fused color images.
- The reference color image described in S320 may be the reference color image obtained in S310, or a fused color image obtained by fusion in S370 (that is, the previously obtained fused color image can be used as the reference color image for generating the next fused color image).
- The image generation time of the reference color image may be the acquisition time at which the static image sensor or dual-modal vision sensor captured the reference color image, or the fusion time at which a fused color image was obtained by fusing a reference color image with at least one real-time light intensity variation.
- The time interval at which the static image sensor or dual-modal vision sensor acquires reference color images, and the number of fused color images to be inserted between two reference color images, may be obtained in advance; the matching light intensity variation accumulation duration is determined based on this time interval and this number.
- Given the image generation time T1 and the accumulation duration Δt, a light intensity variation collection time period can be determined as [T1, T1 + Δt].
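- A minimal sketch of deriving these windows from the frame interval and the number of inserted images (the even split Δt = interval/(n+1) is an illustrative assumption consistent with Fig. 3b):

```python
def collection_windows(t1, frame_interval, num_fused):
    """Light intensity variation collection time periods [T1, T1 + dt],
    chained so that num_fused fused images fall evenly inside the gap
    between two sensor-captured reference color images."""
    dt = frame_interval / (num_fused + 1)
    return [(t1 + i * dt, t1 + (i + 1) * dt) for i in range(num_fused)]
```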
- The new-reference-color-image acquisition condition refers to whether the device used to acquire the reference color image in S310 has reached the acquisition time of the next reference color image.
- If the condition is met, the reference color image newly acquired by the device is used as the reference color image, and generation of new fused color images continues;
- otherwise, the fused color image just obtained is used as the reference color image, and a new fused color image is generated.
- The pixel values of the pixels in a reference color image collected by the device are the most accurate. Although each fused color image introduces one or more real-time light intensity variations, the fusion process also introduces a certain fusion error; if too many fused color images are inserted between two device-collected reference color images, these fusion errors keep accumulating and growing. The number of fused color images interpolated between two consecutively acquired reference color images should therefore be chosen judiciously.
- Fig. 3b shows a schematic diagram of a process of generating a fused color image to which the embodiment of the present invention is applicable.
- a reference color image 3110 is acquired by a static image sensor or a dual-modal vision sensor at time Tx
- another reference color image 3120 is acquired by the same device at time Ty.
- A total of three fused color images are inserted between the above two reference color images: the first fused color image 3111 obtained by fusion at time Ta, the second fused color image 3112 obtained by fusion at time Tb, and the third fused color image 3113 obtained by fusion at time Tc.
- The first fused color image 3111 is obtained by fusing the reference color image 3110 with the at least one real-time light intensity variation (the discrete image points shown in Fig. 3b) detected in the [Tx, Ta] time period; the second fused color image 3112 is obtained by fusing the first fused color image 3111 with the at least one real-time light intensity variation detected in the [Ta, Tb] time period; and the third fused color image 3113 is obtained by fusing the second fused color image 3112 with the at least one real-time light intensity variation detected in the [Tb, Tc] time period.
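- This chaining can be sketched as follows, reusing the hypothetical helpers above (fuse stands for any of the fusion routines sketched earlier):

```python
def chained_fusion(ref_image, windows, events, fuse):
    """Fig. 3b style chaining: each fused image becomes the reference for
    the next collection window ([Tx,Ta], [Ta,Tb], [Tb,Tc], ...)."""
    fused_images, current = [], ref_image
    for t_start, t_end in windows:
        evts = [e for e in events if t_start <= e.t < t_end]
        current = fuse(current, evts)       # e.g. 3111, then 3112, then 3113
        fused_images.append(current)
    return fused_images
```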
- In the technical solution of the embodiments of the present invention, the light intensity variation accumulation duration can be selected according to the precision requirements on the fused color images, which in turn determines the collection time periods. The time interval between adjacent fused color images and the image precision of each fused color image can thus be set flexibly, further broadening the applicability of the technical solutions of the embodiments of the present invention.
- Fig. 4a shows a flowchart of another image processing method in the embodiment of the present invention.
- This embodiment is a refinement of the above embodiment. The operation of generating a fused color image is embodied as: inputting the reference color image and each real-time light intensity variation into a pre-trained dual-modal fusion model, and obtaining the fused color image output by the dual-modal fusion model.
- the method of this embodiment may specifically include the following steps:
- the first input terminal of the dual-modal fusion model is used for receiving the reference color image
- the second input terminal of the dual-modal fusion model is used for receiving each of the real-time light intensity changes.
- To further reduce the amount of real-time computation in generating the fused image, a dual-modal fusion model can be obtained by pre-training; by respectively inputting the reference color image and each real-time light intensity variation corresponding to the reference collected image into the dual-modal fusion model, the matching fused color image can be output by the model in real time.
- Taking the first fused color image 3111 as an example: after the reference color image 3110 is obtained at time Tx, it is first input to the first input terminal of the dual-modal fusion model; during the [Tx, Ta] time period, whenever a real-time light intensity variation is detected, it is immediately input to the second input terminal, so that the dual-modal fusion model performs a cumulative fusion calculation on the reference color image 3110 for each input real-time light intensity variation; when time Ta is reached, the fused color image output by the dual-modal fusion model, that is, the first fused color image 3111, is obtained.
- The method may further include: acquiring a training sample set, where each training sample includes a standard color image, at least one light intensity variation, and a standard fused color image; and training a set machine learning model with the training samples in the training sample set to obtain the dual-modal fusion model.
- Frame interpolation may be performed on two consecutively obtained standard color images to obtain an interpolated color image. Since this frame interpolation introduces no additional image information, the image quality of the interpolated color image is not high enough for it to serve directly as a training sample for the dual-modal fusion model; advanced image processing techniques are therefore applied to optimize the interpolated color image, and the optimized interpolated color image is used as the standard fused color image.
- The machine learning model trained to obtain the dual-modal fusion model may be a CNN (convolutional neural network) model, an ANN (artificial neural network) model, or another machine learning model, which is not limited in this embodiment.
- Optionally, a Siamese (twin) convolutional network can be selected for training to obtain the dual-modal fusion model.
- The Siamese convolutional network may include a first convolution module 4110, a second convolution module 4120, a fusion module 4130, and an output module 4140. The first convolution module 4110 contains the first input terminal and the second convolution module 4120 contains the second input terminal; the output terminals of the first convolution module 4110 and the second convolution module 4120 are respectively connected to the input terminal of the fusion module 4130; the output terminal of the fusion module 4130 is connected to the input terminal of the output module 4140, and the output terminal of the output module is used for outputting the fused color image.
- the network structures in the first convolution module 4110 and the second convolution module 4120 are the same, and may specifically include: an input convolution layer and a plurality of hidden layers.
- the input convolution layer may include one or more convolution units
- each hidden layer may include a maximum pooling layer and one or more convolution units connected in sequence.
- In general, the first convolution module 4110 and the second convolution module 4120 of a Siamese convolutional network can share weight values during training. In this embodiment, however, during the training of the Siamese convolutional network, the first convolution module 4110 and the second convolution module do not share weight values.
- The fusion module 4130 is the core module of the dual-modal fusion model; it generates a fusion result from a reference color image and at least one real-time light intensity variation corresponding to the reference color image, and provides the fusion result to the output module 4140.
- The output module 4140 can be designed with a single output or with dual outputs. With a single output, the output module 4140 outputs only the fused color image; with dual outputs, one output produces the fused color image and the other produces a light intensity variation accumulation image corresponding to the at least one real-time light intensity variation.
- In the light intensity variation accumulation image, pixel positions holding a black (or other specific color) pixel value are positions where no light intensity change occurred, while pixel positions holding non-black pixel values are positions where a light intensity change occurred, the non-black value recording the specific light intensity variation at that position.
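- A minimal PyTorch sketch of such a model, assuming illustrative channel counts, three-stage branches, and a single-output head (none of these hyperparameters are specified by the patent):

```python
import torch
import torch.nn as nn

class ConvBranch(nn.Module):
    """One branch: an input convolution layer plus hidden layers, each a
    max-pooling layer followed by convolution units (structure per the
    text; channel counts are illustrative assumptions)."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2), nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class DualModalFusionModel(nn.Module):
    """Two non-weight-sharing branches (the inputs are different
    modalities), a fusion module, and a single-output head producing the
    fused color image."""
    def __init__(self):
        super().__init__()
        self.rgb_branch = ConvBranch(in_ch=3)    # first input: reference color image
        self.event_branch = ConvBranch(in_ch=1)  # second input: intensity-change map
        self.fusion = nn.Conv2d(256, 128, 1)     # fusion module
        self.head = nn.Sequential(               # upsample back to input resolution
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(128, 3, 3, padding=1),
        )
    def forward(self, rgb, events):
        f = torch.cat([self.rgb_branch(rgb), self.event_branch(events)], dim=1)
        return self.head(self.fusion(f))
```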
- During training, the L1 loss function can be used as the loss function; the optical-flow (light intensity change) error is calculated, and the model is trained jointly using ADAM (A Method for Stochastic Optimization) and the backpropagation algorithm.
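- A training-loop sketch under those choices (L1 loss, Adam, backpropagation); the (standard color image, light intensity variation map, standard fused image) batch layout is an assumption:

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-4, device="cpu"):
    """Train the dual-modal fusion model with an L1 loss optimized by
    Adam; gradients flow via backpropagation."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()
    model.to(device).train()
    for _ in range(epochs):
        for rgb, events, target in loader:
            rgb, events, target = (t.to(device) for t in (rgb, events, target))
            loss = criterion(model(rgb, events), target)
            opt.zero_grad()
            loss.backward()      # backpropagation
            opt.step()           # ADAM parameter update
```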
- The technical solution of this embodiment uses a pre-trained dual-modal fusion model to fuse the reference color image with its corresponding real-time light intensity variations into a fused color image, which minimizes the computing power consumed during fusion and further improves the generation efficiency and timeliness of the fused color images.
- The reference color image may be generated from a voltage signal, collected by the color image sensing circuit in the dual-modal vision sensor, that represents the light intensity of the optical signal; the real-time light intensity variation may be generated from a current signal, collected by the light intensity variation sensing circuit in the dual-modal vision sensor, that represents the change in light intensity of the optical signal.
- Specifically, the dual-modal vision sensor may include a first sensing circuit (also referred to as the light intensity variation sensing circuit) and a second sensing circuit (also referred to as the color image sensing circuit). The first sensing circuit extracts the optical signal of a first set wavelength band from the target optical signal and outputs a current signal representing the light intensity variation of the optical signal in the first set wavelength band; the second sensing circuit extracts the optical signal of a second set wavelength band from the target optical signal and outputs a voltage signal representing the light intensity of the optical signal in the second set wavelength band.
- The first sensing circuit includes a first excitatory photosensitive unit and a first inhibitory photosensitive unit, both of which extract the optical signal of the first set wavelength band from the target optical signal and convert it into current signals; the first sensing circuit then outputs, according to the difference between the current signals converted by the excitatory and inhibitory photosensitive units, a current signal representing the light intensity variation of the optical signal in the first set wavelength band.
- The second sensing circuit includes at least one second photosensitive unit, which extracts the optical signal of the second set wavelength band from the target optical signal and converts it into a current signal; the second sensing circuit is further configured to output, according to the current signal converted by the second photosensitive unit, a voltage signal representing the light intensity of the optical signal in the second set wavelength band.
- The advantage of this arrangement is that generating the reference color image and the real-time light intensity variations, the image signals of the two modalities, with a single dual-modal vision sensor ensures accurate alignment between the signals of the two modalities, further ensuring the image quality of the fused color image.
- FIG. 5a is a flowchart of another image processing method provided by an embodiment of the present invention.
- This embodiment is applicable to inserting at least one fused color image between two consecutively acquired color images. The method can be executed by an image processing apparatus, which can be implemented in software and/or hardware and can generally be integrated in a terminal with image processing capability, or directly in a dual-modal vision sensor.
- the method of this embodiment specifically includes the following steps.
- the dual-modal vision sensor acquires a plurality of color images at a set acquisition time interval.
- FIG. 5b is a schematic diagram of a process of inserting fused color images between consecutive color images, to which an embodiment of the present invention is applicable.
- The color image sensing circuit in the dual-modal vision sensor samples image signals such as color image 1, color image 2, and color image 3 at a set sampling interval.
- The light intensity variation sensing circuit obtains the real-time light intensity variations, so that one or more (two, as shown in Fig. 5b) fused color images can be inserted between two consecutive color images.
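- End to end, the insertion process can be sketched as follows, reusing the hypothetical collection_windows helper above; model_fuse stands for the dual-modal fusion model or the direct fusion routines:

```python
def interpolation_pipeline(frames, frame_times, events, model_fuse, n_insert=2):
    """Insert n_insert fused color images between each pair of consecutive
    sensor frames (two per gap, as in Fig. 5b)."""
    out = [frames[0]]
    for i in range(len(frames) - 1):
        current, t0, t1 = frames[i], frame_times[i], frame_times[i + 1]
        for t_a, t_b in collection_windows(t0, t1 - t0, n_insert):
            evts = [e for e in events if t_a <= e.t < t_b]
            current = model_fuse(current, evts)
            out.append(current)              # fused image inside the gap
        out.append(frames[i + 1])            # next sensor-captured frame
    return out
```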
- The technical solution of the embodiments of the present invention breaks through the limitation on the color image acquisition time interval in the prior art: color images can be obtained at a smaller acquisition time interval, the image quality of the fused color images can be ensured to the greatest extent, and image signals carrying different types of image feature information can be obtained.
- FIG. 6 is a structural diagram of an image processing apparatus according to an embodiment of the present invention.
- the apparatus 600 includes a fusion feature acquisition module 610 and a fusion image generation module 620 .
- the fusion feature acquisition module 610 is configured to acquire a reference color image and at least one real-time light intensity variation corresponding to the reference color image.
- the fused image generation module 620 is configured to generate a fused color image according to the reference color image and each of the real-time light intensity changes.
- The technical solution of the embodiments of the present invention acquires a reference color image and at least one real-time light intensity variation corresponding to the reference color image, and generates a fused color image according to the reference color image and each real-time light intensity variation, thereby proposing a new dual-modal image fusion technique that fuses low-speed color images with high-speed light intensity variations.
- The features of the high-speed real-time light intensity variations are added into the fused color image, improving the image precision and image quality of the fused color image.
- Optionally, the fused image generation module 620 may include a region fusion unit, configured to adjust, in the reference color image, the pixel values of the fusion regions corresponding to the pixel positions of the real-time light intensity variations, to generate the fused color image.
- Optionally, the region fusion unit may include: a target pixel position determination subunit, configured to determine the target pixel position corresponding to the currently processed target real-time light intensity variation; a target fusion region acquisition subunit, configured to obtain, in the reference color image, the target fusion region matching the target pixel position; and a target fusion region adjustment subunit, configured to adjust the pixel values of the target fusion region according to the target real-time light intensity variation.
- the target fusion area acquisition subunit may be specifically used to: determine the target fusion area in the reference color image according to the pixel position and the preset extended fusion range.
- Optionally, the target fusion region adjustment subunit may be configured to: obtain the brightness value of each pixel in the target fusion region, and adjust the brightness value of each pixel according to the target real-time light intensity variation.
- Optionally, the target fusion region adjustment subunit may be configured to: calculate the average illumination change according to the target real-time light intensity variation, and adjust the R, G, and B values of each pixel in the target fusion region according to the average illumination change.
- Optionally, the fusion feature acquisition module 610 may be configured to: acquire the image generation time of the reference color image; determine the light intensity variation collection time period according to the image generation time and the light intensity variation accumulation duration; and determine the real-time light intensity variations detected within the collection time period as the at least one real-time light intensity variation corresponding to the reference color image.
- Optionally, the apparatus may further include a repeat-execution module, configured to: after the fused color image is generated according to the reference color image and each real-time light intensity variation, if the new-reference-color-image acquisition condition is not met, update the reference color image to the currently generated fused color image, and return to the step of acquiring at least one real-time light intensity variation corresponding to the reference color image.
- Optionally, the fused image generation module 620 may be configured to: input the reference color image and each real-time light intensity variation into the pre-trained dual-modal fusion model, and obtain the fused color image output by the dual-modal fusion model, where the first input terminal of the dual-modal fusion model receives the reference color image and the second input terminal receives each real-time light intensity variation.
- Optionally, the apparatus may further include a model training module, configured to: before the reference color image and the at least one corresponding real-time light intensity variation are acquired, acquire a training sample set, where each training sample includes a standard color image, at least one light intensity variation, and a standard fused color image; and train the set machine learning model with the training samples in the training sample set to obtain the dual-modal fusion model.
- the standard fusion color image is obtained by performing frame interpolation and image optimization processing on two consecutively obtained standard color images.
- Optionally, the set machine learning model is a Siamese convolutional network including a first convolution module, a second convolution module, a fusion module, and an output module. The first convolution module contains the first input terminal and the second convolution module contains the second input terminal; the output terminals of the first and second convolution modules are respectively connected to the input terminal of the fusion module, the output terminal of the fusion module is connected to the input terminal of the output module, and the output terminal of the output module outputs the fused color image. During the training of the Siamese convolutional network, the first convolution module and the second convolution module do not share weight values.
- Optionally, the reference color image is generated from a voltage signal, collected by the color image sensing circuit in the dual-modal vision sensor, that represents the light intensity of the optical signal; the real-time light intensity variation is generated from a current signal, collected by the light intensity variation sensing circuit in the dual-modal vision sensor, that represents the change in light intensity of the optical signal.
- The image processing apparatus provided by the embodiments of the present invention can execute the image processing method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method.
- FIG. 7 is a structural diagram of another image processing apparatus provided by an embodiment of the present invention.
- The apparatus 700 includes: a color image acquisition module 710, a real-time light intensity variation acquisition module 720, and a fused color image generation module 730.
- the color image acquisition module 710 is configured to acquire multiple color images through the color image sensing circuit in the dual-modal vision sensor.
- The real-time light intensity variation acquisition module 720 is configured to acquire real-time light intensity variations through the light intensity variation sensing circuit in the dual-modal vision sensor.
- The fused color image generation module 730 is configured to generate, according to each color image and each real-time light intensity variation, at least one fused color image for insertion between two consecutive color images, using the method of any embodiment of the present invention.
- The technical solution of the embodiments of the present invention breaks through the limitation on the color image acquisition time interval in the prior art: color images can be obtained at a smaller acquisition time interval, the image quality of the fused color images can be ensured to the greatest extent, and image signals carrying different types of image feature information can be obtained.
- The image processing apparatus provided by the embodiments of the present invention can execute the image processing method provided by any embodiment of the present invention, and has functional modules and beneficial effects corresponding to the executed method.
- FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present invention.
- The computer device includes a processor 80 and a memory 81, and may further include an input device 82 and an output device 83. The number of processors 80 in the computer device may be one or more; one processor 80 is taken as an example in FIG. 8. The processor 80, the memory 81, the input device 82, and the output device 83 in the computer device may be connected by a bus or by other means; connection via a bus is taken as the example in FIG. 8.
- the memory 81 can be used to store software programs, computer-executable programs, and modules, such as modules corresponding to the image processing method in the embodiment of the present invention.
- The processor 80 executes the various functional applications and data processing of the computer device by running the software programs, computer-executable programs, and modules stored in the memory 81, thereby implementing the image processing method of any embodiment of the present invention: acquiring a reference color image and at least one real-time light intensity variation corresponding to the reference color image, and generating a fused color image according to the reference color image and each real-time light intensity variation.
- The processor 80 may likewise, by running the software programs, computer-executable programs, and modules stored in the memory 81, implement the other image processing method of any embodiment of the invention: acquiring a plurality of color images through the color image sensing circuit in a dual-modal vision sensor; acquiring real-time light intensity variations through the light intensity variation sensing circuit in the dual-modal vision sensor; and generating, according to each color image and each real-time light intensity variation, at least one fused color image for insertion between two consecutive color images by using the image processing method of any embodiment of the present invention.
- the memory 81 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Additionally, memory 81 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some instances, memory 81 may further include memory located remotely from processor 80, which may be connected to the computer device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
- the input device 82 may be used to receive input numerical or character information, and to generate key signal input related to user settings and function control of the computer device.
- the output device 83 may include a display device such as a display screen.
- An embodiment of the present invention further provides a computer-readable storage medium 900 containing computer-executable instructions (a computer program) which, when executed by a computer processor, perform the image processing method of any embodiment of the present invention: acquiring a reference color image and at least one real-time light intensity variation corresponding to the reference color image, and generating a fused color image according to the reference color image and each real-time light intensity variation.
- The computer-executable instructions, when executed by a computer processor, may also perform the other image processing method of any embodiment of the present invention: acquiring a plurality of color images through the color image sensing circuit in a dual-modal vision sensor; acquiring real-time light intensity variations through the light intensity variation sensing circuit in the dual-modal vision sensor; and generating, according to each color image and each real-time light intensity variation, at least one fused color image for insertion between two consecutive color images by using the image processing method of any embodiment of the present invention.
- The computer-readable storage medium 900 containing computer-executable instructions (a computer program) provided by the embodiments of the present invention is not limited to the above method operations; the computer-executable instructions can also perform related operations in the image processing method provided by any embodiment of the present invention.
- the present invention can be realized by software and the necessary general-purpose hardware, and of course can also be realized by hardware, but in many cases the former is the better embodiment.
- the technical solutions of the present invention, in essence or with respect to the parts contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the various embodiments of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- General Health & Medical Sciences (AREA)
- General Engineering & Computer Science (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Evolutionary Computation (AREA)
- Artificial Intelligence (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Biomedical Technology (AREA)
- Life Sciences & Earth Sciences (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Health & Medical Sciences (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Processing Or Creating Images (AREA)
Abstract
An image processing method, an apparatus, a computer device, and a storage medium. The method includes: acquiring a reference color image and at least one real-time light intensity change amount corresponding to the reference color image (S110); and generating a fusion color image according to the reference color image and each of the real-time light intensity change amounts (S120).
Description
The embodiments of the present invention relate to image processing technology, and in particular to an image processing method, an apparatus, a computer device, and a storage medium.
With the continuous development of computer vision technology, increasingly high quality requirements are placed on the color images captured by conventional photographing devices. By capturing absolute light intensity information and color information, a conventional photographing device can form a color image with high fidelity.
At present, conventional photographing devices are widely used in home entertainment electronics. However, since a conventional photographing device must acquire and output the pixel value of every pixel in the image each time an image is captured and output, the interval between successive color image acquisitions and outputs is generally long, so such devices cannot be effectively applied in industrial control fields that require high image acquisition speeds.
SUMMARY OF THE INVENTION
The embodiments of the present invention provide an image processing method, an apparatus, a computer device, and a storage medium, offering a new dual-modal image fusion technology that improves the image accuracy of the fusion color image.
In a first aspect, an embodiment of the present invention provides an image processing method, including: acquiring a reference color image and at least one real-time light intensity change amount corresponding to the reference color image; and generating a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
In a second aspect, an embodiment of the present invention further provides an image processing method, including: acquiring a plurality of color images through a color image sensing circuit in a dual-modal vision sensor; acquiring real-time light intensity change amounts through a light intensity change sensing circuit in the dual-modal vision sensor; and, according to each of the color images and each of the real-time light intensity change amounts, generating at least one fusion color image to be inserted between two consecutive color images by using the image processing method according to any embodiment of the present invention.
In a third aspect, an embodiment of the present invention further provides an image processing apparatus, including: a fusion feature acquisition module configured to acquire a reference color image and at least one real-time light intensity change amount corresponding to the reference color image; and a fusion image generation module configured to generate a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
In a fourth aspect, an embodiment of the present invention further provides an image processing apparatus, including: a color image acquisition module configured to acquire a plurality of color images through a color image sensing circuit in a dual-modal vision sensor; a real-time light intensity change acquisition module configured to acquire real-time light intensity change amounts through a light intensity change sensing circuit in the dual-modal vision sensor; and a fusion color image generation module configured to generate, according to each of the color images and each of the real-time light intensity change amounts, at least one fusion color image to be inserted between two consecutive color images by using the image processing method according to any embodiment of the present invention.
In a fifth aspect, an embodiment of the present invention further provides a computer device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the program, implements the image processing method according to any embodiment of the present invention.
In a sixth aspect, an embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method according to any embodiment of the present invention.
The technical solution of the embodiments of the present invention acquires a reference color image and at least one real-time light intensity change amount corresponding to the reference color image, and generates a fusion color image according to the reference color image and each of the real-time light intensity change amounts. It thereby proposes a new dual-modal image fusion technology that fuses low-speed color images with high-speed light intensity change amounts; the features of the high-speed real-time light intensity change amounts are added to the fusion color image, improving its image accuracy and image quality.
FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 2 is a flowchart of another image processing method according to an embodiment of the present invention;
FIG. 3a is a flowchart of another image processing method according to an embodiment of the present invention;
FIG. 3b is a schematic diagram of a fusion color image generation process applicable to an embodiment of the present invention;
FIG. 4a is a flowchart of another image processing method according to an embodiment of the present invention;
FIG. 4b is a schematic structural diagram of a dual-modal fusion model applicable to an embodiment of the present invention;
FIG. 5a is a flowchart of another image processing method according to an embodiment of the present invention;
FIG. 5b is a schematic diagram of a process of inserting fusion color images into consecutive color images, applicable to an embodiment of the present invention;
FIG. 6 is a structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 7 is a structural diagram of another image processing apparatus according to an embodiment of the present invention;
FIG. 8 is a structural diagram of a computer device according to an embodiment of the present invention;
FIG. 9 is a structural diagram of a computer-readable storage medium according to an embodiment of the present invention.
The present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are merely intended to explain the present invention, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire structure.
It should further be noted that, for ease of description, the drawings show only the parts related to the present invention rather than the entire content. Before exemplary embodiments are discussed in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart describes the operations (or steps) as sequential processing, many of the operations may be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be rearranged. The processing may be terminated when its operations are completed, but may also have additional steps not included in the drawings. The processing may correspond to a method, a function, a procedure, a subroutine, a subprogram, and the like.
In some related technologies, one or more interpolated images can be inserted between two consecutively acquired color images through a preset interpolation algorithm, so as to reduce the time interval between the finally obtained adjacent color images. However, since the related technologies still interpolate only from color images acquired by a conventional photographing device and in essence introduce no additional image information, the interpolation effect is poor and the interpolated images have low image accuracy and poor quality.
In a first aspect, FIG. 1 is a flowchart of an image processing method provided by an embodiment of the present invention. This embodiment is applicable to fusing image signals of two different modalities to obtain a fusion color image. The method may be executed by an image processing apparatus, which may be implemented by software and/or hardware, and may generally be integrated in a terminal with an image processing function, or directly integrated in a dual-modal vision sensor used to acquire the image signals of the two modalities. The method of the embodiment of the present invention specifically includes the following steps.
S110: Acquire a reference color image and at least one real-time light intensity change amount corresponding to the reference color image.
In this embodiment, the reference color image specifically refers to the base image from which the fusion color image is generated, and is composed of an array of pixels arranged in rows and columns. Each pixel has a set pixel value that reflects the color information at the pixel's position. The color information of a pixel may be represented in the RGB color space, the YUV color space, the YCbCr color space, or the like.
The real-time light intensity change amount specifically refers to the light intensity change of a certain pixel in the reference color image within a certain time period; it may also be described as a relative grayscale (luminance) change, characterizing the change between the pixel's current luminance value and its historical luminance value at an earlier moment. Optionally, one or more real-time light intensity change amounts may be acquired at a given moment; when multiple real-time light intensity change amounts are obtained, they correspond to different pixel positions in the reference color image.
It can be understood that, since the pixel value of every pixel must be produced whenever a color image is generated, color images are generated relatively slowly, whereas the luminance change of a single pixel can be captured at high speed in real time (for example, by a dynamic vision sensor). Therefore, in this embodiment, the inventors creatively propose to fuse the real-time light intensity change amounts of pixels, acquired at high speed in real time, with the reference color image to obtain the fusion color image.
In this embodiment, the at least one real-time light intensity change amount corresponding to the reference color image specifically refers to one or more real-time light intensity change amounts acquired, for the pixels of the reference color image, at one or more moments after the acquisition moment of the reference color image. Generally, whenever the pixel value of a certain pixel in the reference color image changes, the change can be captured at high speed in real time to generate the real-time light intensity change amount.
Optionally, after a reference color image is acquired for a certain shooting area, the acquisition moment of the reference color image may be recorded; then, for the same shooting area, the real-time light intensity change amounts of one or more pixels within a period of time (for example, 5 ms or 10 ms) after the acquisition moment are obtained.
In a specific example, a static image sensor and a dynamic vision sensor may be used respectively to acquire the reference color image and the at least one real-time light intensity change amount corresponding to the reference color image.
Optionally, the static image sensor and the dynamic vision sensor may respectively capture images of the same shooting area, with a one-to-one correspondence guaranteed between the pixels of the images captured by the static image sensor and the pixels captured by the dynamic vision sensor. Then, in the time gap between two pictures taken by the static image sensor, one or more real-time light intensity change amounts captured at high speed in real time by the dynamic vision sensor can be obtained, and a high-quality fusion color image whose acquisition time lies between the two pictures can be accurately obtained by fusion.
Optionally, the static image sensor includes an APS sensor, a CCD sensor, and the like.
In another specific example, a dual-modal vision sensor may be used to simultaneously acquire the reference color image and the at least one real-time light intensity change amount corresponding to the reference color image.
Optionally, the dual-modal vision sensor may simultaneously capture, for the same shooting area, the reference color image and the real-time light intensity change amounts; the one or more real-time light intensity change amounts acquired after the acquisition moment of the reference color image may then be taken as the real-time light intensity change amounts corresponding to the reference color image.
S120: Generate a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
In this embodiment, the fusion color image is a color image that carries both the information of the reference color image and the information of each of the real-time light intensity change amounts.
The fusion color image may be generated according to the reference color image and each of the real-time light intensity change amounts as follows: in the chronological order of the acquisition times of the real-time light intensity change amounts, take one real-time light intensity change amount at a time and adjust the pixel value (for example, the luminance value of the pixel, or its R, G, and B values) of the pixel at the corresponding position in the fusion color image (or of each pixel in a pixel region determined by the pixel at the corresponding position), so that the pixel value of at least one pixel in the reference color image is updated once for each real-time light intensity change amount, finally obtaining the fusion color image.
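A minimal sketch of this sequential update is given below; the `(x, y, p, t)` event layout, NumPy arrays, and the equal per-channel split are illustrative assumptions, and the later embodiments describe several refinements (fusion regions, weighted adjustment, Y-channel or equal RGB adjustment):

```python
import numpy as np

def apply_events(reference_image, events):
    """Fold each real-time light intensity change into the reference
    color image, earliest event first; one pixel-value update per event.

    reference_image: H x W x 3 array (RGB).
    events: iterable of (x, y, p, t) tuples -- row, column, signed
            intensity change, generation time (assumed layout).
    """
    fused = reference_image.astype(np.float32)
    for x, y, p, _t in sorted(events, key=lambda e: e[3]):
        fused[x, y, :] += p / 3.0  # spread the change equally over R, G, B
    return np.clip(fused, 0, 255).astype(np.uint8)
```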
The technical solution of the embodiment of the present invention acquires a reference color image and at least one real-time light intensity change amount corresponding to the reference color image, and generates a fusion color image according to the reference color image and each of the real-time light intensity change amounts, proposing a dual-modal image fusion technology that fuses low-speed color images with high-speed light intensity change amounts; the features of the high-speed real-time light intensity change amounts are added to the fusion color image, improving its image accuracy and image quality.
On the basis of the above embodiments, generating the fusion color image according to the reference color image and each of the real-time light intensity change amounts may include: in the reference color image, adjusting the pixel values of the fusion regions respectively corresponding to the pixel positions of the real-time light intensity change amounts, so as to generate the fusion color image.
As described above, a real-time light intensity change amount reflects the luminance change of a certain pixel in the reference color image. Therefore, after the real-time light intensity change amount is acquired, only the luminance value of the matching pixel could be adjusted; however, such an adjustment would cause an abrupt luminance jump between that pixel and the other pixels, making the display effect of the fusion color image jarring and the fusion effect unsatisfactory.
For this reason, in this embodiment, the inventors propose to determine, based on the pixel position of each real-time light intensity change amount, a fusion region (a pixel set including multiple pixels) in the reference color image, and to adjust the pixel values of all pixels within the fusion region as a whole, so as to further improve the display effect of the fusion color image.
When determining the fusion region, the pixel position matching the real-time light intensity change amount may be taken as the center, and a pixel region of a set shape (for example, a rectangle or a circle) and a set size (for example, 10 pixels x 10 pixels, 100 pixels x 100 pixels, or a radius of a set number of pixels) may be determined as the fusion region. The advantage of this arrangement is that the luminance transition between pixel values in the fusion color image becomes smoother, improving its display effect.
FIG. 2 shows a flowchart of another image processing method in an embodiment of the present invention. This embodiment refines the above embodiments: adjusting the pixel values of the fusion regions respectively corresponding to the pixel positions of the real-time light intensity change amounts in the reference color image is embodied as: determining the target pixel position corresponding to the currently processed target real-time light intensity change amount; acquiring, in the reference color image, the target fusion region matching the target pixel position; and adjusting the pixel values of the target fusion region according to the target real-time light intensity change amount.
Correspondingly, as shown in FIG. 2, the method of this embodiment may include the following steps.
S210: Acquire a reference color image and at least one real-time light intensity change amount corresponding to the reference color image.
S220: From the at least one real-time light intensity change amount corresponding to the reference color image, take one in turn as the currently processed target real-time light intensity change amount.
In this embodiment, each real-time light intensity change amount has a generation time. Taking the generation time of the reference acquired image as the starting point, the at least one real-time light intensity change amount corresponding to the reference color image can be sorted in order of increasing distance from that generation time; the sorted result reflects the order in which the luminance of the pixel values in the reference color image changes. According to the sorted result, one real-time light intensity change amount can then be taken at a time to adjust the pixel values of one or more pixels in the reference color image.
S230: Determine the target pixel position corresponding to the currently processed target real-time light intensity change amount.
As described above, the pixels of the reference color image have a one-to-one correspondence with the pixels associated with the real-time light intensity change amounts; that is, each real-time light intensity change amount corresponds to a certain pixel position (the position of one pixel) in the reference color image.
In an optional implementation of this embodiment, the real-time light intensity change amount acquired by the dynamic vision sensor or the dual-modal vision sensor may take the form (X, Y, P, T), where "X, Y" is the event address, "P" is a four-valued event output (including a leading sign bit), and "T" is the time at which the event was produced.
The event address corresponds to a pixel position in the reference color image. Optionally, "X, Y" may be the row and column positions in the reference color image respectively, "P" is the specific value of the real-time light intensity change amount, and "T" is the generation time of the real-time light intensity change amount.
Correspondingly, after the currently processed target real-time light intensity change amount is acquired, the target pixel position corresponding to the target real-time light intensity change amount can be obtained.
S240: In the reference color image, acquire the target fusion region matching the target pixel position.
In this embodiment, to make the finally obtained fusion color image smoother, the pixel value of every pixel within the target fusion region matching the target pixel position may be adjusted.
In an optional implementation of this embodiment, acquiring the target fusion region matching the target pixel position in the reference color image may be: determining the target fusion region in the reference color image according to the pixel position and a preset extended fusion range.
Optionally, the extended fusion range may be a preset region range (for example, rectangular or circular) centered on the target pixel position, for example, a region of 100 pixels x 100 pixels centered on the target pixel position; it may also be a dynamically changing value learned through a neural network model, which this embodiment does not limit.
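As a sketch, the target fusion region for a preset rectangular extended fusion range can be determined as below (the half-size parameter `extend` and the clipping behaviour at image borders are illustrative assumptions):

```python
def target_fusion_region(shape, x, y, extend=50):
    """Bounds of a square fusion region centred on the target pixel
    position (x, y), clipped to the image; extend=50 yields roughly
    the 100 x 100 pixel region of the example above."""
    h, w = shape[0], shape[1]
    return (max(0, x - extend), min(h, x + extend + 1),
            max(0, y - extend), min(w, y + extend + 1))
```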
S250: Adjust the pixel values of the target fusion region according to the target real-time light intensity change amount.
In this embodiment, the target real-time light intensity change amount may be a relative luminance value (which may also be called a grayscale value). Based on the target real-time light intensity change amount, the luminance (grayscale) value of each pixel in the target fusion region, or the color value of each color (for example, the R, G, or B value, which can also be understood as the luminance or grayscale value of each color component), may be adjusted.
In an optional implementation of this embodiment, adjusting the pixel values of the target fusion region according to the target real-time light intensity change amount may be: acquiring the luminance value of each pixel in the target fusion region; and adjusting the luminance value of each pixel according to the target real-time light intensity change amount.
Optionally, if the pixel values of the pixels in the reference color image are represented in the RGB color space, the pixel values of the pixels in the target fusion region may first be converted into the YUV color space or the YCbCr color space according to a preset color space transformation formula.
Then, the target real-time light intensity change amount may be used to adjust the Y-channel (luminance component) pixel values of the pixels in the YUV color space, or the Y-channel pixel values of the pixels in the YCbCr color space; after the pixel value adjustment is completed, the inverse of the above color space transformation is used to convert the pixel values of the pixels in the target fusion region back into the RGB color space.
Specifically, the target real-time light intensity change amount may be a positive value (indicating an increase in a pixel's luminance) or a negative value (indicating a decrease). The target real-time light intensity change amount can thus be added directly to a pixel's Y-channel pixel value to obtain the pixel's new Y-channel pixel value.
Optionally, if the pixel values of the pixels in the reference color image are directly represented in the YUV color space or the YCbCr color space, the target real-time light intensity change amount may be used to directly adjust the Y-channel pixel values of the pixels of the target fusion region in the YUV color space, or the Y-channel pixel values of the pixels in the YCbCr color space, to obtain the new pixel values of the pixels in the target fusion region.
In an optional mode of this embodiment, the target real-time light intensity change amount may be used to adjust the luminance value of every pixel in the target fusion region uniformly; alternatively, a weight value may be set for each pixel in the target fusion region, with pixels closer to the target pixel position given larger weight values, so that the luminance value of each pixel in the target fusion region is adjusted individually according to the product of the weight value and the target real-time light intensity change amount.
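A sketch of the weighted luminance adjustment follows; the Gaussian falloff is an assumed weighting choice (the embodiment only requires that pixels closer to the target position receive larger weights), and `target_fusion_region` is the helper sketched above:

```python
import numpy as np

def adjust_luma(y_plane, x, y, delta, extend=50, sigma=20.0):
    """Adjust the Y (luma) channel inside the target fusion region,
    weighting pixels nearer the target pixel position more heavily.
    y_plane is assumed to be a float array of Y-channel values."""
    x0, x1, y0, y1 = target_fusion_region(y_plane.shape, x, y, extend)
    rows, cols = np.mgrid[x0:x1, y0:y1]
    weights = np.exp(-((rows - x) ** 2 + (cols - y) ** 2) / (2.0 * sigma ** 2))
    y_plane[x0:x1, y0:y1] += delta * weights
    np.clip(y_plane, 0, 255, out=y_plane)
    return y_plane
```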
S260: Determine whether all real-time light intensity change amounts have been processed: if yes, execute S270; otherwise, return to S220.
S270: Take the currently adjusted reference color image as the fusion color image.
When using the real-time light intensity change amounts to adjust the reference color image, the technical solution of the embodiment of the present invention determines the target pixel position corresponding to the currently processed target real-time light intensity change amount, acquires the target fusion region matching the target pixel position in the reference color image, and adjusts the pixel values of the target fusion region according to the target real-time light intensity change amount. While the image information contained in the target real-time light intensity change amount is added to the reference color image, a natural transition of the whole fusion process is also guaranteed, making the transition of pixel values in the fusion color image smoother and further improving its display effect.
On the basis of the above embodiments, adjusting the pixel values of the target fusion region according to the target real-time light intensity change amount may also be: calculating an illumination change mean value according to the target real-time light intensity change amount; and adjusting the R value, G value, and B value of each pixel in the target fusion region respectively according to the illumination change mean value.
This optional implementation avoids the extra computation of color space conversion during the pixel value adjustment of the target fusion region when the reference color image is represented in the RGB color space. A mode is therefore further proposed in which the target real-time light intensity change amount is used directly to adjust the R value, G value, and B value of each pixel in the target fusion region.
Specifically, considering that adjusting any single one of the R, G, or B channels of the pixels in the target fusion region with the target real-time light intensity change amount would introduce a color cast into those pixels, the R value, G value, and B value of each pixel may instead be adjusted simultaneously and equally by the same light intensity change amount, guaranteeing the display effect of the fusion color image while minimizing the amount of computation.
In this embodiment, since the target real-time light intensity change amount is used to simultaneously adjust the R value, G value, and B value of each pixel, it needs to be divided equally among the color channels. An illumination change mean value may therefore first be calculated according to the target real-time light intensity change amount.
In a specific example, if the target real-time light intensity change amount is A, A/3 may be taken as the illumination change mean value, or K1*(A/3) may be taken as the illumination change mean value, where K1 may be a preset adjustment scale coefficient. Of course, the illumination change mean value may also be calculated in other ways, which this embodiment does not limit.
When adjusting the R value, G value, and B value of each pixel in the target fusion region according to the illumination change mean value, the illumination change mean value may be used to adjust the R, G, and B values of every pixel in the target fusion region uniformly; alternatively, a weight value may be set for each pixel in the target fusion region, with pixels closer to the target pixel position given larger weight values, so that the R, G, and B values of each pixel in the target fusion region are adjusted individually according to the product of the weight value and the illumination change mean value.
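A sketch of the equal-split RGB adjustment follows (the uniform, unweighted variant and the default K1 are illustrative choices):

```python
import numpy as np

def adjust_rgb_equally(region, a, k1=1.0):
    """Adjust R, G and B by the same illumination change mean value
    K1 * (A / 3), which avoids introducing a colour cast.

    region: H x W x 3 slice of the target fusion region (RGB)."""
    mean_change = k1 * (a / 3.0)
    return np.clip(region.astype(np.float32) + mean_change, 0, 255)
```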
FIG. 3a shows a flowchart of another image processing method in an embodiment of the present invention. This embodiment refines the above embodiments: the operation of acquiring the at least one real-time light intensity change amount corresponding to the reference color image is embodied as: acquiring the image generation time of the reference color image; determining a light intensity change acquisition time segment according to the image generation time and a light intensity change accumulation duration; and determining the real-time light intensity change amounts detected within the light intensity change acquisition time segment as the at least one real-time light intensity change amount corresponding to the reference color image.
Correspondingly, as shown in FIG. 3a, the method of the embodiment of the present invention may include the following steps.
S310: Acquire a reference color image.
In this embodiment, the reference color image acquired in S310 may be a color image captured by a static image sensor, or a color image captured by a dual-modal vision sensor, etc.; the reference acquired image is the base image for generating the fusion color image.
S320: Acquire the image generation time of the reference color image.
In this embodiment, the reference color image in S320 may be the reference color image acquired in S310, or a fusion color image obtained by fusion in S370 (that is, the previously fused color image may be used as the reference color image to generate the next fusion color image).
Correspondingly, the image generation time of the reference color image may be the acquisition time at which the static image sensor or the dual-modal vision sensor captured the reference color image, or the image fusion time of a fusion color image obtained by fusing a reference color image with at least one real-time light intensity change amount.
S330: Determine a light intensity change acquisition time segment according to the image generation time and the light intensity change accumulation duration.
In this embodiment, the time interval at which the static image sensor or the dual-modal vision sensor captures reference color images, and the number of fusion color images to be inserted between the two reference color images, may be obtained in advance; the matching light intensity change accumulation duration is determined based on this time interval and this number.
In a specific example, if the time interval at which the static image sensor or the dual-modal vision sensor captures reference color images is 40 ms, and it is preset that 3 fusion color images are inserted between two reference color images consecutively captured by the device, the light intensity change accumulation duration can be determined as 40 ms / (3 + 1) = 10 ms. That is, each fusion color image is fused from the reference color image captured by the device 10 ms earlier, or the fusion color image fused 10 ms earlier, together with the one or more real-time light intensity change amounts acquired within those 10 ms.
In a specific example, if the image generation time is determined to be T1 and the light intensity change accumulation duration is Δt, a light intensity change acquisition time segment can be determined as [T1, T1+Δt].
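A sketch of this bookkeeping, with the 40 ms / 3-image example as defaults (the function and parameter names are assumptions):

```python
def acquisition_windows(t1, frame_interval_ms=40.0, n_inserted=3):
    """Split the gap between two sensor frames into light intensity
    change acquisition time segments of length
    delta_t = interval / (n_inserted + 1), e.g. 40 / (3 + 1) = 10 ms.
    Returns one [start, end) window per fusion color image."""
    dt = frame_interval_ms / (n_inserted + 1)
    return [(t1 + i * dt, t1 + (i + 1) * dt) for i in range(n_inserted)]
```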
S340: Determine the real-time light intensity change amounts detected within the light intensity change acquisition time segment as the at least one real-time light intensity change amount corresponding to the reference color image.
S350: Generate a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
S360: Determine whether the condition for acquiring a new reference color image is satisfied: if yes, return to S310; otherwise, execute S370.
S370: When the condition for acquiring a new reference color image is not satisfied, update the reference color image to the currently generated fusion color image, and return to S320.
In this embodiment, the condition for acquiring a new reference color image specifically refers to the acquisition moment of the next reference color image by the device used to capture the reference color image in S310.
If it is determined that the condition for acquiring a new reference color image is satisfied, the reference color image newly captured by the device is used as the reference color image and generation of new fusion color images continues; if it is determined that the condition is not satisfied, the most recently generated fusion color image is used as the reference color image and generation of new fusion color images continues.
It can be understood that the pixel values in a reference color image captured by the device are the most accurate, whereas each fusion color image, although it introduces one or more real-time light intensity change amounts, also introduces a certain fusion error during the fusion process. Therefore, if too many fusion color images are inserted between two device-captured reference color images, the fusion error accumulates and amplifies. In practical applications, the number of fusion color images inserted between two consecutively captured reference color images may therefore be chosen judiciously according to the accuracy requirements on the fusion color images.
FIG. 3b shows a schematic diagram of a fusion color image generation process applicable to an embodiment of the present invention. As shown in FIG. 3b, a reference color image 3110 is captured at time Tx by a static image sensor or a dual-modal vision sensor, and another reference color image 3120 is captured at time Ty by the same device. In total, three fusion color images are inserted between the two reference color images: the first fusion color image 3111 fused at time Ta, the second fusion color image 3112 fused at time Tb, and the third fusion color image 3113 fused at time Tc.
The first fusion color image 3111 is obtained by fusing the reference color image 3110 with at least one real-time light intensity change amount detected within the time segment [Tx, Ta] (the discrete image points shown in FIG. 3b); the second fusion color image 3112 is obtained by fusing the first fusion color image 3111 with at least one real-time light intensity change amount detected within [Ta, Tb]; and the third fusion color image 3113 is obtained by fusing the second fusion color image 3112 with at least one real-time light intensity change amount detected within [Tb, Tc].
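The cascade of FIG. 3b can be sketched as below, reusing `acquisition_windows` and any per-window fusion routine such as the `apply_events` sketch above (both are illustrative helpers, not a prescribed implementation):

```python
def insert_fused_frames(frame, events, windows, fuse):
    """Cascaded fusion: each fusion color image becomes the reference
    for the next window, as with images 3111 -> 3112 -> 3113."""
    reference, fused_frames = frame, []
    for t0, t1 in windows:
        window_events = [e for e in events if t0 <= e[3] < t1]
        reference = fuse(reference, window_events)
        fused_frames.append(reference)
    return fused_frames
```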
By selecting the light intensity change accumulation duration according to the accuracy requirements on the fusion color images, and thereby determining the light intensity change acquisition time segments, the technical solution of the embodiment of the present invention can flexibly set the time interval between adjacent fusion color images and the image accuracy of each fusion color image, further extending the general applicability of the technical solutions of the embodiments of the present invention.
FIG. 4a shows a flowchart of another image processing method in an embodiment of the present invention. This embodiment refines the above embodiments: the operation of generating the fusion color image according to the reference color image and each of the real-time light intensity change amounts is embodied as: inputting the reference color image and each of the real-time light intensity change amounts respectively into a pre-trained dual-modal fusion model, and acquiring the fusion color image output by the dual-modal fusion model.
Correspondingly, as shown in FIG. 4a, the method of this embodiment may specifically include the following steps.
S410: Acquire a reference color image and at least one real-time light intensity change amount corresponding to the reference color image.
S420: Input the reference color image and each of the real-time light intensity change amounts respectively into a pre-trained dual-modal fusion model, and acquire the fusion color image output by the dual-modal fusion model.
The first input of the dual-modal fusion model is configured to receive the reference color image, and the second input of the dual-modal fusion model is configured to receive each of the real-time light intensity change amounts.
In this embodiment, to further reduce the real-time computational load of the fusion image generation process, a dual-modal fusion model may be trained in advance. By inputting the reference color image and each of the real-time light intensity change amounts corresponding to the reference acquired image into the dual-modal fusion model respectively, the model can output the matching fusion color image in real time.
Specifically, as shown in FIG. 3b, assuming the first fusion color image 3111 is obtained by fusing the reference color image 3110 with at least one real-time light intensity change amount detected within [Tx, Ta], then after the reference color image 3110 is acquired at time Tx, it is first input into the first input of the dual-modal fusion model; within [Tx, Ta], each time a real-time light intensity change amount is detected, it is immediately input into the second input of the dual-modal fusion model, so that the model performs a cumulative fusion computation on the reference color image 3110 for each input real-time light intensity change amount. When the time point is determined to reach Ta, the fusion color image output by the dual-modal fusion model, i.e., the first fusion color image 3111, is acquired.
For this, the dual-modal fusion model needs to be trained in advance. Correspondingly, before acquiring the reference color image and the at least one real-time light intensity change amount corresponding to the reference color image, the method may further include: acquiring a training sample set, each training sample including a standard color image, at least one light intensity change amount, and a standard fusion color image; and training a set machine learning model with each of the training samples in the training sample set to obtain the dual-modal fusion model.
As described above, a static image sensor in the related technology cannot capture two consecutive standard color images separated by a short interval, yet the interval between the standard color image and the standard fusion color image in each training sample needs to be far shorter than the interval between two consecutive standard color images captured by a static image sensor.
Correspondingly, when constructing the training samples, frame interpolation may first be performed on two consecutively acquired standard color images to obtain an interpolated color image. Since the interpolation process introduces no additional image information, the image quality of the interpolated color image is not high enough for it to serve as a training sample of the dual-modal fusion model; advanced image processing techniques therefore need to be applied to optimize the interpolated color image so as to improve its image quality, and the optimized interpolated color image is taken as the standard fusion color image.
In this embodiment, the machine learning model trained to obtain the dual-modal fusion model may be a CNN (Convolutional Neural Network) model, an ANN (Artificial Neural Network) model, or another machine learning model; this embodiment does not limit this.
In an optional implementation of this embodiment, considering that the dual-modal fusion model has dual inputs, a twin (Siamese) convolutional network may be selected and trained to obtain the dual-modal fusion model.
Optionally, as shown in FIG. 4b, the twin convolutional network may include a first convolution module 4110, a second convolution module 4120, a fusion module 4130, and an output module 4140. The first convolution module 4110 includes the first input, and the second convolution module 4120 includes the second input; the outputs of the first convolution module 4110 and the second convolution module 4120 are respectively connected to the inputs of the fusion module 4130, the output of the fusion module 4130 is connected to the input of the output module 4140, and the output of the output module is used to output the fusion color image.
Specifically, the first convolution module 4110 and the second convolution module 4120 have the same network structure, which may specifically include an input convolution layer and multiple hidden layers. The input convolution layer may include one or more convolution units, and each hidden layer may include a max-pooling layer and one or more convolution units connected in sequence.
Generally, the first convolution module 4110 and the second convolution module 4120 in a twin convolutional network may share weight values during training. However, considering that the technical solutions of the embodiments of the present invention need to use image signals of two different modes, namely the reference color signal and the real-time light intensity change amounts, in this embodiment the first convolution module 4110 and the second convolution module do not share weight values during the training of the twin convolutional network.
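A PyTorch sketch of such a twin network is given below; the layer counts, channel widths, and up-sampling factor are assumptions for illustration, since the embodiment specifies only the module topology (two non-weight-sharing branches, a fusion module, and an output module):

```python
import torch
import torch.nn as nn

def make_branch(in_channels):
    """One branch: an input convolution layer followed by hidden layers,
    each a max-pooling layer plus convolution units."""
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2), nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2), nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
    )

class DualModalFusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Two separate branch instances, so weights are NOT shared.
        self.color_branch = make_branch(3)  # first input: reference color image
        self.event_branch = make_branch(1)  # second input: accumulated intensity changes
        self.fusion = nn.Conv2d(256, 128, 3, padding=1)  # fusion module
        self.output = nn.Sequential(                     # output module
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(128, 3, 3, padding=1),
        )

    def forward(self, color_img, event_map):
        features = torch.cat([self.color_branch(color_img),
                              self.event_branch(event_map)], dim=1)
        return self.output(torch.relu(self.fusion(features)))
```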
The fusion module 4130 is the core module of the dual-modal fusion model; it is configured to generate a fusion result according to the reference color image and the at least one real-time light intensity change amount corresponding to the reference color image, and to provide the fusion result to the output module 4140.
Optionally, based on the structure of the twin convolutional network, the output module 4140 may be designed with a single output or with dual outputs. When designed with a single output, it may output only the fusion color image; when designed with dual outputs, one path may output the fusion color image and the other path may output a light intensity change accumulation image corresponding to the at least one real-time light intensity change amount. In the light intensity change accumulation image, pixel positions with black pixel values (or another specific color) are positions where no light intensity change occurred, while pixel positions with non-black pixel values are positions where a light intensity change occurred, the non-black pixel value recording the specific light intensity change amount at the corresponding pixel position.
Optionally, when training the dual-modal fusion model, an L1-loss function may be used as the loss function; meanwhile, the optical flow (light intensity change) error is trained jointly using ADAM (A Method for Stochastic Optimization) and the back-propagation algorithm.
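A minimal training-loop sketch using these choices, reusing the model sketch above; a `train_loader` yielding batches of (standard color image, accumulated intensity-change map, standard fusion color image) is an assumed ingredient:

```python
import torch

model = DualModalFusionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # ADAM optimizer
l1_loss = torch.nn.L1Loss()                                # L1-loss function

for color_img, event_map, target in train_loader:
    pred = model(color_img, event_map)
    loss = l1_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()   # back-propagation of the fusion error
    optimizer.step()
```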
By using a pre-trained dual-modal fusion model to fuse the reference color image with each of its corresponding real-time light intensity change amounts into a fusion color image, the technical solution of the embodiment of the present invention minimizes the computing power consumed in the fusion process and further improves the generation efficiency and timeliness of the fusion color image.
On the basis of the above embodiments, the reference color image may be generated according to a voltage signal, acquired by the color image sensing circuit in the dual-modal vision sensor, characterizing the light intensity of the optical signal; the real-time light intensity change amount may be generated according to a current signal, acquired by the light intensity change sensing circuit in the dual-modal vision sensor, characterizing the light intensity change of the optical signal.
In an optional implementation of the embodiment of the present invention, the dual-modal vision sensor may specifically include a first sensing circuit (which may also be called the light intensity change sensing circuit) and a second sensing circuit (which may also be called the color image sensing circuit). The first sensing circuit is configured to extract an optical signal of a first set waveband from the target optical signal and output a current signal characterizing the light intensity change of the optical signal of the first set waveband; the second sensing circuit is configured to extract an optical signal of a second set waveband from the target optical signal and output a voltage signal characterizing the light intensity of the optical signal of the second set waveband.
Optionally, the first sensing circuit includes a first excitation-type photosensitive unit and a first inhibition-type photosensitive unit, both of which are configured to extract the optical signal of the first set waveband from the target optical signal and convert it into a current signal; the first sensing circuit is further configured to output, according to the difference between the current signals converted by the first excitation-type photosensitive unit and the first inhibition-type photosensitive unit, the current signal characterizing the light intensity change of the optical signal of the first set waveband.
Optionally, the second sensing circuit includes at least one second photosensitive unit configured to extract the optical signal of the second set waveband from the target optical signal and convert it into a current signal; the second sensing circuit is further configured to output, according to the current signal converted by the second photosensitive unit, the voltage signal characterizing the light intensity of the optical signal of the second set waveband.
The advantage of this arrangement is that generating the two modes of image signals, the reference color image and the real-time light intensity change amounts, with a single dual-modal vision sensor guarantees accurate alignment between the two modes of signals, further guaranteeing the image quality of the fusion color image.
In a second aspect, FIG. 5a is a flowchart of another image processing method provided by an embodiment of the present invention. This embodiment is applicable to inserting at least one fusion color image between two consecutively acquired color images. The method may be executed by an image processing apparatus, which may be implemented by software and/or hardware, and may generally be integrated in a terminal with an image processing function or directly in a dual-modal vision sensor. The method of this embodiment specifically includes the following steps.
S510: Acquire a plurality of color images through the color image sensing circuit in a dual-modal vision sensor.
In this embodiment, the dual-modal vision sensor captures a plurality of color images at a set acquisition time interval.
S520: Acquire real-time light intensity change amounts through the light intensity change sensing circuit in the dual-modal vision sensor.
S530: According to each of the color images and each of the real-time light intensity change amounts, generate at least one fusion color image to be inserted between two consecutive color images by using the method according to any embodiment of the present invention.
FIG. 5b is a schematic diagram of a process of inserting fusion color images into consecutive color images, applicable to an embodiment of the present invention. As shown in FIG. 5b, the color image sensing circuit in the dual-modal vision sensor samples image signals such as color image 1, color image 2, and color image 3 at a set sampling interval; by acquiring real-time light intensity change amounts through the light intensity change sensing circuit in the dual-modal vision sensor, one or more fusion color images (two in FIG. 5b) can be inserted between two consecutive color images.
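An end-to-end sketch of the insertion pipeline of FIG. 5b, reusing the `acquisition_windows` and `insert_fused_frames` helpers sketched earlier (the list-based sensor read-out is an illustrative assumption):

```python
def interpolate_stream(frames, frame_times, events, n_inserted, fuse):
    """Insert n_inserted fusion color images between every two
    consecutive color images captured by the dual-modal vision sensor."""
    out = []
    for frame, t0, t1 in zip(frames, frame_times, frame_times[1:]):
        out.append(frame)
        windows = acquisition_windows(t0, t1 - t0, n_inserted)
        out.extend(insert_fused_frames(frame, events, windows, fuse))
    out.append(frames[-1])
    return out
```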
The technical solution of the embodiment of the present invention breaks through the limitation of the prior art on the color image acquisition time interval: color images with a smaller acquisition time interval can be obtained, the image quality of the fused color images can be guaranteed to the greatest extent, and, by combining the advantages of the color images and the real-time light intensity change signals, image signals possessing different types of image feature information can be obtained.
In a third aspect, FIG. 6 is a structural diagram of an image processing apparatus provided by an embodiment of the present invention. As shown in FIG. 6, the apparatus 600 includes a fusion feature acquisition module 610 and a fusion image generation module 620.
The fusion feature acquisition module 610 is configured to acquire a reference color image and at least one real-time light intensity change amount corresponding to the reference color image.
The fusion image generation module 620 is configured to generate a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
The technical solution of the embodiment of the present invention acquires a reference color image and at least one real-time light intensity change amount corresponding to the reference color image, and generates a fusion color image according to the reference color image and each of the real-time light intensity change amounts, proposing a new dual-modal image fusion technology that fuses low-speed color images with high-speed light intensity change amounts; the features of the high-speed real-time light intensity change amounts are added to the fusion color image, improving its image accuracy and image quality.
On the basis of the above embodiments, the fusion image generation module 620 may include a region fusion unit configured to adjust, in the reference color image, the pixel values of the fusion regions respectively corresponding to the pixel positions of the real-time light intensity change amounts, so as to generate the fusion color image.
On the basis of the above embodiments, the region fusion unit may specifically include: a target pixel position determination subunit configured to determine the target pixel position corresponding to the currently processed target real-time light intensity change amount; a target fusion region acquisition subunit configured to acquire, in the reference color image, the target fusion region matching the target pixel position; and a target fusion region adjustment subunit configured to adjust the pixel values of the target fusion region according to the target real-time light intensity change amount.
On the basis of the above embodiments, the target fusion region acquisition subunit may be specifically configured to determine the target fusion region in the reference color image according to the pixel position and a preset extended fusion range.
On the basis of the above embodiments, the target fusion region adjustment subunit may be specifically configured to: acquire the luminance value of each pixel in the target fusion region; and adjust the luminance value of each pixel according to the target real-time light intensity change amount.
On the basis of the above embodiments, the target fusion region adjustment subunit may be specifically configured to: calculate an illumination change mean value according to the target real-time light intensity change amount; and adjust the R value, G value, and B value of each pixel in the target fusion region respectively according to the illumination change mean value.
On the basis of the above embodiments, the fusion feature acquisition module 610 may be specifically configured to: acquire the image generation time of the reference color image; determine a light intensity change acquisition time segment according to the image generation time and the light intensity change accumulation duration; and determine the real-time light intensity change amounts detected within the light intensity change acquisition time segment as the at least one real-time light intensity change amount corresponding to the reference color image.
On the basis of the above embodiments, the apparatus may further include a repeat execution module configured to: after the fusion color image is generated according to the reference color image and each of the real-time light intensity change amounts, when the condition for acquiring a new reference color image is not satisfied, update the reference color image to the currently generated fusion color image and return to the step of acquiring the at least one real-time light intensity change amount corresponding to the reference color image.
On the basis of the above embodiments, the fusion image generation module 620 may be specifically configured to: input the reference color image and each of the real-time light intensity change amounts respectively into a pre-trained dual-modal fusion model, and acquire the fusion color image output by the dual-modal fusion model, wherein the first input of the dual-modal fusion model is configured to receive the reference color image and the second input of the dual-modal fusion model is configured to receive each of the real-time light intensity change amounts.
On the basis of the above embodiments, the apparatus may further include a model training module configured to: before the reference color image and the at least one real-time light intensity change amount corresponding to the reference color image are acquired, acquire a training sample set, a training sample including a standard color image, at least one light intensity change amount, and a standard fusion color image; and train a set machine learning model with each of the training samples in the training sample set to obtain the dual-modal fusion model.
On the basis of the above embodiments, the standard fusion color image is obtained by performing frame interpolation and image optimization processing on two consecutively acquired standard color images.
On the basis of the above embodiments, the set machine learning model is a twin convolutional network; the twin convolutional network includes a first convolution module, a second convolution module, a fusion module, and an output module, the first convolution module including the first input and the second convolution module including the second input; the outputs of the first convolution module and the second convolution module are respectively connected to the inputs of the fusion module, the output of the fusion module is connected to the input of the output module, and the output of the output module is used to output the fusion color image; during the training of the twin convolutional network, the first convolution module and the second convolution module do not share weight values.
On the basis of the above embodiments, the reference color image is generated according to a voltage signal, acquired by the color image sensing circuit in a dual-modal vision sensor, characterizing the light intensity of the optical signal; the real-time light intensity change amount is generated according to a current signal, acquired by the light intensity change sensing circuit in the dual-modal vision sensor, characterizing the light intensity change of the optical signal.
The image processing apparatus provided by the embodiment of the present invention can execute the image processing method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
In a fourth aspect, FIG. 7 is a structural diagram of another image processing apparatus provided by an embodiment of the present invention. As shown in FIG. 7, the apparatus 700 includes a color image acquisition module 710, a real-time light intensity change acquisition module 720, and a fusion color image generation module 730.
The color image acquisition module 710 is configured to acquire a plurality of color images through the color image sensing circuit in a dual-modal vision sensor.
The real-time light intensity change acquisition module 720 is configured to acquire real-time light intensity change amounts through the light intensity change sensing circuit in the dual-modal vision sensor.
The fusion color image generation module 730 is configured to generate, according to each of the color images and each of the real-time light intensity change amounts, at least one fusion color image to be inserted between two consecutive color images by using the method according to any embodiment of the present invention.
The technical solution of the embodiment of the present invention breaks through the limitation of the prior art on the color image acquisition time interval: color images with a smaller acquisition time interval can be obtained, the image quality of the fused color images can be guaranteed to the greatest extent, and, by combining the advantages of the color images and the real-time light intensity change signals, image signals possessing different types of image feature information can be obtained.
The image processing apparatus provided by the embodiment of the present invention can execute the image processing method provided by any embodiment of the present invention, and has the functional modules and beneficial effects corresponding to the executed method.
In a fifth aspect, FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present invention. As shown in FIG. 8, the computer device includes a processor 80 and a memory 81, and may further include an input device 82 and an output device 83. The number of processors 80 in the computer device may be one or more, one processor 80 being taken as an example in FIG. 8; the processor 80, memory 81, input device 82, and output device 83 in the computer device may be connected by a bus or in other ways, connection by a bus being taken as an example in FIG. 8.
As a computer-readable storage medium, the memory 81 may be used to store software programs, computer-executable programs, and modules, such as the modules corresponding to the image processing method in the embodiments of the present invention. By running the software programs, computer-executable programs, and modules stored in the memory 81, the processor 80 executes the various functional applications and data processing of the computer device, thereby implementing the image processing method according to any embodiment of the present invention, the method including: acquiring a reference color image and at least one real-time light intensity change amount corresponding to the reference color image; and generating a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
Alternatively, by running the software programs, computer-executable programs, and modules stored in the memory 81, the processor 80 executes the various functional applications and data processing of the computer device to implement the other image processing method according to any embodiment of the invention, the method including: acquiring a plurality of color images through the color image sensing circuit in a dual-modal vision sensor; acquiring real-time light intensity change amounts through the light intensity change sensing circuit in the dual-modal vision sensor; and, according to each of the color images and each of the real-time light intensity change amounts, generating at least one fusion color image to be inserted between two consecutive color images by using the image processing method according to any embodiment of the present invention.
The memory 81 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and an application program required by at least one function, and the data storage area may store data created according to the use of the terminal, etc. In addition, the memory 81 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device. In some instances, the memory 81 may further include memory located remotely from the processor 80, which may be connected to the computer device through a network. Examples of such networks include, but are not limited to, the Internet, an intranet, a local area network, a mobile communication network, and combinations thereof.
The input device 82 may be used to receive input numerical or character information and to generate key signal inputs related to the user settings and function control of the computer device. The output device 83 may include a display device such as a display screen.
In a sixth aspect, referring to FIG. 9, an embodiment of the present invention further provides a computer-readable storage medium 900 containing computer-executable instructions (a computer program) which, when executed by a computer processor, are used to perform the image processing method according to any embodiment of the present invention, the method including: acquiring a reference color image and at least one real-time light intensity change amount corresponding to the reference color image; and generating a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
Alternatively, the computer-executable instructions, when executed by a computer processor, are used to perform still another image processing method according to any embodiment of the present invention, the method including: acquiring a plurality of color images through the color image sensing circuit in a dual-modal vision sensor; acquiring real-time light intensity change amounts through the light intensity change sensing circuit in the dual-modal vision sensor; and, according to each of the color images and each of the real-time light intensity change amounts, generating at least one fusion color image to be inserted between two consecutive color images by using the image processing method according to any embodiment of the present invention.
Of course, with respect to the computer-readable storage medium 900 containing computer-executable instructions (a computer program) provided by the embodiments of the present invention, the computer-executable instructions are not limited to the above method operations, and may also execute related operations in the methods provided by any embodiment of the present invention.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present invention can be implemented by means of software and the necessary general-purpose hardware, and of course also by hardware, but in many cases the former is the better embodiment. Based on this understanding, the technical solutions of the present invention, in essence or with respect to the part contributing to the prior art, can be embodied in the form of a software product. The computer software product can be stored in a computer-readable storage medium, such as a computer floppy disk, a read-only memory (ROM), a random access memory (RAM), a flash memory (FLASH), a hard disk, or an optical disk, and includes several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) to execute the methods of the various embodiments of the present invention.
Note that the above are only preferred embodiments of the present invention and the technical principles applied. Those skilled in the art will understand that the present invention is not limited to the specific embodiments described here, and that various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present invention. Therefore, although the present invention has been described in some detail through the above embodiments, it is not limited to the above embodiments and may include more other equivalent embodiments without departing from the concept of the present invention; the scope of the present invention is determined by the scope of the appended claims.
Claims (18)
- An image processing method, comprising: acquiring a reference color image and at least one real-time light intensity change amount corresponding to the reference color image; and generating a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
- The method according to claim 1, wherein generating the fusion color image according to the reference color image and each of the real-time light intensity change amounts comprises: in the reference color image, adjusting the pixel values of the fusion regions respectively corresponding to the pixel positions of the real-time light intensity change amounts, so as to generate the fusion color image.
- The method according to claim 2, wherein adjusting, in the reference color image, the pixel values of the fusion regions respectively corresponding to the pixel positions of the real-time light intensity change amounts comprises: determining a target pixel position corresponding to a currently processed target real-time light intensity change amount; acquiring, in the reference color image, a target fusion region matching the target pixel position; and adjusting the pixel values of the target fusion region according to the target real-time light intensity change amount.
- The method according to claim 3, wherein acquiring, in the reference color image, the target fusion region matching the target pixel position comprises: determining the target fusion region in the reference color image according to the pixel position and a preset extended fusion range.
- The method according to claim 3, wherein adjusting the pixel values of the target fusion region according to the target real-time light intensity change amount comprises: acquiring a luminance value of each pixel in the target fusion region; and adjusting the luminance value of each of the pixels according to the target real-time light intensity change amount.
- The method according to claim 3, wherein adjusting the pixel values of the target fusion region according to the target real-time light intensity change amount comprises: calculating an illumination change mean value according to the target real-time light intensity change amount; and adjusting an R value, a G value, and a B value of each pixel in the target fusion region respectively according to the illumination change mean value.
- The method according to claim 1, wherein acquiring the at least one real-time light intensity change amount corresponding to the reference color image comprises: acquiring an image generation time of the reference color image; determining a light intensity change acquisition time segment according to the image generation time and a light intensity change accumulation duration; and determining the real-time light intensity change amounts detected within the light intensity change acquisition time segment as the at least one real-time light intensity change amount corresponding to the reference color image.
- The method according to claim 1, after generating the fusion color image according to the reference color image and each of the real-time light intensity change amounts, further comprising: when a condition for acquiring a new reference color image is not satisfied, updating the reference color image to the currently generated fusion color image, and returning to the step of acquiring the at least one real-time light intensity change amount corresponding to the reference color image.
- The method according to claim 1, wherein generating the fusion color image according to the reference color image and each of the real-time light intensity change amounts comprises: inputting the reference color image and each of the real-time light intensity change amounts respectively into a pre-trained dual-modal fusion model, and acquiring the fusion color image output by the dual-modal fusion model, wherein a first input of the dual-modal fusion model is configured to receive the reference color image and a second input of the dual-modal fusion model is configured to receive each of the real-time light intensity change amounts.
- The method according to claim 9, before acquiring the reference color image and the at least one real-time light intensity change amount corresponding to the reference color image, further comprising: acquiring a training sample set, a training sample comprising a standard color image, at least one light intensity change amount, and a standard fusion color image; and training a set machine learning model with each of the training samples in the training sample set to obtain the dual-modal fusion model.
- The method according to claim 10, wherein the standard fusion color image is obtained by performing frame interpolation and image optimization processing on two consecutively acquired standard color images.
- The method according to claim 10, wherein the set machine learning model is a twin convolutional network; the twin convolutional network comprises a first convolution module, a second convolution module, a fusion module, and an output module, the first convolution module comprising the first input and the second convolution module comprising the second input; outputs of the first convolution module and the second convolution module are respectively connected to inputs of the fusion module, an output of the fusion module is connected to an input of the output module, and an output of the output module is used to output the fusion color image; wherein, during training of the twin convolutional network, the first convolution module and the second convolution module do not share weight values.
- The method according to any one of claims 1-12, wherein: the reference color image is generated according to a voltage signal, acquired by a color image sensing circuit in a dual-modal vision sensor, characterizing the light intensity of an optical signal; and the real-time light intensity change amount is generated according to a current signal, acquired by a light intensity change sensing circuit in the dual-modal vision sensor, characterizing the light intensity change of the optical signal.
- An image processing method, comprising: acquiring a plurality of color images through a color image sensing circuit in a dual-modal vision sensor; acquiring real-time light intensity change amounts through a light intensity change sensing circuit in the dual-modal vision sensor; and, according to each of the color images and each of the real-time light intensity change amounts, generating at least one fusion color image to be inserted between two consecutive color images by using the method according to any one of claims 1-13.
- An image processing apparatus, comprising: a fusion feature acquisition module configured to acquire a reference color image and at least one real-time light intensity change amount corresponding to the reference color image; and a fusion image generation module configured to generate a fusion color image according to the reference color image and each of the real-time light intensity change amounts.
- An image processing apparatus, comprising: a color image acquisition module configured to acquire a plurality of color images through a color image sensing circuit in a dual-modal vision sensor; a real-time light intensity change acquisition module configured to acquire real-time light intensity change amounts through a light intensity change sensing circuit in the dual-modal vision sensor; and a fusion color image generation module configured to generate, according to each of the color images and each of the real-time light intensity change amounts, at least one fusion color image to be inserted between two consecutive color images by using the method according to any one of claims 1-13.
- A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the image processing method according to any one of claims 1-13, or implements the image processing method according to claim 14.
- A computer-readable storage medium storing a computer program which, when executed by a processor, implements the image processing method according to any one of claims 1-13, or implements the image processing method according to claim 14.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011057291.X | 2020-09-29 | ||
CN202011057291.XA CN112200757B (zh) | 2020-09-29 | 2020-09-29 | Image processing method and apparatus, computer device and storage medium
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022068584A1 true WO2022068584A1 (zh) | 2022-04-07 |
Family
ID=74007146
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/118437 WO2022068584A1 (zh) | 2021-09-15 | Image processing method and apparatus, computer device and storage medium
Country Status (3)
Country | Link |
---|---|
CN (1) | CN112200757B (zh) |
TW (1) | TWI773526B (zh) |
WO (1) | WO2022068584A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116958203A (zh) * | 2023-08-01 | 2023-10-27 | 北京知存科技有限公司 | Image processing method and apparatus, electronic device and storage medium
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112200757B (zh) * | 2020-09-29 | 2024-08-02 | 北京灵汐科技有限公司 | 图像处理方法、装置、计算机设备及存储介质 |
CN113506320B (zh) * | 2021-07-15 | 2024-04-12 | 清华大学 | 图像处理方法及装置、电子设备和存储介质 |
CN113506322B (zh) * | 2021-07-15 | 2024-04-12 | 清华大学 | 图像处理方法及装置、电子设备和存储介质 |
CN113506321B (zh) * | 2021-07-15 | 2024-07-16 | 清华大学 | 图像处理方法及装置、电子设备和存储介质 |
CN113506323B (zh) * | 2021-07-15 | 2024-04-12 | 清华大学 | 图像处理方法及装置、电子设备和存储介质 |
CN113899312B (zh) * | 2021-09-30 | 2022-06-21 | 苏州天准科技股份有限公司 | 一种影像测量设备及影像测量方法 |
CN115082371B (zh) * | 2022-08-19 | 2022-12-06 | 深圳市灵明光子科技有限公司 | 图像融合方法、装置、移动终端设备及可读存储介质 |
TWI827411B (zh) * | 2022-12-21 | 2023-12-21 | 新唐科技股份有限公司 | 微控制電路及處理方法 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100208985A1 (en) * | 2009-02-17 | 2010-08-19 | Postech Academy - Industry Foundation | Apparatus and method of processing image, and record medium for the method |
CN107038695A (zh) * | 2017-04-20 | 2017-08-11 | 厦门美图之家科技有限公司 | 一种图像融合方法及移动设备 |
CN109361855A (zh) * | 2018-10-24 | 2019-02-19 | 深圳六滴科技有限公司 | 全景图像像素亮度校正方法、装置、全景相机和存储介质 |
CN110166704A (zh) * | 2019-05-30 | 2019-08-23 | 深圳市道创智能创新科技有限公司 | 多光谱相机的校准方法及装置 |
CN110298812A (zh) * | 2019-06-25 | 2019-10-01 | 浙江大华技术股份有限公司 | 一种图像融合处理的方法及装置 |
CN111382683A (zh) * | 2020-03-02 | 2020-07-07 | 东南大学 | 一种基于彩色相机与红外热成像仪特征融合的目标检测方法 |
CN112200757A (zh) * | 2020-09-29 | 2021-01-08 | 北京灵汐科技有限公司 | 图像处理方法、装置、计算机设备及存储介质 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9363427B2 (en) * | 2013-08-28 | 2016-06-07 | Disney Enterprises, Inc. | Device and method for calibrating a temporal contrast sensor with a frame-based camera sensor |
US10198660B2 (en) * | 2016-01-27 | 2019-02-05 | Samsung Electronics Co. Ltd. | Method and apparatus for event sampling of dynamic vision sensor on image formation |
US11202006B2 (en) * | 2018-05-18 | 2021-12-14 | Samsung Electronics Co., Ltd. | CMOS-assisted inside-out dynamic vision sensor tracking for low power mobile platforms |
CN108717691B (zh) * | 2018-06-06 | 2022-04-15 | 成都西纬科技有限公司 | 一种图像融合方法、装置、电子设备及介质 |
KR102640236B1 (ko) * | 2018-07-06 | 2024-02-26 | 삼성전자주식회사 | 동적 이미지 캡처 방법 및 장치 |
CN109003270B (zh) * | 2018-07-23 | 2020-11-27 | 北京市商汤科技开发有限公司 | 一种图像处理方法、电子设备及存储介质 |
CN110956581B (zh) * | 2019-11-29 | 2022-08-02 | 南通大学 | 一种基于双通道生成-融合网络的图像模态变换方法 |
CN111083402B (zh) * | 2019-12-24 | 2020-12-01 | 清华大学 | 双模态仿生视觉传感器 |
CN111402306A (zh) * | 2020-03-13 | 2020-07-10 | 中国人民解放军32801部队 | 一种基于深度学习的微光/红外图像彩色融合方法及系统 |
-
2020
- 2020-09-29 CN CN202011057291.XA patent/CN112200757B/zh active Active
-
2021
- 2021-09-15 TW TW110134300A patent/TWI773526B/zh active
- 2021-09-15 WO PCT/CN2021/118437 patent/WO2022068584A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112200757A (zh) | 2021-01-08 |
CN112200757B (zh) | 2024-08-02 |
TWI773526B (zh) | 2022-08-01 |
TW202213269A (zh) | 2022-04-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022068584A1 (zh) | 图像处理方法、装置、计算机设备及存储介质 | |
CN110288614B (zh) | 图像处理方法、装置、设备及存储介质 | |
CN109688351B (zh) | 一种图像信号处理方法、装置及设备 | |
CN109255774B (zh) | 一种图像融合方法、装置及其设备 | |
CN113454680A (zh) | 图像处理器 | |
CN109587560A (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
CN111611947A (zh) | 一种车牌检测方法、装置、设备及介质 | |
CN105007422B (zh) | 一种相位对焦方法及用户终端 | |
CN110958469A (zh) | 视频处理方法、装置、电子设备及存储介质 | |
WO2023005115A1 (zh) | 图像处理方法、图像处理装置、电子设备和可读存储介质 | |
CN113297937B (zh) | 一种图像处理方法、装置、设备及介质 | |
CN115115526A (zh) | 图像处理方法及装置、存储介质和图形计算处理器 | |
CN109587559A (zh) | 视频处理方法、装置、电子设备以及存储介质 | |
WO2019179242A1 (en) | Image processing method and electronic device | |
CN113538223A (zh) | 噪声图像生成方法、装置、电子设备及存储介质 | |
CN104702941A (zh) | 一种白点区域表示及判定方法 | |
CN114841897B (zh) | 基于自适应模糊核估计的深度去模糊方法 | |
CN114723837B (zh) | 图像处理方法、图像处理装置、终端及可读存储介质 | |
CN113658091A (zh) | 一种图像评价方法、存储介质及终端设备 | |
US9323981B2 (en) | Face component extraction apparatus, face component extraction method and recording medium in which program for face component extraction method is stored | |
CN113034412B (zh) | 视频处理方法及装置 | |
CN112149647B (zh) | 图像处理方法、装置、设备及存储介质 | |
CN104243889B (zh) | 红外热成像机芯数字视频信号传输格式匹配方法 | |
JP2018523396A (ja) | デジタル画像変換方法、装置、記憶媒体及び機器 | |
JP5615344B2 (ja) | 色特徴を抽出するための方法および装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21874245 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 11.07.2023) |
122 | Ep: pct application non-entry in european phase |
Ref document number: 21874245 Country of ref document: EP Kind code of ref document: A1 |