CN113077533A - Image fusion method and device and computer storage medium


Info

Publication number
CN113077533A
CN113077533A
Authority
CN
China
Prior art keywords
fusion
moving object
proportion
black
image
Prior art date
Legal status
Granted
Application number
CN202110298226.4A
Other languages
Chinese (zh)
Other versions
CN113077533B (en)
Inventor
瞿二平 (Qu Erping)
肖亮 (Xiao Liang)
Current Assignee
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd filed Critical Zhejiang Dahua Technology Co Ltd
Priority to CN202110298226.4A priority Critical patent/CN113077533B/en
Publication of CN113077533A publication Critical patent/CN113077533A/en
Application granted granted Critical
Publication of CN113077533B publication Critical patent/CN113077533B/en
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 - 2D [Two Dimensional] image generation
    • G06T11/60 - Editing figures and text; Combining figures or text
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T - CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 - Road transport of goods or passengers
    • Y02T10/10 - Internal combustion engine [ICE] based vehicles
    • Y02T10/40 - Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image fusion method, an image fusion device and a computer storage medium. The image fusion method includes: acquiring speed information of a moving object in a camera picture based on an analysis result of the camera picture; calculating a reference fusion proportion based on the brightness difference between the color pixels of the color channel and the black-and-white pixels of the black-and-white channel of the camera picture; adjusting the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain a local fusion proportion; and performing image fusion of the corresponding frame region according to the local fusion proportion, and performing image fusion of the remaining regions according to the reference fusion proportion. In this way, the fusion proportion can be adjusted dynamically and adaptively, meeting the differentiated fusion requirements of local areas in the shooting scene.

Description

Image fusion method and device and computer storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image fusion method and apparatus, and a computer storage medium.
Background
In the related art, objects in traffic scenes are photographed with a fusion camera, and different traffic scenes require different modes of fusing visible light and infrared light. Conventionally, the data of the infrared-light and visible-light channels are compared, or the shooting scene is identified by analyzing the infrared-light component of the infrared channel, and the fusion mode of visible light and infrared light is determined accordingly.
In the above related art, however, all local areas in the photographed scene, such as moving areas and static areas, or vehicle areas and pedestrian areas, are fused according to the single determined fusion mode, which easily leads to a poor fusion effect in local areas.
Disclosure of Invention
The application provides an image fusion method, an image fusion device and a computer storage medium.
In order to solve the above technical problem, one technical solution adopted by the present application is to provide an image fusion method, including:
acquiring speed information of a moving object in a camera picture based on an analysis result of the camera picture;
calculating a reference fusion ratio based on the brightness difference between the color pixels of the color channel and the black and white pixels of the black and white channel of the camera picture;
adjusting the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain a local fusion proportion;
and carrying out image fusion of the corresponding frame region according to the local fusion proportion, and carrying out image fusion of the rest regions according to the reference fusion proportion.
The step of adjusting the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain the local fusion proportion comprises the following steps:
judging the type of a moving target of the moving object based on the speed information of the moving object;
determining a fusion scale coefficient of the black and white channel based on the type of the moving target of the moving object;
and adjusting the fusion proportion of the frame region where the moving object is located by using the fusion proportion coefficient of the black and white channel to obtain the local fusion proportion.
Wherein the moving target types comprise motor vehicle, pedestrian, and non-vehicle-non-human;
the step of determining the fusion scale factor of the black and white channel based on the type of the moving target of the moving object comprises the following steps:
when the type of the moving target is a motor vehicle, setting the fusion scale coefficient of the black and white channel to be 0;
when the type of the moving target is a pedestrian, setting the fusion scale coefficient of the black and white channel to be 1;
and when the type of the moving target is non-vehicle-non-human, setting the fusion scale coefficient of the black and white channel by using the speed information of the moving object.
The calculation formula for setting the fusion scale coefficient of the black-and-white channel by using the speed information of the moving object is:
ratio = 1 - speed/4, clamped so that ratio >= 0
wherein ratio is the fusion scale coefficient of the black-and-white channel, and speed is the speed of the moving object.
Wherein, the step of performing image fusion of the corresponding frame region according to the local fusion proportion comprises:
calculating the brightness value of the color pixel by using the reference fusion proportion;
calculating the brightness value of the black and white pixel by using the local fusion proportion;
and superposing the brightness value of the color pixel and the brightness value of the black-white pixel to obtain the brightness value of the pixel after image fusion.
Wherein the step of determining the type of the moving target of the moving object based on the speed information of the moving object includes:
analyzing the pixel displacement value of a moving object in the picture of the camera of the adjacent frame through an intelligent algorithm;
acquiring the speed of the moving object based on the pixel displacement value of the moving object;
when the speed of the moving object is greater than or equal to a first speed threshold value, judging that the type of the moving target is a motor vehicle;
when the speed of the moving object is smaller than the first speed threshold and is larger than or equal to a second speed threshold, judging that the type of the moving target is non-vehicle and non-human;
and when the speed of the moving object is smaller than the second speed threshold value, determining that the type of the moving target is a pedestrian.
Wherein the image comprises a visible light image and an infrared light image.
In order to solve the above technical problem, another technical solution adopted by the present application is to provide an image fusion device comprising an acquisition module, a calculation module, an adjustment module and a fusion module, wherein:
the acquisition module is used for acquiring the speed information of a moving object in the camera picture based on the analysis result of the camera picture;
the calculation module is used for calculating a reference fusion proportion based on the brightness difference between the color pixels of the color channel and the black-white pixels of the black-white channel of the camera picture;
the adjusting module is used for adjusting the fusion proportion of the frame region where the moving object is located by utilizing the speed information of the moving object to obtain a local fusion proportion;
and the fusion module is used for carrying out image fusion of the corresponding frame region according to the local fusion proportion and carrying out image fusion of the rest regions according to the reference fusion proportion.
In order to solve the above technical problem, another technical solution adopted by the present application is: providing another image fusion device, wherein the image fusion device comprises a processor and a memory; the memory has stored therein a computer program for execution by the processor to implement the steps of the image fusion method as described above.
In order to solve the above technical problem, another technical solution adopted by the present application is: there is provided a computer storage medium, wherein the computer storage medium stores a computer program which, when executed, implements the steps of the image fusion method described above.
Different from the prior art, the beneficial effects of the present application are as follows: the image fusion device acquires the speed information of a moving object in a camera picture based on an analysis result of the camera picture; calculates a reference fusion proportion based on the brightness difference between the color pixels of the color channel and the black-and-white pixels of the black-and-white channel of the camera picture; adjusts the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain a local fusion proportion; and performs image fusion of the corresponding frame region according to the local fusion proportion and image fusion of the remaining regions according to the reference fusion proportion. In this way, the fusion proportion can be adjusted dynamically and adaptively, meeting the differentiated fusion requirements of local areas in the shooting scene.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flowchart of an embodiment of an image fusion method provided in the present application;
FIG. 2 is the reference fusion ratio map based on pixel brightness difference provided by the present application;
FIG. 3 is a detailed flowchart of step S103 of the image fusion method shown in FIG. 1;
FIG. 4 is a schematic structural diagram of an embodiment of an image fusion apparatus provided in the present application;
FIG. 5 is a schematic structural diagram of another embodiment of an image fusion apparatus provided in the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a computer storage medium provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
The prevailing schemes on the market mainly determine the fusion proportion from the brightness difference between the color channel and the black-and-white channel, and cannot meet the differentiated fusion requirements of moving pedestrians and moving vehicles in a structured scene. For example, the black-and-white channel of a moving vehicle is relatively overexposed and its Y (luma) information is sufficient, so little infrared-channel information is needed; the black-and-white channel of a moving pedestrian is only slightly overexposed, yet its Y information is insufficient, so infrared-channel information is needed instead. To realize image fusion according to these differentiated requirements, the present application provides an image fusion method.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of an embodiment of the image fusion method provided by the present application. The image fusion method is applied to an image fusion device, where the image fusion device may be a server, a terminal device, or a system in which a server and a terminal device cooperate. Accordingly, the parts of the image fusion device, such as its units, sub-units, modules and sub-modules, may all be disposed in the server, may all be disposed in the terminal device, or may be disposed in the server and the terminal device respectively.
Further, the server may be hardware or software. When the server is hardware, it may be implemented as a distributed cluster of multiple servers or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules, for example for providing distributed services, or as a single piece of software or software module, which is not limited here.
As shown in FIG. 1, the image fusion method of this embodiment specifically includes the following steps:
S101: Acquire the speed information of the moving object in the camera picture based on the analysis result of the camera picture.
The camera in the embodiments of the present application is a fusion camera, which synthesizes the pictures acquired by two groups of sensors: one sensor collects visible-light information to obtain a visible-light camera picture, and the other collects infrared-light information to obtain an infrared-light camera picture.
Specifically, the visible-light path of the visible-light camera picture is the camera data path that senses visible light and images by relying on it; the infrared-light path of the infrared-light camera picture is the camera data path that senses infrared light and images by relying on it.
The image fusion device analyzes the camera picture through an intelligent algorithm, and adjusts the fusion proportion of the area where the moving object is located by using the speed information of the moving object in the camera picture output by the intelligent algorithm. In addition, the intelligent algorithm can also output frame information of the moving object in the camera picture, namely a mark frame surrounding the moving object, and the region where the moving object is located can be regarded as all regions in the mark frame.
The visible-light camera picture and the infrared-light camera picture image the same shooting scene; their imaging light sources differ, but the scene content is substantially the same. The image fusion device can analyze the picture of either the visible-light or the infrared-light camera through the intelligent algorithm to obtain information about the shooting scene at capture time. The intelligent algorithm works on frames from the fusion camera, analyzing the static and dynamic objects in each frame to obtain the attributes of a moving object, such as the sex, height and expression of a moving person.
In the embodiment of the application, the image fusion device mainly acquires the motion attributes, such as the motion speed, of the moving object in the shooting scene through an intelligent algorithm.
S102: Calculate the reference fusion proportion based on the brightness difference between the color pixels of the color channel and the black-and-white pixels of the black-and-white channel of the camera picture.
The image fusion device calculates the reference fusion proportion between the color pixels and the black-and-white pixels of the camera picture through a mainstream fusion algorithm.
Specifically, from the basic parameters of the fusion camera, the image fusion device obtains the reference fusion ratio map based on pixel brightness difference shown in FIG. 2, whose abscissa is the brightness difference between the color pixels and the black-and-white pixels and whose ordinate is the fusion proportion between them. It should be noted that the reference fusion proportion in the embodiments of the present application ranges over [0, 1], while the ordinate of the reference fusion ratio map ranges over [0, 120]; the image fusion device converts the fusion proportion read from the map into the reference fusion proportion of the embodiments of the present application.
For example, the image fusion device may first calculate the brightness difference between the color pixel and the black-and-white pixel in the region where the moving object is located, then look up the corresponding fusion proportion in the reference fusion ratio map of FIG. 2 based on that brightness difference, and finally convert the value read from the map into the corresponding reference fusion proportion.
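As an illustration of this table lookup and the [0, 120] to [0, 1] conversion, a minimal Python sketch follows; the control points of the curve are hypothetical placeholders, since FIG. 2 is not reproduced numerically here:

```python
import numpy as np

# Hypothetical sample points of the FIG. 2 curve (assumed, for illustration only):
# x = brightness difference between color and black-and-white pixels,
# y = fusion proportion on the figure's [0, 120] ordinate scale.
CURVE_DIFF = np.array([-255.0, -64.0, 0.0, 64.0, 255.0])
CURVE_PROP = np.array([120.0, 90.0, 60.0, 30.0, 0.0])

def reference_fusion_proportion(color_y: np.ndarray, mono_y: np.ndarray) -> np.ndarray:
    """Per-pixel reference fusion proportion, converted to the [0, 1] range."""
    diff = color_y.astype(np.float64) - mono_y.astype(np.float64)
    prop_120 = np.interp(diff, CURVE_DIFF, CURVE_PROP)  # table lookup on the curve
    return prop_120 / 120.0                             # map [0, 120] to [0, 1]
```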
The current mainstream fusion algorithm mainly determines a single fusion proportion from the brightness difference between the color pixels and the black-and-white pixels, and then fuses the entire visible-light and infrared-light camera pictures at that proportion. This fusion mode does not consider the differentiated fusion requirements of the elements composing the camera picture, so the fusion effect cannot reach its best.
Therefore, the embodiments of the present application adjust the fusion proportion of the composition elements of the camera picture according to their differences, realizing differentiated fusion. Please continue with S103:
S103: Adjust the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain the local fusion proportion.
The image fusion device mainly adjusts the fusion proportion of the frame region where the moving object is located according to the motion attributes of the moving object output by the intelligent algorithm. In addition, the image fusion device may also adjust the fusion proportion of the frame region according to manual labeling, or according to moving-object categories labeled by a preset neural network. Taking the motion speed as the reference factor for adjusting the fusion proportion, the image fusion device may adjust the fusion proportion of the frame region where the moving object is located by the method shown in FIG. 3, which is a detailed flowchart of step S103 of the image fusion method shown in FIG. 1.
As shown in FIG. 3, step S103 may further include the following steps:
S201: Judge the moving target type of the moving object based on the speed information of the moving object.
Taking the visible-light camera picture as an example, the image fusion device analyzes, through the intelligent algorithm, the pixel displacement values of the same target across consecutive frames of the visible-light camera picture; the pixel displacement between adjacent frames represents the moving speed of the moving object in the visible-light camera picture.
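As a hedged sketch of this step, the per-frame speed could be taken as the displacement of the mark-frame center between adjacent frames; the box format (x1, y1, x2, y2) and the center-based metric are assumptions, since the text only speaks of pixel displacement values:

```python
import math

def estimate_speed(prev_box, curr_box):
    """Pixel displacement of the mark-frame center between adjacent frames, in pixels/frame."""
    px, py = (prev_box[0] + prev_box[2]) / 2.0, (prev_box[1] + prev_box[3]) / 2.0
    cx, cy = (curr_box[0] + curr_box[2]) / 2.0, (curr_box[1] + curr_box[3]) / 2.0
    return math.hypot(cx - px, cy - py)
```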
The moving target types in the embodiments of the present application include motor vehicle, pedestrian, and non-vehicle-non-human.
The image fusion device sets the initial scene to a vehicle scene by default, and in subsequent processing adapts different intelligent scene strategies through scene self-learning. Specifically, when the image fusion device detects that the number of moving objects in the visible-light camera picture is greater than 1 and the pixel displacement of the same moving object between adjacent visible-light frames is less than or equal to 1, it sets the current scene as a human shooting scene, in which the moving target type is pedestrian. When it detects that the number of moving objects is greater than 5 and the pixel displacement of the same moving object between adjacent frames is greater than or equal to 3, it sets the current scene as a vehicle shooting scene, in which the moving target type is motor vehicle. In other cases, the image fusion device sets the current scene as a non-vehicle-non-human shooting scene, in which the moving target type is non-vehicle-non-human.
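The scene self-learning rule above can be written out as the following sketch; checking the vehicle condition first is an assumption about how the two overlapping object-count conditions are disambiguated:

```python
def detect_scene(num_moving_objects: int, displacement: float) -> str:
    """Scene label from the moving-object count and the per-frame pixel displacement."""
    if num_moving_objects > 5 and displacement >= 3:
        return "vehicle_scene"              # moving target type: motor vehicle
    if num_moving_objects > 1 and displacement <= 1:
        return "human_scene"                # moving target type: pedestrian
    return "non_vehicle_non_human_scene"    # all other cases
```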
In addition, the image fusion device may also judge the moving target type directly from the speed of the moving object: when the speed of the moving object is greater than or equal to a first speed threshold, the moving target type is judged to be a motor vehicle; when the speed is less than the first speed threshold but greater than or equal to a second speed threshold, the type is judged to be non-vehicle-non-human; and when the speed is less than the second speed threshold, the type is judged to be a pedestrian.
It should be noted that these speed judgments are empirical; an operator may set different pixel-displacement thresholds according to the working requirements.
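Read as a threshold rule, the speed-based judgment of the two preceding paragraphs could be sketched as follows; the numeric thresholds are placeholders for the empirical values mentioned above:

```python
FIRST_SPEED_THRESHOLD = 3.0   # hypothetical value, pixels/frame
SECOND_SPEED_THRESHOLD = 1.0  # hypothetical value, pixels/frame

def classify_target(speed: float) -> str:
    """Moving target type judged from the speed of the moving object."""
    if speed >= FIRST_SPEED_THRESHOLD:
        return "motor_vehicle"
    if speed >= SECOND_SPEED_THRESHOLD:
        return "non_vehicle_non_human"
    return "pedestrian"
```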
S202: Determine the fusion scale coefficient of the black-and-white channel based on the moving target type of the moving object.
The image fusion device may determine the fusion scale coefficient of the black-and-white channel based on the moving target type determined in step S201.
Specifically, when the moving target type is a motor vehicle, the fusion scale coefficient of the black-and-white channel is set to 0; when the moving target type is a pedestrian, the coefficient is set to 1; and when the moving target type is non-vehicle-non-human, the coefficient is set using the speed information of the moving object.
When the moving target type is non-vehicle-non-human, the calculation formula for setting the fusion scale coefficient of the black-and-white channel using the speed information of the moving object is:
ratio = 1 - speed/4, clamped so that ratio >= 0
wherein ratio is the fusion scale coefficient of the black-and-white channel, and speed is the speed of the moving object.
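Putting the three cases of S202 together gives the sketch below; only the non-vehicle-non-human case has an explicit formula in the text, and the other two branches return the fixed values stated above:

```python
def mono_fusion_coefficient(target_type: str, speed: float) -> float:
    """Fusion scale coefficient of the black-and-white channel (step S202)."""
    if target_type == "motor_vehicle":
        return 0.0                      # vehicle: suppress the black-and-white channel
    if target_type == "pedestrian":
        return 1.0                      # pedestrian: keep the black-and-white channel in full
    return max(0.0, 1.0 - speed / 4.0)  # otherwise: ratio = 1 - speed/4, clamped at 0
```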
S203: Adjust the fusion proportion of the frame region where the moving object is located by using the fusion scale coefficient of the black-and-white channel to obtain the local fusion proportion.
The image fusion device adjusts the fusion proportion of the black-and-white channel in the frame region of the moving object according to the speed information of the moving object. Specifically, the image fusion device applies the fusion scale coefficient of the black-and-white channel on top of the reference fusion proportion calculated from the pixel brightness difference (read from the reference fusion ratio map), thereby obtaining the local fusion proportion.
S104: and carrying out image fusion of the corresponding frame region according to the local fusion proportion, and carrying out image fusion of the rest regions according to the reference fusion proportion.
For ordinary regions, the image fusion device performs image fusion directly according to the reference fusion proportion. For the frame region where the moving object is located, the image fusion device calculates the brightness contribution of the color pixel using the reference fusion proportion, calculates the brightness contribution of the black-and-white pixel using the local fusion proportion, and finally superposes the two to obtain the brightness value of the pixel after image fusion.
Expressed as a formula, the image fusion corresponding to the local fusion proportion gives the pixel Y value of the frame region where the moving object is located as:
pY = pYColor * FusRatio + pYMono * (1 - FusRatio) * ratio
wherein pY is the Y value of a pixel in the frame region where the moving object is located, pYColor is the Y value of the color pixel in that region, pYMono is the Y value of the black-and-white pixel in that region, FusRatio is the reference fusion proportion obtained by table lookup from FIG. 2, and ratio is the fusion scale coefficient of the black-and-white channel determined in the above steps.
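Applied per pixel over the frame region, the formula amounts to the sketch below; note that with ratio = 1 the same expression reduces to the reference fusion used for the remaining regions:

```python
import numpy as np

def fuse_frame_region(color_y: np.ndarray, mono_y: np.ndarray,
                      fus_ratio: np.ndarray, ratio: float) -> np.ndarray:
    """pY = pYColor * FusRatio + pYMono * (1 - FusRatio) * ratio, per pixel."""
    return color_y * fus_ratio + mono_y * (1.0 - fus_ratio) * ratio
```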
In this way, the image fusion device can dynamically adjust the fusion proportion of the black-and-white channel according to the moving speed of the moving object in each motion region. When a moving vehicle is present, the black-and-white channel suffers from license-plate glare that makes the characters indistinguishable; forcibly lowering the overall brightness of the black-and-white channel across the whole camera picture also lowers the brightness in pedestrian regions, and fails the requirement that the black-and-white channel supply brightness for pedestrians in a structured scene. In the overall structured-scene scheme of this embodiment, therefore, regions where vehicles are driving are fused with a high proportion of the color-channel Y, while other positions, including static regions and pedestrian motion regions, are fused with a high proportion of the black-and-white-channel Y. This guarantees the best effect for both pedestrians and vehicles, and because the fusion proportion is adjusted dynamically and adaptively, the best image effect is maintained whether vehicles appear or disappear.
Using the above image fusion method together with a co-located fusion algorithm, the image fusion device fuses the visible-light imaging data of the visible-light camera picture with the infrared-light imaging data of the infrared-light camera picture, and finally outputs a merged data picture, i.e., the camera picture that is ultimately displayed. This picture combines the advantages of visible-light and infrared-light imaging, optimizing the overall image quality and guaranteeing the image effect of the whole scene.
Specifically, the co-located fusion algorithm fuses the pixels at the same positions of the visible-light image and the infrared-light image. The algorithm splits the visible-light YUV data into a Y component and a UV component, and fuses the visible-light Y component with the infrared-light Y component (only the Y component of the infrared path is used). The Y-component fusion works as follows, taking a single pixel p as an example. If p lies in a mid-low-frequency region, let vis_LM be its Y component in the visible-light channel and nir_LM its Y component in the infrared-light channel; the fused Y component fusion_LM is computed from vis_LM and nir_LM by the algorithm's mid-low-frequency fusion operator, and the energy variation coefficient at p is c = fusion_LM / vis_LM. The UV component is taken from the visible path: the fused pixel's UV equals c times the visible-path UV component. If p lies in a high-frequency region, let vis_H and nir_H be its Y components in the visible and infrared channels; the fused Y component is fusion_H = vis_H * alpha + nir_H * (1 - alpha), where the weight alpha is determined by the ratio of the overall brightness of the visible path to that of the infrared path. The UV component is again taken from the visible path and scaled by the energy variation coefficient c. The YUV data of the fused image are thus determined, and the final fusion effect matches the expected intelligent scene.
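A per-pixel sketch of this co-located YUV fusion is given below; the mid-low-frequency Y fusion operator is not specified in the text, so max() is used purely as a placeholder, and the frequency classification and the weight alpha are taken as given inputs:

```python
import numpy as np

def fuse_pixel_yuv(vis_y: float, nir_y: float, vis_uv: np.ndarray,
                   is_high_freq: bool, alpha: float):
    """Fuse the Y and UV components of one pair of co-located pixels."""
    if is_high_freq:
        # high-frequency region: weighted blend of the two Y components
        fused_y = vis_y * alpha + nir_y * (1.0 - alpha)
    else:
        # mid-low-frequency region: placeholder for the unspecified fusion operator
        fused_y = max(vis_y, nir_y)
    c = fused_y / vis_y if vis_y > 0 else 1.0  # energy variation coefficient c = fusion / vis
    fused_uv = c * vis_uv                      # UV comes from the visible path, scaled by c
    return fused_y, fused_uv
```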
In this embodiment, the image fusion device acquires the speed information of the moving object in the camera picture based on the analysis result of the camera picture; calculates the reference fusion proportion based on the brightness difference between the color pixels of the color channel and the black-and-white pixels of the black-and-white channel of the camera picture; adjusts the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain the local fusion proportion; and performs image fusion of the corresponding frame region according to the local fusion proportion and image fusion of the remaining regions according to the reference fusion proportion. In this way, the fusion proportion can be adjusted dynamically and adaptively, meeting the differentiated fusion requirements of local areas in the shooting scene.
It will be understood by those skilled in the art that, in the method of the present invention, the order in which the steps are written does not imply a strict execution order or any limitation on the implementation; the specific execution order of the steps should be determined by their functions and possible internal logic.
To implement the image fusion method of the foregoing embodiments, the present application further provides an image fusion device; refer to FIG. 4, which is a schematic structural diagram of an embodiment of the image fusion device provided by the present application.
As shown in FIG. 4, the image fusion device 400 of this embodiment includes an obtaining module 41, a calculating module 42, an adjusting module 43 and a fusion module 44, wherein:
the obtaining module 41 is configured to obtain speed information of a moving object in the camera frame based on an analysis result of the camera frame;
the calculating module 42 is configured to calculate a reference fusion ratio based on a luminance difference between a color pixel of the color channel and a black-and-white pixel of the black-and-white channel of the camera image;
the adjusting module 43 is configured to adjust the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object, so as to obtain a local fusion proportion;
the fusion module 44 is configured to perform image fusion of the corresponding frame region according to the local fusion ratio, and perform image fusion of the remaining regions according to the reference fusion ratio.
To implement the image fusion method of the above embodiments, the present application further provides another image fusion device; refer to FIG. 5, which is a schematic structural diagram of another embodiment of the image fusion device provided by the present application.
As shown in FIG. 5, the image fusion device 500 of this embodiment includes a processor 51, a memory 52, an input/output device 53 and a bus 54.
The processor 51, the memory 52, and the input/output device 53 are respectively connected to the bus 54, the memory 52 stores a computer program, and the processor 51 is configured to execute the computer program to implement the image fusion method according to the above-mentioned embodiment.
In this embodiment, the processor 51 may also be referred to as a CPU (Central Processing Unit). The processor 51 may be an integrated circuit chip with signal processing capability. The processor 51 may also be a general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The processor 51 may also be a GPU (Graphics Processing Unit), also called a display core, visual processor or display chip: a microprocessor dedicated to image computation on personal computers, workstations, game consoles and some mobile devices (such as tablet computers and smartphones). The GPU converts and drives the display information required by the computer system, provides line-scanning signals to the display and controls it to display correctly; it is an important element connecting the display to the motherboard and one of the important devices for human-machine interaction. The graphics card, of which the GPU is the core, is an important component of the computer host and is responsible for outputting display graphics, which matters particularly for professional graphic design. A general-purpose processor may be a microprocessor, or the processor 51 may be any conventional processor or the like.
The present application also provides a computer storage medium. As shown in FIG. 6, the computer storage medium 600 stores a computer program 61 which, when executed by a processor, implements the method described in the embodiments of the image fusion method of the present application.
When the methods of the embodiments of the image fusion method of the present application are implemented in the form of software functional units and sold or used as independent products, they may be stored in a device, for example a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage media include various media capable of storing program code, such as USB flash drives, removable hard disks, Read-Only Memory (ROM), Random Access Memory (RAM), magnetic disks and optical disks.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. An image fusion method, characterized in that the image fusion method comprises:
acquiring speed information of a moving object in a camera picture based on an analysis result of the camera picture;
calculating a reference fusion ratio based on the brightness difference between the color pixels of the color channel and the black and white pixels of the black and white channel of the camera picture;
adjusting the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain a local fusion proportion;
and carrying out image fusion of the corresponding frame region according to the local fusion proportion, and carrying out image fusion of the rest regions according to the reference fusion proportion.
2. The image fusion method according to claim 1,
the step of adjusting the fusion proportion of the frame region where the moving object is located by using the speed information of the moving object to obtain the local fusion proportion comprises the following steps:
judging the type of a moving target of the moving object based on the speed information of the moving object;
determining a fusion scale coefficient of the black and white channel based on the type of the moving target of the moving object;
and adjusting the fusion proportion of the frame region where the moving object is located by using the fusion proportion coefficient of the black and white channel to obtain the local fusion proportion.
3. The image fusion method according to claim 2,
the moving target types comprise motor vehicle, pedestrian, and non-vehicle-non-human;
the step of determining the fusion scale factor of the black and white channel based on the type of the moving target of the moving object comprises the following steps:
when the type of the moving target is a motor vehicle, setting the fusion scale coefficient of the black and white channel to be 0;
when the type of the moving target is a pedestrian, setting the fusion scale coefficient of the black and white channel to be 1;
and when the type of the moving target is non-vehicle-non-human, setting the fusion scale coefficient of the black and white channel by using the speed information of the moving object.
4. The image fusion method according to claim 3,
the calculation formula for setting the fusion scale coefficient of the black and white channel by using the speed information of the moving object is:
ratio = 1 - speed/4, clamped so that ratio >= 0
wherein ratio is the fusion scale coefficient of the black and white channel, and speed is the speed of the moving object.
5. The image fusion method according to claim 3,
the step of performing image fusion of the corresponding frame region according to the local fusion proportion comprises the following steps:
calculating the brightness value of the color pixel by using the reference fusion proportion;
calculating the brightness value of the black and white pixel by using the local fusion proportion;
and superposing the brightness value of the color pixel and the brightness value of the black-white pixel to obtain the brightness value of the pixel after image fusion.
6. The image fusion method according to claim 3,
the step of judging the type of the moving target of the moving object based on the speed information of the moving object comprises the following steps:
analyzing the pixel displacement value of a moving object in the picture of the camera of the adjacent frame through an intelligent algorithm;
acquiring the speed of the moving object based on the pixel displacement value of the moving object;
when the speed of the moving object is greater than or equal to a first speed threshold value, judging that the type of the moving target is a motor vehicle;
when the speed of the moving object is smaller than the first speed threshold and is larger than or equal to a second speed threshold, judging that the type of the moving target is non-vehicle and non-human;
and when the speed of the moving object is smaller than the second speed threshold value, determining that the type of the moving target is a pedestrian.
7. The image fusion method of claim 1, wherein the image comprises a visible light image and an infrared light image.
8. An image fusion device, characterized in that the image fusion device comprises an acquisition module, a calculation module, an adjustment module and a fusion module, wherein:
the acquisition module is used for acquiring the speed information of a moving object in the camera picture based on the analysis result of the camera picture;
the calculation module is used for calculating a reference fusion proportion based on the brightness difference between the color pixels of the color channel and the black-white pixels of the black-white channel of the camera picture;
the adjusting module is used for adjusting the fusion proportion of the frame region where the moving object is located by utilizing the speed information of the moving object to obtain a local fusion proportion;
and the fusion module is used for carrying out image fusion of the corresponding frame region according to the local fusion proportion and carrying out image fusion of the rest regions according to the reference fusion proportion.
9. An image fusion apparatus, characterized in that the image fusion apparatus comprises a processor and a memory; the memory stores a computer program, and the processor is used for executing the computer program to realize the steps of the image fusion method according to any one of claims 1-7.
10. A computer storage medium storing a computer program which, when executed, performs the steps of the image fusion method according to any one of claims 1 to 7.
CN202110298226.4A 2021-03-19 2021-03-19 Image fusion method and device and computer storage medium Active CN113077533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110298226.4A CN113077533B (en) 2021-03-19 2021-03-19 Image fusion method and device and computer storage medium


Publications (2)

Publication Number Publication Date
CN113077533A (en) 2021-07-06
CN113077533B CN113077533B (en) 2023-05-12

Family

ID=76612826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110298226.4A Active CN113077533B (en) 2021-03-19 2021-03-19 Image fusion method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN113077533B (en)


Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109479093A (en) * 2016-07-22 2019-03-15 索尼公司 Image processing apparatus and image processing method
CN106780330A (en) * 2016-12-08 2017-05-31 中国人民解放军国防科学技术大学 A kind of super resolution ratio reconstruction method based on colored and black and white dual camera
US20210034901A1 (en) * 2018-10-15 2021-02-04 Tencent Technology (Shenzhen) Company Limited Target object recognition method and apparatus, storage medium, and electronic device
US20200296305A1 (en) * 2018-12-17 2020-09-17 SZ DJI Technology Co., Ltd. Image processing method and apparatus
WO2020171281A1 (en) * 2019-02-22 2020-08-27 써모아이 주식회사 (ThermoEye Inc.) Visible light and infrared fusion image-based object detection method and apparatus
CN112217962A (en) * 2019-07-10 2021-01-12 杭州海康威视数字技术股份有限公司 Camera and image generation method
CN111028190A (en) * 2019-12-09 2020-04-17 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111586314A (en) * 2020-05-25 2020-08-25 浙江大华技术股份有限公司 Image fusion method and device and computer storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115861146A (en) * 2023-02-28 2023-03-28 季华实验室 Target-shielded processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113077533B (en) 2023-05-12


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant