CN112950502B - Image processing method and device, electronic equipment and storage medium

Info

Publication number
CN112950502B
Authority
CN
China
Prior art keywords
image, pixel, light images, motion, visible light
Prior art date
Legal status
Active
Application number
CN202110221009.5A
Other languages
Chinese (zh)
Other versions
CN112950502A (en)
Inventor
田毅
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110221009.5A
Publication of CN112950502A
Priority to PCT/CN2021/137515 (published as WO2022179251A1)
Application granted
Publication of CN112950502B
Legal status: Active

Classifications

    • G06T5/70
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/10016 Video; Image sequence
    • G06T2207/10048 Infrared image
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method and device, electronic equipment and a storage medium, wherein the image processing method comprises the following steps: acquiring a plurality of visible light images and a plurality of invisible light images; determining image registration parameters based on the plurality of invisible light images; and registering the plurality of visible light images based on the image registration parameters.

Description

Image processing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Currently, there are two main types of video noise reduction methods: spatial domain noise reduction and time domain noise reduction. A spatial domain noise reduction method only needs to process the current video frame, whereas a time domain noise reduction method processes the current video frame with reference to video frames other than the current video frame (simply referred to as reference frames). For a time domain noise reduction method, the reference frame and the current video frame need to be registered before the noise reduction processing is carried out, and current registration methods suffer from a high registration error rate.
Disclosure of Invention
In order to solve the technical problems, embodiments of the present application provide an image processing method and apparatus, an electronic device, and a storage medium.
The embodiment of the application provides an image processing method, which comprises the following steps:
acquiring a plurality of visible light images and a plurality of invisible light images;
determining image registration parameters based on the plurality of invisible light images;
and registering the plurality of visible light images based on the image registration parameters.
In some optional embodiments of the present application, the acquiring a plurality of visible light images and a plurality of invisible light images includes:
the method comprises the steps of acquiring a plurality of visible light images acquired by a first image sensor and a plurality of invisible light images acquired by a second image sensor, wherein the plurality of visible light images and the plurality of invisible light images have a corresponding relation according to acquisition time.
In some optional embodiments of the present application, the plurality of visible light images includes a first image and a second image, and the plurality of invisible light images includes a third image and a fourth image, wherein the first image corresponds to the third image, and the second image corresponds to the fourth image;
The determining an image registration parameter based on the plurality of invisible light images includes:
extracting a first feature point set from the third image and a second feature point set from the fourth image;
determining at least one pair of feature points with a matching relationship from the first feature point set and the second feature point set;
image registration parameters are determined based on the coordinate information of the at least one pair of feature points.
In some optional embodiments of the present application, the method further comprises:
adjusting the image registration parameters based on calibration data; wherein the calibration data is determined based on a relative positional relationship of the first image sensor and the second image sensor.
In some optional embodiments of the present application, the registering the plurality of visible light images based on the image registration parameter includes:
and transforming the second image based on the image registration parameters to obtain a fifth image registered with the first image.
In some optional embodiments of the present application, the method further comprises:
and carrying out image fusion processing on the fifth image and the first image to obtain a sixth image, wherein the sixth image is an image of the first image after noise is removed.
In some optional embodiments of the present application, the performing image fusion processing on the fifth image and the first image includes:
and correspondingly averaging the pixel values of all the pixel points in the fifth image and the pixel values of all the pixel points in the first image to obtain the pixel values of all the pixel points in the sixth image.
In some optional embodiments of the present application, the performing image fusion processing on the fifth image and the first image includes:
generating a motion mask image for determining a motion region and a non-motion region;
determining the pixel value of each pixel point in the motion area in the first image as the pixel value of each pixel point in the motion area in the sixth image;
and correspondingly averaging the pixel values of all the pixel points in the non-motion area in the fifth image and the pixel values of all the pixel points in the non-motion area in the first image to obtain the pixel values of all the pixel points in the non-motion area in the sixth image.
In some optional embodiments of the present application, the generating a motion mask image includes:
correspondingly calculating difference values between the pixel values of all the pixel points in the fifth image and the pixel values of all the pixel points in the first image to obtain the pixel values of all the pixel points in the motion mask image;
The region formed by the pixel points with the pixel values larger than or equal to the threshold value in the motion mask image is a motion region, and the region formed by the pixel points with the pixel values smaller than the threshold value in the motion mask image is a non-motion region.
In some optional embodiments of the present application, the invisible light image is an infrared image.
An embodiment of the present application provides an image processing apparatus, including:
an acquisition unit configured to acquire a plurality of visible light images and a plurality of invisible light images;
a parameter determination unit configured to determine an image registration parameter based on the plurality of invisible light images;
and the image registration unit is used for registering the plurality of visible light images based on the image registration parameters.
In some optional embodiments of the present application, the acquiring unit is configured to acquire a plurality of visible light images acquired by the first image sensor and a plurality of invisible light images acquired by the second image sensor, where the plurality of visible light images and the plurality of invisible light images have a corresponding relationship according to an acquisition time.
In some optional embodiments of the present application, the plurality of visible light images includes a first image and a second image, and the plurality of invisible light images includes a third image and a fourth image, wherein the first image corresponds to the third image, and the second image corresponds to the fourth image;
The parameter determining unit is used for extracting a first characteristic point set from the third image and extracting a second characteristic point set from the fourth image; determining at least one pair of feature points with a matching relationship from the first feature point set and the second feature point set; image registration parameters are determined based on the coordinate information of the at least one pair of feature points.
In some optional embodiments of the present application, the parameter determining unit is further configured to adjust the image registration parameter based on calibration data; wherein the calibration data is determined based on a relative positional relationship of the first image sensor and the second image sensor.
In some optional embodiments of the present application, the image registration unit is configured to transform the second image based on the image registration parameter to obtain a fifth image registered with the first image.
In some optional embodiments of the present application, the apparatus further comprises:
and the image fusion unit is used for carrying out image fusion processing on the fifth image and the first image to obtain a sixth image, wherein the sixth image is an image of the first image after noise is removed.
In some optional embodiments of the present application, the image fusion unit is configured to average the pixel value of each pixel point in the fifth image and the pixel value of each pixel point in the first image correspondingly, so as to obtain the pixel value of each pixel point in the sixth image.
In some optional embodiments of the present application, the image fusion unit is configured to generate a motion mask image, where the motion mask image is used to determine a motion region and a non-motion region; determining the pixel value of each pixel point in the motion area in the first image as the pixel value of each pixel point in the motion area in the sixth image; and correspondingly averaging the pixel values of all the pixel points in the non-motion area in the fifth image and the pixel values of all the pixel points in the non-motion area in the first image to obtain the pixel values of all the pixel points in the non-motion area in the sixth image.
In some optional embodiments of the present application, the image fusion unit is configured to obtain a difference value corresponding to a pixel value of each pixel point in the fifth image and a pixel value of each pixel point in the first image, so as to obtain a pixel value of each pixel point in the motion mask image; the region formed by the pixel points with the pixel values larger than or equal to the threshold value in the motion mask image is a motion region, and the region formed by the pixel points with the pixel values smaller than the threshold value in the motion mask image is a non-motion region.
In some optional embodiments of the present application, the invisible light image is an infrared image.
The embodiment of the application also provides electronic equipment, which comprises a memory and a processor, wherein the memory stores a computer program, and the computer program when executed by the processor causes the processor to execute the image processing method described in the embodiment.
The embodiment of the application also provides a computer readable storage medium on which a computer program is stored; the computer program, when executed by a processor, implements the image processing method described in the above embodiments.
According to the above technical solutions, the image registration parameters are determined based on the plurality of invisible light images. Compared with determining the image registration parameters directly from the plurality of visible light images, the invisible light images carry richer dark-region information and capture the features of real objects more accurately, so the computed image registration parameters are more accurate. Registering the plurality of visible light images with these image registration parameters therefore improves their registration accuracy and provides a guarantee for the subsequent noise reduction processing of the visible light images.
Drawings
FIG. 1 is a flow chart of time domain noise reduction provided in an embodiment of the present application;
fig. 2 is a schematic flow chart of an image processing method according to an embodiment of the present application;
fig. 3 is a schematic diagram of a correspondence between a visible light image sequence and an invisible light image sequence according to an embodiment of the present application;
fig. 4 is a flowchart of image fusion according to a motion mask image provided in an embodiment of the present application;
FIG. 5 is a schematic diagram I of the principles provided by the embodiments of the present application;
FIG. 6 is a schematic flow chart of calculating a registration transformation matrix provided in an embodiment of the present application;
FIG. 7 is a schematic diagram II of the principles provided by the embodiments of the present application;
FIG. 8 is a schematic illustration of determining registration transformation parameters for a split region provided by an embodiment of the present application;
fig. 9 is a schematic structural composition diagram of an image processing apparatus provided in the embodiment of the present application;
fig. 10 is a schematic structural composition diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Various exemplary embodiments of the present application will now be described in detail with reference to the accompanying drawings. It should be noted that: the relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present application unless it is specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
The following description of at least one exemplary embodiment is merely exemplary in nature and is in no way intended to limit the application, its application, or uses.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but are intended to be part of the specification where appropriate.
It should be noted that: like reference numerals and letters denote like items in the following figures, and thus once an item is defined in one figure, no further discussion thereof is necessary in subsequent figures.
To facilitate understanding of the technical solutions of the embodiments of the present application, related technologies are described below. The following description of related technologies is provided to aid understanding of the technical solutions of the embodiments of the present application and is not meant to limit them.
Currently, there are two main types of video noise reduction methods: spatial domain noise reduction and time domain noise reduction. A spatial domain noise reduction method performs noise reduction within a single frame of image; specifically, neighborhood analysis and processing are carried out inside the single frame to filter noise. As examples, spatial domain noise reduction may be realized by a bilateral filtering algorithm or a Non-Local Means algorithm. A time domain noise reduction method performs noise reduction by analyzing and processing information from video frames other than the current video frame (simply referred to as reference frames); for example, the previous frame of the current frame is used as the reference frame. Because noise flickers in the time domain while the real information stays unchanged, adding a reference frame makes it easier to distinguish real image information from noise, so the noise can be removed more accurately.
FIG. 1 is a flowchart of a typical video time domain denoising process. As shown in FIG. 1, the reference frame is image-registered so that the registered reference frame is aligned with the current frame; the registered reference frame and the current frame are then subjected to image fusion processing, which realizes the denoising and yields the denoised output frame. It should be noted that the denoised output frame may be understood as the frame image obtained after denoising the current frame.
In the flowchart shown in FIG. 1, the image registration step is a key step. In consecutive video frames, the shooting position may move from frame to frame, and each frame may contain moving objects, so the image information in two adjacent frames may be offset. If the relative positions of an object in two adjacent frames are not identical, the subsequent image fusion processing cannot associate the same point across the two frames. The image registration algorithm is the key to handling this problem: it analyzes the typical features of each of the two adjacent frames and aligns their image information according to feature matching.
In some schemes, the image registration algorithm adopts a feature point matching scheme, where feature points are points with obvious features in the image, such as corner points. The disadvantage of this scheme is that in a darker scene, such as a black street in a night scene, the picture information in the video is very dark and real feature points in the picture are difficult to extract, while the heavy noise of the night scene interferes with judging the real information of the image, causing image registration errors; as a result, serious ghosting and similar problems may occur after time domain noise reduction of the video.
Therefore, the following technical solutions of the embodiments of the present application are provided. In these technical solutions, a first image sensor and a second image sensor are utilized to collect a visible light image sequence and an invisible light image sequence simultaneously, and image registration parameters suitable for the visible light image sequence are determined from the invisible light image sequence, so that the image registration accuracy of the visible light image sequence can be improved, which in turn helps improve the signal-to-noise ratio of the visible light image sequence.
The image processing method provided by the embodiment of the application is applied to an image processing device, and the image processing device can be arranged on electronic equipment. In some embodiments, the electronic device is, for example: cell phones, tablet computers, wearable devices, interactive advertising machines, gaming machines, desktops, all-in-one machines, vehicle terminals, and the like.
The image processing method and the image processing apparatus provided in the embodiments of the present application are described below.
Fig. 2 is a flowchart of an image processing method according to an embodiment of the present application, as shown in fig. 2, where the image processing method includes the following steps:
step 201: a plurality of visible light images and a plurality of invisible light images are acquired.
In the embodiment of the application, the electronic device is provided with a first image sensor and a second image sensor, wherein the first image sensor is used for collecting visible light images, and the second image sensor is used for collecting invisible light images. In some alternative embodiments, the invisible light image is an infrared image.
In some alternative embodiments, the first image sensor is a Complementary Metal Oxide Semiconductor (CMOS) Sensor and the second image sensor is an Infrared (IR) Sensor. The CMOS Sensor is used to collect (which may also be referred to as capture or shoot) visible light images; specifically, the CMOS Sensor collects the visible light bands in a scene, namely the red (R), green (G), and blue (B) bands, which are processed by a processor (such as an Image Signal Processing (ISP) unit) to present a color image, called a visible light image. The data format of the visible light image may be, for example, the YUV format. The IR Sensor is used to collect invisible light images; specifically, the IR Sensor collects infrared light (or infrared for short), whose frequency is lower than that of red light, in a scene. Since any substance above absolute zero emits infrared radiation, the IR Sensor can collect the infrared generated by objects in the environment even in an extremely dark environment; it can be seen that the infrared image collected by the IR Sensor can contain object information in darker environments.
In this embodiment of the present application, acquiring a plurality of visible light images and a plurality of invisible light images includes: the method comprises the steps of acquiring a plurality of visible light images acquired by a first image sensor and a plurality of invisible light images acquired by a second image sensor, wherein the plurality of visible light images and the plurality of invisible light images have a corresponding relation according to acquisition time.
Here, that the plurality of visible light images and the plurality of invisible light images have a correspondence relationship according to the acquisition time means that a visible light image and an invisible light image collected at the same time, or within the same time period, correspond to each other. For example, referring to FIG. 3, the first image sensor and the second image sensor acquire image sequences at the same time, yielding a visible light image sequence (i.e., an original image sequence) and an invisible light image sequence (i.e., an infrared image sequence). At time t1, the first image sensor collects visible light image 1 and the second image sensor collects invisible light image 1, so visible light image 1 corresponds to invisible light image 1. At time t2, the first image sensor collects visible light image 2 and the second image sensor collects invisible light image 2, so visible light image 2 corresponds to invisible light image 2. At time t3, the first image sensor collects visible light image 3 and the second image sensor collects invisible light image 3, so visible light image 3 corresponds to invisible light image 3. In this way, the plurality of visible light images acquired by the first image sensor and the plurality of invisible light images acquired by the second image sensor have a correspondence relationship according to the acquisition time, as shown in the sketch below.
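A minimal Python sketch of such timestamp-based pairing, given under stated assumptions: the FramePair structure, the function name, and the (timestamp, frame) input format are illustrative assumptions, not from the patent.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class FramePair:
    """A visible-light frame and the invisible-light frame captured at the
    same acquisition time (illustrative structure, not from the patent)."""
    t: float
    visible: np.ndarray    # e.g. YUV frame from the CMOS sensor
    invisible: np.ndarray  # infrared frame from the IR sensor

def pair_by_time(visible_frames, invisible_frames):
    """Pair (timestamp, frame) lists from the two sensors. Both sensors
    capture simultaneously, so frames pair up index by index."""
    return [FramePair(t, vis, ir)
            for (t, vis), (_, ir) in zip(visible_frames, invisible_frames)]
```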
Step 202: image registration parameters are determined based on the plurality of non-visible light images.
Here, the signal-to-noise ratio of the visible light image (i.e., the original image) acquired by the first image sensor in a darker scene (such as a night scene) is relatively poor, and the visibility of dark-region information in the visible light image is low; if the plurality of visible light images output by the first image sensor were registered directly (i.e., relative-position registration), the registration error rate might be high. The invisible light image collected by the second image sensor, in contrast, has high visibility of dark-region information in both brighter and darker scenes; even in a scene that is difficult to perceive visually, the second image sensor can still collect the real information of the scene. Since the first image sensor and the second image sensor acquire images at the same time, the relative positional relationship of the plurality of visible light images approximates the relative positional relationship of the plurality of invisible light images.
In summary, image registration parameters may be determined based on the plurality of invisible light images, and the plurality of visible light images may then be registered using those image registration parameters. The following description takes "a plurality" to mean "two"; it should be noted that solutions with more than two images are equally applicable to the technical solutions of the embodiments of the present application.
In some alternative embodiments, the plurality of visible light images includes a first image and a second image, and the plurality of invisible light images includes a third image and a fourth image, wherein the first image corresponds to the third image, and the second image corresponds to the fourth image. As an example, the first image sensor and the second image sensor acquire a first image and a third image respectively at the same time (e.g., a first time), so that the first image and the third image correspond; the first image sensor and the second image sensor acquire a second image and a fourth image respectively at the same time (e.g., a second time), so that the second image and the fourth image correspond.
In the above scheme, the acquisition time of the first image is located after the acquisition time of the second image, and the acquisition time of the third image is located after the acquisition time of the fourth image. As an example, the first image and the third image are the visible light image and the invisible light image currently acquired by the first image sensor and the second image sensor, respectively, and the second image and the fourth image are the visible light image and the invisible light image previously acquired by the two sensors. As an example, the second image is the previous frame image of the first image; the first image may then be referred to as the first current frame and the second image as the first reference frame. Likewise, the fourth image is the previous frame image of the third image; the third image may be referred to as the second current frame and the fourth image as the second reference frame. The second current frame is, for example, an infrared current frame, and the second reference frame is, for example, an infrared reference frame.
In this embodiment of the present application, an image registration parameter refers to an image registration parameter of one image relative to another image. For example, if the image registration parameter is that of image A relative to image B, then by applying the image registration parameter to image A (e.g., multiplying the coordinates of image A by a transformation matrix, as shown in the sketch below), a registered image A can be obtained, and the registered image A is aligned with image B. Because the plurality of visible light images acquired by the first image sensor and the plurality of invisible light images acquired by the second image sensor have a correspondence relationship according to the acquisition time, the image registration parameters corresponding to the plurality of invisible light images are consistent with the image registration parameters corresponding to the plurality of visible light images. Therefore, the image registration parameters can be determined from the plurality of invisible light images and then applied to the plurality of visible light images, thereby realizing the registration of the plurality of visible light images. How to determine the image registration parameters from the plurality of invisible light images is described below.
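As a minimal illustration of applying a registration parameter expressed as a 3×3 matrix, the following sketch transforms a single pixel coordinate in homogeneous form; the function name is an assumption for illustration.

```python
import numpy as np

def transform_point(M, x, y):
    """Apply a 3x3 registration matrix M to the homogeneous pixel
    coordinate (x, y, 1) and return the transformed (x', y')."""
    p = M @ np.array([x, y, 1.0])
    # p[2] == 1 for an affine M whose last row is (0, 0, 1)
    return p[0] / p[2], p[1] / p[2]
```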
In the embodiment of the application, a first feature point set is extracted from the third image, and a second feature point set is extracted from the fourth image; determining at least one pair of feature points with a matching relationship from the first feature point set and the second feature point set; image registration parameters are determined based on the coordinate information of the at least one pair of feature points.
For example: the third image is an infrared current frame, the fourth image is an infrared reference frame, and the infrared reference frame may be a previous frame image of the infrared current frame. 1) And extracting characteristic points of the infrared current frame and the infrared reference frame to obtain a first characteristic point set in the infrared current frame and a second characteristic point set in the infrared reference frame. Here, the number of feature points included in the first feature point set and the second feature point set may be the same or different, and the technical solution of the embodiment of the present application is not limited thereto. In specific implementation, the extraction of the feature points can be realized by adopting a Harris algorithm or a feature point extraction algorithm such as a Scale-invariant feature transform (SIFT) algorithm. As an example, the principle of the feature point extraction algorithm is: judging whether the sitting position of each pixel point in the image is a point with obvious characteristics or not according to the neighborhood information of each pixel point in the image, and judging whether the point belongs to the characteristic point or not if the sitting position of each pixel point is a point positioned on the edge angle of the object or not. 2) Matching the characteristic points extracted from the infrared current frame and the infrared reference frame, and calculating image registration parameters according to the matched characteristic points. Here, the image registration parameters may be embodied as a matrix, referred to as a registration transformation matrix. The registration transformation matrix is a 3×3 matrix, i.e., the registration transformation matrix includes 9 parameters. As an example, the registration transformation matrix is an affine transformation matrix, the last line of which may be (0, 1), one example of which is given below:
$$M = \begin{pmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{pmatrix}$$

wherein a, b, c, d, e, f are the parameters to be determined in the registration transformation matrix.
Assuming that feature point a in the first feature point set matches feature point A in the second feature point set, the coordinates of feature point a are (x1, y1, 1) and the coordinates of feature point A are (x2, y2, 1); then the coordinates of the two feature points satisfy the following formula:
$$\begin{pmatrix} x_1 \\ y_1 \\ 1 \end{pmatrix} = \begin{pmatrix} a & b & c \\ d & e & f \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} x_2 \\ y_2 \\ 1 \end{pmatrix}$$

Fitting the pairs of matched feature points to this formula determines each parameter in the registration transformation matrix, thereby determining the registration transformation matrix. In a specific implementation, an algorithm such as the Random Sample Consensus (RANSAC) algorithm may be employed to match the feature points and calculate the registration transformation matrix. The principle of the RANSAC algorithm is: randomly sample feature points in the two frames of images, match the coordinate vectors corresponding to the feature points, and fit a registration transformation matrix according to the coordinate vectors of the matched feature points.
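A minimal Python sketch of this step follows, using OpenCV. It is illustrative only: ORB stands in for the Harris/SIFT extraction named above, cv2.estimateAffine2D performs the RANSAC fit of the six affine parameters, and the function and variable names are assumptions.

```python
import cv2
import numpy as np

def compute_registration_matrix(ir_current, ir_reference):
    """Estimate a 3x3 affine registration matrix that maps the infrared
    reference frame onto the infrared current frame (sketch)."""
    detector = cv2.ORB_create(nfeatures=1000)
    kp_cur, des_cur = detector.detectAndCompute(ir_current, None)
    kp_ref, des_ref = detector.detectAndCompute(ir_reference, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_cur, des_ref)

    pts_cur = np.float32([kp_cur[m.queryIdx].pt for m in matches])
    pts_ref = np.float32([kp_ref[m.trainIdx].pt for m in matches])

    # RANSAC fits the 2x3 affine part (a, b, c, d, e, f); the last row
    # of the 3x3 registration matrix is fixed to (0, 0, 1).
    affine, _inliers = cv2.estimateAffine2D(pts_ref, pts_cur,
                                            method=cv2.RANSAC,
                                            ransacReprojThreshold=3.0)
    return np.vstack([affine, [0.0, 0.0, 1.0]])
```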
In some alternative embodiments, the first image sensor and the second image sensor share the same lens assembly (simply referred to as a lens group), and in this case, it may be understood that the positions of the first image sensor and the second image sensor on the electronic device are the same, and the visible light image collected by the first image sensor and the invisible light image collected by the second image sensor have no positional deviation, so that the visible light image may be registered directly according to the image registration parameter determined by the invisible light image.
In some alternative embodiments, the first image sensor and the second image sensor use different lens assemblies. In this case, there is a small deviation between the positions of the first image sensor and the second image sensor on the electronic device, and hence a positional deviation between the visible light image collected by the first image sensor and the invisible light image collected by the second image sensor. To compensate for this deviation, the image registration parameters determined from the invisible light images need to be adjusted. Specifically, the image registration parameters are adjusted based on calibration data, where the calibration data is determined based on the relative positional relationship of the first image sensor and the second image sensor. Here, the calibration data may be the calibration data of the electronic device at the factory; it relates to the relative positional relationship between the first image sensor and the second image sensor. Adjusting the image registration parameters with the calibration data aligns the invisible light image to the visible light image, thus yielding image registration parameters that can be used for the visible light images.
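A hedged sketch of one way this adjustment could be expressed, assuming the factory calibration is given as a 3×3 matrix H_cal mapping IR pixel coordinates to visible-light pixel coordinates; this representation is an assumption, not stated in the patent.

```python
import numpy as np

def adjust_registration(M_ir, H_cal):
    """Map an IR-domain registration matrix into the visible-light domain by
    conjugation: convert visible coordinates to IR coordinates, apply the IR
    registration, then convert back to visible coordinates (sketch)."""
    return H_cal @ M_ir @ np.linalg.inv(H_cal)
```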
Step 203: and registering the plurality of visible light images based on the image registration parameters.
In some alternative embodiments, the image registration parameters are determined based on a third image and a fourth image of the plurality of non-visible light images, wherein the third image corresponds to a first image of the plurality of visible light images and the fourth image corresponds to a second image of the plurality of visible light images. And transforming the second image based on the image registration parameters to obtain a fifth image registered with the first image. Here, the fifth image refers to an image obtained by transforming the second image, which is aligned with the first image.
For example: the first image is a current frame output by the CMOS Sensor, the second image is a reference frame output by the CMOS Sensor, and the reference frame can be a previous frame image of the current frame. The reference frame is transformed according to the registration transformation matrix to obtain the reference frame aligned with the current frame. Specifically, the coordinates of each pixel point in the reference frame are multiplied by the registration transformation matrix to obtain new coordinates, transformation is completed after each pixel point is rearranged according to the new coordinates, and at this time, the alignment operation is completed between the current frame and the reference frame.
After the registration of the first image and the second image is completed by the technical scheme of the embodiment of the application, the first image and the registered second image (namely, the fifth image) can be fused, so that the time domain denoising is completed. Specifically, performing image fusion processing on the fifth image and the first image to obtain a sixth image, wherein the sixth image is an image of the first image after noise is removed.
In some alternative embodiments, the image fusion processing of the fifth image and the first image may be performed in the following manner:
mode one: and correspondingly averaging the pixel values of all the pixel points in the fifth image and the pixel values of all the pixel points in the first image to obtain the pixel values of all the pixel points in the sixth image.
For example: assuming that the pixels of the image are n×m, N, and M are positive integers, the pixel value of the pixel point with coordinates (xi, yj, 1) in the fifth image and the pixel value of the pixel point with coordinates (xi, yj, 1) in the first image are averaged to obtain the pixel value of the pixel point with coordinates (xi, yj, 1) in the sixth image. Wherein i is a positive integer of 1 or more and N or less, and j is a positive integer of 1 or more and M or less.
Mode two: generating a motion Mask (Mask) image for determining a motion region and a non-motion region; determining the pixel value of each pixel point in the motion area in the first image as the pixel value of each pixel point in the motion area in the sixth image; and correspondingly averaging the pixel values of all the pixel points in the non-motion area in the fifth image and the pixel values of all the pixel points in the non-motion area in the first image to obtain the pixel values of all the pixel points in the non-motion area in the sixth image.
Here, for more accurate denoising, motion detection may be performed on the image; specifically, by comparing the fifth image and the first image, the motion regions and non-motion regions in the image are determined. In particular, a mask image is generated to represent which regions of the image are motion regions and which are non-motion regions.
In some alternative embodiments, the mask image may be generated as follows: calculate the difference value between the pixel value of each pixel point in the fifth image and the pixel value of the corresponding pixel point in the first image to obtain the pixel value of each pixel point in the motion mask image; the region formed by the pixel points whose pixel values are greater than or equal to a threshold value in the motion mask image is a motion region, and the region formed by the pixel points whose pixel values are less than the threshold value is a non-motion region.
For example: assuming that the pixels of the image are n×m, N and M are positive integers, i is a positive integer greater than or equal to 1 and less than or equal to N, j is a positive integer greater than or equal to 1 and less than or equal to M, if the coordinates (xi, yj, 1) are located in the motion region, the pixel value of the pixel point of the sixth image having the coordinates (xi, yj, 1) is equal to the pixel value of the pixel point of the first image having the coordinates (xi, yj, 1); if the coordinates (xi, yj, 1) are located in the non-moving region, the pixel value of the pixel point with the coordinates (xi, yj, 1) in the sixth image is equal to the average value of the pixel point with the coordinates (xi, yj, 1) in the fifth image and the pixel value of the pixel point with the coordinates (xi, yj, 1) in the first image.
In some alternative embodiments, the mask image may be generated by: obtaining a difference value corresponding to the pixel value of each pixel point in the fifth image and the pixel value of each pixel point in the first image, comparing the difference value with a threshold value, setting the pixel value of the corresponding pixel point in the motion mask image to be 1 if the difference value is greater than or equal to the threshold value, and setting the pixel value of the corresponding pixel point in the motion mask image to be 0 if the difference value is less than the threshold value; the region formed by the pixel points with the pixel value of 1 in the motion mask image is a motion region, and the region formed by the pixel points with the pixel value of 0 in the motion mask image is a non-motion region.
After the motion regions and non-motion regions in the image are determined by the above scheme, the fifth image and the first image are fused as follows: a motion region takes only the pixel values of the first image, while a non-motion region takes the per-pixel average of the pixel values of the first image and the fifth image, as shown in the sketch below.
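A minimal numpy/OpenCV sketch of mode two; the default threshold value and the names are illustrative assumptions.

```python
import cv2
import numpy as np

def fuse_with_motion_mask(fifth, first, threshold=15):
    """Mode two: build a binary motion mask from the per-pixel difference,
    keep the current frame (first image) in motion regions, and average the
    two frames in non-motion regions. Assumes uint8 frames."""
    diff = cv2.absdiff(fifth, first)
    motion = diff >= threshold  # True (mask value 1) -> motion region
    averaged = ((fifth.astype(np.uint16) + first.astype(np.uint16)) // 2).astype(np.uint8)
    sixth = np.where(motion, first, averaged)
    return sixth, motion.astype(np.uint8)
```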
For example: referring to fig. 4, the second image is subjected to registration transformation to obtain a fifth image, the fifth image is aligned with the first image, a motion mask image can be obtained according to a difference value between the fifth image and the first image, and image fusion processing is performed on the first image and the fifth image by referring to the motion mask image, wherein the image fusion mode of the fifth image and the first image is as follows: the motion area only adopts the pixel value of each pixel point of the first image, and the non-motion area adopts the pixel value of each pixel point of the first image to be correspondingly averaged with the pixel value of each pixel point of the fifth image.
The following describes the technical solutions of the embodiments of the present application with reference to fig. 5 to 7. In the embodiment related to fig. 5 to 7, the first image Sensor is taken as a CMOS Sensor, the first image and the second image collected by the first image Sensor are respectively referred to as a current frame and a reference frame, the second image Sensor is taken as an IR Sensor, and the third image and the fourth image collected by the second image Sensor are respectively referred to as an infrared current frame and an infrared reference frame.
Referring to FIG. 5, the CMOS Sensor outputs an original image sequence (i.e., the original video), and the IR Sensor outputs an infrared image sequence. Feature point matching is performed on the infrared current frame and the infrared reference frame output by the IR Sensor, and the registration transformation matrix is calculated from the coordinate information of the matched feature points. The reference frame output by the CMOS Sensor is transformed by the registration transformation matrix to obtain the registered reference frame, which is aligned with the current frame output by the CMOS Sensor. Image fusion processing is performed on the current frame output by the CMOS Sensor and the registered reference frame, thereby obtaining the denoised output frame and completing the denoising of the current frame. As the current frame is continuously updated over time, continuous denoised video frames can be output.
For the calculation of the registration transformation matrix in FIG. 5, referring to FIG. 6, feature points are extracted from the infrared current frame and the infrared reference frame respectively, the feature points in the infrared current frame are then matched with those in the infrared reference frame, and the registration transformation matrix is calculated based on the coordinate information of the matched feature points. Combining the flow shown in FIG. 6 with the flow shown in FIG. 5 yields the flow shown in FIG. 7, which is roughly divided into two major parts: one part is infrared image processing, i.e., the registration transformation matrix is determined from the infrared image sequence; the other part is original image processing, i.e., the original image sequence is registered by the registration transformation matrix and subjected to image fusion processing, thereby completing the noise reduction of the video.
In the above technical solutions, when the registration transformation parameters are determined, they can be determined without distinguishing image regions; that is, the whole image corresponds to one set of registration transformation parameters, in which case every pixel point of the whole image corresponds to the same registration transformation parameters. Without being limited thereto, the registration transformation parameters may also be determined region by region. For example, referring to FIG. 8, the image is divided into 2 regions: region 1 corresponds to registration transformation parameters 1 and region 2 corresponds to registration transformation parameters 2, where the method of determining the registration transformation parameters for each region may refer to the foregoing scheme; then each pixel point in region 1 corresponds to registration transformation parameters 1, and each pixel point in region 2 corresponds to registration transformation parameters 2. After the registration transformation parameters of the different regions are determined, the second image acquired by the first image sensor is subjected to registration transformation region by region; specifically, the coordinates of each pixel point in the first region of the second image are transformed according to registration transformation parameters 1, and the coordinates of each pixel point in the second region of the second image are transformed according to registration transformation parameters 2, as shown in the sketch below.
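A hedged sketch of region-wise registration; representing regions as boolean masks defined in the output (current-frame) coordinate space is an assumption made for illustration.

```python
import cv2
import numpy as np

def register_by_region(second_image, params_by_region):
    """Warp each region of the reference frame with its own 3x3 registration
    matrix and stitch the results. params_by_region is a list of (mask, M)
    pairs, where mask is a boolean array over the output coordinate space."""
    h, w = second_image.shape[:2]
    out = np.zeros_like(second_image)
    for mask, M in params_by_region:
        warped = cv2.warpPerspective(second_image, M, (w, h))
        out[mask] = warped[mask]
    return out
```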
As a modification of the above technical solutions of the embodiments of the present application, the registration transformation parameters may be determined by the above technique for only a partial region of the image, while another partial region of the image is handled using the visible light images. Still referring to FIG. 8, the image is divided into 2 regions: region 1 corresponds to registration transformation parameters 1, which are determined based on the matched feature points of region 1 in the plurality of invisible light images; region 2 corresponds to registration transformation parameters 2, which are determined based on the matched feature points of region 2 in the plurality of visible light images. In one application scenario, region 1 may be a dark region in the image and region 2 a bright region. Here, the division into dark and bright regions is based on the visible light image; in a specific implementation, the brightness value of each pixel point of the current frame in the visible light image can be analyzed to divide the image into dark and bright regions, with the corresponding registration transformation parameters determined with the assistance of the invisible light images for the dark regions and directly from the visible light images for the bright regions. It should be noted that the manner of determining the registration transformation parameters of a certain region may refer to the description of the related schemes above; specifically, the feature points of the two images are extracted within the region, the feature points are matched, and the registration transformation parameters corresponding to the region are calculated based on the coordinate information of the matched feature points.
As another modification of the above technical solutions of this embodiment, in the image fusion processing stage, more than two frames may be fused. In this case, multiple frames of visible light images need to be registered, say L frames, where L is an integer greater than 2; then the L frames of invisible light images corresponding to the L frames of visible light images need to be analyzed, and L-1 sets of registration transformation parameters are determined. Taking L = 3 as an example, the registration transformation parameters between invisible light images 1 and 2 and the registration transformation parameters between invisible light images 1 and 3 can be determined, so that registration between visible light images 1 and 2 can be achieved through the former, and registration between visible light images 1 and 3 through the latter. As an example, invisible light image 1 may be the infrared current frame, invisible light image 2 the infrared frame one frame before it, and invisible light image 3 the infrared frame two frames before it; correspondingly, visible light image 1 may be the current frame, visible light image 2 the frame one frame before the current frame, and visible light image 3 the frame two frames before the current frame. A sketch follows.
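A minimal sketch of this L-frame registration, reusing the compute_registration_matrix sketch above; the names, input ordering, and structure are assumptions.

```python
import cv2

def register_l_frames(visible, invisible):
    """Register L visible-light frames using L-1 matrices estimated from the
    corresponding invisible-light frames. visible[0]/invisible[0] are the
    current frames; later entries are earlier frames (sketch)."""
    h, w = visible[0].shape[:2]
    aligned = [visible[0]]
    for k in range(1, len(visible)):
        # Matrix mapping invisible frame k onto invisible frame 0 (the current frame)
        M = compute_registration_matrix(invisible[0], invisible[k])
        aligned.append(cv2.warpPerspective(visible[k], M, (w, h)))
    return aligned
```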
According to the above technical solutions, the registration transformation parameters are obtained by analyzing the invisible light images output by the second image sensor. Compared with calculating the registration transformation parameters from the original images output by the first image sensor, the dark-region information of the invisible light images is richer and the captured real-object features are more accurate, so the calculated registration transformation parameters are more accurate; after the visible light images are registered with these parameters, ghosting and smearing can be avoided in the subsequent multi-frame denoising. For darker scenes, video shooting noise is heavy and dark-region information is unclear, so removing the noise and recovering the information requires multi-frame denoising across adjacent frames; heavy noise and low dark-region visibility would hinder the effect of multi-frame denoising, and utilizing the invisible light images compensates exactly for this deficiency. Accurate registration transformation parameters can be calculated even in a very dark environment, promoting better time domain denoising.
Fig. 9 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 9, the image processing apparatus includes:
An acquisition unit 901 for acquiring a plurality of visible light images and a plurality of invisible light images;
a parameter determining unit 902, configured to determine an image registration parameter based on the plurality of invisible light images;
an image registration unit 903, configured to register the plurality of visible light images based on the image registration parameter.
In some optional embodiments of the present application, the acquiring unit 901 is configured to acquire a plurality of visible light images acquired by a first image sensor and a plurality of invisible light images acquired by a second image sensor, where the plurality of visible light images and the plurality of invisible light images have a corresponding relationship according to an acquisition time.
In some optional embodiments of the present application, the plurality of visible light images includes a first image and a second image, and the plurality of invisible light images includes a third image and a fourth image, wherein the first image corresponds to the third image, and the second image corresponds to the fourth image;
the parameter determining unit 902 is configured to extract a first feature point set from the third image and extract a second feature point set from the fourth image; determining at least one pair of feature points with a matching relationship from the first feature point set and the second feature point set; image registration parameters are determined based on the coordinate information of the at least one pair of feature points.
In some optional embodiments of the present application, the parameter determining unit 902 is further configured to adjust the image registration parameter based on calibration data; wherein the calibration data is determined based on a relative positional relationship of the first image sensor and the second image sensor.
In some optional embodiments of the present application, the image registration unit 903 is configured to transform the second image based on the image registration parameter to obtain a fifth image registered with the first image.
In some optional embodiments of the present application, the apparatus further comprises:
and an image fusion unit 904, configured to perform image fusion processing on the fifth image and the first image to obtain a sixth image, where the sixth image is an image of the first image after noise is removed.
In some optional embodiments of the present application, the image fusion unit 904 is configured to correspondingly average a pixel value of each pixel point in the fifth image and a pixel value of each pixel point in the first image, so as to obtain a pixel value of each pixel point in the sixth image.
In some optional embodiments of the present application, the image fusion unit 904 is configured to generate a motion mask image, where the motion mask image is used to determine a motion region and a non-motion region; determining the pixel value of each pixel point in the motion area in the first image as the pixel value of each pixel point in the motion area in the sixth image; and correspondingly averaging the pixel values of all the pixel points in the non-motion area in the fifth image and the pixel values of all the pixel points in the non-motion area in the first image to obtain the pixel values of all the pixel points in the non-motion area in the sixth image.
In some optional embodiments of the present application, the image fusion unit 904 is configured to obtain a difference value corresponding to a pixel value of each pixel point in the fifth image and a pixel value of each pixel point in the first image, so as to obtain a pixel value of each pixel point in the motion mask image; the region formed by the pixel points with the pixel values larger than or equal to the threshold value in the motion mask image is a motion region, and the region formed by the pixel points with the pixel values smaller than the threshold value in the motion mask image is a non-motion region.
In some optional embodiments of the present application, the invisible light image is an infrared image.
Those skilled in the art will appreciate that the functions implemented by the units of the image processing apparatus shown in Fig. 9 can be understood with reference to the foregoing description of the image processing method. The functions of the units of the image processing apparatus shown in Fig. 9 may be realized by a program running on a processor or by dedicated logic circuits.
If implemented in the form of a software functional module and sold or used as a standalone product, the image processing apparatus of the embodiments of the present application may also be stored in a storage medium (e.g., a computer-readable storage medium). Based on such understanding, the technical solutions of the embodiments of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a computer software product. The computer software product is stored in a storage medium and includes several instructions that cause an electronic device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a magnetic disk, or an optical disc. Thus, the embodiments of the present application are not limited to any specific combination of hardware and software.
Accordingly, the embodiments of the present application also provide a computer program product in which computer-executable instructions are stored; when executed, the instructions implement the above-described methods of the embodiments of the present application.
Fig. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application. As shown in Fig. 10, the electronic device may include one or more processors 1002 (only one is shown in the figure; the processor 1002 may include, but is not limited to, a processing device such as a microcontroller unit (MCU) or a field-programmable gate array (FPGA)), a memory 1004 for storing data, and a transmission device 1006 for communication functions. Those of ordinary skill in the art will appreciate that the configuration shown in Fig. 10 is merely illustrative and does not limit the configuration of the electronic device. For example, the electronic device may include more or fewer components than shown in Fig. 10, or have a different configuration.
The memory 1004 may be used to store software programs and modules of application software, such as the program instructions/modules corresponding to the methods in the embodiments of the present application. The processor 1002 executes the software programs and modules stored in the memory 1004 to perform various functional applications and data processing, i.e., to implement the methods described above. The memory 1004 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 1004 may further include memory located remotely from the processor 1002, which may be connected to the electronic device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 1006 is configured to receive or transmit data via a network. Specific examples of the network may include a wireless network provided by the communication carrier of the electronic device. In one example, the transmission device 1006 includes a network interface controller (NIC) that can connect to other network devices via a base station so as to communicate with the internet. In another example, the transmission device 1006 may be a radio frequency (RF) module that communicates with the internet wirelessly.
The technical solutions described in the embodiments of the present application may be combined arbitrarily, provided no conflict arises.
In the several embodiments provided in the present application, it should be understood that the disclosed method and intelligent device may be implemented in other manners. The device embodiments described above are merely illustrative; for example, the division of the units is only a logical functional division, and there may be other divisions in practice, such as: multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the coupling, direct coupling, or communicative connection between the components shown or discussed may be implemented through interfaces, and the indirect coupling or communicative connection between the devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present application may all be integrated into one processing unit, or each unit may serve as a single unit, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of hardware plus software functional units.
The foregoing is merely specific embodiments of the present application, but the protection scope of the present application is not limited thereto. Any change or substitution that a person skilled in the art could readily conceive of within the technical scope disclosed by the present application shall fall within the protection scope of the present application.

Claims (18)

1. An image processing method, the method comprising:
acquiring a plurality of visible light images and a plurality of invisible light images;
determining image registration parameters based on the plurality of invisible light images;
registering the plurality of visible light images based on the image registration parameters; wherein the acquiring a plurality of visible light images and a plurality of invisible light images comprises:
acquiring a plurality of visible light images acquired by a first image sensor and a plurality of invisible light images acquired by a second image sensor, wherein the visible light images and the invisible light images correspond to one another by acquisition time;
the plurality of visible light images comprise a first image and a second image, the plurality of invisible light images comprise a third image and a fourth image, wherein the first image corresponds to the third image, and the second image corresponds to the fourth image;
the determining image registration parameters based on the plurality of invisible light images comprises:
extracting a first feature point set from the third image and a second feature point set from the fourth image;
determining at least one pair of feature points with a matching relationship from the first feature point set and the second feature point set;
determining image registration parameters based on the coordinate information of the at least one pair of feature points.
2. The method according to claim 1, wherein the method further comprises:
adjusting the image registration parameters based on calibration data; wherein the calibration data is determined based on a relative positional relationship of the first image sensor and the second image sensor.
3. The method of claim 2, wherein the registering the plurality of visible light images based on the image registration parameters comprises:
transforming the second image based on the image registration parameters to obtain a fifth image registered with the first image.
4. The method according to claim 3, wherein the method further comprises:
performing image fusion processing on the fifth image and the first image to obtain a sixth image, wherein the sixth image is the first image with noise removed.
5. The method of claim 4, wherein performing an image fusion process on the fifth image and the first image comprises:
averaging the pixel value of each pixel point in the fifth image with the pixel value of the corresponding pixel point in the first image to obtain the pixel value of each pixel point in the sixth image.
6. The method of claim 4, wherein performing an image fusion process on the fifth image and the first image comprises:
generating a motion mask image for determining a motion region and a non-motion region;
determining the pixel value of each pixel point in the motion region of the first image as the pixel value of the corresponding pixel point in the motion region of the sixth image;
and averaging the pixel value of each pixel point in the non-motion region of the fifth image with the pixel value of the corresponding pixel point in the non-motion region of the first image to obtain the pixel value of each pixel point in the non-motion region of the sixth image.
7. The method of claim 6, wherein generating a motion mask image comprises:
calculating the difference between the pixel value of each pixel point in the fifth image and the pixel value of the corresponding pixel point in the first image to obtain the pixel value of each pixel point in the motion mask image;
wherein the region formed by pixel points whose pixel values are greater than or equal to a threshold in the motion mask image is the motion region, and the region formed by pixel points whose pixel values are less than the threshold is the non-motion region.
8. The method according to any one of claims 1 to 7, wherein the invisible light image is an infrared image.
9. An image processing apparatus, characterized in that the apparatus comprises:
an acquisition unit configured to acquire a plurality of visible light images and a plurality of invisible light images;
a parameter determination unit configured to determine an image registration parameter based on the plurality of invisible light images;
the image registration unit is used for registering the plurality of visible light images based on the image registration parameters;
the acquisition unit is configured to acquire a plurality of visible light images acquired by the first image sensor and a plurality of invisible light images acquired by the second image sensor, wherein the visible light images and the invisible light images correspond to one another by acquisition time; the plurality of visible light images comprise a first image and a second image, and the plurality of invisible light images comprise a third image and a fourth image, wherein the first image corresponds to the third image, and the second image corresponds to the fourth image;
the parameter determining unit is configured to extract a first feature point set from the third image and a second feature point set from the fourth image; determine at least one pair of feature points having a matching relationship from the first feature point set and the second feature point set; and determine image registration parameters based on the coordinate information of the at least one pair of feature points.
10. The apparatus according to claim 9, wherein the parameter determination unit is further configured to adjust the image registration parameter based on calibration data; wherein the calibration data is determined based on a relative positional relationship of the first image sensor and the second image sensor.
11. The apparatus according to claim 9, wherein the image registration unit is configured to transform the second image based on the image registration parameters to obtain a fifth image registered with the first image.
12. The apparatus of claim 11, wherein the apparatus further comprises:
an image fusion unit configured to perform image fusion processing on the fifth image and the first image to obtain a sixth image, wherein the sixth image is the first image with noise removed.
13. The apparatus according to claim 12, wherein the image fusion unit is configured to average the pixel value of each pixel point in the fifth image with the pixel value of the corresponding pixel point in the first image to obtain the pixel value of each pixel point in the sixth image.
14. The apparatus according to claim 12, wherein the image fusion unit is configured to generate a motion mask image, the motion mask image being used to determine a motion region and a non-motion region; determine the pixel value of each pixel point in the motion region of the first image as the pixel value of the corresponding pixel point in the motion region of the sixth image; and average the pixel value of each pixel point in the non-motion region of the fifth image with the pixel value of the corresponding pixel point in the non-motion region of the first image to obtain the pixel value of each pixel point in the non-motion region of the sixth image.
15. The apparatus according to claim 14, wherein the image fusion unit is configured to compute the difference between the pixel value of each pixel point in the fifth image and the pixel value of the corresponding pixel point in the first image to obtain the pixel value of each pixel point in the motion mask image; the region formed by pixel points whose pixel values are greater than or equal to a threshold in the motion mask image is the motion region, and the region formed by pixel points whose pixel values are less than the threshold is the non-motion region.
16. The apparatus according to any one of claims 9 to 15, wherein the invisible light image is an infrared image.
17. An electronic device comprising a memory and a processor, wherein the memory stores a computer program that, when executed by the processor, causes the processor to perform the method of any one of claims 1 to 8.
18. A computer readable storage medium, on which a computer program is stored, which computer program, when being executed by a processor, implements the method according to any one of claims 1 to 8.
CN202110221009.5A 2021-02-26 2021-02-26 Image processing method and device, electronic equipment and storage medium Active CN112950502B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110221009.5A CN112950502B (en) 2021-02-26 2021-02-26 Image processing method and device, electronic equipment and storage medium
PCT/CN2021/137515 WO2022179251A1 (en) 2021-02-26 2021-12-13 Image processing method and apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110221009.5A CN112950502B (en) 2021-02-26 2021-02-26 Image processing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112950502A CN112950502A (en) 2021-06-11
CN112950502B (en) 2024-02-13

Family

ID=76246713

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110221009.5A Active CN112950502B (en) 2021-02-26 2021-02-26 Image processing method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112950502B (en)
WO (1) WO2022179251A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112950502B (en) * 2021-02-26 2024-02-13 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method and device, electronic equipment and storage medium
CN115361533B (en) * 2022-08-19 2023-04-18 深圳市汇顶科技股份有限公司 Image data processing method and electronic device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106548489A (en) * 2016-09-20 2017-03-29 深圳奥比中光科技有限公司 The method for registering of a kind of depth image and coloured image, three-dimensional image acquisition apparatus
CN110490811A (en) * 2019-05-31 2019-11-22 杭州海康威视数字技术股份有限公司 Image noise reduction apparatus and image denoising method
CN111968057A (en) * 2020-08-24 2020-11-20 浙江大华技术股份有限公司 Image noise reduction method and device, storage medium and electronic device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107918929B (en) * 2016-10-08 2019-06-21 杭州海康威视数字技术股份有限公司 A kind of image interfusion method, apparatus and system
CN112950502B (en) * 2021-02-26 2024-02-13 Guangdong Oppo Mobile Telecommunications Corp Ltd Image processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
WO2022179251A1 (en) 2022-09-01
CN112950502A (en) 2021-06-11

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant