CN116630172A - Image processing method and device, electronic equipment and computer readable storage medium - Google Patents

Image processing method and device, electronic equipment and computer readable storage medium

Info

Publication number
CN116630172A
CN116630172A (application CN202211185543.6A)
Authority
CN
China
Prior art keywords
image
pixel
camera
filtering
generate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211185543.6A
Other languages
Chinese (zh)
Inventor
牛兵兵
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202211185543.6A
Publication of CN116630172A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G06T7/85 Stereo camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer-readable storage medium. The image processing method comprises: obtaining a first image shot by a first camera and a second image shot by a second camera, wherein the field-of-view ranges of the first camera and the second camera at least partially overlap; and filtering the first image according to the first image and the second image to generate a target image. Filtering the first image according to the first image and the second image comprises: acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel; or filtering the first pixel according to pixels in the range of the adjacent area of the second pixel; or filtering the first pixel according to pixels in the adjacent area of the first pixel and pixels in the adjacent area of the second pixel, so as to generate the target image.

Description

Image processing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image processing technology, and more particularly, to an image processing method, an image processing apparatus, an electronic device, and a non-volatile computer readable storage medium.
Background
In existing image signal processing (ISP) schemes, denoising of images and video is very important. Existing denoising methods mainly fall into two categories: single-frame filtering and multi-frame filtering. Whether single-frame or multi-frame, the effective information available for denoising is limited, so the denoising effect is poor.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device, an electronic device and a nonvolatile computer readable storage medium.
The image processing method comprises the steps of obtaining a first image shot by a first camera and a second image shot by a second camera, wherein the view field ranges of the first camera and the second camera are at least partially overlapped; and filtering the first image from the first image and the second image to generate a target image, wherein filtering the first image from the first image and the second image to generate a target image comprises: acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel so as to generate a target image; or filtering the first pixel according to the pixels in the adjacent area range of the second pixel corresponding to the first pixel so as to generate a target image; or filtering the first pixel according to the pixel in the first pixel adjacent area range and the pixel in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image.
The image processing device of the embodiment of the application comprises an acquisition module and a filtering module. The acquisition module is used for acquiring a first image shot by the first camera and a second image shot by the second camera, and the field of view ranges of the first camera and the second camera are at least partially overlapped. The filtering module is configured to filter the first image according to the first image and the second image to generate a target image, where filtering the first image according to the first image and the second image to generate a target image includes: acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel so as to generate a target image; or filtering the first pixel according to the pixels in the adjacent area range of the second pixel corresponding to the first pixel so as to generate a target image; or filtering the first pixel according to the pixel in the first pixel adjacent area range and the pixel in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image.
The electronic device of the embodiment of the application comprises a processor. The processor is used for acquiring a first image shot by the first camera and a second image shot by the second camera, and the field of view ranges of the first camera and the second camera are at least partially overlapped; and filtering the first image from the first image and the second image to generate a target image, wherein filtering the first image from the first image and the second image to generate a target image comprises: acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel so as to generate a target image; or filtering the first pixel according to the pixels in the adjacent area range of the second pixel corresponding to the first pixel so as to generate a target image; or filtering the first pixel according to the pixel in the first pixel adjacent area range and the pixel in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image.
The non-transitory computer-readable storage medium of the embodiment of the present application contains a computer program which, when executed by one or more processors, causes the processors to execute the following image processing method: acquiring a first image shot by a first camera and a second image shot by a second camera, wherein the field of view ranges of the first camera and the second camera are at least partially overlapped; and filtering the first image from the first image and the second image to generate a target image, wherein filtering the first image from the first image and the second image to generate a target image comprises: acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel so as to generate a target image; or filtering the first pixel according to the pixels in the adjacent area range of the second pixel corresponding to the first pixel so as to generate a target image; or filtering the first pixel according to the pixel in the first pixel adjacent area range and the pixel in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image.
In the image processing method, the image processing device, the electronic equipment and the non-volatile computer-readable storage medium of the embodiments of the application, since the first image shot by the first camera and the second image shot by the second camera, whose field-of-view ranges at least partially overlap, are acquired, the filtering of the first image can draw on two images captured by different cameras: the first image acquired by the first camera and the second image acquired by the second camera. That is, more image information can be utilized in the process of filtering the first image, which ensures a better denoising effect in the generated target image.
Additional aspects and advantages of embodiments of the application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of embodiments of the application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings, in which:
FIG. 1 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic diagram of an image processing apparatus according to some embodiments of the application;
FIG. 3 is a schematic plan view of an electronic device in accordance with certain embodiments of the application;
FIG. 4 is a schematic view of a scenario of an image processing method of some embodiments of the present application;
FIG. 5 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 6 is a schematic view of a scenario of an image processing method of some embodiments of the present application;
FIG. 7 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 8 is a schematic view of a scenario of an image processing method of some embodiments of the present application;
FIG. 9 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 10 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 11 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 12 is a flow chart of an image processing method according to some embodiments of the present application;
FIG. 13 is a schematic view of a scenario of an image processing method of some embodiments of the present application;
FIG. 14 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 15 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 16 is a flow chart of an image processing method of some embodiments of the present application;
FIG. 17 is a schematic diagram of a connection state of a non-transitory computer readable storage medium and a processor according to some embodiments of the application.
Detailed Description
Embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to like or similar elements or elements having like or similar functions throughout. The embodiments described below by referring to the drawings are exemplary only for explaining the embodiments of the present application and are not to be construed as limiting the embodiments of the present application.
Referring to fig. 1 and 3, an embodiment of the present application provides an image processing method. The image processing method includes the steps of:
011: acquiring a first image shot by the first camera 20 and a second image shot by the second camera 30, wherein the field of view ranges of the first camera 20 and the second camera 30 are at least partially overlapped; and
012: the first image is filtered according to the first image and the second image to generate a target image.
Referring to fig. 2, an image processing apparatus 10 is provided according to an embodiment of the present application. The image processing apparatus 10 includes an acquisition module 11 and a filtering module 12. The image processing method of the embodiment of the present application is applicable to the image processing apparatus 10, wherein the acquisition module 11 and the filtering module 12 are respectively configured to perform step 011 and step 012. That is, the acquisition module 11 is configured to acquire a first image captured by the first camera 20 and a second image captured by the second camera 30, where the field of view ranges of the first camera 20 and the second camera 30 at least partially overlap. The filtering module 12 is configured to filter the first image according to the first image and the second image to generate a target image.
Referring to fig. 3, an embodiment of the application further provides an electronic device 100. The image processing method of the embodiment of the present application is applicable to the electronic apparatus 100. The electronic device 100 includes a processor 40. The processor 40 is used for step 011 and step 012. That is, the processor 40 is configured to obtain a first image captured by the first camera 20 and a second image captured by the second camera 30, where the field of view ranges of the first camera 20 and the second camera 30 at least partially overlap; and filtering the first image according to the first image and the second image to generate a target image.
The electronic device 100 includes a housing 50. The housing 50 may be used to mount functional modules of the electronic device 100 such as a display device, an imaging device, a power supply device, and a communication device, so that the housing 50 provides dust, drop, and water protection for these functional modules. The electronic device 100 may be a cell phone, tablet computer, digital camera, notebook computer, smart watch, head-mounted display device, game console, etc. As shown in fig. 3, the embodiment of the present application is described taking a mobile phone as an example of the electronic device 100; it is to be understood that the specific form of the electronic device 100 is not limited to a mobile phone.
Specifically, referring to fig. 3, the electronic device 100 further includes a first camera 20 and a second camera 30. The first camera 20 is used for capturing the first image, and the second camera 30 is used for capturing the second image. The first camera 20 may be a primary camera of the electronic device 100 and the second camera 30 may be a secondary camera of the electronic device 100. The first camera 20 and the second camera 30 are located on the same side of the electronic device 100. For example, when the first camera 20 and the second camera 30 are both located on the front side, they are front cameras; likewise, when both are located on the rear side, they are rear cameras.
Wherein the field of view ranges of the first camera 20 and the second camera 30 at least partially coincide. It will be appreciated that the first image captured by the first camera 20 and the second image captured by the second camera 30 at least partially overlap.
Further, after the processor 40 obtains the first image and the second image, the processor 40 may filter the first image according to the first image and the second image to generate the target image. Here, filtering refers to suppressing image noise while retaining the detail characteristics of the image as much as possible. It will be appreciated that filtering the first image denoises it, i.e. the generated target image is the denoised first image.
More specifically, when filtering the first image based on the first image and the second image, the processor 40 may first calculate the signal-to-noise ratios of the first image and the second image to determine which of the two images has the higher signal-to-noise ratio. The higher the signal-to-noise ratio of an image, the better its quality and the less noise it contains.
Thus, after the processor 40 determines which of the first image and the second image has the higher signal-to-noise ratio, the processor 40 may use the image with the higher signal-to-noise ratio to filter the first image to generate the target image.
In one embodiment, when the signal-to-noise ratio of the first image is higher than that of the second image, this indicates that the quality of the first image is higher than the quality of the second image. At this time, the processor 40 may filter the first image according to the image information of the first image itself to generate the target image.
As shown in the left diagram of fig. 4, when the processor 40 filters the first image according to the image information of the first image itself, the pixel value of a pixel Q1 in the first image P1 may be determined as follows: calculate the Euclidean distance between the pixel Q1 and each of the 8 pixels Q2 around it to determine the similarity between Q1 and each pixel Q2; determine the weight of each pixel Q2 according to that similarity; and finally perform a weighted summation of the pixel value of each pixel Q2 with its corresponding weight to obtain the pixel value of the denoised pixel Q1. In this way, the processor 40 may generate the target image.
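The following is a minimal Python sketch of this single-image branch, assuming grayscale images and a Gaussian mapping from pixel-value distance to weight (the text does not specify the exact distance-to-weight function, so the smoothing parameter h is an assumption):

```python
import numpy as np

def neighbor_weights(patch, center, h=10.0):
    """Similarity weights for the 8 neighbors in a 3x3 patch.

    patch  : 3x3 array around the pixel being filtered
    center : value of the pixel being filtered (patch[1, 1])
    h      : smoothing strength (assumed; not specified in the text)
    """
    neighbors = np.delete(patch.flatten(), 4)   # drop the center pixel
    dist = np.abs(neighbors - center)           # distance in value space
    w = np.exp(-(dist ** 2) / (h ** 2))         # smaller distance -> larger weight
    return neighbors, w / w.sum()               # normalize so the weights sum to 1

def filter_pixel_single(img, y, x, h=10.0):
    """Denoise img[y, x] from its own 3x3 neighborhood (interior pixels only)."""
    patch = img[y - 1:y + 2, x - 1:x + 2].astype(np.float64)
    neighbors, w = neighbor_weights(patch, patch[1, 1], h)
    return float(np.dot(neighbors, w))          # weighted sum = denoised value
```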
In another embodiment, when the signal-to-noise ratio of the first image is lower than that of the second image, this indicates that the quality of the second image is higher than the quality of the first image. At this time, the processor 40 may filter the first image according to the image information of the second image to generate the target image.
As shown in fig. 4, when the processor 40 filters the first image according to the second image, the processor 40 may first acquire, for each pixel in the first image P1, the corresponding pixel in the second image P2; for example, the pixel Q1 in the first image P1 corresponds to the pixel Q3 in the second image P2. Next, the processor 40 may calculate the Euclidean distance between each of the 8 pixels Q4 around the pixel Q3 and the pixel Q3 to determine the similarity between Q3 and each pixel Q4, and determine the weight of each pixel Q4 according to that similarity. Finally, the weighted sum of the pixel value of each pixel Q4 and its corresponding weight serves as the pixel value of the pixel Q1 in the first image corresponding to Q3, that is, the pixel value of the denoised pixel Q1. In this way, the processor 40 may generate the target image.
In yet another embodiment, when the signal-to-noise ratio of the first image is equal to the signal-to-noise ratio of the second image, the quality of the first image and the quality of the second image are similar, and at this time, the processor 40 may filter the first image according to the image information of the first image and the second image to generate the target image.
As shown in fig. 4, when the processor 40 filters the first image according to both the first image and the second image, the processor 40 may obtain the Euclidean distance between each of the 8 pixels Q2 around the pixel Q1 in the first image P1 and the pixel Q1 to determine the similarity of each pixel Q2 to the pixel Q1, and thereby determine the weight of each pixel Q2. Similarly, the processor 40 may obtain the Euclidean distance between each of the 8 pixels Q4 around the pixel Q3 (the pixel corresponding to Q1 in the second image P2) and the pixel Q3 to determine the similarity of each pixel Q4 to the pixel Q3, and thereby determine the weight of each pixel Q4.
In this way, the processor 40 may calculate the pixel value of the denoised pixel Q1 from the pixel value of each pixel Q2, the weight corresponding to each pixel Q2, the pixel value of each pixel Q4, and the weight corresponding to each pixel Q4. For example, if the sum of the products of the pixel value of each pixel Q2 and the corresponding weight is A1, and the sum of the products of the pixel value of each pixel Q4 and the corresponding weight is A2, then the pixel value of the pixel Q1 is (A1+A2)/2.
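A minimal sketch of this fusion branch, reusing the hypothetical filter_pixel_single helper from the sketch above and assuming (y2, x2) is the already-known position of the corresponding pixel in the second image:

```python
def filter_pixel_fused(img1, img2, y1, x1, y2, x2, h=10.0):
    """Average the two weighted sums: (A1 + A2) / 2."""
    A1 = filter_pixel_single(img1, y1, x1, h)   # weighted sum over the Q2 neighbors in P1
    A2 = filter_pixel_single(img2, y2, x2, h)   # weighted sum over the Q4 neighbors in P2
    return 0.5 * (A1 + A2)
```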
It will be appreciated that the processor 40 performs filtering processing on the first image through the first image and the second image, so as to ensure that when each pixel in the first image is filtered, image information of images shot by different cameras can be used, that is, more effective information can be utilized in the denoising processing process, so as to ensure that the denoising effect is better.
In the image processing method, the image processing apparatus 10 and the electronic device 100 according to the embodiments of the present application, since the first image captured by the first camera 20 and the second image captured by the second camera 30, whose field-of-view ranges at least partially overlap, are acquired, the filtering of the first image can draw on two images captured by different cameras. That is, more image information can be utilized in the process of filtering the first image, so that a good denoising effect of the generated target image can be ensured.
Referring to fig. 2, 3 and 5, in some embodiments, step 012: filtering the first image according to the first image and the second image to generate a target image, comprising the steps of:
0121: acquiring a first pixel in a first image and a second pixel corresponding to the first pixel in a second image;
0122: filtering the first pixel according to pixels in the range of the adjacent area of the first pixel to generate a target image; or
0123: filtering the first pixel according to the pixels in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image; or
0124: the first pixel is filtered according to the pixels in the first pixel adjacent area range and the pixels in the second pixel adjacent area range corresponding to the first pixel, so as to generate a target image.
In certain embodiments, the filtering module 12 is configured to perform steps 0121, 0122, 0123, and 0124. That is, the filtering module 12 is configured to obtain a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel to generate a target image; or filtering the first pixel according to the pixels in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image; or filtering the first pixel according to the pixel in the first pixel adjacent area range and the pixel in the second pixel adjacent area range corresponding to the first pixel to generate the target image.
In certain embodiments, processor 40 is configured to perform steps 0121, 0122, 0123, and 0124. That is, the processor 40 is configured to obtain a first pixel in a first image and a second pixel corresponding to the first pixel in a second image; filter the first pixel according to pixels in the range of the adjacent area of the first pixel to generate a target image; or filter the first pixel according to the pixels in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image; or filter the first pixel according to the pixels in the first pixel adjacent area range and the pixels in the second pixel adjacent area range corresponding to the first pixel to generate the target image.
Specifically, during the process of filtering the first image according to the first image and the second image, the processor 40 may first acquire the first pixels in the first image and the second pixel corresponding to each first pixel in the second image.
Note that a second pixel corresponding to a first pixel does not mean that the position of the first pixel in the first image is the same as the position of the second pixel in the second image. Since the field of view ranges of the first camera 20 and the second camera 30 at least partly coincide, the first image and the second image also at least partly coincide; corresponding first and second pixels are those located in the overlapping portion of the first image and the second image.
It will be appreciated that the processor 40 is capable of acquiring a corresponding second pixel in the second image for each first pixel in the first image.
Next, the processor 40 may filter the first pixel according to pixels within the vicinity of the first pixel to generate a target image. The processor 40 may also filter the first pixels according to pixels within a range of a vicinity of the second pixels corresponding to the first pixels to generate the target image. The processor 40 may also filter the first pixels to generate the target image based on the pixels within the first pixel neighborhood and the second pixels within the second pixel neighborhood corresponding to the first pixels.
The adjacent area range may be an area of a predetermined size, for example, an area of size 3×3 or 5×5 centered on the first pixel or the second pixel. More generally, the adjacent area of the first pixel may be an area of size N×N centered on the first pixel, and the adjacent area of the second pixel an area of size M×M centered on the second pixel, where N and M are positive integers and N may be greater than, equal to, or less than M.
As shown in fig. 6, the second pixel in the second image P4 corresponding to the first pixel Q5 in the first image P3 is Q7. The adjacent area of the first pixel Q5 in the first image P3 may be an area Y1 of size 3×3 centered on the first pixel Q5. The adjacent area of the second pixel Q7 in the second image P4 may be an area Y2 of size 3×3 centered on the second pixel Q7.
In one embodiment, when the processor 40 filters the first pixel Q5 according to pixels within the adjacent area Y1 of the first pixel Q5, the processor 40 may filter the first pixel Q5 according to all the first pixels Q6 within the adjacent area Y1 to generate the target image. The first pixels Q6 are the first pixels within an N×N range centered on the first pixel Q5; for example, when N is 3, the number of first pixels Q6 is 8.
Specifically, the processor 40 may calculate the Euclidean distance between each first pixel Q6 and the first pixel Q5 to determine the similarity between each first pixel Q6 and the first pixel Q5, thereby determining the weight corresponding to each first pixel Q6, and finally perform a weighted summation of the pixel value of each first pixel Q6 with its corresponding weight to obtain the pixel value of the filtered first pixel Q5. The smaller the Euclidean distance between a first pixel Q6 and the first pixel Q5, the higher their similarity and the larger the weight.
In another embodiment, when the processor 40 filters the first pixel Q5 according to the pixels within the adjacent area Y2 of the second pixel Q7 corresponding to the first pixel Q5, the processor 40 may filter the first pixel Q5 according to all the second pixels Q8 within the adjacent area Y2 to generate the target image. Similarly, the second pixels Q8 are the second pixels within an N×N range centered on the second pixel Q7; for example, when N is 3, the number of second pixels Q8 is 8.
Specifically, the processor 40 may calculate the Euclidean distance between each second pixel Q8 and the second pixel Q7 to determine the similarity between each second pixel Q8 and the second pixel Q7, thereby determining the weight corresponding to each second pixel Q8.
Finally, a weighted summation of the pixel value of each second pixel Q8 with its corresponding weight yields the updated pixel value of the second pixel Q7, which is used to update the first pixel Q5; that is, the pixel value of the filtered first pixel Q5 is the updated pixel value of the second pixel Q7. The smaller the Euclidean distance between a second pixel Q8 and the second pixel Q7, the higher their similarity and the larger the weight.
In still another embodiment, when the processor 40 filters the first pixel Q5 according to the pixels within the adjacent area Y1 of the first pixel Q5 and the pixels within the adjacent area Y2 of the second pixel Q7 corresponding to the first pixel Q5, the processor 40 may filter the first pixel Q5 according to all the first pixels Q6 within the adjacent area Y1 and all the second pixels Q8 within the adjacent area Y2 to generate the target image.
Similarly, the processor 40 may calculate the Euclidean distance between each first pixel Q6 and the first pixel Q5, and the Euclidean distance between each second pixel Q8 and the second pixel Q7, so as to determine the weight corresponding to each first pixel Q6 and the weight corresponding to each second pixel Q8.
Finally, a first pixel value X is obtained by weighted summation of the pixel value of each first pixel Q6 with its corresponding weight, and a second pixel value Y is obtained by weighted summation of the pixel value of each second pixel Q8 with its corresponding weight.
The pixel value of the filtered first pixel may be (X+Y)/2, or it may be aX+bY, where a and b are the weights corresponding to the first pixel value X and the second pixel value Y; they may be set manually or according to the signal-to-noise ratios of the first image and the second image. If the signal-to-noise ratio of the first image is greater than that of the second image, a is greater than b.
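A minimal sketch of this weighted combination, assuming the combination weights a and b are derived proportionally from the two signal-to-noise ratios (the text only requires a > b when the first image's SNR is higher, so this proportional choice is an assumption):

```python
def fuse_weighted(X, Y, snr1, snr2):
    """Combine the two filtered values as a*X + b*Y with a + b = 1."""
    a = snr1 / (snr1 + snr2)   # higher SNR of the first image -> larger a
    b = 1.0 - a
    return a * X + b * Y
```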
Referring to fig. 2, 3 and 7, in certain embodiments, step 012: filtering the first image from the first image and the second image to generate a target image, further comprising the steps of:
0125: acquiring a first image block containing a first pixel to be filtered and a second image block containing a second pixel corresponding to the first pixel to be filtered;
0126: determining a first weight of each first pixel in the first image block according to the Euclidean distance between the first pixel in the first image block and the first pixel to be filtered;
0127: and determining a second weight of each second pixel in the second image block according to the Euclidean distance between each second pixel in the second image block and the second pixel corresponding to the first pixel to be filtered.
Wherein, step 0122: filtering the first pixel according to pixels in the vicinity of the first pixel to generate a target image, comprising the steps of:
01221: and under the condition that the ratio of the signal to noise ratio of the first image block to the second image block is larger than a first preset ratio, determining the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel.
Step 0123: filtering the first pixel according to the pixel in the second pixel adjacent area range corresponding to the first pixel so as to generate a target image, wherein the method comprises the following steps:
01231: and under the condition that the ratio of the signal to noise ratio of the first image block to the second image block is smaller than a second preset ratio, determining the pixel value of the first pixel to be filtered according to each second pixel in the second image block and the second weight corresponding to each second pixel.
Step 0124: filtering the first pixel according to the pixel in the first pixel adjacent area range and the pixel in the second pixel adjacent area range corresponding to the first pixel to generate a target image, comprising the steps of:
01241: and under the condition that the ratio of the signal to noise ratio of the first image block to that of the second image block is in the interval [second preset ratio, first preset ratio], determining the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel, and each second pixel in the second image block and the second weight corresponding to each second pixel.
In certain embodiments, the filtering module 12 is configured to perform steps 0125, 0126, 0127, 01221, 01231, and 01241. That is, the filtering module 12 is configured to obtain a first image block including a first pixel to be filtered, and a second image block including a second pixel corresponding to the first pixel to be filtered; determine a first weight of each first pixel in the first image block according to the Euclidean distance between each first pixel in the first image block and the first pixel to be filtered; determine a second weight of each second pixel in the second image block according to the Euclidean distance between each second pixel in the second image block and the second pixel corresponding to the first pixel to be filtered; when the ratio of the signal-to-noise ratio of the first image block to that of the second image block is greater than a first preset ratio, determine the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel; when that ratio is smaller than a second preset ratio, determine the pixel value of the first pixel to be filtered according to each second pixel in the second image block and the second weight corresponding to each second pixel; and when that ratio is in the interval [second preset ratio, first preset ratio], determine the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel, and each second pixel in the second image block and the second weight corresponding to each second pixel.
In certain embodiments, processor 40 is configured to perform steps 0125, 0126, 0127, 01221, 01231, and 01241. That is, the processor 40 is configured to acquire a first image block including a first pixel to be filtered and a second image block including a second pixel corresponding to the first pixel to be filtered; determine a first weight of each first pixel in the first image block according to the Euclidean distance between each first pixel in the first image block and the first pixel to be filtered; determine a second weight of each second pixel in the second image block according to the Euclidean distance between each second pixel in the second image block and the second pixel corresponding to the first pixel to be filtered; when the ratio of the signal-to-noise ratio of the first image block to that of the second image block is greater than a first preset ratio, determine the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel; when that ratio is smaller than a second preset ratio, determine the pixel value of the first pixel to be filtered according to each second pixel in the second image block and the second weight corresponding to each second pixel; and when that ratio is in the interval [second preset ratio, first preset ratio], determine the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel, and each second pixel in the second image block and the second weight corresponding to each second pixel.
Specifically, during the process of filtering each first pixel according to the first pixels around each first pixel and/or the second pixels around the second pixels corresponding to each first pixel, the processor 40 may first obtain the first image block including the first pixels to be filtered and the second image block including the second pixels corresponding to the first pixels to be filtered.
For example, as shown in fig. 8, a first image block including a first pixel N1 to be filtered in the first image A1 is M1, and a second image block including a second pixel N2 corresponding to the first pixel N1 to be filtered in the second image A2 is M2.
Next, the processor 40 may determine a first weight value of each first pixel N3 in the first image block M1 according to the euclidean distance of each first pixel N3 in the first image block M1 and the first pixel N1 to be filtered. Similarly, the processor 40 may determine the second weight of each second pixel N4 in the second image block M2 according to the euclidean distance between each second pixel N4 in the second image block M2 and the second pixel N2. The first weight and the second weight are inversely proportional to the Euclidean distance, namely the smaller the Euclidean distance is, the larger the first weight and the second weight are.
Further, the processor 40 may compare the signal-to-noise ratios of the first image block and the second image block to determine the image quality of the first image block and the image quality of the second image block. For example, when the signal-to-noise ratio of the first image block is greater than that of the second image block, the image quality of the first image block is higher than the image quality of the second image block.
Specifically, the electronic device 100 may have a first preset ratio and a second preset ratio stored in advance, where the first preset ratio is greater than the second preset ratio. The first preset ratio may be any number greater than 1, such as 1.05, 1.1, etc. The second preset ratio may be any number less than 1, such as 0.95, 0.9, etc.
When the ratio of the signal-to-noise ratio of the first image block to that of the second image block is larger than the first preset ratio, the signal-to-noise ratio of the first image block is larger than that of the second image block, i.e. the image quality of the first image block is higher than the image quality of the second image block. At this time, to ensure the image quality of the finally generated target image, the first pixel to be filtered is filtered using the first image block itself, and the processor 40 may determine the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel. As shown in fig. 8, the pixel value of the first pixel N1 may be the sum of the products of the pixel values of the other 8 first pixels N3 in the first image block M1 and the corresponding first weights; alternatively, it may be the average of that sum. Specifically, regarding the value range of the first weights: if the sum of all the first weights is 1, the pixel value of the first pixel N1 is the sum of the products of the pixel values of the other 8 first pixels N3 in the first image block M1 and the corresponding first weights; if the sum of all the first weights is greater than 1, the pixel value of the first pixel N1 is the average of that sum.
When the ratio of the signal-to-noise ratio of the first image block to that of the second image block is smaller than the second preset ratio, the signal-to-noise ratio of the first image block is smaller than that of the second image block, i.e. the image quality of the first image block is lower than the image quality of the second image block. At this time, to ensure the image quality of the finally generated target image, the first pixel to be filtered is filtered using the second image block. The processor 40 may determine the pixel value of the first pixel to be filtered according to each second pixel in the second image block and the second weight corresponding to each second pixel. As shown in fig. 8, the pixel value of the first pixel N1 may be the sum of the products of the pixel values of the other 8 second pixels N4 in the second image block M2 and the corresponding second weights; alternatively, it may be the average of that sum. Specifically, regarding the value range of the second weights: if the sum of all the second weights is 1, the pixel value of the first pixel N1 is the sum of the products of the pixel values of the other 8 second pixels N4 in the second image block M2 and the corresponding second weights; if the sum of all the second weights is greater than 1, the pixel value of the first pixel N1 is the average of that sum.
When the ratio of the signal-to-noise ratio of the first image block to that of the second image block is in the interval [second preset ratio, first preset ratio], for example [0.95, 1.05], the signal-to-noise ratio of the first image block is close to that of the second image block, i.e. the image quality of the first image block is close to the image quality of the second image block. At this time, to ensure the image quality of the finally generated target image, the first pixel to be filtered is filtered using both the first image block and the second image block. The processor 40 may determine the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel, and each second pixel in the second image block and the second weight corresponding to each second pixel. As shown in fig. 8, the processor 40 may first calculate the sum X of the products of the pixel value of each first pixel N3 in the first image block M1 and the first weight corresponding to each first pixel N3, and then calculate the sum Y of the products of the pixel value of each second pixel N4 in the second image block M2 and the second weight corresponding to each second pixel N4.
Then, when the sum of all the first weights is 1 and the sum of all the second weights is 1, the pixel value of the first pixel N1 to be filtered is (X+Y)/2; or the pixel value of the first pixel N1 to be filtered is aX+bY, where the sum of a and b is 1, the values of a and b are related to the signal-to-noise ratios of the first image block and the second image block, and a is greater than b if the signal-to-noise ratio of the first image block is greater than that of the second image block. When the sum of all the first weights is greater than 1 and the sum of all the second weights is greater than 1, the pixel value of the first pixel N1 to be filtered is (X+Y)/16.
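A minimal sketch of this three-branch selection, assuming 3×3 blocks, a simple mean-over-standard-deviation SNR estimate, and the same Gaussian distance-to-weight mapping as the earlier sketch (the SNR metric and h are assumptions, not specified by the text):

```python
import numpy as np

def snr(block):
    """Crude SNR estimate: mean over standard deviation (assumed metric)."""
    s = block.std()
    return block.mean() / s if s > 0 else np.inf

def weighted_sum(block, h=10.0):
    """Weighted sum of the 8 neighbors of a 3x3 block's center pixel,
    with normalized Gaussian weights over value-space distance."""
    flat = block.astype(np.float64).flatten()
    neighbors = np.delete(flat, 4)                        # drop the center pixel
    w = np.exp(-((neighbors - flat[4]) ** 2) / (h ** 2))  # closer value -> larger weight
    return float(np.dot(neighbors, w / w.sum()))

def filter_pixel(block1, block2, r_low=0.95, r_high=1.05):
    """block1/block2: co-located 3x3 blocks around the pixel pair (N1, N2).
    r_low/r_high are the second/first preset ratios (the example values
    0.95 and 1.05 come from the text)."""
    ratio = snr(block1) / snr(block2)
    X = weighted_sum(block1)
    Y = weighted_sum(block2)
    if ratio > r_high:          # first block clearly better: use it alone
        return X
    if ratio < r_low:           # second block clearly better: use it alone
        return Y
    return 0.5 * (X + Y)        # comparable quality: fuse both
```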
Thus, the processor 40 obtains the pixel values of all the first pixels to be filtered in the first image, so as to perform the filtering process on the first image to generate the target image.
It should be noted that, referring to fig. 8, when calculating the Euclidean distance between each first pixel N3 in the first image block M1 and the first pixel N1 to be filtered in order to determine the first weight of each first pixel N3, the calculation may be performed using the coordinates of each first pixel N3 and the coordinates of the first pixel N1. Alternatively, a third image block may be determined around each first pixel N3 in the first image block M1 and a fourth image block around the first pixel N1 to be filtered; for example, if the size of the first image block is 5×5, the sizes of the third image block and the fourth image block may be 3×3. The first weight of each first pixel N3 in the first image block M1 is then determined by calculating the Euclidean distance between its third image block and the fourth image block (for example, the sum of the Euclidean distances between corresponding pixels of the third image block and the fourth image block). Similarly, the weight of each second pixel N4 in the second image block M2 may be calculated in the same manner, which is not described in detail herein.
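A minimal sketch of this patch-based weight, assuming a 5×5 block whose 3×3 patches are compared by the Euclidean (Frobenius) distance between corresponding pixels, again with an assumed Gaussian mapping and smoothing parameter h:

```python
import numpy as np

def patch_weight(block, i, j, h=10.0):
    """Weight of pixel (i, j) inside a 5x5 block: compare the 3x3 patch
    around (i, j) with the 3x3 patch around the block center (2, 2).
    Valid positions are i, j in {1, 2, 3} so both patches fit."""
    p_ij = block[i - 1:i + 2, j - 1:j + 2].astype(np.float64)  # third image block
    p_c = block[1:4, 1:4].astype(np.float64)                   # fourth image block
    dist = np.linalg.norm(p_ij - p_c)        # patch-to-patch Euclidean distance
    return float(np.exp(-(dist ** 2) / (h ** 2)))
```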
Referring to fig. 2, 3 and 9, the image processing method according to the embodiment of the present application further includes, before executing step 012:
013: the first image and the second image are luminance aligned.
In some embodiments, the image processing apparatus 10 further comprises an alignment module 13. The alignment module 13 is used to perform step 013. That is, the alignment module 13 is used for performing brightness alignment on the first image and the second image.
In certain embodiments, processor 40 is configured to perform step 013. That is, the processor 40 is configured to perform brightness alignment on the first image and the second image.
Specifically, before the processor 40 filters the first image to generate the target image, the processor 40 also performs brightness alignment on the first image and the second image to ensure that the first image and the second image are at the same brightness when the first image is subjected to filtering processing according to the first image and the second image, so as to ensure the quality of the finally generated target image.
More specifically, the exposure parameters of the first camera 20 capturing the first image and of the second camera 30 capturing the second image are known or set in advance. Thus, a first preset exposure ratio can be set, calculated from the exposure parameters of the first image and the exposure parameters of the second image.
When the first image and the second image are aligned in brightness, the image with the longer exposure time of the two may be divided by the first preset exposure ratio, or the image with the shorter exposure time may be multiplied by the first preset exposure ratio.
For example, the first image is a long exposure image and the second image is a short exposure image, then the processor 40 may divide the exposure ratio parameter in the first image by a first preset exposure ratio or multiply the exposure ratio parameter in the second image by the first preset exposure ratio, thereby aligning the brightness of the first image and the second image to ensure the quality of the final generated target image.
In some embodiments, processor 40 may also be pre-configured with a second pre-set exposure ratio, the first pre-set exposure ratio and the second pre-set exposure ratio being different in magnitude. The processor 40 may perform brightness alignment of the first image and the second image by dividing an exposure ratio parameter of the first image by a second preset exposure ratio and multiplying the exposure ratio parameter of the second image by the second preset exposure ratio so that the exposure ratio parameters of the first image and the second image are on desired parameters.
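A minimal sketch of the first brightness-alignment scheme, assuming img_long is the longer-exposure image and the preset exposure ratio has been computed in advance from the two cameras' exposure parameters:

```python
import numpy as np

def align_brightness(img_long, img_short, exposure_ratio):
    """Divide the longer-exposure image by the preset exposure ratio
    (equivalently, the shorter one could be multiplied by it instead)."""
    return img_long.astype(np.float64) / exposure_ratio, img_short.astype(np.float64)
```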
Referring to fig. 2, 3 and 10, the image processing method according to the embodiment of the present application further includes the steps of:
014: calibrating the internal parameters of the first camera 20, the second camera 30 and the depth camera 60, and calibrating the external parameters of the first camera 20 and the depth camera 60 and the external parameters of the first camera 20 and the second camera 30;
015: according to the external parameters of the first camera 20 and the depth camera 60, aligning the depth image acquired by the depth camera 60 with the first image to determine a depth value corresponding to each first pixel in the first image;
016: and determining the mapping relation between the first pixels in the first image and the second pixels in the second image according to the depth value corresponding to each first pixel in the first image and the internal parameters and external parameters of the first camera 20 and the second camera 30.
Wherein, step 0121: acquiring a first pixel in a first image and a second pixel corresponding to the first pixel in a second image, wherein the method comprises the following steps:
01211: and acquiring a second pixel corresponding to the first pixel in the first image in the second image according to the mapping relation.
In some embodiments, the image processing apparatus 10 further includes a calibration module 14 and a determination module 15. The calibration module 14 is configured to perform step 014. The determination module 15 is used to perform step 015 and step 016. The filtering module 12 is configured to perform step 01211. That is, the calibration module 14 is used for calibrating the internal parameters of the first camera 20, the second camera 30 and the depth camera 60, and calibrating the external parameters of the first camera 20 and the depth camera 60, and the external parameters of the first camera 20 and the second camera 30. The determining module 15 is configured to align the depth image acquired by the depth camera 60 with the first image according to the external parameters of the first camera 20 and the depth camera 60, so as to determine a depth value corresponding to each first pixel in the first image; and determine the mapping relation between the first pixels in the first image and the second pixels in the second image according to the depth value corresponding to each first pixel in the first image and the internal parameters and external parameters of the first camera 20 and the second camera 30. The filtering module 12 is configured to obtain a second pixel corresponding to the first pixel in the first image in the second image according to the mapping relationship.
In certain embodiments, processor 40 is configured to perform step 014, step 015, step 016, and step 01211. That is, the processor 40 is configured to calibrate the internal parameters of the first camera 20, the second camera 30, and the depth camera 60, and calibrate the external parameters of the first camera 20 and the depth camera 60, and the external parameters of the first camera 20 and the second camera 30; align the depth image acquired by the depth camera 60 with the first image according to the external parameters of the first camera 20 and the depth camera 60, to determine a depth value corresponding to each first pixel in the first image; determine the mapping relationship between the first pixels in the first image and the second pixels in the second image according to the depth value corresponding to each first pixel in the first image and the internal parameters and external parameters of the first camera 20 and the second camera 30; and obtain a second pixel corresponding to the first pixel in the first image in the second image according to the mapping relationship.
Specifically, when acquiring the second pixel corresponding to the first pixel in the first image in the second image, the processor 40 needs to acquire the mapping relationship between the first pixel and the second pixel first, so as to obtain the coordinate of the first pixel in the first image according to the mapping relationship, thereby obtaining the coordinate of the second pixel in the second image, that is, acquire the second pixel corresponding to the first pixel.
More specifically, the electronic device 100 further includes a depth camera 60, and the depth camera 60 is configured to capture a depth image of the current scene. When the processor 40 obtains the mapping relationship between the first pixel and the second pixel, the processor 40 first calibrates the first camera 20, the second camera 30 and the depth camera 60. The calibration method may be Zhang Zhengyou's calibration method, which is not described in detail herein.
The processor 40 may calibrate the internal parameters of the first camera 20, the second camera 30, and the depth camera 60, and calibrate the external parameters of the first camera 20 and the depth camera 60, and the external parameters of the first camera 20 and the second camera 30. The internal parameters include the focal length f of each camera and its principal point (image center) position: the principal point position (X1, Y1) of the first camera 20, the principal point position (X'1, Y'1) of the second camera 30, and the principal point position (X''1, Y''1, Z''1) of the depth camera 60, etc. The external parameters of the first camera 20 and the depth camera 60 represent the pose information of the first camera 20 relative to the depth camera 60, and the external parameters of the first camera 20 and the second camera 30 represent the pose information of the first camera 20 relative to the second camera 30.
Next, the processor 40 may align the depth image acquired by the depth camera 60 with the first image using the external parameters between the first camera 20 and the depth camera 60, so that each first pixel in the portion of the first image aligned with the depth image corresponds to a depth value.
Finally, the processor 40 can determine the mapping relationship between the first pixels in the first image and the second pixels in the second image according to the depth value corresponding to each first pixel in the first image and the internal parameters and external parameters of the first camera 20 and the second camera 30. The mapping relationship between a first pixel in the first image and a second pixel in the second image is substantially the parallax between the first camera 20 and the second camera 30, which the processor 40 may determine from the depth value according to the following formula (1):

d = (B × f) / D (1)

where D is the depth value, d is the parallax value, B is the distance between the optical centers of the first camera 20 and the second camera 30, and f is the focal length.
Thus, after the processor 40 obtains the mapping relationship between the first pixels and the second pixels, the processor 40 can obtain the coordinates of the second pixels corresponding to the first pixels in the second image according to the coordinates of each first pixel in the first image, so as to obtain the second pixels corresponding to the first pixels.
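Expressed in code, formula (1) turns the aligned depth values directly into per-pixel parallax, which is exactly the mapping relationship used above. This is a minimal sketch; the function and variable names are illustrative, not the patent's:

    import numpy as np

    def depth_to_disparity(depth, baseline, focal):
        """Formula (1): d = (B x f) / D, applied per pixel.
        depth    -- per-pixel depth map D (zeros mark missing depth)
        baseline -- distance B between the two optical centers
        focal    -- focal length f, in pixels
        """
        return np.where(depth > 0,
                        baseline * focal / np.maximum(depth, 1e-6), 0.0)

    # After baseline alignment the rows already match, so a first pixel at
    # (row, col) maps to the second pixel at (row, col - d); the sign of the
    # shift depends on the camera layout.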
Referring to fig. 2, 3 and 11, the image processing method according to the embodiment of the present application further includes the steps of:
017: calibrating internal parameters and external parameters of the first camera 20 and the second camera 30;
018: calculating a depth image from the first image, the second image, and the internal and external parameters of the first camera 20 and the second camera 30, and aligning the depth image with the first image to obtain a depth value of each first pixel in the first image;
019: and determining the mapping relation between the first pixels in the first image and the second pixels in the second image according to the depth value corresponding to each first pixel in the first image and the internal parameters and external parameters of the first camera 20 and the second camera 30.
Wherein step 0121, acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image, includes the following step:
01212: and acquiring a second pixel corresponding to the first pixel in the first image in the second image according to the mapping relation.
In certain embodiments, the calibration module 14 is configured to perform step 017. The alignment module 13 is configured to perform step 018. The determination module 15 is configured to perform step 019. The filtering module 12 is configured to perform step 01212. That is, the calibration module 14 is used to calibrate the internal and external parameters of the first and second cameras 20 and 30. The alignment module 13 is configured to calculate a depth image according to the first image, the second image, and the internal parameters and external parameters of the first camera 20 and the second camera 30, and align the depth image with the first image to obtain a depth value of each first pixel in the first image. The determining module 15 is configured to determine a mapping relationship between the first pixel in the first image and the second pixel in the second image according to the depth value corresponding to each first pixel in the first image and the internal parameters and external parameters of the first camera 20 and the second camera 30. The filtering module 12 is configured to obtain, according to the mapping relationship, the second pixel in the second image corresponding to the first pixel in the first image.
In certain embodiments, processor 40 is configured to perform step 017, step 018, step 019 and step 01212. That is, the processor 40 is used to calibrate the internal and external parameters of the first and second cameras 20 and 30; calculating a depth image from the first image, the second image, and the internal and external parameters of the first camera 20 and the second camera 30, and aligning the depth image with the first image to obtain a depth value of each first pixel in the first image; determining a mapping relation between the first pixels in the first image and the second pixels in the second image according to the depth value corresponding to each first pixel in the first image and the internal parameters and external parameters of the first camera 20 and the second camera 30; and obtaining a second pixel corresponding to the first pixel in the first image in the second image according to the mapping relation.
Specifically, when the electronic device 100 is not provided with the depth camera 60, the processor 40 may instead, when obtaining the mapping relationship between the first pixels and the second pixels, calibrate the internal parameters and external parameters of the first camera 20 and the second camera 30.
Next, the processor 40 may calculate the depth image corresponding to the first image and the second image according to the first image, the second image, and the internal parameters and external parameters of the first camera 20 and the second camera 30, and align the first image with the depth image so as to obtain the depth value corresponding to each first pixel in the first image.
After obtaining the depth value corresponding to each first pixel, the processor 40 may calculate the mapping relationship between the first pixels and the second pixels according to the internal parameters and external parameters of the first camera 20 and the second camera 30 and the depth values. As before, the mapping relationship between a first pixel in the first image and a second pixel in the second image is substantially the parallax between the first camera 20 and the second camera 30, which the processor 40 can calculate according to formula (1) above.
Finally, after obtaining the mapping relationship between the first pixels and the second pixels, the processor 40 may obtain the coordinates of the second pixels corresponding to the first pixels in the second image according to the coordinates of each first pixel in the first image, so as to obtain the second pixels corresponding to the first pixels.
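In this depth-camera-free variant, the depth image can be computed with a standard stereo matcher over the rectified pair. The following is a sketch using OpenCV's SGBM matcher; the matcher parameters and variable names are assumptions, not values from the patent:

    import cv2
    import numpy as np

    # first_rect / second_rect: rectified grayscale first and second images;
    # baseline and focal come from the calibrated external/internal parameters.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128,
                                    blockSize=5)
    disparity = matcher.compute(first_rect, second_rect).astype(np.float32) / 16.0

    # Formula (1) inverted: D = (B x f) / d; invalid pixels stay at zero depth.
    depth = np.where(disparity > 0, baseline * focal / disparity, 0.0)

Dividing by 16 converts SGBM's fixed-point output (4 fractional bits) into real-valued disparities.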
Referring to fig. 2, 3 and 12, the image processing method according to the embodiment of the present application further includes the steps of:
020: performing distortion correction on the first image according to the internal parameters of the first camera 20, and performing distortion correction on the second image according to the internal parameters of the second camera 30;
021: performing baseline alignment on the first image and the second image according to the external parameters of the first camera 20 and the external parameters of the second camera 30.
In some embodiments, the image processing apparatus 10 further includes an image processing module 16. The image processing module 16 is configured to perform steps 020 and 021. That is, the image processing module 16 is configured to perform distortion correction on the first image according to the internal parameters of the first camera 20 and on the second image according to the internal parameters of the second camera 30, and to perform baseline alignment on the first image and the second image according to the external parameters of the first camera 20 and the external parameters of the second camera 30.
In certain embodiments, the processor 40 is configured to perform steps 020 and 021. That is, the processor 40 is configured to perform distortion correction on the first image according to the internal parameters of the first camera 20 and on the second image according to the internal parameters of the second camera 30, and to perform baseline alignment on the first image and the second image according to the external parameters of the first camera 20 and the external parameters of the second camera 30.
Specifically, after the processor 40 calibrates the internal parameters and external parameters of the first camera 20 and the second camera 30, and before determining the mapping relationship between the first pixels and the second pixels, the processor 40 may perform distortion correction on the first image according to the internal parameters of the first camera 20 and on the second image according to the internal parameters of the second camera 30.
More specifically, the lens of a camera may introduce distortion due to limits in manufacturing accuracy and variations in the assembly process, causing the original image to be distorted. Therefore, distortion correction is required for the first image and the second image. Lens distortion can be divided into radial distortion and tangential distortion.
Taking radial distortion as an example, radial distortion can be classified into barrel distortion and pincushion distortion. As shown in fig. 13, the first image W1 exhibits barrel distortion and the second image W2 exhibits pincushion distortion; the processor 40 can correct the distortion of the first image W1 using the internal parameters of the first camera 20 to obtain a normal first image W3, and correct the distortion of the second image W2 using the internal parameters of the second camera 30 to obtain a normal second image W4.
While performing distortion correction on the first image and the second image, the processor 40 may also perform baseline alignment on the two images using the external parameters of the first camera 20 and the external parameters of the second camera 30. Baseline alignment ensures that each first pixel in the first image and the corresponding second pixel in the second image lie in the same horizontal direction, that is, the first pixel and the corresponding second pixel have the same longitudinal coordinate; for example, the first pixel O1 in the first image W3 and the second pixel O2 in the second image W4 shown in fig. 13 lie in the same horizontal direction. Calculating the mapping relationship between the first pixel and the second pixel is then essentially calculating the parallax of the first pixel and the second pixel in the lateral direction.
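Steps 020 and 021 correspond closely to OpenCV's undistortion and stereo rectification routines. A minimal sketch, assuming calibrated intrinsics K1/D1 and K2/D2, the relative pose R/T, and the image size (all names illustrative):

    import cv2

    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K1, D1, K2, D2, size, R, T)

    # The rectification maps fold distortion correction (step 020) and
    # baseline alignment (step 021) into a single remap, so corresponding
    # pixels end up on the same row.
    m1x, m1y = cv2.initUndistortRectifyMap(K1, D1, R1, P1, size, cv2.CV_32FC1)
    m2x, m2y = cv2.initUndistortRectifyMap(K2, D2, R2, P2, size, cv2.CV_32FC1)
    first_rect = cv2.remap(first_image, m1x, m1y, cv2.INTER_LINEAR)
    second_rect = cv2.remap(second_image, m2x, m2y, cv2.INTER_LINEAR)

    # Distortion correction alone, without baseline alignment, is also possible:
    first_undistorted = cv2.undistort(first_image, K1, D1)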
Referring to fig. 2, 3 and 14, in some embodiments, step 012, filtering the first image according to the first image and the second image to generate a target image, further includes the following step:
0128: filtering the first image according to the first image and the plurality of frames of the second image to generate a target image.
In certain embodiments, the filtering module 12 is configured to perform step 0128. That is, the filtering module 12 is configured to filter the first image according to the first image and the plurality of frames of the second image to generate the target image.
In certain embodiments, processor 40 is configured to perform step 0128. That is, the processor 40 is configured to filter the first image from the first image and the plurality of frames of the second image to generate the target image.
Specifically, there may be a plurality of second cameras 30, such as 2, 3 or more, and the plurality of second cameras 30 may capture a plurality of frames of the second image.
In this case, when the processor 40 filters the first image according to the first image and the second image, the processor 40 may filter the first image according to the first image and the plurality of frames of the second image to generate the target image.
More specifically, referring to fig. 5, the processor 40 may acquire the second pixel Q7 in the second image P4 corresponding to the first pixel Q5 in the first image P3. When there are a plurality of second cameras 30, there are a plurality of second images P4, each containing a second pixel Q7 corresponding to the first pixel Q5.
Then, when the processor 40 filters the first pixel Q5 according to the second pixels Q8 surrounding the second pixel Q7 in the second image P4, the second pixels Q8 surrounding each second pixel Q7 in the plurality of second images P4 are used to filter the first pixel Q5 to generate the target image.
For the pixel value of the first pixel Q5, the processor 40 may calculate, for each second pixel Q7, the sum of the products of the surrounding second pixels Q8 and their corresponding weights, and then take the average of these sums over the number of second pixels Q7 as the pixel value of the first pixel Q5. For example, if the second image P4 has 3 frames, there are 3 second pixels Q7 corresponding to the first pixel Q5; denoting the sum of the products of the second pixels Q8 around each second pixel Q7 and their corresponding weights as A1, A2 and A3 respectively, the pixel value of the first pixel Q5 is (A1 + A2 + A3)/3.
In this manner, the processor 40 may filter the first image using the images captured by the plurality of second cameras 30 to ensure the image quality of the generated target image.
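A minimal sketch of this multi-frame fusion, with the per-frame weighted sums precomputed; the helper name and the numbers are illustrative only:

    def fuse_weighted_sums(sums):
        """Average the weighted sums from the n second images: each entry is
        sum_j(weight_j * neighbor_j) around one corresponding second pixel Q7."""
        return sum(sums) / len(sums)

    # Three second images, as in the example above: (A1 + A2 + A3) / 3.
    A1, A2, A3 = 120.0, 126.0, 117.0
    print(fuse_weighted_sums([A1, A2, A3]))  # -> 121.0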
Referring to fig. 2, 3 and 15, in some embodiments, step 012, filtering the first image according to the first image and the second image to generate a target image, further includes the following steps:
0129: in the case where the exposure parameters of the first camera 20 and the second camera 30 are different, determining the image with the higher signal-to-noise ratio of the first image and the second image as a guide image; and
0130: performing guided filtering on the first image according to the guide image to generate a target image.
In certain embodiments, the filtering module 12 is configured to perform steps 0129 and 0130. That is, the filtering module 12 is configured to determine, in the case where the exposure parameters of the first camera 20 and the second camera 30 are different, the image with the higher signal-to-noise ratio of the first image and the second image as the guide image, and to perform guided filtering on the first image according to the guide image to generate the target image.
In certain embodiments, the processor 40 is configured to perform steps 0129 and 0130. That is, the processor 40 is configured to determine, in the case where the exposure parameters of the first camera 20 and the second camera 30 are different, the image with the higher signal-to-noise ratio of the first image and the second image as the guide image, and to perform guided filtering on the first image according to the guide image to generate the target image.
Specifically, as noted above, each camera has its own exposure parameters. In the case where the exposure parameters of the first camera 20 and the second camera 30 are different, the processor 40 may determine which of the first image and the second image can serve as the guide image.
More specifically, the processor 40 may first determine the guide image by comparing the signal-to-noise ratios of the first image and the second image: the guide image is whichever of the two has the higher signal-to-noise ratio. For example, when the signal-to-noise ratio of the first image is greater than that of the second image, the guide image is the first image; conversely, when the signal-to-noise ratio of the first image is smaller than that of the second image, the guide image is the second image. It will be appreciated that the guide image may thus be either the first image itself or the second image.
After determining the guide image, the processor 40 may perform guided filtering on the first image based on the guide image to generate the target image.
Further, guided filtering involves an input image and a guide image. Here the input image is the first image, and the guide image is the image with the higher signal-to-noise ratio of the first image and the second image. The processor 40 may obtain the pixel value of each first pixel in the filtered first image according to the following formula (2):

q_i = Σ_j W_ij(I) · p_j (2)

where i and j are pixel indices, p_j is the pixel value of the j-th first pixel in the first image, W_ij(I) is the weight for the weighted average determined by the guide image I, and q_i is the pixel value of the i-th first pixel in the filtered first image.
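For illustration, OpenCV's contrib module exposes a guided filter that implements this weighted-average scheme. In the sketch below, the snr() helper is a crude stand-in for whatever signal-to-noise metric the device actually uses, and the radius/eps values are assumptions:

    import cv2
    import numpy as np

    def snr(img):
        """Crude SNR proxy (mean over standard deviation); illustrative only."""
        a = img.astype(np.float32)
        return float(a.mean() / (a.std() + 1e-6))

    # Step 0129: pick the higher-SNR image as the guide. Step 0130: perform
    # guided filtering on the first image. Requires opencv-contrib-python.
    guide = first_image if snr(first_image) >= snr(second_image) else second_image
    target = cv2.ximgproc.guidedFilter(guide=guide, src=first_image,
                                       radius=8, eps=0.01)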
Referring to fig. 2, 3 and 16, in some embodiments, step 012, filtering the first image according to the first image and the second image to generate a target image, further includes the following step:
0131: filtering the first image according to the first image and the second image based on a preset three-dimensional block matching algorithm, to generate a target image.
In certain embodiments, the filtering module 12 is configured to perform step 0131. That is, the filtering module 12 is configured to filter the first image according to the first image and the second image based on a preset three-dimensional block matching algorithm, so as to generate the target image.
In certain embodiments, processor 40 is configured to perform step 0131. That is, the processor 40 is configured to filter the first image according to the first image and the second image based on a preset three-dimensional block matching algorithm to generate the target image.
Specifically, the processor 40 may also filter the first image according to the first image and the second image based on a preset three-dimensional block matching algorithm to generate the target image. The preset three-dimensional block matching algorithm is the Block Matching 3D (BM3D) algorithm. The BM3D algorithm matches neighboring image blocks to group a number of similar blocks into a three-dimensional matrix, performs filtering in the three-dimensional space, and then inverse-transforms and fuses the result back into two dimensions to form the denoised image.
More specifically, as described above, the processor 40 may determine the image used to filter the first image based on the signal-to-noise ratios of the first image and the second image. For example, when the signal-to-noise ratio of the first image is greater than that of the second image, the image that filters the first image is the first image itself; when the signal-to-noise ratio of the first image is smaller than that of the second image, the image that filters the first image is the second image; and when the signal-to-noise ratios of the two images are similar or equal, the images that filter the first image are the first image and the second image together.
After determining the image to filter the first image, for example, when the processor 40 filters the first image according to the second image, the processor 40 may filter the first image through the second image according to the BM3D algorithm to obtain the target image.
For example, as shown in fig. 8, when the first image A1 is filtered by the second image A2, the second image block M2 containing the second pixel N2 corresponding to the first pixel N1 may be acquired first, and image blocks similar to the second image block M2 are then searched for in the second image A2. The image blocks similar to the second image block M2 are grouped into a three-dimensional matrix, filtering is performed in the three-dimensional space, and the result is inverse-transformed and fused back into two dimensions to form a denoised second image block M2. The pixel value of the second pixel N2 in the denoised second image block M2 is then taken as the pixel value of the first pixel N1 in the filtered first image block M1. The similarity between an image block and the second image block M2 can be measured by the Euclidean distance between them; the smaller the Euclidean distance, the higher the similarity.
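The grouping stage of such a BM3D-style pipeline (collecting the blocks most similar to the second image block M2 by Euclidean distance) might look like the sketch below; the search window and group size are illustrative assumptions:

    import numpy as np

    def group_similar_blocks(image, ref_block, top_left, search=16, group=8):
        """Gather the `group` patches most similar to ref_block (smallest
        Euclidean distance) inside a +/- `search` window: the block-matching
        step that builds BM3D's three-dimensional matrix."""
        bh, bw = ref_block.shape
        ref = ref_block.astype(np.float32)
        y0, x0 = top_left
        found = []
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = y0 + dy, x0 + dx
                if 0 <= y <= image.shape[0] - bh and 0 <= x <= image.shape[1] - bw:
                    patch = image[y:y + bh, x:x + bw].astype(np.float32)
                    found.append((float(np.linalg.norm(patch - ref)), patch))
        found.sort(key=lambda t: t[0])  # smaller distance = more similar
        return np.stack([p for _, p in found[:group]])  # shape: (group, bh, bw)

The 3-D transform, hard thresholding and aggregation that complete BM3D are omitted here.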
Referring to fig. 17, an embodiment of the present application also provides a non-transitory computer readable storage medium 300 containing a computer program 301. The computer program 301, when executed by the one or more processors 40, causes the one or more processors 40 to perform the image processing method of any of the embodiments described above.
For example, the computer program 301, when executed by the one or more processors 40, causes the processors 40 to perform the following image processing method:
011: acquiring a first image shot by the first camera 20 and a second image shot by the second camera 30, wherein the field of view ranges of the first camera 20 and the second camera 30 are at least partially overlapped; and
012: filtering the first image according to the first image and the second image to generate a target image.
As another example, the computer program 301, when executed by the one or more processors 40, causes the processors 40 to perform the following image processing method:
0121: acquiring, in the second image, a second pixel corresponding to a first pixel in the first image;
0122: filtering each first pixel according to the first pixels around that first pixel and/or the second pixels around its corresponding second pixel, to generate a target image.
In the description of the present specification, reference to the terms "certain embodiments," "in one example," "illustratively," and the like, means that a particular feature, structure, material, or characteristic described in connection with the embodiments or examples is included in at least one embodiment or example of the present application. In this specification, schematic representations of the above terms do not necessarily refer to the same embodiments or examples. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Any process or method descriptions in flow charts or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process, and further implementations are included within the scope of the preferred embodiment of the present application in which functions may be executed out of order from that shown or discussed, including substantially concurrently or in reverse order, depending on the functionality involved, as would be understood by those reasonably skilled in the art of the present application.
While embodiments of the present application have been shown and described above, it will be understood that the above embodiments are illustrative and not to be construed as limiting the application, and that variations, modifications, alternatives and variations may be made to the above embodiments by one of ordinary skill in the art within the scope of the application.

Claims (14)

1. An image processing method, comprising:
acquiring a first image shot by a first camera and a second image shot by a second camera, wherein the field of view ranges of the first camera and the second camera are at least partially overlapped; and
filtering the first image based on the first image and the second image to generate a target image,
wherein said filtering said first image from said first image and said second image to generate a target image comprises:
acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image;
filtering the first pixel according to pixels in the range of the adjacent area of the first pixel, so as to generate a target image; or
filtering the first pixel according to the pixels in the range of the adjacent area of the second pixel corresponding to the first pixel, so as to generate a target image; or
filtering the first pixel according to the pixels in the range of the adjacent area of the first pixel and the pixels in the range of the adjacent area of the second pixel corresponding to the first pixel, so as to generate a target image.
2. The image processing method according to claim 1, wherein the filtering the first image from the first image and the second image to generate a target image further comprises:
acquiring a first image block containing the first pixel to be filtered and a second image block containing the second pixel corresponding to the first pixel to be filtered;
determining a first weight of each first pixel in the first image block according to the Euclidean distance between the first pixel and the first pixel to be filtered;
determining a second weight of each second pixel in the second image block according to the Euclidean distance between the second pixel in the second image block and the second pixel corresponding to the first pixel to be filtered;
the filtering the first pixel according to the pixels in the area adjacent to the first pixel to generate a target image includes:
under the condition that the ratio of the signal-to-noise ratio of the first image block to the signal-to-noise ratio of the second image block is larger than a first preset ratio, determining the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel.
3. The image processing method according to claim 2, wherein the filtering the first pixel according to the pixels in the second pixel vicinity corresponding to the first pixel to generate the target image includes:
under the condition that the ratio of the signal-to-noise ratio of the first image block to the signal-to-noise ratio of the second image block is smaller than a second preset ratio, determining the pixel value of the first pixel to be filtered according to each second pixel in the second image block and the second weight corresponding to each second pixel.
4. The image processing method according to claim 2, wherein the filtering the first pixel according to the pixel in the first pixel neighboring area range and according to the pixel in the second pixel neighboring area range corresponding to the first pixel to generate the target image includes:
under the condition that the ratio of the signal-to-noise ratio of the first image block to the signal-to-noise ratio of the second image block lies within the interval [second preset ratio, first preset ratio], determining the pixel value of the first pixel to be filtered according to each first pixel in the first image block and the first weight corresponding to each first pixel, and according to each second pixel in the second image block and the second weight corresponding to each second pixel.
5. The image processing method according to claim 1, characterized in that before filtering the first image from the first image and the second image to generate a target image, the image processing method further comprises:
and performing brightness alignment on the first image and the second image.
6. The image processing method according to claim 1, characterized in that the image processing method further comprises:
calibrating internal parameters of the first camera, the second camera and the depth camera, and calibrating external parameters between the first camera and the depth camera and external parameters between the first camera and the second camera;
aligning, according to the external parameters between the first camera and the depth camera, the depth image acquired by the depth camera with the first image to determine a depth value corresponding to each first pixel in the first image;
determining a mapping relation between a first pixel in the first image and a second pixel in the second image according to a depth value corresponding to each first pixel in the first image and internal parameters and external parameters of the first camera and the second camera;
the acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image includes:
and acquiring the second pixel corresponding to the first pixel in the first image in the second image according to the mapping relation.
7. The image processing method according to claim 1, characterized in that the image processing method further comprises:
calibrating internal parameters and external parameters of the first camera and the second camera;
calculating a depth image according to the first image, the second image and the internal parameters and external parameters of the first camera and the second camera, and aligning the depth image with the first image to acquire a depth value of each first pixel in the first image;
determining a mapping relation between a first pixel in the first image and a second pixel in the second image according to a depth value corresponding to each first pixel in the first image and internal parameters and external parameters of the first camera and the second camera;
The acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image includes:
and acquiring the second pixel corresponding to the first pixel in the first image in the second image according to the mapping relation.
8. The image processing method according to claim 6 or 7, characterized in that after calibrating the internal and external parameters of the first camera and the second camera, the image processing method further comprises:
performing distortion correction on the first image according to the internal parameters of the first camera, and performing distortion correction on the second image according to the internal parameters of the second camera;
and performing baseline alignment on the first image and the second image according to the external parameters of the first camera and the external parameters of the second camera.
9. The image processing method according to claim 1, wherein a plurality of the second cameras are provided, the plurality of second cameras respectively capture a plurality of frames of the second image, and the filtering the first image according to the first image and the second image to generate the target image includes:
filtering the first image according to the first image and the plurality of frames of the second image to generate the target image.
10. The image processing method according to claim 1, wherein the filtering the first image from the first image and the second image to generate a target image further comprises:
under the condition that the exposure parameters of the first camera and the second camera are different, determining an image with higher signal to noise ratio in the first image and the second image as a guide image;
and performing guided filtering on the first image according to the guided image so as to generate the target image.
11. The image processing method according to claim 1, wherein the filtering the first image from the first image and the second image to generate a target image further comprises:
and filtering the first image according to the first image and the second image based on a preset three-dimensional block matching algorithm to generate the target image.
12. An image processing apparatus, comprising:
the acquisition module is used for acquiring a first image shot by the first camera and a second image shot by the second camera, and the field of view ranges of the first camera and the second camera are at least partially overlapped;
The filtering module is used for filtering the first image according to the first image and the second image so as to generate a target image;
the filtering module is further used for acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel, so as to generate a target image; or
filtering the first pixel according to the pixels in the range of the adjacent area of the second pixel corresponding to the first pixel, so as to generate a target image; or
filtering the first pixel according to the pixels in the range of the adjacent area of the first pixel and the pixels in the range of the adjacent area of the second pixel corresponding to the first pixel, so as to generate a target image.
13. An electronic device, characterized by comprising a processor, wherein the processor is configured to acquire a first image shot by a first camera and a second image shot by a second camera, the field of view ranges of the first camera and the second camera being at least partially overlapped, and to filter the first image according to the first image and the second image to generate a target image, wherein filtering the first image according to the first image and the second image to generate the target image includes: acquiring a first pixel in the first image and a second pixel corresponding to the first pixel in the second image; filtering the first pixel according to pixels in the range of the adjacent area of the first pixel, so as to generate a target image; or filtering the first pixel according to pixels in the range of the adjacent area of the second pixel corresponding to the first pixel, so as to generate a target image; or filtering the first pixel according to the pixels in the range of the adjacent area of the first pixel and the pixels in the range of the adjacent area of the second pixel corresponding to the first pixel, so as to generate a target image.
14. A non-transitory computer readable storage medium storing a computer program which, when executed by one or more processors, implements the image processing method of any one of claims 1 to 11.
CN202211185543.6A 2022-09-27 2022-09-27 Image processing method and device, electronic equipment and computer readable storage medium Pending CN116630172A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211185543.6A CN116630172A (en) 2022-09-27 2022-09-27 Image processing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN116630172A true CN116630172A (en) 2023-08-22

Family

ID=87601425

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211185543.6A Pending CN116630172A (en) 2022-09-27 2022-09-27 Image processing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN116630172A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination