WO2022226701A1 - Image processing method, processing apparatus, electronic device, and storage medium - Google Patents

Image processing method, processing apparatus, electronic device, and storage medium

Info

Publication number
WO2022226701A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
processed
multiple frames
images
Prior art date
Application number
PCT/CN2021/089701
Other languages
English (en)
French (fr)
Inventor
罗俊
Original Assignee
Oppo广东移动通信有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Oppo广东移动通信有限公司
Priority to PCT/CN2021/089701
Publication of WO2022226701A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/38 - Registration of image sequences
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N23/13 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors

Definitions

  • the present application relates to image processing technology, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
  • the present application aims to solve one of the problems in the related art at least to a certain extent. Therefore, the purpose of this application is to provide an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium.
  • the first image data and the second image data are respectively processed to obtain multiple frames of first pixel images, first calibration parameters of the first image sensor, multiple frames of second pixel images, and second calibration parameters of the second image sensor;
  • a target image is obtained by synthesizing the first processed pixel image and the second processed pixel image.
  • an acquisition module configured to acquire the first image data of the first image sensor and the second image data of the second image sensor
  • a preprocessing module configured to process the first image data and the second image data respectively to obtain multiple frames of first pixel images, first calibration parameters of the first image sensor, multiple frames of second pixel images, and second calibration parameters of the second image sensor;
  • a first multi-frame processing module configured to align and fuse multiple frames of the first pixel images to generate a first processed pixel image and a first alignment model
  • a calculation module configured to calculate and obtain position conversion information between the first image sensor and the second image sensor according to the first calibration parameter and the second calibration parameter;
  • a second multi-frame processing module configured to align and fuse the second pixel image to generate a second processed pixel image according to the position conversion information and the first alignment model
  • a synthesis module configured to synthesize the first processed pixel image and the second processed pixel image to obtain a target image.
  • the electronic device of an embodiment of the present application includes a first image sensor, a second image sensor, a processor, and a memory;
  • the image processing method includes: acquiring first image data of a first image sensor and second image data of a second image sensor; respectively processing the first image data and the second image data to obtain multiple frames of first pixel images, first calibration parameters of the first image sensor, multiple frames of second pixel images, and second calibration parameters of the second image sensor; aligning and fusing the multiple frames of first pixel images to generate a first processed pixel image and a first alignment model; calculating position conversion information between the first image sensor and the second image sensor according to the first calibration parameter and the second calibration parameter; aligning and fusing the second pixel images according to the position conversion information and the first alignment model to generate a second processed pixel image; and synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
  • the computer-readable storage medium of the embodiments of the present application includes a computer program, which, when executed by one or more processors, causes the processors to execute the image processing method.
  • the image processing method includes: acquiring first image data of a first image sensor and second image data of a second image sensor; respectively processing the first image data and the second image data to obtain multiple frames of first pixel images, first calibration parameters of the first image sensor, multiple frames of second pixel images, and second calibration parameters of the second image sensor; aligning and fusing the multiple frames of first pixel images to generate a first processed pixel image and a first alignment model; calculating position conversion information between the first image sensor and the second image sensor according to the first calibration parameter and the second calibration parameter; aligning and fusing the second pixel images according to the position conversion information and the first alignment model to generate a second processed pixel image; and synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
  • FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 2 is a schematic diagram of a module of an image processing apparatus according to some embodiments of the present application.
  • FIG. 3 is a schematic diagram of a module of an electronic device according to some embodiments of the present application.
  • FIG. 4 is a schematic diagram of a module of an image sensor according to some embodiments of the present application.
  • FIG. 5 is a schematic diagram of a scene in an image processing method according to some embodiments of the present application.
  • 6-7 are schematic flowcharts of image processing methods according to some embodiments of the present application.
  • FIG. 8 is a schematic diagram of a scene of an image processing method according to some embodiments of the present application.
  • FIG. 9 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 10 is a schematic diagram of another scene of the image processing method according to some embodiments of the present application.
  • FIG. 11-12 are schematic flowcharts of image processing methods according to some embodiments of the present application.
  • FIG. 13 is a schematic diagram of another scene of the image processing method according to some embodiments of the present application.
  • FIG. 14 is a schematic diagram of another module of the electronic device according to some embodiments of the present application.
  • FIG. 15 is a schematic diagram of connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
  • Electronic device 100, image processing apparatus 10, acquisition module 11, preprocessing module 12, first multi-frame processing module 13, first alignment unit 132, first fusion unit 134, calculation module 14, second multi-frame processing module 15, second alignment unit 152, second fusion unit 154, and synthesis module 16;
  • Memory 40, programs 42, computer-readable storage medium 50.
  • an embodiment of the present application provides an image processing method, and the image processing method includes the steps:
  • the present application also provides an image processing apparatus 10 for processing the above-mentioned image processing method.
  • the image processing apparatus 10 includes an acquisition module 11 , a preprocessing module 12 , a first multi-frame processing module 13 , and a calculation module 14.
  • step S11 can be realized by the acquisition module 11
  • step S12 can be realized by the preprocessing module 12
  • step S13 can be realized by the first multi-frame processing module 13
  • step S14 can be realized by the calculation module 14
  • step S15 can be realized by the second multi-frame processing module 15, and step S16 can be implemented by the synthesis module 16.
  • the acquiring module 11 may be configured to acquire the first image data of the first image sensor and the second image data of the second image sensor.
  • the preprocessing module 12 may be configured to process the first image data and the second image data to obtain multiple frames of first pixel images, first calibration parameters of the first image sensor, multiple frames of second pixel images, and second calibration parameters of the second image sensor.
  • the first multi-frame processing module 13 may be configured to align and fuse the multi-frame first pixel images to generate a first processed pixel image and a first alignment model.
  • the calculation module 14 may be configured to calculate the position conversion information between the first image sensor and the second image sensor according to the first calibration parameter and the second calibration parameter.
  • the second multi-frame processing module 15 may be configured to align and fuse the second pixel image to generate the second processed pixel image according to the position conversion information and the first alignment model.
  • the synthesis module 16 may be configured to synthesize the first processed pixel image and the second processed pixel image to obtain the target image.
  • an embodiment of the present application provides an electronic device 100 , and the image processing method of the present application can be completed by the electronic device 100 .
  • the electronic device 100 includes a processor 20 and a plurality of image sensors 30 including a first image sensor 31 and a second image sensor 32 .
  • the processor 20 may be configured to acquire the first image data of the first image sensor 31 and the second image data of the second image sensor 32, and process the first image data and the second image data to obtain multiple frames of first pixel images, A first calibration parameter of an image sensor 31 , multiple frames of second pixel images, and a second calibration parameter of the second image sensor 32 .
  • the processor 20 can be used to align and fuse multiple frames of the first pixel images to generate a first processed pixel image and a first alignment model, and to calculate position conversion information between the first image sensor 31 and the second image sensor 32 according to the first calibration parameters and the second calibration parameters.
  • the processor 20 may also be configured to perform alignment processing on the second pixel images according to the position conversion information and the first alignment model, fuse them to generate the second processed pixel image, and synthesize the first processed pixel image and the second processed pixel image to obtain the target image.
  • In the image processing method, image processing apparatus 10, and electronic device 100 of the present application, in the process of performing multi-frame processing on the first pixel images and second pixel images obtained by the first image sensor 31 and the second image sensor 32, the first alignment model calculated from the first pixel images is used to align the multiple frames of first pixel images, and at the same time the first alignment model, together with the position conversion information obtained from the first image data and the second image data, is applied to align the second pixel images.
  • In this way, aligning the second pixel images with a separately calculated second alignment model derived from the second pixel images alone is avoided, the complexity of signal processing and the computational cost are reduced, and the first processed pixel image and the second processed pixel image generated by fusion have similar signal-to-noise ratios and can be jointly aligned. Thus, multi-frame image processing and multi-camera fusion processing with low complexity and low power consumption are realized.
  • the electronic device 100 may be a mobile phone, a tablet computer, a laptop computer, a drone, a robot, a smart wearable device (smart watch, smart bracelet, smart helmet, smart glasses, etc.), a virtual reality device, and the like.
  • the image processing apparatus 10 may be hardware or software pre-installed on the mobile phone, and may execute the image processing method when the mobile phone is activated.
  • the image processing apparatus 10 may be a low-level software code segment of a mobile phone or a part of an operating system.
  • the image sensor 30 may be a camera assembly, wherein the first image sensor 31 may be the main camera of the electronic device 100 , and the second image sensor 32 may be the auxiliary camera of the electronic device 100 .
  • a complementary metal oxide semiconductor (CMOS, Complementary Metal Oxide Semiconductor) photosensitive element or a charge-coupled device (CCD, Charge-coupled Device) photosensitive element can be used.
  • the frame rates of the first image sensor 31 and the second image sensor 32 are different, and the frame rate of the first image sensor 31 is greater than the frame rate of the second image sensor 32.
  • the frame rate of the first image sensor 31 is 30fps
  • the frame rate of the second image sensor 32 is 10fps.
  • the image sensor 30 may include a pixel array 301 , a vertical driving unit 302 , a control unit 303 , a column processing unit 304 and a horizontal driving unit 305 .
  • the image sensor 30 may generate image data after exposure through the pixel array 301 .
  • the pixel array 301 may be a color filter array (Color Filter Array, CFA).
  • the pixel array 301 includes a plurality of photosensitive pixels arranged two-dimensionally in an array form (ie, arranged in a two-dimensional matrix form), and each photosensitive pixel includes a filter with a different spectral absorption characteristic.
  • Each photosensitive pixel includes a photoelectric conversion element and converts the absorbed light into electric charge according to the intensity of the light incident on it, so that pixel data for a plurality of different color channels can be generated, which ultimately form the image data.
  • the vertical driving unit 302 includes a shift register and an address decoder.
  • the vertical driving unit 302 includes readout scan and reset scan functions.
  • the readout scanning refers to sequentially scanning unit photosensitive pixels row by row, and reading signals from these unit photosensitive pixels row by row.
  • the signal output by each photosensitive pixel in the selected and scanned photosensitive pixel row is transmitted to the column processing unit 304 .
  • the reset scan is used to reset the charges, and the photocharges of the photoelectric conversion element are discarded, so that the accumulation of new photocharges can be started.
  • the signal processing performed by the column processing unit 304 is correlated double sampling (CDS) processing. In the CDS process, the reset level and the signal level output from each photosensitive pixel in the selected row are taken out, and the level difference is calculated. Thus, the signals of the photosensitive pixels in one row are obtained.
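The CDS level-difference readout described above can be sketched numerically; the reset and signal levels below are illustrative values, not taken from the patent.

```python
def cds_row(reset_levels, signal_levels):
    """Correlated double sampling: the per-pixel signal of one row is the
    difference between the signal level and the reset level, which cancels
    the reset offset shared by each pixel's two samples."""
    return [s - r for r, s in zip(reset_levels, signal_levels)]

# Three pixels in one selected row; each pixel's two samples share an offset.
reset = [100.0, 98.5, 101.25]     # reset levels (offset only)
signal = [150.0, 118.5, 201.25]   # signal levels (offset + photo charge)
print(cds_row(reset, signal))     # prints [50.0, 20.0, 100.0]
```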
  • the column processing unit 304 may have an analog-to-digital (A/D) conversion function for converting an analog pixel signal into a digital format.
  • the horizontal driving unit 305 includes a shift register and an address decoder.
  • the horizontal driving unit 305 sequentially scans the pixel array 301 column by column. Through the selective scanning operation performed by the horizontal driving unit 305, each photosensitive pixel column is sequentially processed by the column processing unit 304 and sequentially output.
  • the control unit 303 configures timing signals according to the operation mode, and uses various timing signals to control the vertical driving unit 302 , the column processing unit 304 and the horizontal driving unit 305 to work together.
  • the processor 20 can be connected to the pixel arrays 301 of the first image sensor 31 and the second image sensor 32, respectively, and the processor 20 can be set with a preset image readout mode to read out the first image data and the second image data generated by the first image sensor 31 and the second image sensor 32.
  • Multiple frames of first pixel images and the first calibration parameters of the first image sensor 31 can be obtained from the pixel data of the first image data, and multiple frames of second pixel images and the second calibration parameters of the second image sensor 32 can be obtained from the pixel data of the second image data. Each frame of the first pixel image or second pixel image includes pixel data of the same color arranged in an array.
  • the first image data and the second image data are RGBW pixel data
  • the generated first pixel image and the second pixel image include RGB pixel data. That is, the first pixel image and the second pixel image include color information of three color channels of R (ie, red), G (ie, green) and B (ie, blue).
  • For the multiple frames of first pixel images, the processor 20 may perform calculation to obtain a first alignment model of the first pixel images, and perform alignment processing on the multiple frames of first pixel images according to the first alignment model. After the multiple frames of first pixel images are aligned, the processor 20 may fuse them to generate one frame of first processed pixel image.
  • the processor 20 may also calculate and obtain the position conversion information between the first image sensor 31 and the second image sensor 32 according to the first calibration parameter and the second calibration parameter. It can be understood that since the positions of the first image sensor 31 and the second image sensor 32 are different, image data are obtained from different viewing angles during the photographing process, resulting in differences between the obtained first image data and the second image data. Further, a second alignment model is generated according to the position conversion information and the first alignment model, and the multi-frame second pixel images are aligned according to the second alignment model, and the aligned multi-frame second pixel images are fused to generate a second process pixel image.
  • After the processor 20 generates the first processed pixel image and the second processed pixel image, the two can be synthesized to obtain a target image, thereby realizing image fusion across multiple image sensors and improving image quality.
  • the first image data includes a plurality of minimum repeating units A1, each minimum repeating unit A1 includes a plurality of pixel units a1, and each pixel unit a1 includes a plurality of color pixels and a plurality of panchromatic pixels; the color pixels are arranged in a first diagonal direction, the panchromatic pixels are arranged in a second diagonal direction, and the first diagonal direction is different from the second diagonal direction.
  • Step S12 includes sub-steps:
  • Color pixels in a first diagonal direction are acquired to generate a first pixel image and/or panchromatic pixels in a second diagonal direction are acquired to generate a first pixel image.
  • the preset image readout mode may be the binning mode, that is, the processor 20 may read out the image data in binning mode to generate the first pixel image. It should be noted that the binning algorithm adds together the charges corresponding to adjacent pixels of the same color and reads them out as a single pixel.
  • the color pixels in the first diagonal direction in each pixel unit a1 are read out, and the panchromatic pixels in the second diagonal direction in each pixel unit a1 are read out; all the read-out color pixels are then arranged in an array to form a first pixel image, or all the read-out panchromatic pixels are arranged in an array to form a first pixel image.
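As an illustration of the binning readout, a minimal 2x2 same-color binning can be sketched as follows; the single-channel array and the helper name are hypothetical, and a real RGBW color filter array readout is more involved.

```python
import numpy as np

def bin2x2(channel):
    """Sum each non-overlapping 2x2 block of same-color pixels and read the
    result out as one pixel, as in the binning mode described above."""
    h, w = channel.shape
    return channel.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Hypothetical single-color plane of charges (4x4), binned down to 2x2.
plane = np.arange(16, dtype=np.int64).reshape(4, 4)
print(bin2x2(plane))  # prints [[10 18] [42 50]] over two rows
```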
  • this embodiment only takes the first image data to generate the first pixel image as an example for illustration, and the second image data to generate the second pixel image may be similar to the above processing process, and will not be repeated here.
  • step S13 includes sub-steps:
  • the first multi-frame processing module 13 includes a first alignment unit 132 and a first fusion unit 134, steps S132 and S134 may be implemented by the first alignment unit 132, and step S136 may be implemented by the first fusion unit 134;
  • the first alignment unit 132 can be used to find mutually matching pixel points in the multiple frames of first pixel images to calculate the first alignment model of the multiple frames of first pixel images, and the first alignment unit 132 can also be used to align the multiple frames of first pixel images according to the first alignment model.
  • the first fusion unit 134 may be configured to fuse the aligned multiple frames of the first pixel images to obtain a first processed pixel image.
  • the processor 20 may be configured to find mutually matching pixel points in the multiple frames of first pixel images to calculate the first alignment model of the multiple frames of first pixel images, and the processor 20 may also be configured to align the multiple frames of first pixel images according to the first alignment model and fuse the aligned frames to obtain a first processed pixel image.
  • the alignment model calculated based on the matched pixels can eliminate the motion relationship between the multiple frames of images. This enables multiple frames of first pixel images to be fused together with high quality.
  • the processor 20 may use a scale-invariant feature transform (SIFT) algorithm, a speeded-up robust features (SURF) feature point matching algorithm, or an optical flow field algorithm to find mutually matching pixels in the multiple frames of first pixel images.
  • the SIFT algorithm is an algorithm in the field of computer vision for detecting and describing local features in an image; it maintains some degree of stability under changes in scale, rotation, and illumination.
  • There are four steps in SIFT feature detection: 1. Scale-space extremum detection: the image is searched over all scales, and potential interest points that are invariant to scale and rotation are identified through difference-of-Gaussian functions. 2. Feature point localization: at each candidate position, the position and scale are determined by fitting a fine model, and key points are selected based on their stability. 3. Feature direction assignment: based on the local gradient directions of the image, one or more directions are assigned to each key point position; all subsequent operations are performed relative to the direction, scale, and position of the key points, thereby providing invariance to these transformations. 4. Feature point description: in the neighborhood around each feature point, the local gradients of the image are measured at the selected scale, and these gradients are transformed into a representation that tolerates relatively large local shape deformation and illumination changes.
  • the Surf algorithm is a robust image recognition and description algorithm that can be used for computer vision tasks.
  • the concepts and steps of the SURF algorithm are based on SIFT, but the detailed process is slightly different.
  • the SURF algorithm includes the following three steps: feature point detection, feature point neighborhood description, and descriptor matching.
  • the optical flow field algorithm is a point-based matching algorithm, which uses the changes of pixels in an image sequence in the time domain and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, so as to calculate the motion of pixels between adjacent frames.
  • the first alignment model in this embodiment may optionally be an affine transformation model or a perspective transformation model. That is, the affine transformation model or the perspective transformation model can be calculated according to the matching pixel points, and then the multiple frames of first pixel images can be aligned according to the affine transformation model or the perspective transformation model to obtain the aligned multiple frames of first pixel images.
  • step S132 includes sub-steps:
  • sub-step S1322 and sub-step S1324 may be implemented by the first alignment unit 132, that is, the first alignment unit 132 may be used to determine the first coordinates of the pixels that match each other in the first pixel images of two adjacent frames.
  • the first alignment unit 132 may also be configured to calculate an affine transformation matrix of every two adjacent frames according to the first coordinate to obtain a first alignment model.
  • the processor 20 is configured to determine the first coordinates of the pixels that match each other in the first pixel images of two adjacent frames, and calculate the affine transformation matrix of each adjacent two frames according to the first coordinates to obtain the first coordinate An aligned model.
  • the affine transformation matrix means that any parallelogram in one plane can be mapped to another parallelogram by an affine transformation; the image mapping operation is performed in the same spatial plane, and different types of parallelograms are obtained through different transformation parameters.
  • the scaling and rotation of the image are controlled based on the scaling and rotation parameters, and the displacement of the image is controlled based on the position parameters.
  • first coordinate refers to the coordinate of the pixel point in the image sensor coordinate system
  • second coordinate refers to the coordinate of the pixel point in the world coordinate system
  • the processor 20 may include a preset affine transformation formula H, which in expanded form is:

    x' = a00·x + a01·y + a02
    y' = a10·x + a11·y + a12
  • (x, y) is the first coordinate of the first pixel image of the first frame of the two adjacent frames of the first pixel image
  • (x', y') is the first coordinate of the matching pixel in the second frame of the two adjacent frames of the first pixel image
  • a00, a01, a10, and a11 are the scaling and rotation parameters of the affine transformation matrix
  • a02 and a12 are the displacement parameters of the affine transformation matrix.
  • the processor 20 may substitute the first coordinates (x, y) and (x', y') of the first pixel images of two adjacent frames into the preset affine transformation formula to calculate the scaling and rotation parameters a00, a01, a10, a11 and the displacement parameters a02, a12 of the matrix, thereby obtaining the affine transformation matrix between the first pixel images of two adjacent frames.
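The substitution step above amounts to a small linear system in the six parameters; with at least three matched point pairs it can be solved by least squares. The sketch below uses synthetic matches generated by a known transform, and the helper name is illustrative, not from the patent.

```python
import numpy as np

def estimate_affine(src, dst):
    """Estimate the 2x3 affine matrix mapping src points to dst points by
    stacking x' = a00*x + a01*y + a02 and y' = a10*x + a11*y + a12 for each
    matched pair and solving the least-squares system."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0]); b.append(xp)
        A.append([0, 0, 0, x, y, 1]); b.append(yp)
    params, *_ = np.linalg.lstsq(np.array(A, float), np.array(b, float), rcond=None)
    a00, a01, a02, a10, a11, a12 = params
    return np.array([[a00, a01, a02], [a10, a11, a12]])

# Synthetic matches produced by a known transform: scale 2, shift (3, -1).
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2 * x + 3, 2 * y - 1) for x, y in src]
H = estimate_affine(src, dst)
print(np.round(H, 6))  # recovers the scale and shift parameters
```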
  • for n frames of first pixel images, n-1 affine transformation matrices can be generated.
  • the first pixel images include 4 frames, namely p0, p1, p2, and p3; a first affine transformation matrix H01 can be generated between the first pixel images p0 and p1, a second affine transformation matrix H12 may be generated between the first pixel images p1 and p2, and a third affine transformation matrix H23 may be generated between the first pixel images p2 and p3.
  • the processor 20 may generate a first alignment model from all the obtained affine transformation matrices. In this way, the multiple frames of the first pixel images can be aligned according to the first alignment model.
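In homogeneous 3x3 form, the pairwise matrices can be chained by matrix multiplication to align any frame to the reference. A minimal sketch, assuming each Hij maps frame j into frame i's coordinates; the translation-only matrices are hypothetical values, not from the patent.

```python
import numpy as np

def to_h(affine2x3):
    """Lift a 2x3 affine matrix to homogeneous 3x3 form for composition."""
    return np.vstack([affine2x3, [0.0, 0.0, 1.0]])

H01 = to_h(np.array([[1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]))  # shift x by 1
H12 = to_h(np.array([[1.0, 0.0, 2.0], [0.0, 1.0, 0.0]]))  # shift x by 2
H23 = to_h(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 3.0]]))  # shift y by 3

# Matrix aligning frame p3 to the reference frame p0.
H03 = H01 @ H12 @ H23
point_p3 = np.array([0.0, 0.0, 1.0])
print(H03 @ point_p3)  # where p3's origin lands in p0's coordinates
```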
  • step S15 includes sub-steps:
  • the second multi-frame processing module 15 may include a second alignment unit 152 and a second fusion unit 154 .
  • Sub-steps S152 and S154 may be implemented by the second alignment unit 152
  • sub-step S156 may be implemented by the second fusion unit 154 .
  • the second alignment unit 152 is configured to calibrate the first alignment model according to the position conversion information to generate the second alignment model, and the second alignment unit 152 is further configured to align the multiple frames of second pixel images according to the second alignment model.
  • the second fusion unit 154 is configured to fuse the aligned second pixel images to generate a second processed pixel image.
  • the processor 20 may be configured to calibrate the first alignment model based on the position conversion information to generate the second alignment model, and further to align the multiple frames of second pixel images according to the second alignment model.
  • the processor 20 is also operable to fuse the aligned second pixel images to generate a second processed pixel image.
  • the positions of the first image sensor 31 and the second image sensor 32 are different, and if the multiple frames of the second pixel images are aligned through the first alignment model, the alignment may not be possible. Therefore, it is necessary to obtain the position conversion information between the first image sensor 31 and the second image sensor 32 according to the positional relationship between the first image sensor 31 and the second image sensor 32 , and then align the first alignment model according to the position conversion information. Perform calibration to generate a second alignment model, in this way, multiple frames of second pixel images can be aligned according to the second alignment model, and then can be fused to generate a second processed pixel image.
  • this embodiment takes the alignment of two adjacent second pixel images as an example. The frame rate of the first image sensor 31 is 30 fps, while that of the second image sensor 32 is 10 fps. That is, in the same period in which the first image sensor 31 generates 3 frames of image data, the second image sensor 32 generates 1 frame, so four frames of the first pixel image need to be aligned. Moreover, the second pixel image of the first frame corresponds to the first pixel image of the first frame, and the second pixel image of the second frame corresponds to the first pixel image of the fourth frame.
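As a small illustrative sketch of this frame correspondence (the function name and the default 3:1 frame-rate ratio are assumptions for illustration, not part of the patent), the indices of the first pixel images bracketed by two adjacent second pixel images can be enumerated as:

```python
def first_frame_indices(second_idx_a, second_idx_b, ratio=3):
    # With the first sensor at 30 fps and the second at 10 fps, the j-th
    # second pixel image lines up with the (ratio * j)-th first pixel
    # image, so two adjacent second images bracket ratio + 1 first images.
    return list(range(second_idx_a * ratio, second_idx_b * ratio + 1))
```

For the example above, `first_frame_indices(0, 1)` yields the four frames p0 through p3.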
  • the calculation formula of the second alignment model M 03 is: M 03 = K2 · E 12 · K1 −1 · H 23 · H 12 · H 01
  • K1 is the intrinsic matrix of the first image sensor 31
  • K2 is the intrinsic matrix of the second image sensor 32
  • the intrinsic matrix (Intrinsic Matrix) represents the intrinsic properties of the image sensor; through the intrinsic matrix, three-dimensional camera coordinates can be converted into two-dimensional image coordinates.
  • the formula for the intrinsic matrix is: K = [[fx, 0, cx], [0, fy, cy], [0, 0, 1]], where fx and fy are the focal lengths in pixel units and (cx, cy) is the principal point.
  • H 01 , H 12 and H 23 are obtained by the first alignment model
  • H 01 is the affine transformation matrix between the first pixel image p0 of the first frame and the first pixel image p1 of the second frame
  • H 12 is the affine transformation matrix between the first pixel image p1 of the second frame and the first pixel image p2 of the third frame.
  • H 23 is the affine transformation matrix between the first pixel image p2 of the third frame and the first pixel image p3 of the fourth frame.
  • E 12 is the position conversion information between the first image sensor 31 and the second image sensor 32 . Its calculation formula is: E 12 = [R 12 | T 12 ].
  • the position conversion information E 12 is an extrinsic matrix of the image sensor, which is used to describe the position of the image sensor in the world coordinate system and the direction it points, wherein R is a rotation matrix and T is a displacement vector.
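A minimal numerical sketch of how such a second alignment model could be composed is shown below. This is an assumption-laden illustration rather than the patent's exact formula: E12 is treated as an already-reduced 3×3 inter-sensor mapping so that the matrix product is well defined, and the affine chain H23·H12·H01 carries frame p0 to frame p3.

```python
import numpy as np

def second_alignment_model(K1, K2, E12, H01, H12, H23):
    # Compose the frame-to-frame transforms p0 -> p3 in the first
    # sensor's image space, then re-project through the two sensors'
    # intrinsics and their relative pose (all assumed 3x3 here).
    H03 = H23 @ H12 @ H01
    return K2 @ E12 @ np.linalg.inv(K1) @ H03
```

With identity intrinsics and pose, the model reduces to the plain affine chain, matching the intuition that the calibration step only corrects for the two sensors' differing geometry.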
  • sub-step S156 includes:
  • sub-steps S1562 and S1564 may be implemented by the second fusion unit 154 .
  • the second fusion unit 154 is configured to obtain the fusion parameters generated when the multiple frames of first pixel images were aligned and fused, and the second fusion unit 154 is further configured to fuse the aligned second pixel images according to the fusion parameters to generate the second processed pixel image.
  • the processor 20 may be configured to obtain the fusion parameters generated when the multiple frames of first pixel images were aligned and fused, and the processor 20 may be further configured to fuse the aligned second pixel images according to the fusion parameters to generate the second processed pixel image.
  • fusing the aligned second pixel images according to the fusion parameters reduces the color shift between the second processed pixel image and the first processed pixel image.
  • the image quality of the target image synthesized from the first processed pixel image and the second processed pixel image is further improved.
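As a sketch of why sharing parameters helps, the snippet below fuses a stack of aligned frames with a single weight vector; the weight vector stands in for the "fusion parameters" (which the text does not specify), and reusing the same weights for both sensors' frame stacks keeps the two fused outputs consistent:

```python
import numpy as np

def fuse_frames(frames, weights):
    # Weighted average of aligned frames (each H x W). Using the same
    # `weights` for the first and second sensors' frame stacks avoids
    # introducing a brightness/colour offset between the two results.
    frames = np.asarray(frames, dtype=float)
    w = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (frames * w).sum(axis=0) / w.sum()
```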
  • step S16 includes:
  • sub-steps S162 , S164 and S166 may be implemented by the synthesis module 16 .
  • the synthesis module 16 is configured to match the first processed pixel image and the second processed pixel image to obtain the zoom coefficient of the two images, and the synthesis module 16 is further configured to zoom the first processed pixel image and the second processed pixel image according to the zoom coefficient to obtain the first intermediate image and the second intermediate image.
  • the synthesis module 16 is used for synthesizing the first intermediate image and the second intermediate image to generate a target image.
  • the processor 20 may be configured to match the first processed pixel image and the second processed pixel image to obtain their zoom factor, and to zoom the first processed pixel image and the second processed pixel image according to the zoom factor to obtain the first intermediate image and the second intermediate image.
  • the processor 20 is also operable to synthesize the first intermediate image and the second intermediate image to generate the target image.
  • the first image sensor 31 and the second image sensor 32 have different shooting ranges.
  • the first image sensor 31 is a wide-angle lens
  • the second image sensor 32 is a telephoto lens. Therefore, when the first image sensor 31 and the second image sensor 32 capture the first image data and the second image data, the scene in the second image data is only the scene in a partial area of the first image data.
  • in order to synthesize the first processed pixel image and the second processed pixel image, the two images therefore need to be enlarged or reduced according to the matched zoom factor, so that they match and the target image can be synthesized.
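The crop-and-resample step can be sketched as follows; this is a simplified illustration (centred crop, nearest-neighbour resampling, and a hypothetical function name) rather than the patent's actual zoom processing:

```python
import numpy as np

def match_wide_to_tele(wide, tele_shape, zoom):
    # Crop the central 1/zoom portion of the wide image and resample it
    # to the telephoto resolution, so the two processed images cover the
    # same scene region before synthesis.
    h, w = wide.shape[:2]
    ch, cw = int(round(h / zoom)), int(round(w / zoom))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = wide[top:top + ch, left:left + cw]
    th, tw = tele_shape
    rows = (np.arange(th) * ch) // th          # nearest-neighbour rows
    cols = (np.arange(tw) * cw) // tw          # nearest-neighbour cols
    return crop[np.ix_(rows, cols)]
```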
  • an embodiment of the present application provides an electronic device 100, including a processor 20, a memory 40, and one or more programs 42, where the one or more programs 42 are stored in the memory 40 and executed by the processor 20.
  • the programs 42 include instructions that, when executed by the processor 20, perform the above-mentioned image processing method.
  • the present application provides a non-volatile computer-readable storage medium 50 containing a computer program.
  • when the computer program is executed by one or more processors 20, the processors 20 are caused to execute the above-mentioned image processing method.
  • any process or method description in the flowcharts, or otherwise described herein, may be understood as representing a module, segment, or portion of code that comprises one or more executable instructions for implementing specified logical functions or steps of the process. The scope of the preferred embodiments of the present application includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Studio Devices (AREA)

Abstract

一种图像处理方法,包括步骤:(S11)获取第一图像传感器(31)的第一图像数据和第二图像传感器(32)的第二图像数据,(S12)分别对第一图像数据和第二图像数据处理得到多帧第一像素图像、第一像素图像的第一校准参数、多帧第二像素图像和第二像素图像的第二校准参数,(S13)对多帧第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型,(S14)根据第一校准参数和第二校准参数计算得到第一图像传感器(31)和第二图像传感器(32)之间的位置转换信息,(S15)根据位置转换信息和第一对齐模型对第二像素图像对齐处理并融合生成第二处理像素图像,(S16)合成第一处理像素图像和第二处理像素图像得到目标图像。另外,还公开了一种图像处理装置(10)、电子设备(100)和计算机可读存储介质。

Description

图像处理方法、处理装置、电子设备和存储介质 技术领域
本申请涉及图像处理技术,特别涉及一种图像处理方法、图像处理装置、电子设备和计算机可读存储介质。
背景技术
相关技术中，可通过对多个摄像头进行融合，实现无缝变焦过渡，从而提升图像的画质。然而，要实现多帧处理以获得良好的图像质量，以及进行多摄像头融合处理，难度较大且功耗较高。因此，如何以较低的复杂度和功耗实现多帧图像处理和多摄像头融合处理，成为亟待解决的问题。
发明内容
有鉴于此,本申请旨在至少在一定程度上解决相关技术中的问题之一。为此,本申请的目的在于提供一种图像处理方法、图像处理装置、电子设备和计算机可读存储介质。
本申请实施方式的图像处理方法,包括:
获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据;
分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一图像传感器的第一校准参数、多帧第二像素图像和所述第二图像传感器的第二校准参数;
对多帧所述第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型;
根据所述第一校准参数和第二校准参数计算得到所述第一图像传感器和所述第二传感器之间的位置转换信息;
根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像;和
合成所述第一处理像素图像和所述第二处理像素图像得到目标图像。
本申请实施方式的图像处理装置包括:
获取模块,用于获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据;
预处理模块,用于分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一像素图像的第一校准参数、多帧第二像素图像和所述第二像素 图像的第二校准参数;
第一多帧处理模块,用于对多帧所述第一像素图像对齐并融合以生成第一处理像素图像、第一对齐模型;
计算模块,用于根据所述第一校准参数和第二校准参数计算得到所述第一像素图像和所述第二图像之间的位置转换信息;
第二多帧处理模块,用于根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像;和
合成模块,用于合成所述第一处理像素图像和所述第二处理像素图像得到目标图像。
本申请实施方式的电子设备,包括第一图像传感器、第二图像传感器、处理器和存储器;和
一个或多个程序，其中所述一个或多个程序被存储在所述存储器中，并且被所述处理器执行，所述程序包括用于执行所述图像处理方法的指令。所述图像处理方法包括：获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据；分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一图像传感器的第一校准参数、多帧第二像素图像和所述第二图像传感器的第二校准参数；对多帧所述第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型；根据所述第一校准参数和第二校准参数计算得到所述第一图像传感器和所述第二传感器之间的位置转换信息；根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像；合成所述第一处理像素图像和所述第二处理像素图像得到目标图像。
本申请实施方式的计算机可读存储介质，包括计算机程序，当所述计算机程序被一个或多个处理器执行时，使得所述处理器执行所述图像处理方法。所述图像处理方法包括：获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据；分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一图像传感器的第一校准参数、多帧第二像素图像和所述第二图像传感器的第二校准参数；对多帧所述第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型；根据所述第一校准参数和第二校准参数计算得到所述第一图像传感器和所述第二传感器之间的位置转换信息；根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像；合成所述第一处理像素图像和所述第二处理像素图像得到目标图像。
附图说明
本申请上述的和/或附加的方面和优点从下面结合附图对实施例的描述中将变得明显和容易理解,其中:
图1是本申请某些实施方式图像处理方法的一个流程示意图;
图2是本申请某些实施方式的图像处理装置的一个模块示意图;
图3是本申请某些实施方式的电子设备的一个模块示意图;
图4是本申请某些实施方式的图像传感器的一个模块示意图;
图5是本申请某些实施方式的图像处理方法中的一个场景示意图;
图6-7是本申请某些实施方式的图像处理方法的流程示意图;
图8是本申请某些实施方式的图像处理方法的场景示意图;
图9是本申请某些实施方式的图像处理方法的流程示意图;
图10是本申请某些实施方式的图像处理方法的又一场景示意图;
图11-12是本申请某些实施方式的图像处理方法的流程示意图;
图13是本申请某些实施方式的图像处理方法的又一场景示意图;
图14是本申请某些实施方式的电子设备的又一模块示意图;
图15是本申请某些实施方式的处理器和计算机可读存储介质的连接示意图。
主要元件符号说明:
电子设备100、图像处理装置10、获取模块11、预处理模块12、第一多帧处理模块13、第一对齐单元132、第一融合单元134、计算模块14、第二多帧处理模块15、第二对齐单元152、第二融合单元154、合成模块16;
处理器20;
图像传感器30、第一图像传感器31、第二图像传感器32、像素阵列301、垂直驱动单元302、控制单元303、列处理单元304、水平驱动单元305;
存储器40、程序42、计算机可读存储介质50。
具体实施方式
下面详细描述本申请的实施方式,所述实施方式的示例在附图中示出,其中自始至终相同或类似的标号表示相同或类似的元件或具有相同或类似功能的元件。下面通过参考附图描述的实施方式是示例性的,仅用于解释本申请,而不能理解为对本申请的限制。
请参阅图1,本申请实施方式提供一种图像处理方法,图像处理方法包括步骤:
S11,获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据;
S12,分别对第一图像数据和第二图像数据处理得到多帧第一像素图像、第一图像传感器的第一校准参数、多帧第二像素图像和第二图像传感器的第二校准参数;
S13,对多帧第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型;
S14,根据第一校准参数和第二校准参数计算得到第一图像传感器和第二传感器之间的位置转换信息;
S15,根据位置转换信息和第一对齐模型对第二像素图像对齐处理并融合生成第二处理像素图像;和
S16,合成第一处理像素图像和第二处理像素图像得到目标图像。
请结合图2,本申请还提供了一种图像处理装置10,用于处理上述的图像处理方法,图像处理装置10包括获取模块11、预处理模块12、第一多帧处理模块13、计算模块14、第二多帧处理模块15和合成模块16。
其中,步骤S11可以由获取模块11实现,步骤S12可以由预处理模块12实现,步骤S13可以由第一多帧处理模块13实现,步骤S14可以由计算模块14实现、步骤S15可以由第二多帧处理模块15实现,步骤S16可以由合成模块16实现。
或者说,获取模块11可以用于获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据。
预处理模块12可以用于分别对第一图像数据和第二图像数据处理得到多帧第一像素图像、第一图像传感器的第一校准参数、多帧第二像素图像和第二图像传感器的第二校准参数。
第一多帧处理模块13可以用于对多帧第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型。
计算模块14可以用于根据第一校准参数和第二校准参数计算得到第一图像传感器和第二传感器之间的位置转换信息。
第二多帧处理模块15可以用于根据位置转换信息和第一对齐模型对第二像素图像对齐处理并融合生成第二处理像素图像。
合成模块16可以用于合成第一处理像素图像和第二处理像素图像得到目标图像。
请结合图3,本申请实施方式提供了一种电子设备100,本申请的图像处理方法可以由电子设备100完成。电子设备100包括处理器20和多个图像传感器30,多个图像传感器30包括第一图像传感器31和第二图像传感器32。
处理器20可以用于获取第一图像传感器31的第一图像数据和第二图像传感器32的第二图像数据和分别对第一图像数据和第二图像数据处理得到多帧第一像素图像、 第一图像传感器31的第一校准参数、多帧第二像素图像和第二图像传感器32的第二校准参数。处理器20可以用于对多帧第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型和根据第一校准参数和第二校准参数计算得到第一图像传感器31和第二图像传感器32之间的位置转换信息。处理器20还可以用于根据位置转换信息和第一对齐模型对第二像素图像对齐处理并融合生成第二处理像素图像和合成第一处理像素图像和第二处理像素图像得到目标图像。
本申请图像处理方法、图像处理装置10和电子设备100中,在对由第一图像传感器31和第二图像传感器32得到的第一像素图像和第二像素图像进行多帧处理过程中,根据第一像素图像计算得到的第一对齐模型对多帧第一像素图像进行对齐处理的同时,还将第一对齐模型以及由第一图像数据和第二图像数据得到的位置转换信息对第二像素图像进行对齐处理,从而,避免了通过第二像素图像计算得到的第二对齐模型对第二像素图像进行对齐处理,降低了信号处理的复杂性以及计算成本,并且,使得融合生成的第一处理像素图像和第二处理像素图像具有相似的信噪比以及能够协同对齐。如此,实现了低复杂度和低功耗的多帧图像处理和多摄像机融合处理。
电子设备100可以是手机、平板电脑、笔记本电脑、无人机、机器人、智能穿戴设备(智能手表、智能手环、智能头盔、智能眼镜等)、虚拟现实设备等。
本实施方式以电子设备100是手机为例进行说明,也即是说,图像处理方法和图像处理装置10应用于但不限于手机。图像处理装置10可以是预安装于手机的硬件或软件,并在手机上启动运行时可以执行图像处理方法。例如,图像处理装置10可以是手机的底层软件代码段或者说是操作系统的一部分。
图像传感器30可以为摄像头组件，其中，第一图像传感器31可以为电子设备100的主摄像头，第二图像传感器32为电子设备100的辅摄像头。第一图像传感器31和第二图像传感器32可以采用互补金属氧化物半导体（CMOS，Complementary Metal Oxide Semiconductor）感光元件或者电荷耦合元件（CCD，Charge-coupled Device）感光元件。
第一图像传感器31和第二图像传感器32的帧率不同,第一图像传感器31的帧率大于第二图像传感器32的帧率,例如,在一些实施方式中,第一图像传感器31的帧率为30fps,第二图像传感器32的帧率为10fps。
请参阅图4,图像传感器30可包括有像素阵列301、垂直驱动单元302、控制单元303、列处理单元304和水平驱动单元305。
图像传感器30可通过像素阵列301曝光后生成图像数据。像素阵列301可以为色彩滤波阵列（Color Filter Array，CFA），像素阵列301包括有以二维矩阵形式排布的多个感光像素，每个感光像素包括具有不同光谱吸收特性的吸收区，并且，每个感光像素包括光电转换元件。每个感光像素根据入射在其上的光的强度将吸收的光转换为电荷，使得每个感光像素均可以生成多个具有不同颜色通道的像素数据，从而最终生成图像数据。
垂直驱动单元302包括移位寄存器和地址译码器。垂直驱动单元302包括读出扫描和复位扫描功能。读出扫描是指顺序地逐行扫描单位感光像素,从这些单位感光像素逐行地读取信号。被选择并扫描的感光像素行中的每一感光像素输出的信号被传输到列处理单元304。复位扫描用于复位电荷,光电转换元件的光电荷被丢弃,从而可以开始新的光电荷的积累。由列处理单元304执行的信号处理是相关双采样(CDS)处理。在CDS处理中,取出从所选行中的每一感光像素输出的复位电平和信号电平,并且计算电平差。因而,获得了一行中的感光像素的信号。列处理单元304可以具有用于将模拟像素信号转换为数字格式的模数(A/D)转换功能。
水平驱动单元305包括移位寄存器和地址译码器。水平驱动单元305顺序逐列扫描像素阵列301。通过水平驱动单元305执行的选择扫描操作,每一感光像素列被列处理单元304顺序地处理,并且被顺序输出。
控制单元303根据操作模式配置时序信号,利用多种时序信号来控制垂直驱动单元302、列处理单元304和水平驱动单元305协同工作。
处理器20可分别与第一图像传感器31和第二图像传感器32的像素阵列301连接,处理器20可设置有预设图像读出模式,在预设图像读出模式下,可分别读取从第一图像传感器31和第二图像传感器32生成的第一图像数据和第二图像数据。并可根据第一图像数据的像素数据得到多帧第一像素图像以及第一图像传感器31的第一校准参数,以及将第二图像数据的像素数据得到多帧第二像素图像和第二图像传感器32的第二校准参数。每一帧的第一像素图像或第二像素图像都包括相同颜色且阵列排布的像素数据。
例如,第一图像数据和第二图像数据为RGBW像素数据,生成的第一像素图像和第二像素图像包括RGB像素数据。也即是,第一像素图像和第二像素图像包括R(即红色)、G(即绿色)和B(即蓝色)三个颜色通道的彩色信息。
进一步地，处理器20在根据预设图像读出模式将第一图像数据生成多帧第一像素图像以及将第二图像数据生成多帧第二像素图像后，处理器20可对多帧第一像素图像进行计算，得到第一像素图像的第一对齐模型，并根据第一对齐模型对多帧第一像素图像进行对齐处理。在多帧第一像素图像对齐后，处理器20可将多帧第一像素图像融合生成第一处理像素图像。
另外,处理器20还可根据第一校准参数和第二校准参数计算得到第一图像传感器31和第二图像传感器32之间的位置转换信息。可以理解,由于第一图像传感器31 和第二图像传感器32的位置不同,拍照过程中以不同视角得到图像数据,从而导致得到的第一图像数据和第二图像数据存在差异。进而,根据位置转换信息以及第一对齐模型生成第二对齐模型,并根据第二对齐模型对多帧第二像素图像进行对齐处理,再将对齐后的多帧第二像素图像融合生成第二处理像素图像。如此,无需根据第二像素图像计算得到第二对齐模型,降低了计算成本和功耗,并且提升了效率,并且,避免了第一处理像素图像和第二处理像素图像形成错位,而影响后续处理。
更进一步地,在处理器20生成第一处理像素图像和第二处理像素图像后,可将第一处理像素图像和第二处理像素图像进行合成处理,得到目标图像,从而,实现了多图像传感器的图像融合,提升了图像的画质。
请结合图5,在某些实施方式中,第一图像数据包括多个最小重复单元A1,每个最小重复单元A1包括多个像素单元a1,每个像素单元a1包括多个彩色像素和全色像素,彩色像素设置在第一对角线方向,全色像素设置在第二对角线方向,第一对角线方向与第二对角线方向不同,步骤S12包括子步骤:
获取第一对角线方向上的彩色像素以生成第一像素图像和/或获取第二对角线方向上的全色像素以生成第一像素图像。
具体的,处理器20通过预设图像读出模式在读取图像传感器30采集的图像数据时,预设图像读出模式可以为binning模式,也即是,处理器20可通过binning模式对图像数据进行读取从而生成第一像素图像。需要说明的是,Binning算法是将相同颜色的相邻像素对应的电荷加在一起,以一个像素的模式读出。
进一步地,在以Binning模式进行读取时,将每个像素单元a1中的第一对角线方向上的彩色像素进行读取,并将每个像素单元a1中的第二对角线方向上的全色像素进行读取,进而将所有读出的彩色像素进行阵列排布形成第一像素图像,或者,将所有读出的全色像素进行陈列排布生成第一像素图像。
需要说明的是,本实施方式仅仅以第一图像数据生成第一像素图像为例进行举例说明,第二图像数据生成第二像素图像可以与上述处理过程类似,在此不再赘述。
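上述对角线读出方式可用如下示意代码表示（仅为假设性示例：假定每个像素单元为 2×2，彩色像素位于主对角线、全色像素位于副对角线，函数名 diagonal_binning 为示意命名，并非本申请的具体实现）：

```python
import numpy as np

def diagonal_binning(raw):
    # 假设每个 2x2 像素单元中：彩色像素位于主对角线（(0,0) 与 (1,1)），
    # 全色像素位于副对角线（(0,1) 与 (1,0)）；将同一对角线上的两个
    # 像素电荷相加，即以一个像素的模式读出（Binning）。
    color = raw[0::2, 0::2] + raw[1::2, 1::2]
    panchromatic = raw[0::2, 1::2] + raw[1::2, 0::2]
    return color, panchromatic
```

例如，对一个 4×4 的原始数据，输出为两幅 2×2 图像，其每个像素由同一对角线上两个像素的电荷相加得到。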
请参阅图6,在某些实施方式中,步骤S13包括子步骤:
S132,查找多帧第一像素图像中相互匹配的像素点以计算多帧第一像素图像的第一对齐模型;
S134,根据第一对齐模型对齐多帧第一像素图像;
S136,融合对齐后的多帧第一像素图像以得到第一处理像素图像。
在某些实施方式中,第一多帧处理模块13包括第一对齐单元132和第一融合单元134,步骤S132和S134可以由第一对齐单元132实现,步骤S136可以由第一融 合单元134;
或者说,第一对齐单元132可以用于查找多帧第一像素图像中相互匹配的像素点以计算多帧第一像素图像的第一对齐模型,第一对齐单元132还可以用于根据第一对齐模型对齐多帧第一像素图像。
第一融合单元134可以用于融合对齐后的多帧第一像素图像以得到第一处理像素图像。
在某些实施方式中,处理器20可以用于查找多帧第一像素图像中相互匹配的像素点以计算多帧第一像素图像的第一对齐模型,处理器20还可以用于根据第一对齐模型对齐多帧第一像素图像和融合对齐后的多帧第一像素图像以得到第一处理像素图像。
需要说明的是,由于相互匹配的像素点之间的运动反映的是多帧图像之间的运动,则基于相互匹配的像素点计算得到的对齐模型,可以消除多帧图像之间的运动关系,使得多帧第一像素图像可以高质量的融合到一起。
处理器20可采用尺度不变特征变换(Scale-Invariant Feature Transform,sift)、加速稳健特征(Speeded Up Robust Features,surf)特征点匹配算法或光流场算法来查找到多帧第一像素图像之间相互匹配的像素点。
相关计算领域人员可以理解，sift算法是指在计算机视觉领域中检测和描述图像中局部特征的算法，其对旋转、尺度缩放、亮度变化保持不变性，对视角变化、仿射变换、噪声也保持一定程度的稳定。SIFT特征检测有四步：1.尺度空间的极值检测：搜索所有尺度空间上的图像，通过高斯微分函数来识别潜在的对尺度和旋转不变的兴趣点。2.特征点定位：在每个候选的位置上，通过一个拟合精细模型来确定位置和尺度，关键点的选取依据它们的稳定程度。3.特征方向赋值：基于图像局部的梯度方向，分配给每个关键点位置一个或多个方向，后续的所有操作都是对于关键点的方向、尺度和位置进行变换，从而提供这些特征的不变性。4.特征点描述：在每个特征点周围的邻域内，在选定的尺度上测量图像的局部梯度，这些梯度被变换成一种表示，这种表示允许比较大的局部形状的变形和光照变换。
Surf算法是一种稳健的图像识别和描述算法，可被用于计算机视觉任务。SURF算法的概念及步骤均建立在SIFT之上，但详细的流程略有不同。SURF算法包含以下三个步骤：特征点侦测、特征邻近描述、描述子配对。
光流场算法是一种基于点的匹配算法,利用图像序列中像素在时间域上的变化以及相邻帧之间的相关性来找到上一帧跟当前帧之间存在的对应关系,从而计算出相邻帧之间物体的运动信息的一种方法。
进一步地,本实施方式可选用的第一对齐模型可以是仿射变换模型和透视变换模型。也即是,可根据相互匹配的像素点来计算出仿射变换模型或透视变换模型,进而根据仿射变换模型或透视变换模型来对多帧第一像素图像进行对齐处理,得到对齐后的多帧第一像素图像。
请参阅图7,在某些实施方式中,步骤S132包括子步骤:
S1322,确定相邻两帧第一像素图像中相互匹配的像素点的第一坐标;
S1324,根据第一坐标计算每相邻两帧的仿射变换矩阵以得到第一对齐模型。
在某些实施方式中,子步骤S1322和子步骤S1324可以由第一对齐单元132实现,也即是第一对齐单元132可以用于确定相邻两帧第一像素图像中相互匹配的像素点的第一坐标,第一对齐单元132还可以用于根据第一坐标计算每相邻两帧的仿射变换矩阵以得到第一对齐模型。
在某些实施方式中,处理器20用于确定相邻两帧第一像素图像中相互匹配的像素点的第一坐标和根据第一坐标计算每相邻两帧的仿射变换矩阵以得到第一对齐模型。
相关领域技术人员可以理解，仿射变换矩阵是指：一个平面内的任意平行四边形可以被仿射变换映射为另一个平行四边形，图像的映射操作在同一个空间平面内进行，通过不同的变换参数得到不同类型的平行四边形。在使用仿射变换矩阵时，基于缩放和旋转参数控制图像的缩放与旋转，基于位移参数控制图像的位移。
需要说明的是,第一坐标是指在像素点在图像传感器坐标系的坐标,第二坐标是指像素点在世界坐标系中的坐标。
具体地,处理器20可包括有预设仿射变换公式H:
$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$
其中,(x,y)为相邻两帧第一像素图像中第一帧第一像素图像的第一坐标,(x’,y’)为相邻两帧第一像素图像中的第二帧第一像素图像的第一坐标。a 00、a 01和a 10、a 11为仿射变换矩阵的缩放与旋转参数,a 02和a 12为仿射变换矩阵的位移参数。
处理器20可将相邻两帧第一像素图像的第一坐标(x,y)和(x’,y’)代入预设仿射变换公式中,以计算出该矩阵的缩放与旋转参数a 00、a 01和a 10、a 11、位移参数a 02和a 12,从而,得到相邻两帧第一像素图像之间的仿射变换矩阵。
可以理解，若第一像素图像包括n帧（n为大于等于2的整数），则可以生成n-1个仿射变换矩阵。例如，请结合图8，第一像素图像包括有4帧，分别为p0、p1、p2、p3，则第一像素图像p0和p1之间可以生成第一仿射变换矩阵H 01，第一像素图像p1和p2之间可以生成第二仿射变换矩阵H 12，第一像素图像p2和p3之间可以生成第三仿射变换矩阵H 23。
进而，处理器20可以将所有得到的仿射变换矩阵组合生成第一对齐模型。如此，多帧第一像素图像可以根据第一对齐模型进行对齐。
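根据相互匹配的像素点求解仿射变换矩阵的过程，可以用如下最小二乘示意代码表示（假设性示例，仅用于说明：estimate_affine 为示意命名，实际实现可采用任意等价的拟合方法）：

```python
import numpy as np

def estimate_affine(src, dst):
    # src、dst 为 N x 2 的匹配点坐标 (x, y) 与 (x', y')，N >= 3；
    # 通过最小二乘拟合 2x3 仿射变换矩阵
    # [[a00, a01, a02], [a10, a11, a12]]。
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T
```

例如，当匹配点仅相差平移 (2, 3) 时，拟合结果即为缩放与旋转参数为单位阵、位移参数为 (2, 3) 的仿射变换矩阵。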
请参阅图9,在某些实施方式中,步骤S15包括子步骤:
S152,根据位置转换信息对第一对齐模型进行校准以生成第二对齐模型;
S154,根据第二对齐模型对第二像素图像对齐处理;
S156,融合对齐后的第二像素图像以生成第二处理像素图像。
在某些实施方式中,第二多帧处理模块15可包括第二对齐单元152和第二融合单元154。子步骤S152和S154可以第二对齐单元152实现,子步骤S156可以由第二融合单元154实现。
或者说，第二对齐单元152用于根据位置转换信息对第一对齐模型进行校准以生成第二对齐模型，第二对齐单元152还用于根据第二对齐模型对第二像素图像对齐处理。第二融合单元154用于融合对齐后的第二像素图像以生成第二处理像素图像。
在某些实施方式中，处理器20可以用于根据位置转换信息对第一对齐模型进行校准以生成第二对齐模型，以及根据第二对齐模型对第二像素图像对齐处理。处理器20还可用于融合对齐后的第二像素图像以生成第二处理像素图像。
可以理解,第一图像传感器31与第二图像传感器32的位置不同,多帧第二像素图像若通过第一对齐模型进行对齐时,可能导致无法对齐。因此,需要根据第一图像传感器31和第二图像传感器32之间的位置关系,从而得到第一图像传感器31与第二图像传感器32之间位置转换信息,并根据位置转换信息对第一对齐模型进行校准生成第二对齐模型,如此,多帧第二像素图像可根据第二对齐模型对齐,进而可以融合生成第二处理像素图像。
请结合图10，需要说明的是，本实施方式以两帧相邻的第二像素图像对齐进行举例说明。第一图像传感器31的帧率为30fps，第二图像传感器32的帧率为10fps，也即是，在同一时间内，第一图像传感器31生成3帧图像数据时，第二图像传感器32生成1帧图像数据，因此有4帧第一像素图像需要对齐。并且，第一帧第二像素图像与第一帧第一像素图像对应，第二帧第二像素图像与第四帧第一像素图像对应。
第二对齐模型M 03的计算公式为:
$$M_{03} = K_2 \, E_{12} \, K_1^{-1} \, H_{23} H_{12} H_{01}$$
其中,K1是第一图像传感器31的内在矩阵,K2是第二图像传感器32的内在矩阵,内在矩阵(Intrinsic Matrix)表示图像传感器的内在属性,通过内在矩阵可以将三维相机坐标转换为二维的图像坐标。内在矩阵的公式为:
$$K = \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}$$
其中，f x 、f y 为以像素为单位的焦距，(c x ,c y )为主点坐标。
H 01、H 12和H 23由第一对齐模型得到，H 01为第一帧第一像素图像p0和第二帧第一像素图像p1之间的仿射变换矩阵，H 12为第二帧第一像素图像p1和第三帧第一像素图像p2之间的仿射变换矩阵，H 23为第三帧第一像素图像p2和第四帧第一像素图像p3之间的仿射变换矩阵。
E 12为第一图像传感器31和第二图像传感器32之间的位置转换信息。其计算公式为:
E 12=[R 12 | T 12]
位置转换信息E 12为图像传感器的外在矩阵(Extrinsic Matrix),用于描述图像传感器在世界坐标系中的位置以及其所指向的方向,其中,R为旋转矩阵,T为位移向量。
请参阅图11,在某些实施方式中,子步骤S156包括:
S1562,获取对多帧第一像素图像对齐并融合生成的融合参数;
S1564,根据融合参数对对齐后的第二像素图像进行融合以生成第二处理像素图像。
在某些实施方式中,子步骤S1562和S1564可以第二融合单元154实现。或者说,第二融合单元154用于获取对多帧第一像素图像对齐并融合生成的融合参数,第二融合单元154还用于根据融合参数对对齐后的第二像素图像进行融合以生成第二处理像素图像。
在某些实施方式中,处理器20可以用于获取对多帧第一像素图像对齐并融合生成的融合参数,处理器20还用于根据融合参数对对齐后的第二像素图像进行融合以生成第二处理像素图像。
如此,通过根据第一像素图像融合生成的融合参数来参与第二像素图像的融合,而生成第二处理像素图像,减少了第二处理像素图像与第一处理像素图像之间的色偏,进而进一步地提高由第一处理像素图像和第二处理像素图像合成的目标图像的画 质。
请参阅图12和图13,在某些实施方式中,步骤S16包括:
S162,匹配第一处理像素图像和第二处理像素图像以得到第一处理像素图像和第二处理图像的变焦系数;
S164,根据变焦系数对第一处理像素图像和第二处理像素图像变焦处理以得到第一中间图像和第二中间图像;
S166,合成第一中间图像和第二中间图像以生成目标图像。
在某些实施方式中,子步骤S162、S164和S166可以由合成模块16实现。
或者说,合成模块16用于匹配第一处理像素图像和第二处理像素图像以得到第一处理像素图像和第二处理图像的变焦系数,合成模块16还用于根据变焦系数对第一处理像素图像和第二处理像素图像变焦处理以得到第一中间图像和第二中间图像。合成模块16用于合成第一中间图像和第二中间图像以生成目标图像。
在某些实施方式中,处理器20可以用于匹配第一处理像素图像和第二处理像素图像以得到第一处理像素图像和第二处理图像的变焦系数以及根据变焦系数对第一处理像素图像和第二处理像素图像变焦处理以得到第一中间图像和第二中间图像。处理器20还可用于合成第一中间图像和第二中间图像以生成目标图像。
可以理解,通常,第一图像传感器31和第二图像传感器32具有不同的拍摄范围,例如,第一图像传感器31为广角镜头,第二图像传感器32为长焦镜头,从而,在第一图像传感器31和第二图像传感器32拍摄时生成第一图像数据和第二图像数据时,第二图像数据的景物仅为第一图像数据中部分区域的景物。因此,为了能够将第一处理像素图像与第二处理像素图像合成,需要对第一处理像素图像与第二处理像素图像根据匹配的变焦系数进行放大或缩小处理,使得第一处理像素图像和第二处理像素图像能够匹配,从而能够合成目标图像。
请参阅图14，本申请实施方式提供了一种电子设备100，包括处理器20、存储器40以及一个或多个程序42，其中，一个或多个程序42被存储在存储器40中，并且被处理器20执行，程序42包括被处理器20执行以实现上述图像处理方法的指令。
请结合图15，本申请提供了一种包含计算机程序的非易失性计算机可读存储介质50，当计算机程序被一个或多个处理器20执行时，使得处理器20执行上述的图像处理方法。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”或“一些示例”等的描述意指结合所述实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或 示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。此外,在不相互矛盾的情况下,本领域的技术人员可以将本说明书中描述的不同实施例或示例以及不同实施例或示例的特征进行结合和组合。
流程图中或在此以其他方式描述的任何过程或方法描述可以被理解为,表示包括一个或更多个用于实现特定逻辑功能或过程的步骤的可执行指令的代码的模块、片段或部分,并且本申请的优选实施方式的范围包括另外的实现,其中可以不按所示出或讨论的顺序,包括根据所涉及的功能按基本同时的方式或按相反的顺序,来执行功能,这应被本申请的实施例所属技术领域的技术人员所理解。
尽管上面已经示出和描述了本申请的实施方式,可以理解的是,上述实施方式是示例性的,不能理解为对本申请的限制,本领域的普通技术人员在本申请的范围内可以对上述实施方式进行变化、修改、替换和变型。
以上实施例仅表达了本申请的几种实施方式,其描述较为具体和详细,但并不能因此而理解为对本申请专利范围的限制。应当指出的是,对于本领域的普通技术人员来说,在不脱离本申请构思的前提下,还可以做出若干变形和改进,这些都属于本申请的保护范围。因此,本申请专利的保护范围应以所附权利要求为准。

Claims (20)

  1. 一种图像处理方法,其特征在于,包括:
    获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据;
    分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一图像传感器的第一校准参数、多帧第二像素图像和所述第二图像传感器的第二校准参数;
    对多帧所述第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型;
    根据所述第一校准参数和第二校准参数计算得到所述第一图像传感器和所述第二传感器之间的位置转换信息;
    根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像;和
    合成所述第一处理像素图像和所述第二处理像素图像得到目标图像。
  2. 如权利要求1所述的图像处理方法,其特征在于,所述根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像包括:
    根据所述位置转换信息对所述第一对齐模型进行校准以生成第二对齐模型;
    根据所述第二对齐模型对所述第二像素图像对齐处理;
    融合对齐后的所述第二像素图像以生成所述第二处理像素图像。
  3. 如权利要求2所述的图像处理方法,其特征在于,所述融合对齐后的所述第二像素图像以生成所述第二处理像素图像包括:
    获取所述对多帧所述第一像素图像对齐并融合生成的融合参数;
    根据所述融合参数对对齐后的所述第二像素图像进行融合以生成所述第二处理像素图像。
  4. 如权利要求1所述的图像处理方法,其特征在于,所述对多帧所述第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型包括:
    查找多帧所述第一像素图像中相互匹配的像素点以计算所述多帧所述第一像素图像的第一对齐模型;
    根据所述第一对齐模型对齐多帧所述第一像素图像;
    融合对齐后的多帧所述第一像素图像以得到第一处理像素图像。
  5. 如权利要求4所述的图像处理方法,其特征在于,所述查找多帧所述第一像素图像中相互匹配的像素点以计算所述多帧所述第一像素图像的第一对齐模型包括:
    确定相邻两帧所述第一像素图像中相互匹配的像素点的第一坐标;
    根据所述第一坐标计算每相邻两帧的仿射变换矩阵以得到所述第一对齐模型。
  6. 如权利要求4所述的图像处理方法,其特征在于,所述查找多帧所述第一像素图像中相互匹配的像素点以计算所述多帧所述第一像素图像的第一对齐模型包括:
    采用尺度不变特征变换、加速稳健特征或光流场算法任意一种匹配算法来查找到多帧所述第一像素图像之间相互匹配的像素点。
  7. 如权利要求1所述的图像处理方法,其特征在于,所述合成所述第一处理像素图像和所述第二处理像素图像得到目标图像包括:
    匹配所述第一处理像素图像和所述第二处理像素图像以得到所述第一处理像素图像和所述第二处理图像的变焦系数;
    根据所述变焦系数对所述第一处理像素图像和所述第二处理像素图像变焦处理以得到第一中间图像和第二中间图像;
    合成所述第一中间图像和所述第二中间图像以生成所述目标图像。
  8. 如权利要求1所述的图像处理方法,其特征在于,所述第一图像数据包括多个最小重复单元,每个所述最小重复单元包括多个像素单元,每个所述像素单元包括多个彩色像素和全色像素,所述彩色像素设置在第一对角线方向,所述全色像素设置在第二对角线方向,所述第一对角线方向与所述第二对角线方向不同,所述分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一图像传感器的第一校准参数、多帧第二像素图像和所述第二图像传感器的第二校准参数包括:
    获取所述第一对角线方向上的所述彩色像素以生成所述第一像素图像和/或获取所述第二对角线方向上的所述全色像素以生成所述第一像素图像。
  9. 如权利要求8所述的图像处理方法,其特征在于,所述第一像素图像和所述第二像素图像均包括彩色像素,所述彩色像素以拜耳阵列形式排布。
  10. 一种图像处理装置,其特征在于,包括:
    获取模块,用于获取第一图像传感器的第一图像数据和第二图像传感器的第二图像数据;
    预处理模块,用于分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一图像传感器的第一校准参数、多帧第二像素图像和所述第二图像传感器的第二校准参数;
    第一多帧处理模块,用于对多帧所述第一像素图像对齐并融合以生成第一处理像素图像、第一对齐模型;
    计算模块,用于根据所述第一校准参数和第二校准参数计算得到所述第一像素图像和所述第二图像之间的位置转换信息;
    第二多帧处理模块,用于根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像;和
    合成模块,用于合成所述第一处理像素图像和所述第二处理像素图像得到目标图像。
  11. 一种电子设备,其特征在于,包括第一图像传感器、第二图像传感器和处理器,所述处理器用于:
    获取所述第一图像传感器的第一图像数据和所述第二图像传感器的第二图像数据;
    分别对所述第一图像数据和所述第二图像数据处理得到多帧第一像素图像、所述第一图像传感器的第一校准参数、多帧第二像素图像和所述第二图像传感器的第二校准参数;
    对多帧所述第一像素图像对齐并融合以生成第一处理像素图像和第一对齐模型;
    根据所述第一校准参数和第二校准参数计算得到所述第一图像传感器和所述第二传感器之间的位置转换信息;
    根据所述位置转换信息和所述第一对齐模型对所述第二像素图像对齐处理并融合生成第二处理像素图像;和
    合成所述第一处理像素图像和所述第二处理像素图像得到目标图像。
  12. 如权利要求11所述的电子设备,其特征在于,所述处理器还用于:根据所述位置转换信息对所述第一对齐模型进行校准以生成第二对齐模型;
    根据所述第二对齐模型对所述第二像素图像对齐处理;
    融合对齐后的所述第二像素图像以生成所述第二处理像素图像。
  13. 如权利要求12所述的电子设备,其特征在于,所述处理器还用于:获取所述对多帧所述第一像素图像对齐并融合生成的融合参数;
    根据所述融合参数对对齐后的所述第二像素图像进行融合以生成所述第二处理像素图像。
  14. 如权利要求11所述的电子设备,其特征在于,所述处理器还用于:
    查找多帧所述第一像素图像中相互匹配的像素点以计算所述多帧所述第一像素图像的第一对齐模型;
    根据所述第一对齐模型对齐多帧所述第一像素图像;
    融合对齐后的多帧所述第一像素图像以得到第一处理像素图像。
  15. 如权利要求14所述的电子设备,其特征在于,所述处理器还用于:
    确定相邻两帧所述第一像素图像中相互匹配的像素点的第一坐标;
    根据所述第一坐标计算每相邻两帧的仿射变换矩阵以得到所述第一对齐模型。
  16. 如权利要求14所述的电子设备,其特征在于,所述处理器还用于:
    采用尺度不变特征变换、加速稳健特征或光流场算法任意一种匹配算法来查找到多帧所述第一像素图像之间相互匹配的像素点。
  17. 如权利要求11所述的电子设备,其特征在于,所述处理器还用于:
    匹配所述第一处理像素图像和所述第二处理像素图像以得到所述第一处理像素图像和所述第二处理图像的变焦系数;
    根据所述变焦系数对所述第一处理像素图像和所述第二处理像素图像变焦处理以得到第一中间图像和第二中间图像;
    合成所述第一中间图像和所述第二中间图像以生成所述目标图像。
  18. 如权利要求11所述的电子设备,其特征在于,所述第一图像数据包括多个最小重复单元,每个所述最小重复单元包括多个像素单元,每个所述像素单元包括多个彩色像素和全色像素,所述彩色像素设置在第一对角线方向,所述全色像素设置在第二对角线方向,所述第一对角线方向与所述第二对角线方向不同,所述处理器用于:
    获取所述第一对角线方向上的所述彩色像素以生成所述第一像素图像和/或获取所述第二对角线方向上的所述全色像素以生成所述第一像素图像。
  19. 一种电子设备,其特征在于,包括第一图像传感器、第二图像传感器、处理器和存储器;和
    一个或多个程序,其中所述一个或多个程序被存储在所述存储器中,并且被所述一个或多个处理器执行,所述程序包括用于执行根据权利要求1-9任意一项所述的图像处理方法的指令。
  20. 一种包含计算机程序的非易失性计算机可读存储介质，其特征在于，当所述计算机程序被一个或多个处理器执行时，使得所述处理器执行权利要求1-9中任一项所述的图像处理方法。
PCT/CN2021/089701 2021-04-25 2021-04-25 图像处理方法、处理装置、电子设备和存储介质 WO2022226701A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/089701 WO2022226701A1 (zh) 2021-04-25 2021-04-25 图像处理方法、处理装置、电子设备和存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/089701 WO2022226701A1 (zh) 2021-04-25 2021-04-25 图像处理方法、处理装置、电子设备和存储介质

Publications (1)

Publication Number Publication Date
WO2022226701A1 true WO2022226701A1 (zh) 2022-11-03

Family

ID=83847521

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089701 WO2022226701A1 (zh) 2021-04-25 2021-04-25 图像处理方法、处理装置、电子设备和存储介质

Country Status (1)

Country Link
WO (1) WO2022226701A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105611181A (zh) * 2016-03-30 2016-05-25 努比亚技术有限公司 多帧拍摄图像合成装置和方法
WO2018072267A1 (zh) * 2016-10-17 2018-04-26 华为技术有限公司 用于终端拍照的方法及终端
US20190026924A1 (en) * 2016-01-15 2019-01-24 Nokia Technologies Oy Method and Apparatus for Calibration of a Multi-Camera System
CN111479102A (zh) * 2019-01-23 2020-07-31 韩华泰科株式会社 图像传感器模块
CN112261387A (zh) * 2020-12-21 2021-01-22 展讯通信(上海)有限公司 用于多摄像头模组的图像融合方法及装置、存储介质、移动终端

Similar Documents

Publication Publication Date Title
JP6767543B2 (ja) 異なる種類の撮像装置を有するモノリシックカメラアレイを用いた画像の撮像および処理
US10638099B2 (en) Extended color processing on pelican array cameras
WO2021179806A1 (zh) 图像获取方法、成像装置、电子设备及可读存储介质
CN111757006B (zh) 图像获取方法、摄像头组件及移动终端
JP2019220957A5 (zh)
EP1841207B1 (en) Imaging device, imaging method, and imaging device design method
JP6711612B2 (ja) 画像処理装置、画像処理方法、および撮像装置
WO2005057922A1 (en) Imaging device
JPH08116490A (ja) 画像処理装置
US11758289B2 (en) Image processing method, image processing system, electronic device, and readable storage medium
TW201143384A (en) Camera module, image processing apparatus, and image recording method
CN111246064A (zh) 图像处理方法、摄像头组件及移动终端
CN108781250A (zh) 摄像控制装置、摄像控制方法和摄像装置
CN113170061B (zh) 图像传感器、成像装置、电子设备、图像处理系统及信号处理方法
TWI599809B (zh) 鏡頭模組陣列、影像感測裝置與數位縮放影像融合方法
US11902674B2 (en) Image acquisition method, camera assembly, and mobile terminal
CN108156383B (zh) 基于相机阵列的高动态十亿像素视频采集方法及装置
US20240054613A1 (en) Image processing method, imaging processing apparatus, electronic device, and storage medium
KR20200098032A (ko) 이미지 센서의 픽셀 어레이 및 이를 포함하는 이미지 센서
CN115280766B (zh) 图像传感器、成像装置、电子设备、图像处理系统及信号处理方法
WO2022226701A1 (zh) 图像处理方法、处理装置、电子设备和存储介质
JP6751426B2 (ja) 撮像装置
JP2006135823A (ja) 画像処理装置、撮像装置および画像処理プログラム
JP6069857B2 (ja) 撮像装置
CN112702543B (zh) 图像处理方法、图像处理系统、电子设备及可读存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938192

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21938192

Country of ref document: EP

Kind code of ref document: A1