WO2022226702A1 - Image processing method, processing apparatus, electronic device, and storage medium - Google Patents

Image processing method, processing apparatus, electronic device, and storage medium

Info

Publication number
WO2022226702A1
WO2022226702A1 (PCT/CN2021/089704; CN2021089704W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
pixel
multiple frames
images
processed
Prior art date
Application number
PCT/CN2021/089704
Other languages
English (en)
French (fr)
Inventor
Luo Jun (罗俊)
Original Assignee
Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong OPPO Mobile Telecommunications Corp., Ltd.
Priority to PCT/CN2021/089704 (published as WO2022226702A1)
Priority to CN202180095061.9A (published as CN116982071A)
Publication of WO2022226702A1
Priority to US18/493,934 (published as US20240054613A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration using local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75 Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751 Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50 Control of the SSIS exposure
    • H04N25/57 Control of the dynamic range
    • H04N25/58 Control of the dynamic range involving two or more exposures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G06T2207/20032 Median filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/133 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements including elements passing panchromatic light, e.g. filters passing white light
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N25/135 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on four or more different wavelength filter elements

Definitions

  • the present application relates to image processing technology, and in particular, to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
  • Multi-channel readout of image data of the image sensor can be used to better process the image data, thereby improving the imaging quality of the image sensor.
  • the readout design of existing image sensors is overly complicated; when the multi-channel image data are fused into a single image, artifacts may appear and the resulting image quality is poor.
  • the present application aims to solve one of the problems in the related art at least to a certain extent. Therefore, the purpose of this application is to provide an image processing method, an image processing apparatus, an electronic device and a computer-readable storage medium.
  • a target image is obtained by synthesizing the first processed pixel image and the second processed pixel image.
  • an image preprocessing module configured to acquire image data of the image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images;
  • a first multi-frame processing module configured to align and fuse the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters;
  • a second multi-frame processing module configured to process the multiple frames of second pixel images according to the alignment model and the fusion parameters to obtain a second processed pixel image; and
  • a synthesis module configured to synthesize the first processed pixel image and the second processed pixel image to obtain a target image.
  • An electronic device of an embodiment of the present application includes an image sensor, a processor, and a memory;
  • the image processing method includes: acquiring image data of an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images; aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters; processing the multiple frames of second pixel images according to the alignment model and the fusion parameters to obtain a second processed pixel image; and synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
  • the computer-readable storage medium of the embodiments of the present application includes a computer program, which, when executed by one or more processors, causes the processors to execute the image processing method.
  • the image processing method includes: acquiring image data of an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images; aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters; processing the multiple frames of second pixel images according to the alignment model and the fusion parameters to obtain a second processed pixel image; and synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
  • FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present application.
  • FIG. 2 is a schematic diagram of a module of an image processing apparatus according to some embodiments of the present application.
  • FIG. 3 is a schematic diagram of a module of an electronic device according to some embodiments of the present application.
  • FIG. 4 is a schematic diagram of a module of an image sensor according to some embodiments of the present application.
  • FIG. 5 is a schematic diagram of a scene in an image processing method according to some embodiments of the present application.
  • FIGS. 6-10 are schematic flowcharts of image processing methods according to some embodiments of the present application.
  • FIG. 11 is a schematic diagram of another module of the electronic device according to some embodiments of the present application.
  • FIG. 12 is a schematic diagram of connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
  • Electronic device 100; image processing apparatus 10, preprocessing module 11, first multi-frame processing module 12, alignment unit 122, first fusion unit 124, second multi-frame processing module 13, second fusion unit 132, synthesis module 14;
  • image sensor 30, pixel array 301, vertical driving unit 302, control unit 303, column processing unit 304, horizontal driving unit 305;
  • memory 40, programs 42; computer-readable storage medium 50.
  • an embodiment of the present application provides an image processing method, and the image processing method includes the steps:
  • the present application also provides an image processing apparatus 10 for implementing the above-mentioned image processing method.
  • the image processing apparatus 10 includes a preprocessing module 11, a first multi-frame processing module 12, a second multi-frame processing module 13, and a synthesis module 14.
  • step S11 may be implemented by the preprocessing module 11
  • step S12 may be implemented by the first multi-frame processing module 12
  • step S13 may be implemented by the second multi-frame processing module 13
  • step S14 may be implemented by the synthesis module 14 .
  • the preprocessing module 11 may be configured to acquire image data of the image sensor according to the preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images.
  • the first multi-frame processing module 12 may be configured to align and fuse the multiple frames of first pixel images to obtain the first processed pixel image, the alignment model, and the fusion parameters.
  • the second multi-frame processing module 13 may be configured to process the multiple frames of second pixel images according to the alignment model and the fusion parameters to obtain the second processed pixel image.
  • the synthesizing module 14 may be configured to synthesize the first processed pixel image and the second processed pixel image to obtain the target image.
  • an embodiment of the present application provides an electronic device 100 , and the image processing method of the present application can be completed by the electronic device 100 .
  • the electronic device 100 includes a processor 20 and an image sensor 30 .
  • the processor 20 may be configured to acquire image data of the image sensor 30 according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images.
  • the processor 20 can be used to align and fuse the multiple frames of first pixel images to obtain the first processed pixel image, the alignment model, and the fusion parameters; to process the multiple frames of second pixel images according to the alignment model and the fusion parameters to obtain the second processed pixel image; and to synthesize the first processed pixel image and the second processed pixel image to obtain the target image.
  • in the present application, the image data is divided, through the preset image readout mode, into multiple frames of first pixel images and multiple frames of second pixel images belonging to different channels.
  • Alignment and fusion processing are first performed on the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters; the multiple frames of second pixel images are then aligned according to the alignment model obtained from the first pixel images, and the aligned second pixel images are fused according to the fusion parameters obtained from the first pixel images to obtain a second processed pixel image.
  • In this way, the quality of the first processed pixel image and the second processed pixel image stays consistent, artifacts are avoided when the two are synthesized into the target image, and image quality is improved.
  • the electronic device 100 may be a mobile phone, a tablet computer, a notebook computer, a smart wearable device (smart watch, smart bracelet, smart helmet, smart glasses, etc.), a virtual reality device, and the like.
  • the image processing apparatus 10 may be hardware or software pre-installed on the mobile phone, and may execute the image processing method when the mobile phone is activated.
  • the image processing apparatus 10 may be a low-level software code segment of a mobile phone or a part of an operating system.
  • the image sensor 30 can be a camera component and can use a complementary metal-oxide-semiconductor (CMOS) photosensitive element or a charge-coupled device (CCD) photosensitive element.
  • the image sensor 30 may include a pixel array 301 , a vertical driving unit 302 , a control unit 303 , a column processing unit 304 and a horizontal driving unit 305 .
  • the image sensor 30 may generate image data after exposure through the pixel array 301 .
  • the pixel array 301 may be a color filter array (Color Filter Array, CFA).
  • the pixel array 301 includes a plurality of photosensitive pixels arranged two-dimensionally in an array (i.e., in a two-dimensional matrix), each with a different spectral absorption characteristic.
  • Each photosensitive pixel includes a photoelectric conversion element and converts absorbed light into electric charge according to the intensity of the light incident on it, so that the photosensitive pixels can generate pixel data for a plurality of different color channels, which ultimately forms the image data.
  • the vertical driving unit 302 includes a shift register and an address decoder.
  • the vertical driving unit 302 includes readout scan and reset scan functions.
  • the readout scanning refers to sequentially scanning unit photosensitive pixels row by row, and reading signals from these unit photosensitive pixels row by row.
  • the signal output by each photosensitive pixel in the selected and scanned photosensitive pixel row is transmitted to the column processing unit 304 .
  • the reset scan resets the charges: the photocharges of the photoelectric conversion elements are discarded so that accumulation of new photocharges can begin.
  • the signal processing performed by the column processing unit 304 is correlated double sampling (CDS) processing. In the CDS process, the reset level and the signal level output from each photosensitive pixel in the selected row are taken out, and the level difference is calculated. Thus, the signals of the photosensitive pixels in one row are obtained.
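As a toy illustration (not taken from the patent), the CDS step described above reduces to a per-pixel subtraction of the reset level from the signal level for one row; the numeric values below are made up:

```python
import numpy as np

# Hypothetical per-pixel samples for one selected row: in CDS, the reset
# level and the signal level are both read out, and their difference is
# taken so that noise common to the two samples cancels.
reset_level = np.array([101.0, 99.5, 100.2, 100.8])    # reset samples
signal_level = np.array([151.0, 179.5, 130.2, 220.8])  # signal samples

# The level difference is the row's pixel signal.
cds_output = signal_level - reset_level
```

The same subtraction is performed row by row as the vertical driving unit scans the array.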
  • the column processing unit 304 may have an analog-to-digital (A/D) conversion function for converting an analog pixel signal into a digital format.
  • the horizontal driving unit 305 includes a shift register and an address decoder.
  • the horizontal driving unit 305 sequentially scans the pixel array 301 column by column. Through the selective scanning operation performed by the horizontal driving unit 305, each photosensitive pixel column is sequentially processed by the column processing unit 304 and sequentially output.
  • the control unit 303 configures timing signals according to the operation mode, and uses various timing signals to control the vertical driving unit 302 , the column processing unit 304 and the horizontal driving unit 305 to work together.
  • the processor 20 can be connected to the pixel array 301 of the image sensor 30 , and after the pixel array 301 is exposed to generate image data, it can be transmitted to the processor 20 .
  • the processor 20 can be set with a preset image readout mode; in this mode, the image data generated by the image sensor 30 is read, and the pixel data of the image data are separated to obtain multiple frames of first pixel images and multiple frames of second pixel images of different channels. One frame of the first pixel image corresponds to one frame of the second pixel image, and the exposure times of different frames of the first pixel images (or of the second pixel images) differ.
  • the first pixel image includes a long-exposure pixel image, a medium-exposure pixel image, and a short-exposure pixel image, and the exposure times corresponding to the long-exposure pixel image, the medium-exposure pixel image, and the short-exposure pixel image decrease sequentially.
  • each frame of the first pixel image or the second pixel image includes a plurality of pixels arranged in an array. For example, in the present application, the first pixel image includes a plurality of R pixels, G pixels, and B pixels, and the second pixel image includes a plurality of W pixels arranged in an array; that is, the first pixel image carries information of the three color channels R (red), G (green), and B (blue), while the second pixel image carries full-color (panchromatic) information, which may also be called luminance information. It can be understood that, in some other embodiments, the generated first pixel image and second pixel image differ because the preset image readout modes differ.
  • the processor 20 may perform alignment processing on the multiple frames of first pixel images and the multiple frames of second pixel images; after alignment, the multiple frames of first pixel images are fused into one frame of the first processed pixel image, and the multiple frames of second pixel images into one frame of the second processed pixel image.
  • specifically, the processor 20 may first perform alignment calculation on the multiple frames of first pixel images to establish an alignment model, and then align the multiple frames of first pixel images and the multiple frames of second pixel images respectively according to that model. The aligned multiple frames of first pixel images are fused to generate one frame of the first processed pixel image, and the fusion parameters are obtained in the process. Fusion is then performed on the aligned second pixel images according to the fusion parameters generated from the first pixel images, generating the second processed pixel image.
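The reuse of fusion parameters described above can be sketched in a few lines of numpy. The weighted-average fusion and the fixed weights standing in for the fusion parameters are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse(frames, weights):
    """Weighted average of aligned frames; `weights` play the role of
    the fusion parameters (normalized so they sum to 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, np.stack(frames), axes=1)

# Hypothetical aligned stacks: three frames of first (colour) pixel images
# and three corresponding frames of second (panchromatic) pixel images.
first_frames = [rng.random((4, 4)) for _ in range(3)]
second_frames = [rng.random((4, 4)) for _ in range(3)]

# Fusion parameters derived while fusing the first pixel images (here,
# fixed weights stand in for whatever per-frame weights were computed)...
fusion_params = [0.5, 0.3, 0.2]
first_processed = fuse(first_frames, fusion_params)

# ...are reused unchanged for the second pixel images, keeping the two
# fused results consistent so that later synthesis avoids artifacts.
second_processed = fuse(second_frames, fusion_params)
```

The key point is that the second stack is never analysed on its own: it inherits both the alignment model and the fusion parameters from the first stack.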
  • the processor 20 may perform a synthesis process on the first processed pixel image and the second processed pixel image to obtain the target image.
  • in this way, the first processed pixel image and the second processed pixel image generated by fusion stay consistent with each other, avoiding misalignment between them and ensuring the quality of the synthesized target image.
  • Moreover, only the alignment model of the first pixel images needs to be calculated, which reduces the amount of computation and improves efficiency.
  • In addition, the color shift between the first processed pixel image and the second processed pixel image is reduced, further improving the quality of the target image.
  • in some embodiments, the image data includes multiple minimum repeating units A1; each minimum repeating unit A1 includes multiple pixel units a1, and each pixel unit a1 includes multiple color pixels and multiple panchromatic pixels. The color pixels are arranged in a first diagonal direction, the panchromatic pixels are arranged in a second diagonal direction, and the first diagonal direction is different from the second diagonal direction.
  • Step S11 includes sub-steps:
  • sub-step S112 and sub-step S114 may be implemented by the preprocessing module 11. In other words:
  • the preprocessing module 11 may be used to obtain color pixels in the first diagonal direction to generate a first pixel image.
  • the preprocessing module 11 may also be used to acquire panchromatic pixels in the second diagonal direction to generate a second pixel image.
  • the processor 20 may be configured to acquire color pixels in a first diagonal direction to generate a first pixel image.
  • the processor 20 may also be used to acquire panchromatic pixels in a second diagonal direction to generate a second pixel image.
  • the preset image readout mode may be the Binning mode; that is, the processor 20 may read out the image data in Binning mode to generate the first pixel images and the second pixel images. It should be noted that the Binning algorithm adds together the charges of adjacent pixels of the same color within the same pixel unit a1 and reads them out as one pixel.
  • specifically, the color pixels in the first diagonal direction of each pixel unit a1 and the panchromatic pixels in the second diagonal direction of each pixel unit a1 are read out; all read-out color pixels are then arranged in an array to form the first pixel image, and all read-out panchromatic pixels are arranged in an array to form the second pixel image.
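A minimal numpy sketch of this diagonal readout, assuming 2x2 pixel units with colour pixels on one diagonal and panchromatic pixels on the other (the mosaic values and the output layout are made up for illustration):

```python
import numpy as np

# Hypothetical 4x4 mosaic of 2x2 pixel units: in each unit, colour pixels
# sit on one diagonal and panchromatic (W) pixels on the other, as in
#   C W
#   W C
mosaic = np.arange(16, dtype=float).reshape(4, 4)

rows, cols = np.indices(mosaic.shape)
colour_mask = (rows % 2) == (cols % 2)   # first diagonal of every unit
pan_mask = ~colour_mask                  # second diagonal of every unit

# Gather each channel into its own half-size image, mirroring the
# separate first (colour) and second (panchromatic) pixel images.
colour_image = mosaic[colour_mask].reshape(4, 2)
pan_image = mosaic[pan_mask].reshape(4, 2)
```

Real sensors differ in unit size and diagonal orientation; only the idea of splitting one mosaic into two per-channel images is taken from the text.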
  • step S12 includes sub-steps:
  • the first multi-frame processing module 12 includes an alignment unit 122 and a first fusion unit 124. Steps S122 and S124 may be implemented by the alignment unit 122, and step S126 may be implemented by the first fusion unit 124;
  • the aligning unit 122 may be configured to find matching pixels in the multiple frames of first pixel images to calculate the alignment model of those frames, and may also be configured to align the multiple frames of first pixel images according to the alignment model.
  • the first fusion unit 124 may be configured to fuse the aligned multiple frames of the first pixel images to obtain a first processed pixel image.
  • similarly, the processor 20 may be configured to find mutually matching pixel points in the multiple frames of first pixel images to calculate the alignment model, to align the multiple frames of first pixel images according to the alignment model, and to fuse the aligned frames to obtain the first processed pixel image.
  • the alignment model calculated based on the matched pixels can eliminate the motion relationship between the multiple frames of images. This enables multiple frames of first pixel images to be fused together with high quality.
  • the processor 20 can use the scale-invariant feature transform (SIFT) or speeded-up robust features (SURF) feature point matching algorithm, or an optical flow field algorithm, to find mutually matching pixels in the multiple frames of first pixel images.
  • the SIFT algorithm detects and describes local features in an image in the field of computer vision; these features remain stable to some degree under changes of scale and rotation.
  • There are four steps in SIFT feature detection: 1. Extremum detection in scale space: the image is searched over all scales, and potential interest points invariant to scale and rotation are identified through difference-of-Gaussian functions. 2. Feature point localization: at each candidate position, position and scale are determined by fitting a fine model, and key points are selected based on their stability. 3. Orientation assignment: based on the local gradient directions of the image, one or more orientations are assigned to each key point position; all subsequent operations are performed relative to the orientation, scale, and position of the key points, providing invariance to these transformations. 4. Feature point description: in the neighborhood around each feature point, the local gradients of the image are measured at the selected scale and transformed into a representation that tolerates relatively large local shape deformation and illumination change.
  • the Surf algorithm is a robust image recognition and description algorithm that can be used for computer vision tasks.
  • the concepts and steps of the SURF algorithm are based on SIFT, but the detailed process differs slightly.
  • the SURF algorithm includes the following three steps: feature point detection, feature neighborhood description, and descriptor matching.
  • the optical flow field algorithm is a point-based matching algorithm that uses the temporal changes of pixels in an image sequence and the correlation between adjacent frames to find the correspondence between the previous frame and the current frame, and thereby calculate the motion between them.
  • in this embodiment, the alignment model may optionally be an affine transformation model or a perspective transformation model; that is, the affine or perspective transformation model is calculated from the matching pixel points, and the multiple frames of first pixel images are then aligned according to that model to obtain the aligned frames.
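As an illustrative sketch (not the patent's implementation), an affine alignment model with a scaling/rotation part A and a displacement part t can be applied to a frame by inverse mapping with nearest-neighbour sampling:

```python
import numpy as np

def warp_affine_nn(img, A, t):
    """Align a frame via an affine model: A holds the scaling and rotation
    parameters, t the displacement. Each output pixel (x, y) samples the
    input at A @ [x, y] + t (inverse mapping, nearest neighbour);
    out-of-range samples become 0."""
    h, w = img.shape
    ys, xs = np.indices((h, w))
    coords = np.stack([xs.ravel(), ys.ravel()]).astype(float)  # (2, N), x first
    src = A @ coords + np.asarray(t, float)[:, None]           # source positions
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros(h * w)
    out[valid] = img[sy[valid], sx[valid]]
    return out.reshape(h, w)

# Hypothetical frame shifted one pixel relative to the reference:
# identity A with displacement t = [1, 0] samples the source at x + 1.
img = np.arange(16.0).reshape(4, 4)
aligned = warp_affine_nn(img, np.eye(2), [1, 0])
```

A perspective model would differ only in the mapping (homogeneous division); production code would typically use a library warp with bilinear interpolation instead of this nearest-neighbour toy.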
  • step S122 includes sub-steps:
  • sub-step S1222 and sub-step S1224 may be implemented by the alignment unit 122; that is, the alignment unit 122 may be used to calculate the scaling and rotation parameters and the displacement parameters reflected by the matched pixels, and may also be used to establish the alignment model according to those parameters.
  • the processor 20 is configured to calculate scaling and rotation parameters and displacement parameters reflected by the matched pixels, and establish an alignment model according to the scaling and rotation parameters and the displacement parameters.
  • specifically, the transformation model may be an affine transformation model.
  • An affine transformation maps any parallelogram in one plane to another parallelogram; the image mapping operation is carried out in the same spatial plane, and different transformation parameters yield different parallelograms.
  • When the affine transformation matrix model is used, the scaling and rotation of the image are controlled by the scaling and rotation parameters, and the displacement of the image is controlled by the displacement parameters.
  • specifically, the coordinate values of the pixel points of the first frame of the first pixel images are obtained and substituted into a preset affine transformation formula together with the coordinate values of the matching pixel points in the second frame of the first pixel images; the scaling and rotation parameters and the displacement parameters are thereby calculated, and the affine transformation model is then constructed from them. In this way, multiple frames of images can be aligned without relying on equipment such as a fixed mount, and the pixel deviation after multi-frame synthesis is reduced.
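The parameter calculation described above can be sketched as a least-squares fit of an affine model to the matched coordinates; the helper `fit_affine` and the example point pairs are assumptions for illustration, not the patent's formula:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine model mapping src -> dst.

    src, dst: (N, 2) arrays of matched pixel coordinates (N >= 3).
    Returns A (2x2 scaling/rotation part) and t (displacement part)
    such that dst ~= src @ A.T + t.
    """
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Design matrix [x, y, 1]; solve for both output coordinates at once.
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # shape (3, 2)
    A = params[:2].T   # scaling and rotation parameters
    t = params[2]      # displacement parameters
    return A, t

# Hypothetical matched points related by a pure shift of (+5, -3):
src = np.array([[0, 0], [10, 0], [0, 10], [10, 10]])
dst = src + np.array([5, -3])
A, t = fit_affine(src, dst)
```

With noisy real matches, the least-squares solution averages out small matching errors; robust variants (e.g. RANSAC) would additionally reject outlier matches.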
  • step S13 includes sub-steps:
  • the second multi-frame processing module 13 may include a second fusion unit 132.
  • Sub-step S132 may be implemented by the alignment unit 122, and sub-step S134 may be implemented by the second fusion unit 132, or in other words, the alignment unit 122 is further configured to perform alignment processing on multiple frames of second pixel images according to the alignment model.
  • the second fusion unit 132 is configured to fuse the aligned multiple frames of second pixel images according to the fusion parameters to obtain a second processed pixel image.
  • the processor 20 may be configured to align the multiple frames of the second pixel images according to the alignment model and fuse the aligned multiple frames of the second pixel images according to the fusion parameters to obtain the second processed pixel images.
  • fusing the aligned multiple frames of second pixel images according to the fusion parameters to generate the second processed pixel image reduces the color shift between the second processed pixel image and the first processed pixel image.
  • step S14 includes:
  • step S142 may be implemented by the synthesis module 14, or in other words, the synthesis module 14 may be configured to synthesize the first processed pixel image and the second processed pixel image based on a median filtering algorithm to generate a target image.
  • the processor 20 may be configured to synthesize the first processed pixel image and the second processed pixel image based on a median filtering algorithm to generate the target image.
  • the processor 20 may first perform median filtering on the first processed pixel image and the second processed pixel image, and then synthesize the filtered first processed pixel image and second processed pixel image; alternatively, it may first synthesize the first processed pixel image and the second processed pixel image into a new image, and then perform median filtering on that image to generate the final image.
  • median filtering is highly flexible, and the signal-to-noise ratio of the image after median filtering is significantly improved.
  • median filtering is a nonlinear signal processing technique based on order-statistics theory that can effectively suppress noise.
  • its basic principle is to replace the value of a point in a digital image or sequence with the median of the values in a neighborhood of that point, so that the surrounding pixel values approach the true value, thereby eliminating isolated noise points.
  • an embodiment of the present application provides an electronic device 100, including a processor 20, a memory 30, and one or more programs 32, wherein the one or more programs 32 are stored in the memory 30 and executed by the processor 20, the programs 32 comprising instructions for performing the above-mentioned image processing method.
  • the present application provides a non-volatile computer-readable storage medium 40 containing a computer program.
  • when the computer program is executed by one or more processors 20, the processors 20 are caused to execute the above-mentioned image processing method.
  • any description of a process or method in the flowcharts or otherwise described herein may be understood to represent a module, segment, or portion of code comprising one or more executable instructions for implementing a specified logical function or step of the process, and the scope of the preferred embodiments of the present application includes alternative implementations in which the functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.


Abstract

An image processing method. The image processing method includes the steps of: (S11) acquiring image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images; (S12) aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters; (S13) processing the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and (S14) synthesizing the first processed pixel image and the second processed pixel image to obtain a target image. An image processing apparatus 10, an electronic device 100, and a computer-readable storage medium are also disclosed.

Description

Image Processing Method, Processing Apparatus, Electronic Device, and Storage Medium
Technical Field
The present application relates to image processing technology, and in particular to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background Art
Mobile terminals such as mobile phones are often equipped with image sensors to provide a photographing function. In the related art, the image data of the image sensor can be read out through multiple channels so that it can be better processed, thereby improving the imaging quality of the image sensor. However, existing image sensor readout designs are overly complex; moreover, fusing multi-channel image data into an image may introduce artifacts, resulting in poor image quality.
Summary of the Invention
In view of this, the present application aims to solve, at least to some extent, one of the problems in the related art. To this end, an object of the present application is to provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
The image processing method of the embodiments of the present application includes:
acquiring image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images;
aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters;
processing the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and
synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
The image processing apparatus of the embodiments of the present application includes:
an image pre-processing module configured to acquire image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images;
a first multi-frame processing module configured to align and fuse the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters;
a second multi-frame processing module configured to process the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and
a synthesis module configured to synthesize the first processed pixel image and the second processed pixel image to obtain a target image.
The electronic device of the embodiments of the present application includes an image sensor, a processor, and a memory; and
one or more programs, wherein the one or more programs are stored in the memory and executed by the processor, the programs including instructions for performing the image processing method. The image processing method includes: acquiring image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images; aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters; processing the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
The computer-readable storage medium of the embodiments of the present application includes a computer program that, when executed by one or more processors, causes the processors to perform the image processing method. The image processing method includes: acquiring image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images; aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters; processing the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
Brief Description of the Drawings
The above and/or additional aspects and advantages of the present application will become apparent and readily understood from the following description of the embodiments in conjunction with the accompanying drawings, in which:
FIG. 1 is a schematic flowchart of an image processing method according to some embodiments of the present application;
FIG. 2 is a schematic block diagram of an image processing apparatus according to some embodiments of the present application;
FIG. 3 is a schematic block diagram of an electronic device according to some embodiments of the present application;
FIG. 4 is a schematic block diagram of an image sensor according to some embodiments of the present application;
FIG. 5 is a schematic diagram of a scenario in the image processing method according to some embodiments of the present application;
FIGS. 6-10 are schematic flowcharts of the image processing method according to some embodiments of the present application;
FIG. 11 is another schematic block diagram of the electronic device according to some embodiments of the present application;
FIG. 12 is a schematic diagram of the connection between a processor and a computer-readable storage medium according to some embodiments of the present application.
Description of the main reference numerals:
electronic device 100; image processing apparatus 10, pre-processing module 11, first multi-frame processing module 12, alignment unit 122, first fusion unit 124, second multi-frame processing module 13, second fusion unit 132, synthesis module 14;
processor 20;
image sensor 30, pixel array 301, vertical drive unit 302, control unit 303, column processing unit 304, horizontal drive unit 305;
memory 40, program 42, computer-readable storage medium 50.
Detailed Description of the Embodiments
The embodiments of the present application are described in detail below, examples of which are illustrated in the accompanying drawings, where identical or similar reference numerals throughout denote identical or similar elements or elements having identical or similar functions. The embodiments described below with reference to the drawings are exemplary, intended only to explain the present application, and are not to be construed as limiting the present application.
Referring to FIG. 1, an embodiment of the present application provides an image processing method, which includes the steps of:
S11: acquiring image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images;
S12: aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters;
S13: processing the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and
S14: synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
Referring also to FIG. 2, the present application further provides an image processing apparatus 10 for carrying out the above image processing method. The image processing apparatus 10 includes a pre-processing module 11, a first multi-frame processing module 12, a second multi-frame processing module 13, and a synthesis module 14.
Step S11 may be implemented by the pre-processing module 11, step S12 by the first multi-frame processing module 12, step S13 by the second multi-frame processing module 13, and step S14 by the synthesis module 14.
In other words, the pre-processing module 11 may be configured to acquire image data from the image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images.
The first multi-frame processing module 12 may be configured to align and fuse the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters.
The second multi-frame processing module 13 may be configured to process the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image.
The synthesis module 14 may be configured to synthesize the first processed pixel image and the second processed pixel image to obtain a target image.
Referring also to FIG. 3, an embodiment of the present application provides an electronic device 100, and the image processing method of the present application may be performed by the electronic device 100. The electronic device 100 includes a processor 20 and an image sensor 30.
The processor 20 may be configured to acquire image data from the image sensor 30 according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images. The processor 20 may be configured to align and fuse the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters; the processor 20 may further be configured to process the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image, and to synthesize the first processed pixel image and the second processed pixel image to obtain a target image.
In the image processing method, the image processing apparatus 10, and the electronic device 100 of the present application, the image data is divided, via the preset image readout mode, into multiple frames of first pixel images and multiple frames of second pixel images of different channels. The multiple frames of first pixel images are first aligned and fused to obtain a first processed pixel image, an alignment model, and fusion parameters; the multiple frames of second pixel images are then aligned according to the alignment model derived from the first pixel images, and the aligned second pixel images are fused according to the fusion parameters derived from the first pixel images to obtain a second processed pixel image. In this way, the quality of the first processed pixel image and the second processed pixel image is kept synchronized, artifacts are avoided when the two are synthesized into the target image, and image quality is improved.
The electronic device 100 may be a mobile phone, a tablet computer, a laptop computer, a smart wearable device (a smart watch, smart band, smart helmet, smart glasses, etc.), a virtual reality device, or the like.
This embodiment is described taking a mobile phone as the electronic device 100; that is, the image processing method and the image processing apparatus 10 are applied to, but not limited to, mobile phones. The image processing apparatus 10 may be hardware or software pre-installed on the phone and may execute the image processing method when running on the phone. For example, the image processing apparatus 10 may be a low-level software code segment of the phone or part of its operating system.
The image sensor 30 may be a camera assembly and may employ a complementary metal-oxide-semiconductor (CMOS) photosensitive element or a charge-coupled device (CCD) photosensitive element.
Referring to FIG. 4, the image sensor 30 may include a pixel array 301, a vertical drive unit 302, a control unit 303, a column processing unit 304, and a horizontal drive unit 305.
The image sensor 30 generates image data after exposure of the pixel array 301. The pixel array 301 may be a color filter array (CFA) and includes a plurality of photosensitive pixels arranged two-dimensionally in an array (i.e., in a two-dimensional matrix). Each photosensitive pixel has an absorption region with distinct spectral absorption characteristics and includes a photoelectric conversion element; each photosensitive pixel converts absorbed light into electric charge according to the intensity of the light incident on it, so that each photosensitive pixel can generate pixel data of multiple color channels, and the image data is ultimately produced.
The vertical drive unit 302 includes a shift register and an address decoder and provides readout-scan and reset-scan functions. Readout scanning sequentially scans the unit photosensitive pixels row by row and reads signals from them row by row; the signal output by each photosensitive pixel in the selected and scanned row is transferred to the column processing unit 304. Reset scanning resets the charge: the photocharge of the photoelectric conversion element is discarded so that accumulation of new photocharge can begin. The signal processing performed by the column processing unit 304 is correlated double sampling (CDS): the reset level and the signal level output by each photosensitive pixel in the selected row are taken and their difference is calculated, thereby obtaining the signals of one row of photosensitive pixels. The column processing unit 304 may have an analog-to-digital (A/D) conversion function for converting analog pixel signals into digital format.
The horizontal drive unit 305 includes a shift register and an address decoder and sequentially scans the pixel array 301 column by column. Through the selection-scan operation performed by the horizontal drive unit 305, each photosensitive pixel column is sequentially processed by the column processing unit 304 and sequentially output.
The control unit 303 configures timing signals according to the operating mode and uses various timing signals to control the vertical drive unit 302, the column processing unit 304, and the horizontal drive unit 305 to work cooperatively.
The processor 20 may be connected to the pixel array 301 of the image sensor 30; after the pixel array 301 is exposed to generate image data, the data may be transferred to the processor 20. The processor 20 may be provided with a preset image readout mode in which it can read the image data generated by the image sensor 30 and separate the pixel data of the image data into multiple frames of first pixel images and multiple frames of second pixel images of different channels, where each frame of first pixel image corresponds to a frame of second pixel image, and first pixel images of different frames (or second pixel images of different frames) have different exposure times. For example, in some embodiments the first pixel images include a long-exposure pixel image, a medium-exposure pixel image, and a short-exposure pixel image, whose corresponding exposure times decrease in that order.
Each frame of the first or second pixel image includes a plurality of pixels arranged in an array. For example, in the present application, the first pixel image includes a plurality of R, G, and B pixels arranged in a Bayer array, and the second pixel image includes a plurality of W pixels arranged in an array; that is, the first pixel image carries color information of the three channels R (red), G (green), and B (blue), while the second pixel image carries panchromatic information, which may also be called luminance information. It will be appreciated that in some other embodiments, different preset image readout modes produce different first and second pixel images.
Further, after generating the multiple frames of first pixel images and second pixel images from the image data according to the preset image readout mode, the processor 20 may align the multiple frames of first pixel images and the multiple frames of second pixel images, and after alignment fuse the multiple frames of first pixel images into one first processed pixel image and the multiple frames of second pixel images into one second processed pixel image.
It will be appreciated that different exposure times may cause the positions of the pixel images to shift, producing pixel deviations after the multiple frames of first pixel images are fused into the first processed pixel image. The multiple frames of first pixel images and of second pixel images therefore need to be aligned, which reduces the pixel deviation of the first processed pixel image and the second processed pixel image.
Specifically, the processor 20 may first perform alignment calculation on the multiple frames of first pixel images to establish an alignment model, then align both the multiple frames of first pixel images and the multiple frames of second pixel images according to the alignment model, then fuse the aligned first pixel images to generate one first processed pixel image and obtain fusion parameters, and fuse the second pixel images according to the fusion parameters derived from the first pixel images to generate the second processed pixel image.
Further, after the first processed pixel image and the second processed pixel image are generated, the processor 20 may synthesize them to obtain the target image.
In this way, aligning both the first pixel images and the second pixel images with the alignment model computed from the multiple frames of first pixel images keeps the fused first processed pixel image synchronized with the second processed pixel image and avoids misalignment between them, thereby improving the quality of the synthesized target image. At the same time, since only the alignment model of the first pixel images needs to be computed when aligning both sets of images, the amount of computation is reduced and efficiency is improved. In addition, because the fusion parameters derived from the first pixel images take part in the fusion of the aligned second pixel images, the color shift between the first processed pixel image and the second processed pixel image can be reduced, further improving the quality of the target image.
Referring to FIGS. 5 and 6, in some embodiments the image data includes a plurality of minimal repeating units A1, each minimal repeating unit A1 includes a plurality of pixel units a1, and each pixel unit a1 includes a plurality of color pixels and panchromatic pixels, the color pixels being arranged in a first diagonal direction and the panchromatic pixels in a second diagonal direction different from the first. Step S11 includes the sub-steps:
S112: acquiring the color pixels in the first diagonal direction to generate the first pixel image;
S114: acquiring the panchromatic pixels in the second diagonal direction to generate the second pixel image.
Referring further to FIG. 2, in some embodiments sub-steps S112 and S114 may be implemented by the pre-processing module 11. In other words,
the pre-processing module 11 may be configured to acquire the color pixels in the first diagonal direction to generate the first pixel image, and may further be configured to acquire the panchromatic pixels in the second diagonal direction to generate the second pixel image.
In some embodiments, the processor 20 may be configured to acquire the color pixels in the first diagonal direction to generate the first pixel image, and to acquire the panchromatic pixels in the second diagonal direction to generate the second pixel image.
Specifically, when the processor 20 reads the image data collected by the image sensor 30 through the preset image readout mode, the preset image readout mode may be a Binning mode; that is, the processor 20 may read the image data in Binning mode to generate the first pixel image and the second pixel image. It should be noted that the Binning algorithm adds together the charges of adjacent same-color pixels within the same pixel unit a1 and reads them out as a single pixel.
Further, when reading in Binning mode, the color pixels in the first diagonal direction of each pixel unit a1 are read, and the panchromatic pixels in the second diagonal direction of each pixel unit a1 are read; all the read color pixels are then arranged in an array to form the first pixel image, and all the read panchromatic pixels are arranged in an array to form the second pixel image.
Referring to FIG. 7, in some embodiments step S12 includes the sub-steps:
S122: finding mutually matching pixels in the multiple frames of first pixel images to calculate the alignment model of the multiple frames of first pixel images;
S124: aligning the multiple frames of first pixel images according to the alignment model;
S126: fusing the aligned multiple frames of first pixel images to obtain the first processed pixel image.
In some embodiments, the first multi-frame processing module 12 includes an alignment unit 122 and a first fusion unit 124; steps S122 and S124 may be implemented by the alignment unit 122, and step S126 by the first fusion unit 124.
In other words, the alignment unit 122 may be configured to find mutually matching pixels in the multiple frames of first pixel images to calculate the alignment model of the multiple frames of first pixel images, and may further be configured to align the multiple frames of first pixel images according to the alignment model.
The first fusion unit 124 may be configured to fuse the aligned multiple frames of first pixel images to obtain the first processed pixel image.
In some embodiments, the processor 20 may be configured to find mutually matching pixels in the multiple frames of first pixel images to calculate their alignment model, and may further be configured to align the multiple frames of first pixel images according to the alignment model and fuse the aligned frames to obtain the first processed pixel image.
It should be noted that, since the motion between mutually matching pixels reflects the motion between the multiple frames, an alignment model calculated from the matching pixels can eliminate the motion relationship between the frames, allowing the multiple frames of first pixel images to be fused together with high quality.
The processor 20 may use the scale-invariant feature transform (SIFT) or speeded-up robust features (SURF) feature-point matching algorithms, or an optical-flow algorithm, to find mutually matching pixels between the multiple frames of first pixel images.
Those skilled in the computing field will understand that SIFT is an algorithm in computer vision for detecting and describing local features in images; it is invariant to rotation, scale, and brightness changes and remains stable to some degree under viewpoint changes, affine transformations, and noise. SIFT feature detection has four steps: 1. Scale-space extremum detection: images over all scales are searched, and potential interest points invariant to scale and orientation are identified using a difference-of-Gaussian function. 2. Keypoint localization: at each candidate location, position and scale are determined by fitting a fine model, and keypoints are selected according to their stability. 3. Orientation assignment: one or more orientations are assigned to each keypoint location based on local image gradient directions; all subsequent operations transform the keypoints' orientation, scale, and position, thereby providing invariance of these features. 4. Keypoint description: local image gradients are measured at the selected scale in the neighborhood around each keypoint and transformed into a representation that tolerates considerable local shape deformation and illumination change.
SURF is a robust image recognition and description algorithm that can be used for computer vision tasks. The concepts and steps of SURF build on SIFT, but the detailed procedure differs slightly. SURF comprises three steps: feature-point detection, local-neighborhood description, and descriptor matching.
The optical-flow algorithm is a point-based matching method that uses the temporal variation of pixels in an image sequence and the correlation between adjacent frames to find correspondences between the previous frame and the current frame, thereby computing the motion information of objects between adjacent frames.
Further, the alignment model selectable in this embodiment may be an affine transformation model or a perspective transformation model. That is, an affine or perspective transformation model can be calculated from the mutually matching pixels, and the multiple frames of first pixel images can then be aligned according to that model to obtain the aligned multiple frames of first pixel images.
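Applying an affine alignment model to a frame can be sketched with inverse mapping, as below. This is an illustrative sketch, not the patent's implementation; nearest-neighbour sampling, zero padding, and the function name are choices made here (NumPy only).

```python
import numpy as np

def warp_affine(img, M):
    """Align a frame with a 2x3 affine model M (forward model
    dst = A @ src + t) using inverse mapping and nearest-neighbour
    sampling; pixels that map outside the source are set to 0."""
    h, w = img.shape
    A, t = M[:, :2], M[:, 2]
    A_inv = np.linalg.inv(A)
    ys, xs = np.mgrid[0:h, 0:w]
    dst = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = (dst - t) @ A_inv.T          # per row: A_inv @ (dst - t)
    sx = np.rint(src[:, 0]).astype(int)
    sy = np.rint(src[:, 1]).astype(int)
    ok = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    flat = np.zeros(h * w, dtype=img.dtype)
    flat[ok] = img[sy[ok], sx[ok]]
    return flat.reshape(h, w)

# Pure translation by (+1, 0): the image content shifts one pixel right.
img = np.arange(9, dtype=float).reshape(3, 3)
M = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])
warped = warp_affine(img, M)
```

Inverse mapping (looking up, for each output pixel, where it came from in the source) avoids the holes that forward mapping would leave.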
Referring to FIG. 8, in some embodiments step S122 includes the sub-steps:
S1222: calculating the scaling-and-rotation parameters and the displacement parameters reflected by the mutually matching pixels;
S1224: establishing the alignment model according to the scaling-and-rotation parameters and the displacement parameters.
In some embodiments, sub-steps S1222 and S1224 may be implemented by the alignment unit 122; that is, the alignment unit 122 may be configured to calculate the scaling-and-rotation parameters and the displacement parameters reflected by the mutually matching pixels, and to establish the alignment model from them.
In some embodiments, the processor 20 is configured to calculate the scaling-and-rotation parameters and the displacement parameters reflected by the mutually matching pixels and to establish the alignment model from them.
In this embodiment, the transformation model may be an affine transformation model. An affine transformation means that any parallelogram in one plane can be mapped to another parallelogram; the image mapping operation is carried out within the same spatial plane, and different transformation parameters yield different types of parallelograms. In the affine transformation matrix model, scaling and rotation of the image are controlled by the scaling-and-rotation parameters, and translation of the image is controlled by the displacement parameters.
Specifically, the coordinate values of the pixels of the first frame among the multiple frames of first pixel images are obtained and substituted into a preset affine transformation formula together with the coordinate values of the matching pixels in the second frame, so as to calculate the scaling-and-rotation parameters and the displacement parameters, from which the affine transformation model can be constructed. In this way, multiple frames of images can be aligned without relying on equipment such as a tripod, reducing the pixel deviation after the multiple frames are synthesized.
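The calculation just described (substituting matched pixel coordinates into a preset affine formula to recover the scaling-and-rotation parameters and the displacement parameters) can be sketched as a linear least-squares fit. The function name and the solver are assumptions for illustration; the patent does not specify how the formula is solved.

```python
import numpy as np

def estimate_affine(src_pts, dst_pts):
    """Fit a 2x3 affine model (scaling/rotation block plus displacement)
    mapping src_pts -> dst_pts by linear least squares."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    n = src.shape[0]
    # Design matrix of [x, y, 1] rows; x' and y' are solved separately.
    A = np.hstack([src, np.ones((n, 1))])
    coef_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)
    coef_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)
    return np.vstack([coef_x, coef_y])  # [[a, b, tx], [c, d, ty]]

# Example: matched points displaced by (2, -1), no rotation or scaling.
src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, -1), (3, -1), (2, 0), (3, 0)]
M = estimate_affine(src, dst)
```

With at least three non-collinear matches the six affine parameters are determined; additional matches are reconciled in the least-squares sense.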
Referring to FIG. 9, in some embodiments step S13 includes the sub-steps:
S132: aligning the multiple frames of second pixel images according to the alignment model;
S134: fusing the aligned multiple frames of second pixel images according to the fusion parameters to obtain the second processed pixel image.
In some embodiments, the second multi-frame processing module 13 may include a second fusion unit 132. Sub-step S132 may be implemented by the alignment unit 122 and sub-step S134 by the second fusion unit 132; in other words, the alignment unit 122 is further configured to align the multiple frames of second pixel images according to the alignment model, and the second fusion unit 132 is configured to fuse the aligned frames according to the fusion parameters to obtain the second processed pixel image.
In some embodiments, the processor 20 may be configured to align the multiple frames of second pixel images according to the alignment model and to fuse the aligned frames according to the fusion parameters to obtain the second processed pixel image.
In this way, aligning the multiple frames of second pixel images with the alignment model reduces the pixel deviation of the fused second processed pixel image and keeps it synchronized with the first processed pixel image; fusing the aligned frames according to the fusion parameters reduces the color shift between the second processed pixel image and the first processed pixel image.
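Reusing the fusion parameters obtained from the first pixel images when fusing the aligned second pixel images can be sketched as a shared weighted average. The per-frame-weight form of the fusion parameters is an assumption made for illustration; the patent leaves their exact form unspecified.

```python
import numpy as np

def fuse_frames(frames, weights):
    """Fuse aligned frames by a per-frame weighted average; reusing the
    same weights for both image sets keeps the two fused results
    consistent (the weight scheme itself is illustrative)."""
    frames = np.asarray(frames, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalise the fusion parameters
    return np.tensordot(w, frames, axes=1)

# Three aligned 2x2 frames fused with weights favouring the middle frame.
frames = [np.full((2, 2), v) for v in (10.0, 20.0, 30.0)]
fused = fuse_frames(frames, [1, 2, 1])
```

Because both fusions use the same normalised weights, the two fused results stay consistent, which is the colour-shift-reduction effect the passage describes.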
Referring to FIG. 10, in some embodiments step S14 includes:
S142: synthesizing the first processed pixel image and the second processed pixel image based on a median filtering algorithm to generate the target image.
In some embodiments, step S142 may be implemented by the synthesis module 14; in other words, the synthesis module 14 may be configured to synthesize the first processed pixel image and the second processed pixel image based on a median filtering algorithm to generate the target image.
In some embodiments, the processor 20 may be configured to synthesize the first processed pixel image and the second processed pixel image based on a median filtering algorithm to generate the target image.
Specifically, during synthesis the processor 20 may first apply median filtering to the first processed pixel image and the second processed pixel image and then synthesize the filtered images; alternatively, it may first synthesize the two images into a new image and then apply median filtering to that image to generate the final image. Median filtering is highly flexible, and the signal-to-noise ratio of a median-filtered image is noticeably improved.
It should be noted that median filtering is a nonlinear signal-processing technique based on order statistics that can effectively suppress noise. Its basic principle is to replace the value of a point in a digital image or sequence with the median of the values in a neighborhood of that point, so that the surrounding pixel values approach the true value, thereby eliminating isolated noise points.
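The neighbourhood-median principle just described can be sketched directly. This is an unoptimized illustration; the 3x3 window and reflected borders are choices made here, not taken from the patent (NumPy only).

```python
import numpy as np

def median_filter(img, k=3):
    """Replace each pixel with the median of its k x k neighbourhood
    (edge pixels use a reflected border), suppressing isolated noise."""
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single impulse ("salt") pixel in a flat region is removed entirely.
flat = np.full((5, 5), 10, dtype=np.uint8)
flat[2, 2] = 255  # isolated noise point
clean = median_filter(flat)
```

The isolated 255 impulse disappears because the median of its 3x3 neighbourhood is the background value, which is exactly the "eliminating isolated noise points" behaviour described above.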
Referring to FIG. 11, an embodiment of the present application provides an electronic device 100, including a processor 20, a memory 30, and one or more programs 32, wherein the one or more programs 32 are stored in the memory 30 and executed by the processor 20, the programs 32 comprising instructions for performing the above image processing method.
Referring also to FIG. 12, the present application provides a non-volatile computer-readable storage medium 40 containing a computer program which, when executed by one or more processors 20, causes the processors 20 to perform the above image processing method.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "illustrative embodiment", "example", "specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. In this specification, schematic references to these terms do not necessarily refer to the same embodiment or example, and the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Moreover, where not mutually contradictory, those skilled in the art may combine different embodiments or examples described in this specification and the features thereof.
Any description of a process or method in the flowcharts or otherwise described herein may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing a specified logical function or step of the process, and the scope of the preferred embodiments of the present application includes alternative implementations in which functions may be performed out of the order shown or discussed, including substantially concurrently or in the reverse order depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present application belong.
Although embodiments of the present application have been shown and described above, it should be understood that they are exemplary and are not to be construed as limiting the present application; those of ordinary skill in the art may make changes, modifications, substitutions, and variations to the above embodiments within the scope of the present application.
The above examples express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be noted that those of ordinary skill in the art may make further variations and improvements without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be defined by the appended claims.

Claims (10)

  1. An image processing method, characterized by comprising:
    acquiring image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images;
    aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters;
    processing the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and
    synthesizing the first processed pixel image and the second processed pixel image to obtain a target image.
  2. The image processing method according to claim 1, wherein the aligning and fusing the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters comprises:
    finding mutually matching pixels in the multiple frames of first pixel images to calculate the alignment model of the multiple frames of first pixel images;
    aligning the multiple frames of first pixel images according to the alignment model; and
    fusing the aligned multiple frames of first pixel images to obtain the first processed pixel image and the fusion parameters.
  3. The image processing method according to claim 2, wherein the finding mutually matching pixels in the multiple frames of first pixel images to calculate the alignment model of the multiple frames of first pixel images comprises:
    calculating scaling-and-rotation parameters and displacement parameters reflected by the mutually matching pixels; and
    obtaining the alignment model according to the scaling-and-rotation parameters and the displacement parameters.
  4. The image processing method according to claim 1, wherein the processing the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image comprises:
    aligning the multiple frames of second pixel images according to the alignment model; and
    fusing the aligned multiple frames of second pixel images according to the fusion parameters to obtain the second processed pixel image.
  5. The image processing method according to claim 1, wherein the image data includes a plurality of minimal repeating units, each minimal repeating unit includes a plurality of pixel units, and each pixel unit includes a plurality of color pixels and panchromatic pixels, the color pixels being arranged in a first diagonal direction and the panchromatic pixels in a second diagonal direction different from the first diagonal direction, and the acquiring image data from an image sensor according to a preset image readout mode to obtain a first pixel image and a second pixel image comprises:
    acquiring the color pixels in the first diagonal direction to generate the first pixel image; and
    acquiring the panchromatic pixels in the second diagonal direction to generate the second pixel image.
  6. The image processing method according to claim 1, wherein the synthesizing the first processed pixel image and the second processed pixel image to obtain a target image comprises:
    synthesizing the first processed pixel image and the second processed pixel image based on a median filtering algorithm to generate the target image.
  7. The image processing method according to claim 1, wherein the first pixel image includes R pixels, G pixels, and B pixels arranged in a Bayer array, and the second pixel image includes W pixels arranged in an array.
  8. An image processing apparatus, characterized by comprising:
    a pre-processing module configured to acquire image data from an image sensor according to a preset image readout mode to obtain multiple frames of first pixel images and multiple frames of second pixel images;
    a first multi-frame processing module configured to align and fuse the multiple frames of first pixel images to obtain a first processed pixel image, an alignment model, and fusion parameters;
    a second multi-frame processing module configured to process the multiple frames of second pixel images according to the alignment model and the fusion parameters, respectively, to obtain a second processed pixel image; and
    a synthesis module configured to synthesize the first processed pixel image and the second processed pixel image to obtain a target image.
  9. An electronic device, characterized by comprising an image sensor, a processor, and a memory; and
    one or more programs, wherein the one or more programs are stored in the memory and executed by the processor, the programs including instructions for performing the image processing method according to any one of claims 1-7.
  10. A non-volatile computer-readable storage medium containing a computer program, characterized in that, when the computer program is executed by one or more processors, the processors are caused to perform the image processing method according to any one of claims 1-7.
PCT/CN2021/089704 2021-04-25 2021-04-25 Image processing method, processing apparatus, electronic device, and storage medium WO2022226702A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
PCT/CN2021/089704 WO2022226702A1 (zh) 2021-04-25 2021-04-25 Image processing method, processing apparatus, electronic device, and storage medium
CN202180095061.9A CN116982071A (zh) 2021-04-25 2021-04-25 Image processing method, processing apparatus, electronic device, and storage medium
US18/493,934 US20240054613A1 (en) 2021-04-25 2023-10-25 Image processing method, imaging processing apparatus, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2021/089704 WO2022226702A1 (zh) 2021-04-25 2021-04-25 Image processing method, processing apparatus, electronic device, and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US18/493,934 Continuation US20240054613A1 (en) 2021-04-25 2023-10-25 Image processing method, imaging processing apparatus, electronic device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022226702A1 true WO2022226702A1 (zh) 2022-11-03

Family

ID=83847525

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/089704 WO2022226702A1 (zh) 2021-04-25 2021-04-25 Image processing method, processing apparatus, electronic device, and storage medium

Country Status (3)

Country Link
US (1) US20240054613A1 (zh)
CN (1) CN116982071A (zh)
WO (1) WO2022226702A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116386089A (zh) * 2023-06-05 2023-07-04 季华实验室 Human posture estimation method, apparatus, device, and storage medium for motion scenes

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915940A (zh) * 2015-06-03 2015-09-16 厦门美图之家科技有限公司 Image denoising method and system based on image alignment
CN108288253A (zh) * 2018-01-08 2018-07-17 厦门美图之家科技有限公司 HDR image generation method and apparatus
CN109584179A (zh) * 2018-11-29 2019-04-05 厦门美图之家科技有限公司 Convolutional neural network model generation method and image quality optimization method
CN111275653A (zh) * 2020-02-28 2020-06-12 北京松果电子有限公司 Image denoising method and apparatus
US20200302582A1 (en) * 2019-03-19 2020-09-24 Apple Inc. Image fusion architecture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104915940A (zh) * 2015-06-03 2015-09-16 厦门美图之家科技有限公司 Image denoising method and system based on image alignment
CN108288253A (zh) * 2018-01-08 2018-07-17 厦门美图之家科技有限公司 HDR image generation method and apparatus
CN109584179A (zh) * 2018-11-29 2019-04-05 厦门美图之家科技有限公司 Convolutional neural network model generation method and image quality optimization method
US20200302582A1 (en) * 2019-03-19 2020-09-24 Apple Inc. Image fusion architecture
CN111275653A (zh) * 2020-02-28 2020-06-12 北京松果电子有限公司 Image denoising method and apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116386089A (zh) * 2023-06-05 2023-07-04 季华实验室 Human posture estimation method, apparatus, device, and storage medium for motion scenes
CN116386089B (zh) * 2023-06-05 2023-10-31 季华实验室 Human posture estimation method, apparatus, device, and storage medium for motion scenes

Also Published As

Publication number Publication date
CN116982071A (zh) 2023-10-31
US20240054613A1 (en) 2024-02-15

Similar Documents

Publication Publication Date Title
JP6767543B2 (ja) Capturing and processing of images using a monolithic camera array with heterogeneous imagers
US20230017746A1 (en) Image acquisition method, electronic device, and non-transitory computerreadable storage medium
US9681057B2 (en) Exposure timing manipulation in a multi-lens camera
WO2022007469A1 (zh) Image acquisition method, camera assembly, and mobile terminal
WO2021196553A1 (zh) High-dynamic-range image processing system and method, electronic device, and readable storage medium
CN111246064B (zh) Image processing method, camera assembly, and mobile terminal
WO2017159027A1 (ja) Imaging control apparatus, imaging control method, and imaging apparatus
WO2010090025A1 (ja) Imaging processing apparatus
JP2017118197A (ja) Image processing apparatus, image processing method, and imaging apparatus
CN116684752A (zh) Image sensor and operating method thereof
US11902674B2 (en) Image acquisition method, camera assembly, and mobile terminal
US11758289B2 (en) Image processing method, image processing system, electronic device, and readable storage medium
US20240054613A1 (en) Image processing method, imaging processing apparatus, electronic device, and storage medium
RU2633758C1 (ru) High-sensitivity television camera for panoramic computer surveillance
JPH10271380A (ja) 改善された性能特性を有するディジタル画像を生成する方法
WO2022226701A1 (zh) Image processing method, processing apparatus, electronic device, and storage medium
WO2022088310A1 (zh) Image processing method, camera assembly, and mobile terminal
RU2710779C1 (ru) "Ring" color-image photodetector device for panoramic television-computer surveillance
JP6069857B2 (ja) Imaging apparatus
RU2579003C1 (ru) Computer system device for panoramic color-image scanning
RU2641287C1 (ru) Color-image television camera for panoramic computer scanning
RU2564678C1 (ru) Computer system for panoramic television surveillance with increased sensitivity
RU2564091C1 (ru) Computer system for panoramic television surveillance with increased sensitivity at the outer periphery of the ring image
RU2570348C1 (ru) Computer system for panoramic color-image television surveillance
RU2709409C1 (ru) Computer system device for television all-round viewing of the inner surface of large-diameter pipes and pipelines

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21938193

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 202180095061.9

Country of ref document: CN

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21938193

Country of ref document: EP

Kind code of ref document: A1