WO2017107700A1 - A method and terminal for implementing image registration - Google Patents

Info

Publication number
WO2017107700A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
registration
algorithm
unit
frame image
Prior art date
Application number
PCT/CN2016/105706
Other languages
English (en)
French (fr)
Inventor
戴向东
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2017107700A1 publication Critical patent/WO2017107700A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images

Definitions

  • This document relates to, but is not limited to, image processing technology, and more particularly to a method and terminal for implementing image registration.
  • Common denoising algorithms include single-frame linear filtering algorithms and multi-frame fusion denoising algorithms; single-frame linear filtering algorithms include Gaussian filtering, bilateral filtering, non-local means filtering, and the like, and their denoising effect is poor.
  • the multi-frame fusion denoising algorithm performs image fusion on multiple captured images, and its denoising effect is better than that of the single-frame linear filtering algorithms.
  • this kind of algorithm takes into account that Gaussian noise occurs with relatively high probability in daily scenes and that the expected mean value of Gaussian noise is 0, so fusing multiple noisy images captured at different times can effectively eliminate Gaussian noise and achieve the purpose of denoising.
  • the difficulty of the multi-frame fusion noise reduction algorithm is that images captured at different moments easily exhibit pixel misalignment. Since the multi-frame fusion noise reduction algorithm performs image fusion at the pixel level, the image registration accuracy needs to be within one pixel; if the image registration cannot meet this accuracy requirement, the pixels will be blurred and misaligned after image fusion. Related image registration methods are slow and inefficient, cannot meet the needs of the multi-frame fusion noise reduction algorithm for image fusion, and are prone to pixel blur errors that degrade the image quality after image fusion.
  • Embodiments of the present invention provide a method and a terminal for implementing image registration, which can improve the quality of image fusion performed by the multi-frame fusion noise reduction algorithm.
  • An embodiment of the present invention provides a terminal for implementing image registration, including: an extracting unit, a pairing unit, and a registration unit; wherein
  • the extracting unit is configured to extract, by using a feature extraction algorithm, feature points of a multi-frame image of the same scene at different times, expressed by a feature description operator;
  • the pairing unit is configured to pair the feature points of each extracted frame image
  • the registration unit is configured to perform image registration on the multi-frame image that completes the feature point pairing by using a preset image registration transformation model.
  • the terminal further includes:
  • the obtaining unit is configured to acquire the multi-frame image of the same scene at the different moments.
  • the pairing unit is configured to pair the feature points extracted from each frame image by the K-nearest-neighbor (kNN) algorithm.
  • the registration unit is configured to perform image registration on the multi-frame image of the completed feature point pairing by the perspective transformation model.
  • the terminal further includes:
  • the exclusion unit is configured to eliminate, by an error matching elimination algorithm, erroneous matches in the pairing result of the pairing unit before the registration unit performs image registration.
  • the terminal further includes:
  • the weighting unit is configured to perform, after the registration unit performs image registration, weighted averaging on each frame image of the image registration to obtain a denoised image.
  • the feature extraction algorithm comprises: an oriented BRIEF (ORB) algorithm.
  • the error matching elimination algorithm comprises: a random sample consensus (RANSAC) algorithm.
  • the weighting unit is configured to perform weighted averaging on each frame image that completes image registration to obtain a denoised image by:
  • D(x, y) = (1/n) · Σ_{i=1..n} I_i(x, y), where
  • D(x, y) is the denoised pixel value of the image at the pixel point coordinates (x, y);
  • I_i(x, y) is the true pixel value of the i-th image at the pixel point coordinates (x, y), and n is the number of images.
  • the acquiring unit is configured to acquire the multi-frame image of the same scene at different times in the following manner:
  • Multi-frame images of the same scene at different times are acquired by fast continuous shooting of the same scene using the same shooting parameters.
  • the present application further provides a method for implementing image registration, including:
  • extracting, by using a feature extraction algorithm, feature points of a multi-frame image of the same scene at different times, expressed by a feature description operator;
  • pairing the feature points of each extracted frame image;
  • performing image registration on the multi-frame image that completes the feature point pairing by using a preset image registration transformation model.
  • the method further includes: acquiring a multi-frame image of the same scene at the different moments.
  • pairing the feature points extracted for each frame image includes:
  • the feature points extracted from each frame image are paired by the K-nearest-neighbor (kNN) algorithm.
  • the preset image registration transformation model is a perspective transformation model.
  • the method further includes: excluding the erroneous matching in the paired result by the error matching exclusion algorithm.
  • the method further includes: weighting and averaging each frame image of the image registration to obtain a denoised image.
  • the feature extraction algorithm comprises: an oriented BRIEF (ORB) algorithm.
  • the error matching exclusion algorithm comprises: a random sample consensus (RANSAC) algorithm.
  • performing weighted averaging on each frame image that completes image registration to obtain a denoised image includes:
  • D(x, y) = (1/n) · Σ_{i=1..n} I_i(x, y), where
  • D(x, y) is the denoised pixel value of the image at the pixel point coordinates (x, y);
  • I_i(x, y) is the true pixel value of the i-th image at the pixel point coordinates (x, y), and n is the number of images.
  • acquiring the multi-frame image of the same scene at different times includes:
  • Multi-frame images of the same scene at different times are acquired by fast continuous shooting of the same scene using the same shooting parameters.
  • the technical solution of the embodiment of the present invention includes: using an oriented BRIEF (ORB) feature extraction algorithm to extract feature points of a multi-frame image of the same scene at different times, expressed by a feature description operator; and pairing the extracted feature points of each frame image;
  • the preset image registration transformation model performs image registration on the multi-frame image that completes the feature point pairing.
  • the feature points of the multi-frame image of the same scene at different times are extracted by the ORB feature extraction algorithm, and the feature points of each frame image are paired, and the image registration is performed by using a preset image registration transformation model.
  • the speed and efficiency of image registration are improved, and the image quality of multi-frame fusion noise reduction algorithm image fusion is improved.
  • FIG. 1 is a schematic diagram showing the hardware structure of an optional terminal for implementing various embodiments of the present invention
  • FIG. 2 is a flowchart of a method for implementing image registration according to an embodiment of the present invention
  • FIG. 3 is a registration difference image for image registration using a perspective transformation model.
  • FIG. 4 is a registration difference image for image registration using an affine transformation model.
  • FIG. 5 is a flowchart of another method for implementing image registration according to the present invention.
  • FIG. 6 is a schematic diagram of feature point pairing on two frames of images according to an embodiment of the present invention.
  • FIG. 7 is a schematic diagram of the method for eliminating mismatching according to an embodiment of the present invention.
  • FIG. 8 is a schematic diagram of image jitter according to an embodiment of the present invention.
  • FIG. 9(a) is an image before denoising according to an embodiment of the present invention.
  • FIG. 9(b) is an image after denoising according to an embodiment of the present invention.
  • FIG. 10(a) is a first partial schematic diagram of an image before denoising according to an embodiment of the present invention.
  • FIG. 10(b) is a first partial schematic diagram of an image after denoising according to an embodiment of the present invention.
  • FIG. 11(a) is a second partial schematic view showing an image before denoising according to an embodiment of the present invention.
  • FIG. 11(b) is a second partial schematic diagram of an image after denoising according to an embodiment of the present invention.
  • FIG. 12 is a structural block diagram of a terminal for implementing image registration according to an embodiment of the present invention.
  • A terminal embodying various embodiments of the present invention will now be described with reference to the accompanying drawings.
  • suffixes such as "module," "component," or "unit" used to denote an element merely facilitate the description of the embodiments of the present invention and do not have a specific meaning per se. Therefore, "module" and "component" can be used interchangeably.
  • FIG. 1 is a schematic diagram of the hardware structure of an optional terminal for implementing various embodiments of the present invention; as shown in FIG. 1,
  • the terminal 100 may include an A/V (Audio/Video) input unit 120, an output unit 150, a memory 160, a controller 180, a power supply unit 190, and the like.
  • Figure 1 shows a terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The components of the terminal will be described in detail below.
  • the A/V input unit 120 is arranged to receive a video signal.
  • the A/V input unit 120 may include a camera 121 that processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium). Two or more cameras 121 may be provided depending on the configuration of the terminal.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151.
  • the display unit 151 can display information processed in the terminal 100.
  • the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be configured to be transparent to allow a user to view from the outside, which may be referred to as a transparent display, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display or the like.
  • the terminal 100 may include two or more display units (or other display devices) according to a particular desired embodiment, for example, the terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be set to detect touch input pressure as well as touch input position and touch input area.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (eg, SD or DX memory, etc.), a random access memory (RAM), a static random access memory ( SRAM), read only memory (ROM), electrically erasable programmable read only memory (EEPROM), programmable read only memory (PROM), magnetic memory, magnetic disk, optical disk, and the like.
  • the terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • Controller 180 typically controls the overall operation of the terminal.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented through the use of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, or at least one of the electronic units designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160.
  • the terminal has been described in terms of its function.
  • a slide type terminal among various types of terminals such as a folding type, a bar type, a swing type, and a slide type will be described as an example below. However, the embodiments of the present invention can be applied to any type of terminal and are not limited to the slide type terminal.
  • FIG. 2 is a flowchart of a method for implementing image registration according to an embodiment of the present invention. As shown in FIG. 2, the method includes:
  • Step 200: Using an oriented BRIEF (ORB) feature extraction algorithm, extract feature points of a multi-frame image of the same scene at different times, expressed by a feature description operator.
  • the currently popular feature description operator algorithms include SIFT, SURF, FAST, ORB, and the like.
  • the predecessor of the ORB (Oriented BRIEF) feature extraction algorithm is the BRIEF feature extraction algorithm.
  • the BRIEF feature extraction algorithm was proposed by Calonder of EPFL at the European Conference on Computer Vision (ECCV) 2010.
  • BRIEF is a feature description operator that can be quickly calculated and expressed in binary code. Its main idea is to randomly select several pairs of points near the feature point and combine the comparisons of the gray values of these pairs into a binary string, which serves as the feature description operator of that feature point.
  • the biggest advantage of the BRIEF feature extraction algorithm is its high speed; its main disadvantages are the lack of rotation invariance and poor noise resistance.
  • the ORB feature extraction algorithm improves on the shortcomings of the BRIEF feature extraction algorithm, giving the algorithm rotation invariance and better noise resistance while still maintaining the calculation speed of the BRIEF feature extraction algorithm.
  • the calculation speed of the ORB algorithm is 100 times that of the SIFT algorithm and 10 times that of the SURF algorithm.
  • Take the SIFT algorithm as an example: in the SIFT algorithm, the gradient histogram sets the direction of the first peak as the main direction of the feature point, and if a secondary peak reaches 80% of the first peak, its direction is also set as a main direction, which makes the algorithm relatively time consuming.
  • in the ORB algorithm, the main direction of the feature point is calculated by the image moment, and the feature description operator can then be extracted according to the main direction.
  • in addition, the ORB algorithm does not directly compare single pixels, but selects an area centered on the pixel as the comparison object, thus improving the noise resistance of the algorithm.
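  • The BRIEF-style binary comparison described above can be sketched in a few lines of Python. This is an illustrative toy using only NumPy (the function name and parameters are our own, and it omits ORB's FAST keypoint detection and moment-based orientation):

```python
import numpy as np

def brief_descriptor(img, kp, n_bits=128, patch=8, seed=42):
    """Toy BRIEF-style operator: compare gray values of random point
    pairs around the keypoint and pack the outcomes into a bit string."""
    rng = np.random.default_rng(seed)            # fixed sampling pattern
    pairs = rng.integers(-patch, patch + 1, size=(n_bits, 4))
    y, x = kp
    bits = [int(img[y + dy1, x + dx1] < img[y + dy2, x + dx2])
            for dy1, dx1, dy2, dx2 in pairs]
    return np.array(bits, dtype=np.uint8)        # binary description operator

img = np.random.default_rng(1).integers(0, 256, size=(64, 64))
desc = brief_descriptor(img, kp=(32, 32))
```

  • Real ORB additionally rotates the sampling pattern by the keypoint's main direction and compares small areas rather than single pixels, as the text above notes.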
  • the method further includes: acquiring the multi-frame image of the same scene at the different moments.
  • the method for acquiring multi-frame images of the same scene at different times is the same as the image acquisition method in the multi-frame fusion noise reduction algorithm: according to a preset number of images to capture (usually set to 3), the same scene is rapidly and continuously shot using the same shooting parameters, for example the same exposure, focusing, and metering, thereby obtaining the multi-frame image of the same scene at different times for image registration in the embodiment of the present invention.
  • Step 201: Pair the feature points of each extracted frame image.
  • pairing the feature points extracted for each frame image includes:
  • the feature points extracted from each frame image are paired by the K-nearest-neighbor (kNN) algorithm.
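  • As a sketch, kNN pairing of binary descriptors can be done with Hamming distances plus a ratio test that discards ambiguous pairs (the function name and the 0.75 ratio are illustrative choices, not taken from the patent):

```python
import numpy as np

def knn_match(desc_a, desc_b, ratio=0.75):
    """For each descriptor in desc_a, find its two nearest neighbours in
    desc_b by Hamming distance; keep the pair only if the best match is
    clearly better than the second best (Lowe's ratio test)."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != d, axis=1)  # Hamming distances
        j1, j2 = np.argsort(dists)[:2]                 # 2 nearest neighbours
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, int(j1)))               # (index in a, index in b)
    return matches
```

  • Pairs whose two best candidates are nearly equally close are dropped here; the remaining mismatches are handled later by the error matching elimination step.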
  • Step 202 Perform image registration on the multi-frame image that completes the feature point pairing by using a preset image registration transformation model.
  • the preset image registration transformation model is a perspective transformation model.
  • the perspective transformation model is more flexible: a perspective transformation can transform a rectangle into a trapezoid, and it describes a method of projecting one plane in space onto another plane.
  • the matrix expression of the perspective transformation model is:

        [x']   [a00  a01  a02]   [x]
        [y'] = [a10  a11  a12] · [y]
        [w']   [a20  a21   1 ]   [1]

  • the matrix M is the matrix of image registration, where a02 and a12 are the displacement parameters of image registration; a00, a01, a10 and a11 are the scaling and rotation parameters of image registration; a20 and a21 are the amounts of deformation in the horizontal and vertical directions of the image registration; (x', y', w') is the result obtained after the pixel (x, y) is registered.
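  • Applying the registration matrix to a pixel involves a divide by the homogeneous coordinate w'. A minimal sketch (the function name and example matrices are illustrative):

```python
import numpy as np

def apply_perspective(M, x, y):
    """Map pixel (x, y) through the 3x3 registration matrix M,
    then normalise by the homogeneous coordinate w'."""
    xp, yp, w = M @ np.array([x, y, 1.0])
    return xp / w, yp / w

# Pure displacement: a02 = 5, a12 = -3; no rotation, scaling or deformation.
M = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])
```

  • When a20 or a21 is nonzero, w' depends on the pixel position, which is exactly what lets the model map a rectangle to a trapezoid.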
  • the perspective transformation model is mainly applicable when a handheld terminal performs shooting: the shaking motion of the mobile phone is basically not within a single plane, and image registration using the perspective transformation model can effectively solve the problem that multi-frame images are no longer in the same plane due to shooting jitter.
  • to show that the perspective transformation model has a better effect, image registration using the perspective transformation model is compared with image registration using the affine transformation model.
  • the accuracy of image registration is measured by the registration difference image, whose principle is to take the difference between the registered image and the reference image.
  • FIG. 3 is the registration difference image when the perspective transformation model is used for image registration, and FIG. 4 is the registration difference image when the affine transformation model is used; it can be determined by comparison that the registration difference image of the perspective transformation model has fewer white points in the circled region than that of the affine transformation model, so the perspective transformation model is better for image registration.
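  • The registration difference measure above can be sketched as follows: subtract the registered frame from the reference frame and count the residual "white points" (the threshold value and names are illustrative):

```python
import numpy as np

def registration_difference(registered, reference, thresh=10):
    """Absolute per-pixel difference between the registered frame and the
    reference; pixels above `thresh` appear as white points in the
    difference image, so fewer white points means better registration."""
    diff = np.abs(registered.astype(int) - reference.astype(int))
    return diff, int((diff > thresh).sum())

reference = np.zeros((32, 32), dtype=np.uint8)
reference[8:16, 8:16] = 200                      # a bright square
perfectly_aligned = reference.copy()
misaligned = np.roll(reference, 2, axis=1)       # 2-pixel horizontal shift
```

  • A perfectly registered frame yields zero white points, while residual misalignment shows up immediately as white points along edges.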
  • the method of the embodiment of the present invention further includes: eliminating erroneous matches in the pairing result by a random sample consensus (RANSAC) algorithm.
  • the method of the embodiment of the present invention uses the RANSAC algorithm to eliminate erroneous matches, which guarantees the accuracy of the subsequently calculated spatial transformation matrix.
  • the method of the present invention can also use any other algorithm capable of eliminating erroneous matches; the RANSAC algorithm is only an optional embodiment of the method of the present invention.
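  • A toy RANSAC sketch for eliminating mismatches, using a pure-translation model for brevity (the patented method fits a perspective transformation; the names, iteration count, and tolerance here are illustrative):

```python
import numpy as np

def ransac_inliers(src, dst, iters=200, tol=2.0, seed=0):
    """Randomly hypothesise a translation from one pair, count how many
    pairs agree with it, and keep the largest consensus set; pairs
    outside it are treated as erroneous matches and eliminated."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))            # minimal sample: one pair
        t = dst[i] - src[i]                   # hypothesised translation
        mask = np.linalg.norm(dst - (src + t), axis=1) < tol
        if mask.sum() > best.sum():
            best = mask
    return best
```

  • The surviving inliers are then used to compute the spatial transformation matrix, which is why eliminating mismatches first protects its accuracy.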
  • the method of the embodiment of the present invention further includes: performing weighted averaging on each frame image that completes image registration to obtain a denoised image.
  • the process is as follows: assuming n images have completed image registration, the registered image sequence is [I_1, I_2, ..., I_n], where I_i represents the i-th registered image;
  • I_i(x, y) is the true pixel value of the i-th image at the pixel point coordinates (x, y);
  • N_i(x, y) is the observed pixel value of the i-th image at the pixel point coordinates (x, y) after being disturbed by noise;
  • D(x, y) is the denoised pixel value of the image at the pixel point coordinates (x, y), obtained as the average of the observed pixel values:
  • D(x, y) = (1/n) · Σ_{i=1..n} N_i(x, y)   (1)
  • writing N_i(x, y) = I_i(x, y) + noise and splitting the noise interference term into an independent sum, with the noise interference set to satisfy the Gaussian model the average value of the noise term is basically 0, so formula (1) reduces to:
  • D(x, y) ≈ (1/n) · Σ_{i=1..n} I_i(x, y)   (2)
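  • The averaging step can be sketched directly with NumPy: averaging the registered frames makes the zero-mean Gaussian noise terms cancel (the frame contents below are synthetic, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = np.full((64, 64), 100.0)                      # noise-free scene I_i(x, y)
frames = [truth + rng.normal(0.0, 10.0, truth.shape)  # observed N_i = I_i + noise
          for _ in range(8)]                          # n = 8 registered frames

denoised = np.mean(frames, axis=0)                    # D(x, y), equal weights 1/n

single_err = np.abs(frames[0] - truth).mean()
fused_err = np.abs(denoised - truth).mean()
```

  • With n frames the residual noise standard deviation drops by roughly a factor of sqrt(n), which is the whole benefit of multi-frame fusion denoising.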
  • the feature points of the multi-frame image of the same scene at different times are extracted by the ORB feature extraction algorithm, and the feature points of each frame image are paired, and the image registration is performed by using a preset image registration transformation model.
  • the speed and efficiency of image registration are improved, and the image quality of multi-frame fusion noise reduction algorithm image fusion is improved.
  • FIG. 5 is a flowchart of another method for implementing image registration according to an embodiment of the present invention. As shown in FIG. 5, the method includes:
  • Step 500 Acquire multi-frame images of the same scene at different times.
  • the method for acquiring multi-frame images of the same scene at different times is the same as the image acquisition method in the multi-frame fusion noise reduction algorithm: according to a preset number of images to capture (usually set to 3), the same scene is rapidly and continuously shot using the same shooting parameters, for example the same exposure, focusing, and metering, thereby obtaining the multi-frame image of the same scene at different times for image registration in the embodiment of the present invention.
  • Step 501 Using an ORB feature extraction algorithm to extract feature points of a multi-frame image of the same scene at different times by using a feature description operator;
  • the currently popular feature description operator algorithms include SIFT, SURF, FAST, ORB, and the like.
  • the predecessor of the ORB (Oriented BRIEF) feature extraction algorithm is the BRIEF feature extraction algorithm.
  • the BRIEF feature extraction algorithm was proposed by Calonder of EPFL at the European Conference on Computer Vision (ECCV) 2010.
  • BRIEF is a feature description operator that can be quickly calculated and expressed in binary code. Its main idea is to randomly select several pairs of points near the feature point and combine the comparisons of the gray values of these pairs into a binary string, which serves as the feature description operator of that feature point.
  • the biggest advantage of the BRIEF feature extraction algorithm is its high speed; its main disadvantages are the lack of rotation invariance and poor noise resistance.
  • the ORB feature extraction algorithm improves on the shortcomings of the BRIEF feature extraction algorithm, giving the algorithm rotation invariance and better noise resistance while still maintaining the calculation speed of the BRIEF feature extraction algorithm.
  • the calculation speed of the ORB algorithm is 100 times that of the SIFT algorithm and 10 times that of the SURF algorithm.
  • take the SIFT algorithm as an example: the gradient histogram sets the direction of the first peak as the main direction of the feature point, and if a secondary peak reaches 80% of the first peak, its direction is also set as a main direction, which makes the algorithm relatively time consuming.
  • in the ORB algorithm, the main direction of the feature point is calculated by the image moment; after obtaining the main direction, the feature description operator can be extracted according to it.
  • in addition, the ORB algorithm does not directly compare single pixels, but selects an area centered on the pixel as the comparison object, thus improving the noise resistance of the algorithm.
  • Step 502: Pair the feature points of each extracted frame image.
  • pairing the feature points extracted for each frame image includes:
  • the feature points extracted from each frame image are paired by the kNN algorithm.
  • FIG. 6 is a schematic diagram of feature point pairing on two frames of images according to an embodiment of the present invention. As shown in FIG. 6, two frames of images on the left and right sides complete pairing of feature points.
  • Step 503: Eliminate erroneous matches in the pairing result by using the RANSAC algorithm.
  • the method of the embodiment of the present invention uses the random sample consensus (RANSAC) algorithm to eliminate erroneous matches, which guarantees the accuracy of the subsequently calculated spatial transformation matrix.
  • the method of the present invention can also use any other algorithm capable of eliminating erroneous matches; the RANSAC algorithm is only an optional embodiment of the method of the present invention.
  • FIG. 7 is a schematic diagram of the method for eliminating mismatching according to an embodiment of the present invention. As shown in FIG. 7, the feature points of the erroneous matching in the two frames of images are excluded.
  • Step 504 Perform image registration on the multi-frame image that completes the feature point pairing by using a preset image registration transformation model.
  • the preset image registration transformation model is a perspective transformation model.
  • the perspective transformation model is more flexible: a perspective transformation can transform a rectangle into a trapezoid, and it describes a method of projecting one plane in space onto another plane.
  • the matrix expression of the perspective transformation model is:

        [x']   [a00  a01  a02]   [x]
        [y'] = [a10  a11  a12] · [y]
        [w']   [a20  a21   1 ]   [1]
  • the perspective transformation model is mainly applicable when a handheld terminal performs shooting: the shaking motion of the mobile phone is basically not within a single plane, and image registration using the perspective transformation model can effectively solve the problem that multi-frame images are no longer in the same plane due to shooting jitter.
  • FIG. 8 is a schematic diagram of image dithering according to an embodiment of the present invention. As shown in FIG. 8, during the image capturing process, the mobile phone is shaken from the solid line position 1 to the broken line position 2 due to the jitter.
  • Step 505 Perform weighted averaging on each frame image of the image registration to obtain a denoised image.
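  • Steps 500 to 505 can be sketched end-to-end in NumPy. This toy version substitutes pure-translation jitter and an exhaustive shift search for the real ORB + kNN + RANSAC + perspective-warp pipeline (all names and parameters are illustrative):

```python
import numpy as np

def register_and_fuse(frames, max_shift=3):
    """Align every frame to the first by searching small integer shifts
    (a stand-in for feature-based registration), then fuse by averaging
    (step 505) to obtain the denoised image."""
    reference = frames[0]
    aligned = []
    for f in frames:
        shifts = [(dy, dx)
                  for dy in range(-max_shift, max_shift + 1)
                  for dx in range(-max_shift, max_shift + 1)]
        # pick the shift whose rolled-back frame best matches the reference
        best = min(shifts, key=lambda s: np.abs(
            np.roll(f, s, axis=(0, 1)).astype(int) - reference.astype(int)).sum())
        aligned.append(np.roll(f, best, axis=(0, 1)))
    return np.mean(aligned, axis=0)
```

  • In the patented method the alignment step would instead estimate a perspective transformation from paired, RANSAC-filtered feature points; only the final fusion step is the same.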
  • FIG. 9(a) is an image before denoising according to an embodiment of the present invention
  • FIG. 9(b) is an image after denoising according to an embodiment of the present invention; analyzing the display effect of the overall image in FIG. 9(a) and FIG. 9(b), the image quality after denoising in FIG. 9(b) is effectively improved.
  • FIG. 10(a) is a first partial schematic diagram of an image before denoising according to an embodiment of the present invention
  • FIG. 10(b) is a first partial schematic diagram of an image after denoising according to an embodiment of the present invention.
  • analyzing the display effect of the first partial image in FIG. 10(a) and FIG. 10(b), the image clarity of the first portion after denoising in FIG. 10(b) is improved;
  • FIG. 11(a) is a second partial schematic diagram of an image before denoising according to an embodiment of the present invention, and FIG. 11(b) is a second partial schematic diagram of an image after denoising according to an embodiment of the present invention; analyzing the display effect of the second partial image in FIG. 11(a) and FIG. 11(b), the image clarity of the second portion after denoising in FIG. 11(b) is improved.
  • FIG. 12 is a structural block diagram of a terminal for implementing image registration according to an embodiment of the present invention. As shown in FIG. 12, the terminal includes: an extracting unit 1201, a pairing unit 1202, and a registration unit 1203.
  • the extracting unit 1201 is configured to extract, by using the oriented BRIEF (ORB) feature extraction algorithm, feature points of the multi-frame image of the same scene at different times, expressed by the feature description operator;
  • the pairing unit 1202 is configured to pair feature points of each of the extracted frame images
  • Pairing unit 1202 is set to,
  • the feature points extracted for each frame image are paired by the K nearest neighbor (knn) algorithm.
  • the registration unit 1203 is configured to perform image registration on the multi-frame image that completes the feature point pairing by using a preset image registration transformation model.
  • the registration unit 1203 is configured to perform image registration on the multi-frame image in which the feature point pairing is completed by the perspective transformation model.
  • the terminal of the embodiment of the present invention further includes an obtaining unit 1206, configured to acquire the multi-frame images of the same scene at different times;
  • the terminal of the embodiment of the present invention further includes an exclusion unit 1204, configured to eliminate incorrect matches from the pairing result of the pairing unit 1202 by the random sample consensus (RANSAC) algorithm before the registration unit 1203 performs image registration;
  • the terminal of the embodiment of the present invention further includes a weighting unit 1205, configured to perform a weighted average over each registered frame image after image registration, obtaining a denoised image.
  • the embodiment of the present invention further provides a terminal for implementing image registration, comprising: an obtaining unit, an extracting unit, a pairing unit, an exclusion unit, a registration unit and a weighting unit; wherein
  • the obtaining unit is configured to acquire the multi-frame images of the same scene at different times;
  • the obtaining unit may generally acquire the images through a camera, and the acquired images may be stored in a memory;
  • the extracting unit is configured to extract feature points of the multi-frame images of the same scene at different times with a feature description operator, using the Oriented BRIEF (ORB) feature extraction algorithm;
  • the pairing unit is configured to pair the feature points extracted from each frame image, for example by the K-nearest-neighbor (kNN) algorithm;
  • the exclusion unit is configured to eliminate incorrect matches from the pairing unit's result by the random sample consensus (RANSAC) algorithm before the registration unit performs image registration;
  • the registration unit is configured to perform image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model, for example a perspective transformation model;
  • the weighting unit is configured to perform a weighted average over each registered frame image after the registration unit completes image registration, obtaining a denoised image.
  • the embodiment of the invention further provides a computer storage medium, wherein the computer storage medium stores computer executable instructions, and the computer executable instructions are used to execute the method described in the foregoing embodiments.
  • each module/unit in the above embodiment may be implemented in the form of hardware, for example, by an integrated circuit to implement its corresponding function. It can also be implemented in the form of a software function module, for example, by a processor executing a program/instruction stored in a memory to perform its corresponding function.
  • the invention is not limited to any specific form of combination of hardware and software.
  • the above technical solution improves the speed and efficiency of image registration, and improves the image quality of the image fusion of the multi-frame fusion noise reduction algorithm.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

A method and terminal for implementing image registration, comprising: extracting feature points of multi-frame images of the same scene at different times with a feature description operator, using the Oriented BRIEF (ORB) feature extraction algorithm (200); pairing the feature points extracted from each frame image (201); and performing image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model (202). By extracting the feature points of multi-frame images of the same scene at different times with the ORB feature extraction algorithm, pairing the extracted feature points of each frame image, and then registering the images with a preset image registration transformation model, the above technical solution increases the speed and efficiency of image registration and improves the image quality of the fusion performed by multi-frame fusion noise-reduction algorithms.

Description

Method and Terminal for Implementing Image Registration — Technical Field
This document relates to, but is not limited to, image processing technology, and in particular to a method and terminal for implementing image registration.
Background
When capturing images with a terminal, the imaging is easily disturbed by noise. Taking shooting with a hand-held terminal as an example, the images must be denoised in order to obtain a high-quality result.
Common denoising algorithms include single-frame linear filtering algorithms and multi-frame fusion noise-reduction algorithms. Single-frame linear filtering algorithms include, for example, Gaussian filtering, bilateral filtering and non-local means filtering; their denoising effect is relatively poor. Multi-frame fusion noise-reduction algorithms fuse several captured images and denoise better than single-frame linear filtering. These algorithms consider that Gaussian noise is the most probable noise in everyday scenes and exploit the fact that its expected mean is 0: by fusing several noisy images captured at different times, Gaussian noise can be effectively eliminated and the denoising goal achieved.
The difficulty of multi-frame fusion noise reduction is that pixels in images captured at different times are easily misaligned. Since the fusion performed by these algorithms is pixel-level, the registration accuracy must be within one pixel; if the registration cannot meet this accuracy requirement, blurred and misaligned pixels appear in the fused image. Related image registration methods are slow and inefficient, cannot support the image fusion of multi-frame fusion noise-reduction algorithms, and pixel blurring errors easily occur, degrading the quality of the fused image.
Summary of the Invention
The following is an overview of the subject matter described in detail herein. This overview is not intended to limit the scope of the claims.
Embodiments of the present invention provide a method and terminal for implementing image registration that can improve the quality of the image fusion performed by multi-frame fusion noise-reduction algorithms.
An embodiment of the present invention provides a terminal for implementing image registration, comprising: an extracting unit, a pairing unit and a registration unit; wherein
the extracting unit is configured to extract feature points of multi-frame images of the same scene at different times with a feature description operator, using a feature extraction algorithm;
the pairing unit is configured to pair the feature points extracted from each frame image;
the registration unit is configured to perform image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model.
Optionally, the terminal further includes:
an obtaining unit configured to acquire the multi-frame images of the same scene at different times.
Optionally, the pairing unit is configured to pair the feature points extracted from each frame image by the K-nearest-neighbor (kNN) algorithm.
Optionally, the registration unit is configured to perform image registration on the multi-frame images whose feature points have been paired through a perspective transformation model.
Optionally, the terminal further includes:
an exclusion unit configured to eliminate incorrect matches from the pairing unit's pairing result by a mismatch-exclusion algorithm before the registration unit performs image registration.
Optionally, the terminal further includes:
a weighting unit configured to perform a weighted average over each registered frame image after the registration unit completes image registration, obtaining a denoised image.
Optionally, the feature extraction algorithm includes the Oriented BRIEF (ORB) algorithm.
Optionally, the mismatch-exclusion algorithm includes the random sample consensus (RANSAC) algorithm.
Optionally, the weighting unit is configured to perform the weighted average over each registered frame image and obtain the denoised image as follows: when the noise disturbance satisfies the Gaussian model,

$$D(x,y) = \frac{1}{n}\sum_{i=1}^{n} I_i(x,y)$$

where $D(x,y)$ is the denoised pixel value of the image at pixel coordinate $(x,y)$, $I_i(x,y)$ is the true pixel value of the $i$-th image at pixel coordinate $(x,y)$, and $n$ is the number of images.
Optionally, the obtaining unit is configured to acquire the multi-frame images of the same scene at different times as follows:
acquiring the multi-frame images of the same scene at different times by rapid continuous shooting of the same scene with identical shooting parameters.
In another aspect, the present application further provides a method for implementing image registration, comprising:
extracting feature points of multi-frame images of the same scene at different times with a feature description operator, using a feature extraction algorithm;
pairing the feature points extracted from each frame image;
performing image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model.
Optionally, the method further includes, beforehand: acquiring the multi-frame images of the same scene at different times.
Optionally, pairing the feature points extracted from each frame image includes:
pairing the feature points extracted from each frame image by the K-nearest-neighbor (kNN) algorithm.
Optionally, the preset image registration transformation model is a perspective transformation model.
Optionally, before the image registration, the method further includes: eliminating incorrect matches from the pairing result by a mismatch-exclusion algorithm.
Optionally, after the image registration, the method further includes: performing a weighted average over each registered frame image to obtain a denoised image.
Optionally, the feature extraction algorithm includes the Oriented BRIEF (ORB) algorithm.
Optionally, the mismatch-exclusion algorithm includes the random sample consensus (RANSAC) algorithm.
Optionally, performing the weighted average over each registered frame image to obtain the denoised image includes: when the noise disturbance satisfies the Gaussian model,

$$D(x,y) = \frac{1}{n}\sum_{i=1}^{n} I_i(x,y)$$

where $D(x,y)$ is the denoised pixel value of the image at pixel coordinate $(x,y)$, $I_i(x,y)$ is the true pixel value of the $i$-th image at pixel coordinate $(x,y)$, and $n$ is the number of images.
Optionally, acquiring the multi-frame images of the same scene at different times includes:
acquiring the multi-frame images of the same scene at different times by rapid continuous shooting of the same scene with identical shooting parameters.
The technical solution of the embodiment of the present invention includes: extracting feature points of multi-frame images of the same scene at different times with a feature description operator, using the Oriented BRIEF (ORB) feature extraction algorithm; pairing the feature points extracted from each frame image; and performing image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model. By extracting the feature points with the ORB feature extraction algorithm, pairing the extracted feature points of each frame image, and then registering with a preset image registration transformation model, the method of the embodiment increases the speed and efficiency of image registration and improves the image quality of the fusion performed by multi-frame fusion noise-reduction algorithms.
Other aspects will become apparent upon reading and understanding the drawings and detailed description.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the hardware structure of an optional terminal for implementing embodiments of the present invention;
FIG. 2 is a flowchart of a method for implementing image registration according to an embodiment of the present invention;
FIG. 3 is a registration difference image obtained with a perspective transformation model;
FIG. 4 is a registration difference image obtained with an affine transformation model;
FIG. 5 is a flowchart of another method for implementing image registration according to the present invention;
FIG. 6 is a schematic diagram of feature-point pairing between two frame images according to an embodiment of the present invention;
FIG. 7 is a schematic diagram after incorrect pairings have been excluded according to an embodiment of the present invention;
FIG. 8 is a schematic diagram of image jitter according to an embodiment of the present invention;
FIG. 9(a) is an image before denoising according to an embodiment of the present invention;
FIG. 9(b) is an image after denoising according to an embodiment of the present invention;
FIG. 10(a) is a first partial schematic diagram of the image before denoising according to an embodiment of the present invention;
FIG. 10(b) is a first partial schematic diagram of the image after denoising according to an embodiment of the present invention;
FIG. 11(a) is a second partial schematic diagram of the image before denoising according to an embodiment of the present invention;
FIG. 11(b) is a second partial schematic diagram of the image after denoising according to an embodiment of the present invention;
FIG. 12 is a structural block diagram of a terminal for implementing image registration according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention are described in detail below with reference to the accompanying drawings. It should be noted that, in the absence of conflict, the embodiments of this application and the features in the embodiments may be combined with one another arbitrarily.
It should be understood that the specific embodiments described here merely explain the present invention and are not intended to limit it.
Terminals implementing embodiments of the present invention are now described with reference to the drawings. In the following description, suffixes such as "module", "component" or "unit" used to denote elements are adopted only to facilitate the description of the embodiments of the present invention and have no specific meaning in themselves; "module" and "component" may therefore be used interchangeably.
FIG. 1 is a schematic diagram of the hardware structure of an optional terminal for implementing embodiments of the present invention. As shown in FIG. 1,
the terminal 100 may include an A/V (audio/video) input unit 120, an output unit 150, a memory 160, a controller 180, a power supply unit 190, and so on. FIG. 1 shows a terminal with various components, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead. The elements of the terminal are described in detail below.
The A/V input unit 120 is configured to receive video signals. The A/V input unit 120 may include a camera 121, which processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on the display unit 151. Image frames processed by the camera 121 may be stored in the memory 160 (or another storage medium); two or more cameras 121 may be provided depending on the construction of the terminal.
The output unit 150 is constructed to provide output signals in a visual, audio and/or tactile manner (for example, audio signals, video signals, alarm signals, vibration signals, and so on). The output unit 150 may include a display unit 151.
The display unit 151 may display information processed in the terminal 100. When the terminal 100 is in a video call mode or an image capture mode, the display unit 151 may display captured and/or received images, or a UI or GUI showing video or images and related functions.
The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays may be constructed to be transparent to allow viewing from the outside; these may be called transparent displays, a typical example being a TOLED (transparent OLED) display. Depending on the particular desired implementation, the terminal 100 may include two or more display units (or other display devices); for example, the terminal may include an external display unit (not shown) and an internal display unit (not shown). A touch screen may be arranged to detect touch input pressure as well as touch input position and area.
The memory 160 may store software programs for the processing and control operations executed by the controller 180, or may temporarily store data that has been or will be output (for example, a phonebook, messages, still images, video, and so on). Moreover, the memory 160 may store data about the various forms of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, and so on. Moreover, the terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 generally controls the overall operation of the terminal. The controller 180 may perform pattern recognition processing to recognize handwriting input or picture-drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various implementations described here may be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the implementations described here may be realized using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described here; in some cases, such implementations may be realized in the controller 180. For a software implementation, implementations such as procedures or functions may be realized with separate software modules that allow at least one function or operation to be performed. The software code may be implemented by a software application (or program) written in any suitable programming language; the software code may be stored in the memory 160 and executed by the controller 180.
So far, the terminal has been described in terms of its functions. Below, for brevity, the slide-type terminal is described as an example among the various types of terminals such as folding, bar, swing and slide terminals. Accordingly, the embodiments of the present invention can be applied to any type of terminal and are not limited to slide-type terminals.
Based on the above terminal hardware structure and communication system, the various embodiments of the method of the present invention are proposed.
FIG. 2 is a flowchart of a method for implementing image registration according to an embodiment of the present invention. As shown in FIG. 2, the method includes:
Step 200: extracting feature points of multi-frame images of the same scene at different times with a feature description operator, using the Oriented BRIEF (ORB) feature extraction algorithm;
It should be noted that currently popular algorithms for expressing feature description operators include SIFT, SURF, FAST and ORB. The predecessor of the Oriented BRIEF (ORB) feature extraction algorithm is the BRIEF feature extraction algorithm, proposed by Calonder of EPFL (École Polytechnique Fédérale de Lausanne) at the European Conference on Computer Vision (ECCV) 2010. BRIEF is a feature description operator that can be computed quickly and is expressed as a binary code: the main idea is to randomly select a number of point pairs near a feature point, compare the gray values of each pair, combine the results into a binary string, and use this binary string as the feature description operator of that feature point. BRIEF's greatest advantage is speed; its main drawbacks are the lack of rotation invariance and poor robustness to noise. The ORB feature extraction algorithm improves on the drawbacks of BRIEF, making the algorithm rotation-invariant while giving it good noise resistance, and at the same time ORB retains BRIEF's computational speed advantage: ORB is about 100 times faster than SIFT and about 10 times faster than SURF. Taking SIFT as an example, its gradient histogram sets the direction of the first peak as the main direction of the feature point and, if a secondary peak reaches 80% of the first peak, also sets that second direction as a main direction; this algorithm is comparatively time-consuming. In the ORB feature extraction algorithm, the main direction of a feature point is computed from image moments, and once the main direction is known, the feature description operator can be extracted relative to it. ORB does not compare individual pixels directly but uses a region centered on the pixel as the object of comparison, which improves the algorithm's robustness to noise.
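The binary-test idea behind BRIEF that the paragraph above describes can be sketched in a few lines. This is a deliberately simplified toy, not the patent's implementation or the real BRIEF/ORB operator: the patch size, the number of pairs and the uniform random sampling pattern are all illustrative assumptions (real BRIEF smooths the patch and uses a learned or Gaussian sampling pattern).

```python
import random

def brief_descriptor(patch, n_pairs=16, seed=0):
    """Toy BRIEF-style descriptor: compare the intensities of random
    point pairs inside a square grayscale patch and pack the comparison
    results into a binary string."""
    h, w = len(patch), len(patch[0])
    rng = random.Random(seed)  # fixed seed -> the same sampling pattern for every patch
    bits = []
    for _ in range(n_pairs):
        y1, x1 = rng.randrange(h), rng.randrange(w)
        y2, x2 = rng.randrange(h), rng.randrange(w)
        bits.append('1' if patch[y1][x1] < patch[y2][x2] else '0')
    return ''.join(bits)
```

Because the point pairs are drawn with a fixed seed, every patch is described by the same tests, so two descriptors can be compared by Hamming distance — which is what makes the pairing step cheap.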
Before this step, the method further includes: acquiring the multi-frame images of the same scene at different times.
It should be noted that the multi-frame images of the same scene at different times are acquired in the same way as in multi-frame fusion noise-reduction algorithms. Taking shooting with a mobile phone as an example, shooting is performed with identical shooting parameters according to a preset number-of-images parameter, which is typically set to 3 images; using identical shooting parameters, for example identical exposure, focus and metering, the same scene is shot rapidly and continuously, thereby obtaining the multi-frame images of the same scene at different times on which the embodiment of the present invention performs image registration.
Step 201: pairing the feature points extracted from each frame image;
In this step, pairing the feature points extracted from each frame image includes:
pairing the feature points extracted from each frame image by the K-nearest-neighbor (kNN) algorithm.
It should be noted that using the K-nearest-neighbor algorithm to pair the feature points extracted from each frame image is a customary technique for those skilled in the art and is not described further here.
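As a concrete illustration of the kNN pairing step (a hedged sketch, not the patent's code), the following brute-force matcher finds, for each binary descriptor of frame A, its two nearest neighbors in frame B by Hamming distance and keeps the pair only when the best match is clearly better than the second best. The ratio test is a common companion of kNN matching, added here as our own assumption rather than something the patent specifies.

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary strings."""
    return sum(x != y for x, y in zip(a, b))

def knn_match(desc_a, desc_b, ratio=0.75):
    """For each descriptor in desc_a, rank all descriptors in desc_b by
    Hamming distance (k = 2 nearest neighbours) and keep the pair (i, j)
    only if the best distance beats ratio * second-best distance."""
    pairs = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: hamming(da, desc_b[j]))
        best, second = ranked[0], ranked[1]
        if hamming(da, desc_b[best]) < ratio * hamming(da, desc_b[second]):
            pairs.append((i, best))
    return pairs
```

A real implementation would index the descriptors instead of scanning all of them, but the accept/reject logic is the same.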
Step 202: performing image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model.
In this step, the preset image registration transformation model is a perspective transformation model.
It should be noted that the perspective transformation model is more flexible: a perspective transformation can turn a rectangle into a trapezoid, and it describes how one plane in space is projected onto another plane in space. The matrix expression of the perspective transformation model is:

$$\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = M \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad M = \begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \\ a_{20} & a_{21} & 1 \end{bmatrix}$$

In this matrix expression, $[x, y, 1]^T$ is the original point of the image registration and the matrix $M$ is the registration matrix, where $a_{02}$ and $a_{12}$ are the displacement parameters of the registration; $a_{00}$, $a_{01}$ and $a_{10}$, $a_{11}$ are the scaling and rotation parameters; and $a_{20}$ and $a_{21}$ are the deformation amounts of the registration in the horizontal and vertical directions. $[x', y', w']^T$ is the result obtained after the image registration.
The embodiment of the present invention adopts the perspective transformation model mainly because, when shooting with a hand-held terminal, for example a mobile phone capturing several frames continuously, the phone's jitter motion is generally not confined to a single plane; registering with a perspective transformation model effectively solves the problem that the multi-frame images no longer lie in the same plane because of shooting jitter. Comparing the registration performed with the perspective transformation model used in the embodiment against that of an ordinary transformation model shows clearly that the perspective model performs better. For example, registration with the perspective transformation model can be compared with registration with an affine transformation model, measuring registration accuracy by a registration difference image: the registered image is subtracted from the reference image, and wherever pixels are not aligned, a difference value appears, showing up as relatively bright spots in the difference image. FIG. 3 is the registration difference image obtained with the perspective transformation model and FIG. 4 the registration difference image obtained with the affine transformation model; the comparison confirms that the perspective model's difference image has fewer white dots in the region circled by the solid line than the affine model's, so registration with the perspective transformation model performs better.
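The matrix expression above maps a pixel $(x, y)$ to $(x'/w',\ y'/w')$ after dividing out the homogeneous coordinate. A minimal sketch of that mapping follows; the parameter layout mirrors the $a_{ij}$ notation of the model, and the numeric matrices in the usage example are made-up values, not values from the patent.

```python
def perspective_map(M, x, y):
    """Apply a 3x3 perspective (homography) matrix M, given as a nested
    list [[a00, a01, a02], [a10, a11, a12], [a20, a21, 1]], to the
    point (x, y) and return the dehomogenised mapped point."""
    xp = M[0][0] * x + M[0][1] * y + M[0][2]
    yp = M[1][0] * x + M[1][1] * y + M[1][2]
    wp = M[2][0] * x + M[2][1] * y + M[2][2]
    return xp / wp, yp / wp  # divide by w' to leave homogeneous coordinates
```

With $a_{20} = a_{21} = 0$, the denominator is constant and the model degenerates to an affine transform; those two extra parameters are exactly the horizontal and vertical deformation terms that let the perspective model absorb out-of-plane jitter.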
Before performing image registration, the method of the embodiment of the present invention further includes: eliminating incorrect matches from the pairing result by the random sample consensus (RANSAC) algorithm.
It should be noted that feature points can be matched incorrectly because of noise or unreasonable algorithm parameter settings, so the incorrect matches must be excluded. The method of the embodiment uses the RANSAC algorithm to optimize the matched pairs and exclude incorrect matches, guaranteeing the accuracy of the spatial transformation matrix computed afterwards. In addition, any other algorithm capable of excluding incorrect matches can be applied to the embodiment of the present invention; RANSAC is only an optional embodiment of the method.
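To make the exclusion step concrete, here is a deliberately small RANSAC sketch. It assumes a pure-translation model between the two point sets — a simplification of ours, since the patent fits the full perspective matrix — and follows the standard RANSAC loop: hypothesize a model from a random minimal sample, count the matches that agree with it, and keep the largest consensus set.

```python
import random

def ransac_translation(pairs, tol=2.0, iters=50, seed=0):
    """pairs: list of ((x1, y1), (x2, y2)) matched points.  Returns the
    indices of the matches consistent with the best translation found."""
    rng = random.Random(seed)
    best = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = pairs[rng.randrange(len(pairs))]
        dx, dy = x2 - x1, y2 - y1  # translation hypothesized from one match
        inliers = [i for i, ((a, b), (c, d)) in enumerate(pairs)
                   if abs((c - a) - dx) <= tol and abs((d - b) - dy) <= tol]
        if len(inliers) > len(best):
            best = inliers
    return best
```

Matches outside the returned index set are the "incorrect matches" the exclusion unit discards; the surviving inliers are then used to estimate the transformation matrix.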
After performing image registration, the method of the embodiment of the present invention further includes: performing a weighted average over each registered frame image to obtain a denoised image.
It should be noted that the process is as follows. Suppose $n$ frame images have completed image registration and the corresponding registered image sequence is $[I_1, I_2, \ldots, I_n]$, where $I_i$ denotes the $i$-th registered image. Let $D(x,y)$ be the denoised image function, $I_i(x,y)$ the true pixel value of the $i$-th image at pixel coordinate $(x,y)$, and $N_i(x,y)$ the noise disturbance of the $i$-th image at pixel coordinate $(x,y)$, so that $D(x,y)$ is the denoised pixel value of the image at $(x,y)$. Then:

$$D(x,y) = \frac{1}{n}\sum_{i=1}^{n}\bigl[I_i(x,y) + N_i(x,y)\bigr] \tag{1}$$

After splitting equation (1) into two terms, the weighted average of the noise disturbance becomes an independent term; assuming the noise disturbance satisfies the Gaussian model, the average of the noise term is essentially 0, as expressed by equation (2):

$$\frac{1}{n}\sum_{i=1}^{n} N_i(x,y) \approx 0 \tag{2}$$

Combining equations (1) and (2), the denoised image can be expressed by equation (3):

$$D(x,y) = \frac{1}{n}\sum_{i=1}^{n} I_i(x,y) \tag{3}$$
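Equation (3) is simply a per-pixel mean over the registered frames. A minimal sketch follows, with equal weights as in the equation; the tiny synthetic "frames" in the test are illustrative, not data from the patent.

```python
def fuse_frames(frames):
    """Pixel-wise average of n registered frames.  Each frame is a list
    of rows of equal size -- a direct transcription of
    D(x, y) = (1/n) * sum over i of I_i(x, y)."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```

Because zero-mean Gaussian noise averages toward 0 as $n$ grows, the fused pixel approaches the true value $I_i(x, y)$; in practice $n$ is small (the embodiment shoots 3 frames), so the registration accuracy of the previous steps is what keeps the mean from blurring edges.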
By extracting the feature points of multi-frame images of the same scene at different times with the ORB feature extraction algorithm, pairing the extracted feature points of each frame image, and then performing image registration with a preset image registration transformation model, the method of the embodiment of the present invention increases the speed and efficiency of image registration and improves the image quality of the fusion performed by multi-frame fusion noise-reduction algorithms.
FIG. 5 is a flowchart of another method for implementing image registration according to an embodiment of the present invention. As shown in FIG. 5, the method includes:
Step 500: acquiring multi-frame images of the same scene at different times.
It should be noted that the multi-frame images of the same scene at different times are acquired in the same way as in multi-frame fusion noise-reduction algorithms. Taking shooting with a mobile phone as an example, shooting is performed with identical shooting parameters according to a preset number-of-images parameter, which is typically set to 3 images; using identical shooting parameters, for example identical exposure, focus and metering, the same scene is shot rapidly and continuously, thereby obtaining the multi-frame images of the same scene at different times on which the embodiment of the present invention performs image registration.
Step 501: extracting feature points of the multi-frame images of the same scene at different times with a feature description operator, using the ORB feature extraction algorithm;
It should be noted that currently popular algorithms for expressing feature description operators include SIFT, SURF, FAST and ORB. The predecessor of the Oriented BRIEF (ORB) feature extraction algorithm is the BRIEF feature extraction algorithm, proposed by Calonder of EPFL (École Polytechnique Fédérale de Lausanne) at the European Conference on Computer Vision (ECCV) 2010. BRIEF is a feature description operator that can be computed quickly and is expressed as a binary code: the main idea is to randomly select a number of point pairs near a feature point, compare the gray values of each pair, combine the results into a binary string, and use this binary string as the feature description operator of that feature point. BRIEF's greatest advantage is speed; its main drawbacks are the lack of rotation invariance and poor robustness to noise. The ORB feature extraction algorithm improves on the drawbacks of BRIEF, making the algorithm rotation-invariant while giving it good noise resistance, and at the same time ORB retains BRIEF's computational speed advantage: ORB is about 100 times faster than SIFT and about 10 times faster than SURF. Taking SIFT as an example, its gradient histogram sets the direction of the first peak as the main direction of the feature point and, if a secondary peak reaches 80% of the first peak, also sets that second direction as a main direction; this algorithm is comparatively time-consuming. In the ORB feature extraction algorithm, the main direction of a feature point is computed from image moments, and once the main direction is known, the feature description operator can be extracted relative to it. ORB does not compare individual pixels directly but uses a region centered on the pixel as the object of comparison, which improves the algorithm's robustness to noise.
Step 502: pairing the feature points extracted from each frame image;
In this step, pairing the feature points extracted from each frame image includes:
pairing the feature points extracted from each frame image by the kNN algorithm.
FIG. 6 is a schematic diagram of feature-point pairing between two frame images according to an embodiment of the present invention. As shown in FIG. 6, the two frame images on the left and right have completed feature-point pairing.
It should be noted that using the K-nearest-neighbor algorithm to pair the feature points extracted from each frame image is a customary technique for those skilled in the art and is not described further here.
Step 503: eliminating incorrect matches from the pairing result by the RANSAC algorithm.
It should be noted that feature points can be matched incorrectly because of noise or unreasonable algorithm parameter settings, so the incorrect matches must be excluded. The method of the embodiment uses the random sample consensus RANSAC algorithm to optimize the matched pairs and exclude incorrect matches, guaranteeing the accuracy of the spatial transformation matrix computed afterwards. In addition, any other algorithm capable of excluding incorrect matches can be applied to the embodiment of the present invention; RANSAC is only an optional embodiment of the method.
FIG. 7 is a schematic diagram after incorrect pairings have been excluded according to an embodiment of the present invention. As shown in FIG. 7, the incorrectly matched feature points in the two frame images have been excluded.
Step 504: performing image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model.
In this step, the preset image registration transformation model is a perspective transformation model.
It should be noted that the perspective transformation model is more flexible: a perspective transformation can turn a rectangle into a trapezoid, and it describes how one plane in space is projected onto another plane in space. The matrix expression of the perspective transformation model is:

$$\begin{bmatrix} x' \\ y' \\ w' \end{bmatrix} = M \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}, \qquad M = \begin{bmatrix} a_{00} & a_{01} & a_{02} \\ a_{10} & a_{11} & a_{12} \\ a_{20} & a_{21} & 1 \end{bmatrix}$$

The embodiment of the present invention adopts the perspective transformation model mainly because, when shooting with a hand-held terminal, for example a mobile phone capturing several frames continuously, the phone's jitter motion is generally not confined to a single plane; registering with a perspective transformation model effectively solves the problem that the multi-frame images no longer lie in the same plane because of shooting jitter. FIG. 8 is a schematic diagram of image jitter according to an embodiment of the present invention; as shown in FIG. 8, during image capture the phone shakes from the solid-line position 1 to the broken-line position 2.
Step 505: performing a weighted average over each registered frame image to obtain a denoised image.
FIG. 9(a) is an image before denoising according to an embodiment of the present invention, and FIG. 9(b) is the image after denoising; comparing the overall display effect of FIG. 9(a) and FIG. 9(b) shows that the image quality after denoising in FIG. 9(b) is effectively improved.
FIG. 9(a) and FIG. 9(b) use boxes to circle a first portion, the telephone in the picture, and a second portion, the emergency-exit indicator light. FIG. 10(a) is a first partial schematic diagram of the image before denoising according to an embodiment of the present invention, and FIG. 10(b) is the corresponding first partial schematic diagram after denoising; comparing the display effect of the first portion in FIG. 10(a) and FIG. 10(b) shows that the clarity of the first portion after denoising in FIG. 10(b) is improved. FIG. 11(a) is a second partial schematic diagram of the image before denoising according to an embodiment of the present invention, and FIG. 11(b) is the corresponding second partial schematic diagram after denoising; comparing the display effect of the second portion in FIG. 11(a) and FIG. 11(b) shows that the clarity of the second portion after denoising in FIG. 11(b) is improved.
FIG. 12 is a structural block diagram of a terminal for implementing image registration according to an embodiment of the present invention. As shown in FIG. 12, the terminal includes: an extracting unit 1201, a pairing unit 1202 and a registration unit 1203; wherein
the extracting unit 1201 is configured to extract feature points of the multi-frame images of the same scene at different times with a feature description operator, using the Oriented BRIEF (ORB) feature extraction algorithm;
the pairing unit 1202 is configured to pair the feature points extracted from each frame image;
the pairing unit 1202 is configured to pair the feature points extracted from each frame image by the K-nearest-neighbor (kNN) algorithm;
the registration unit 1203 is configured to perform image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model;
the registration unit 1203 is configured to perform image registration on the multi-frame images whose feature points have been paired through a perspective transformation model.
The terminal of the embodiment of the present invention further includes an obtaining unit 1206, configured to acquire the multi-frame images of the same scene at different times.
The terminal of the embodiment of the present invention further includes an exclusion unit 1204, configured to eliminate incorrect matches from the pairing result of the pairing unit 1202 by the random sample consensus (RANSAC) algorithm before the registration unit 1203 performs image registration.
The terminal of the embodiment of the present invention further includes a weighting unit 1205, configured to perform a weighted average over each registered frame image after the registration unit completes image registration, obtaining a denoised image.
The embodiment of the present invention further provides a terminal for implementing image registration, comprising: an obtaining unit, an extracting unit, a pairing unit, an exclusion unit, a registration unit and a weighting unit; wherein
the obtaining unit is configured to acquire the multi-frame images of the same scene at different times.
It should be noted that the obtaining unit may generally acquire the images through a camera, and the acquired images may be stored in a memory.
The extracting unit is configured to extract feature points of the multi-frame images of the same scene at different times with a feature description operator, using the Oriented BRIEF (ORB) feature extraction algorithm;
the pairing unit is configured to pair the feature points extracted from each frame image;
the pairing unit is configured to pair the feature points extracted from each frame image by the K-nearest-neighbor (kNN) algorithm;
the exclusion unit is configured to eliminate incorrect matches from the pairing unit's pairing result by the random sample consensus (RANSAC) algorithm before the registration unit performs image registration;
the registration unit is configured to perform image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model;
the registration unit is configured to perform image registration on the multi-frame images whose feature points have been paired through a perspective transformation model;
the weighting unit is configured to perform a weighted average over each registered frame image after the registration unit completes image registration, obtaining a denoised image.
It should be noted that the extraction, pairing, exclusion and weighting processes are generally implemented by the controller.
The embodiment of the present invention further provides a computer storage medium storing computer-executable instructions, the computer-executable instructions being used to execute the method described in the above embodiments.
Those of ordinary skill in the art will understand that all or some of the steps of the above method may be completed by a program instructing relevant hardware (for example, a processor); the program may be stored in a computer-readable storage medium such as a read-only memory, magnetic disk or optical disc. Optionally, all or some of the steps of the above embodiments may also be implemented with one or more integrated circuits. Accordingly, each module/unit in the above embodiments may be implemented in the form of hardware, for example by an integrated circuit realizing its corresponding function, or in the form of a software functional module, for example by a processor executing a program/instructions stored in a memory to realize its corresponding function. The present invention is not limited to any particular combination of hardware and software.
Although the embodiments disclosed by the present invention are as above, the content described is merely an embodiment adopted to facilitate understanding of the present invention and is not intended to limit it. Any person skilled in the art to which the present invention belongs may make modifications and changes in the form and details of implementation without departing from the spirit and scope disclosed by the present invention, but the patent protection scope of the present invention shall still be subject to the scope defined by the appended claims.
Industrial Applicability
The above technical solution increases the speed and efficiency of image registration and improves the image quality of the fusion performed by multi-frame fusion noise-reduction algorithms.

Claims (20)

  1. A terminal for implementing image registration, comprising: an extracting unit, a pairing unit and a registration unit; wherein
    the extracting unit is configured to extract feature points of multi-frame images of the same scene at different times with a feature description operator, using a feature extraction algorithm;
    the pairing unit is configured to pair the feature points extracted from each frame image;
    the registration unit is configured to perform image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model.
  2. The terminal according to claim 1, further comprising:
    an obtaining unit configured to acquire the multi-frame images of the same scene at different times.
  3. The terminal according to claim 1, wherein
    the pairing unit is configured to pair the feature points extracted from each frame image by the K-nearest-neighbor (kNN) algorithm.
  4. The terminal according to claim 1, 2 or 3, wherein
    the registration unit is configured to perform image registration on the multi-frame images whose feature points have been paired through a perspective transformation model.
  5. The terminal according to claim 1, 2 or 3, further comprising:
    an exclusion unit configured to eliminate incorrect matches from the pairing unit's pairing result by a mismatch-exclusion algorithm before the registration unit performs image registration.
  6. The terminal according to claim 1, 2 or 3, further comprising:
    a weighting unit configured to perform a weighted average over each registered frame image after the registration unit completes image registration, obtaining a denoised image.
  7. The terminal according to claim 1, wherein
    the feature extraction algorithm comprises the Oriented BRIEF (ORB) algorithm.
  8. The terminal according to claim 1, wherein
    the mismatch-exclusion algorithm comprises the random sample consensus (RANSAC) algorithm.
  9. The terminal according to claim 6, wherein the weighting unit is configured to perform the weighted average over each registered frame image and obtain the denoised image as follows:
    when the noise disturbance satisfies the Gaussian model,
    $$D(x,y) = \frac{1}{n}\sum_{i=1}^{n} I_i(x,y)$$
    where $D(x,y)$ is the denoised pixel value of the image at pixel coordinate $(x,y)$, $I_i(x,y)$ is the true pixel value of the $i$-th image at pixel coordinate $(x,y)$, and $n$ is the number of images.
  10. The terminal according to claim 2, wherein the obtaining unit is configured to acquire the multi-frame images of the same scene at different times as follows:
    acquiring the multi-frame images of the same scene at different times by rapid continuous shooting of the same scene with identical shooting parameters.
  11. A method for implementing image registration, comprising:
    extracting feature points of multi-frame images of the same scene at different times with a feature description operator, using a feature extraction algorithm;
    pairing the feature points extracted from each frame image;
    performing image registration on the multi-frame images whose feature points have been paired, using a preset image registration transformation model.
  12. The method according to claim 11, further comprising, before extracting the feature points of the multi-frame images of the same scene at different times with the feature description operator using the feature extraction algorithm: acquiring the multi-frame images of the same scene at different times.
  13. The method according to claim 11, wherein pairing the feature points extracted from each frame image comprises:
    pairing the feature points extracted from each frame image by the K-nearest-neighbor algorithm.
  14. The method according to claim 11, 12 or 13, wherein the preset image registration transformation model is a perspective transformation model.
  15. The method according to claim 11, 12 or 13, further comprising:
    before the image registration, eliminating incorrect matches from the pairing result by a mismatch-exclusion algorithm.
  16. The method according to claim 11, 12 or 13, further comprising:
    after the image registration, performing a weighted average over each registered frame image to obtain a denoised image.
  17. The method according to claim 11, wherein
    the feature extraction algorithm comprises the Oriented BRIEF (ORB) algorithm.
  18. The method according to claim 11, wherein
    the mismatch-exclusion algorithm comprises the random sample consensus (RANSAC) algorithm.
  19. The method according to claim 16, wherein performing the weighted average over each registered frame image to obtain the denoised image comprises:
    when the noise disturbance satisfies the Gaussian model,
    $$D(x,y) = \frac{1}{n}\sum_{i=1}^{n} I_i(x,y)$$
    where $D(x,y)$ is the denoised pixel value of the image at pixel coordinate $(x,y)$, $I_i(x,y)$ is the true pixel value of the $i$-th image at pixel coordinate $(x,y)$, and $n$ is the number of images.
  20. The method according to claim 12, wherein acquiring the multi-frame images of the same scene at different times comprises:
    acquiring the multi-frame images of the same scene at different times by rapid continuous shooting of the same scene with identical shooting parameters.
PCT/CN2016/105706 2015-12-21 2016-11-14 一种实现图像配准的方法及终端 WO2017107700A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510966800.3A CN105427263A (zh) 2015-12-21 2015-12-21 一种实现图像配准的方法及终端
CN201510966800.3 2015-12-21

Publications (1)

Publication Number Publication Date
WO2017107700A1 true WO2017107700A1 (zh) 2017-06-29

Family

ID=55505444

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/105706 WO2017107700A1 (zh) 2015-12-21 2016-11-14 一种实现图像配准的方法及终端

Country Status (2)

Country Link
CN (1) CN105427263A (zh)
WO (1) WO2017107700A1 (zh)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108921776A (zh) * 2018-05-31 2018-11-30 深圳市易飞方达科技有限公司 一种基于无人机的图像拼接方法及装置
CN109064385A (zh) * 2018-06-20 2018-12-21 何中 360度全景展示效果生成工具及发布系统
CN109544608A (zh) * 2018-03-22 2019-03-29 广东电网有限责任公司清远供电局 一种无人机图像采集特征配准方法
CN109801220A (zh) * 2019-01-23 2019-05-24 北京工业大学 一种在线求解车载视频拼接中映射参数方法
CN110189368A (zh) * 2019-05-31 2019-08-30 努比亚技术有限公司 图像配准方法、移动终端及计算机可读存储介质
CN110443295A (zh) * 2019-07-30 2019-11-12 上海理工大学 改进的图像匹配与误匹配剔除算法
CN110728705A (zh) * 2019-09-24 2020-01-24 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN110782421A (zh) * 2019-09-19 2020-02-11 平安科技(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质
CN111127529A (zh) * 2019-12-18 2020-05-08 浙江大华技术股份有限公司 图像配准方法及装置、存储介质、电子装置
CN111127311A (zh) * 2019-12-25 2020-05-08 中航华东光电有限公司 基于微重合区域的图像配准方法
CN111932593A (zh) * 2020-07-21 2020-11-13 湖南中联重科智能技术有限公司 基于触摸屏手势校正的图像配准方法、系统及设备
CN112150548A (zh) * 2019-06-28 2020-12-29 Oppo广东移动通信有限公司 定位方法及装置、终端、存储介质
CN114972030A (zh) * 2022-05-31 2022-08-30 北京智通东方软件科技有限公司 一种图像拼接方法、装置、存储介质与电子设备

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105427263A (zh) * 2015-12-21 2016-03-23 努比亚技术有限公司 一种实现图像配准的方法及终端
CN105611181A (zh) * 2016-03-30 2016-05-25 努比亚技术有限公司 多帧拍摄图像合成装置和方法
CN106097284B (zh) * 2016-07-29 2019-08-30 努比亚技术有限公司 一种夜景图像的处理方法和移动终端
CN106447663A (zh) * 2016-09-30 2017-02-22 深圳市莫廷影像技术有限公司 一种去除重影的眼科oct图像高清配准方法及装置
CN108053369A (zh) * 2017-11-27 2018-05-18 努比亚技术有限公司 一种图像处理的方法、设备及存储介质
CN110261923B (zh) * 2018-08-02 2024-04-26 浙江大华技术股份有限公司 一种违禁品检测方法及装置
SG10202003292XA (en) * 2020-04-09 2021-11-29 Sensetime Int Pte Ltd Matching method and apparatus, electronic device, computer-readable storage medium, and computer program
CN111932587A (zh) * 2020-08-03 2020-11-13 Oppo广东移动通信有限公司 图像处理方法和装置、电子设备、计算机可读存储介质
CN113487659B (zh) * 2021-07-14 2023-10-20 浙江大学 一种图像配准方法、装置、设备及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003036444A (ja) * 2001-07-24 2003-02-07 Toppan Printing Co Ltd 商品情報構成データ抽出システム、商品情報構成データ抽出方法、商品情報構成データ抽出プログラム、及び商品情報構成データ抽出プログラムを記録した記録媒体
CN102629328A (zh) * 2012-03-12 2012-08-08 北京工业大学 融合颜色的显著特征概率潜在语义模型物体图像识别方法
CN104751465A (zh) * 2015-03-31 2015-07-01 中国科学技术大学 一种基于lk光流约束的orb图像特征配准方法
CN104851094A (zh) * 2015-05-14 2015-08-19 西安电子科技大学 一种基于rgb-d的slam算法的改进方法
CN105427263A (zh) * 2015-12-21 2016-03-23 努比亚技术有限公司 一种实现图像配准的方法及终端

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101276465B (zh) * 2008-04-17 2010-06-16 上海交通大学 广角图像自动拼接方法
CN103516995A (zh) * 2012-06-19 2014-01-15 中南大学 一种基于orb特征的实时全景视频拼接方法和装置
TW201520906A (zh) * 2013-11-29 2015-06-01 Univ Nat Taiwan Science Tech 影像註冊方法
CN104167003B (zh) * 2014-08-29 2017-01-18 福州大学 一种遥感影像的快速配准方法
CN104915949B (zh) * 2015-04-08 2017-09-29 华中科技大学 一种结合点特征和线特征的图像匹配方法


Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109544608B (zh) * 2018-03-22 2023-10-24 广东电网有限责任公司清远供电局 一种无人机图像采集特征配准方法
CN109544608A (zh) * 2018-03-22 2019-03-29 广东电网有限责任公司清远供电局 一种无人机图像采集特征配准方法
CN108921776A (zh) * 2018-05-31 2018-11-30 深圳市易飞方达科技有限公司 一种基于无人机的图像拼接方法及装置
CN109064385A (zh) * 2018-06-20 2018-12-21 何中 360度全景展示效果生成工具及发布系统
CN109801220A (zh) * 2019-01-23 2019-05-24 北京工业大学 一种在线求解车载视频拼接中映射参数方法
CN109801220B (zh) * 2019-01-23 2023-03-28 北京工业大学 一种在线求解车载视频拼接中映射参数方法
CN110189368B (zh) * 2019-05-31 2023-09-19 努比亚技术有限公司 图像配准方法、移动终端及计算机可读存储介质
CN110189368A (zh) * 2019-05-31 2019-08-30 努比亚技术有限公司 图像配准方法、移动终端及计算机可读存储介质
CN112150548B (zh) * 2019-06-28 2024-03-29 Oppo广东移动通信有限公司 定位方法及装置、终端、存储介质
CN112150548A (zh) * 2019-06-28 2020-12-29 Oppo广东移动通信有限公司 定位方法及装置、终端、存储介质
CN110443295A (zh) * 2019-07-30 2019-11-12 上海理工大学 改进的图像匹配与误匹配剔除算法
CN110782421A (zh) * 2019-09-19 2020-02-11 平安科技(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质
CN110782421B (zh) * 2019-09-19 2023-09-26 平安科技(深圳)有限公司 图像处理方法、装置、计算机设备及存储介质
CN110728705B (zh) * 2019-09-24 2022-07-15 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN110728705A (zh) * 2019-09-24 2020-01-24 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111127529B (zh) * 2019-12-18 2024-02-02 浙江大华技术股份有限公司 图像配准方法及装置、存储介质、电子装置
CN111127529A (zh) * 2019-12-18 2020-05-08 浙江大华技术股份有限公司 图像配准方法及装置、存储介质、电子装置
CN111127311B (zh) * 2019-12-25 2023-07-18 中航华东光电有限公司 基于微重合区域的图像配准方法
CN111127311A (zh) * 2019-12-25 2020-05-08 中航华东光电有限公司 基于微重合区域的图像配准方法
CN111932593A (zh) * 2020-07-21 2020-11-13 湖南中联重科智能技术有限公司 基于触摸屏手势校正的图像配准方法、系统及设备
CN111932593B (zh) * 2020-07-21 2024-04-09 湖南中联重科智能技术有限公司 基于触摸屏手势校正的图像配准方法、系统及设备
CN114972030A (zh) * 2022-05-31 2022-08-30 北京智通东方软件科技有限公司 一种图像拼接方法、装置、存储介质与电子设备

Also Published As

Publication number Publication date
CN105427263A (zh) 2016-03-23

Similar Documents

Publication Publication Date Title
WO2017107700A1 (zh) 一种实现图像配准的方法及终端
CN108898567B (zh) 图像降噪方法、装置及系统
US9591237B2 (en) Automated generation of panning shots
US10708525B2 (en) Systems and methods for processing low light images
WO2017101626A1 (zh) 一种实现图像处理的方法及装置
JP6961797B2 (ja) プレビュー写真をぼかすための方法および装置ならびにストレージ媒体
Gallo et al. Locally non-rigid registration for mobile HDR photography
US20170289454A1 (en) Method and apparatus for video content stabilization
EP4044579A1 (en) Main body detection method and apparatus, and electronic device and computer readable storage medium
US20140176731A1 (en) Determining Image Alignment Failure
US20180198970A1 (en) High dynamic range imaging using camera arrays
WO2023098045A1 (zh) 图像对齐方法、装置、计算机设备和存储介质
EP4057623A1 (en) Subject detection method and apparatus, electronic device, and computer-readable storage medium
CN110062157B (zh) 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN111163265A (zh) 图像处理方法、装置、移动终端及计算机存储介质
US8995784B2 (en) Structure descriptors for image processing
WO2019134505A1 (zh) 图像虚化方法、存储介质及电子设备
US10257417B2 (en) Method and apparatus for generating panoramic images
US20170351932A1 (en) Method, apparatus and computer program product for blur estimation
CN114096994A (zh) 图像对齐方法及装置、电子设备、存储介质
US20240205363A1 (en) Sliding Window for Image Keypoint Detection and Descriptor Generation
CN110047126B (zh) 渲染图像的方法、装置、电子设备和计算机可读存储介质
US20230016350A1 (en) Configurable keypoint descriptor generation
US11810266B2 (en) Pattern radius adjustment for keypoint descriptor generation
WO2017092261A1 (zh) 一种摄像头模组、移动终端及其拍摄图像的方法和装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16877511

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16877511

Country of ref document: EP

Kind code of ref document: A1