WO2022141178A1 - 图像处理方法及装置 - Google Patents

图像处理方法及装置 Download PDF

Info

Publication number
WO2022141178A1
Authority
WO
WIPO (PCT)
Prior art keywords
image block
image
current frame
target
frame
Prior art date
Application number
PCT/CN2020/141338
Other languages
English (en)
French (fr)
Inventor
唐克坦
廖文山
卢庆博
彭超
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司 filed Critical 深圳市大疆创新科技有限公司
Priority to PCT/CN2020/141338 priority Critical patent/WO2022141178A1/zh
Publication of WO2022141178A1 publication Critical patent/WO2022141178A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/223Analysis of motion using block-matching

Definitions

  • the present application relates to the technical field of image processing, and in particular, to an image processing method and apparatus.
  • the present application provides an image processing method and apparatus.
  • an image processing method comprising:
  • the mapping relationship between the pixel points on the current frame and the pixel points on the reference frame is determined according to the matching point pairs.
  • an image processing method comprising:
  • the image blocks on the current frame and the image blocks on the reference frame are subjected to matching processing to obtain matched image blocks;
  • the positional mapping relationship between the pixel points on the current frame and the pixel points on the reference frame is determined according to the matching point pairs.
  • an image processing apparatus includes a processor, a memory, and a computer program stored on the memory for execution by the processor; when executing the computer program, the processor implements the following:
  • the mapping relationship between the pixel points on the current frame and the pixel points on the reference frame is determined according to the matching point pairs.
  • an image processing apparatus includes a processor, a memory, and a computer program stored on the memory for execution by the processor; when executing the computer program, the processor implements the following:
  • the positional mapping relationship between the pixel points on the current frame and the pixel points on the reference frame is determined according to the matching point pairs.
  • a computer-readable storage medium on which computer program instructions are stored.
  • when the instructions are executed by a processor, the image processing method mentioned in the first aspect can be implemented.
  • a plurality of image blocks can be determined from the current frame and the reference frame respectively, and then the image blocks on the current frame and the image blocks on the reference frame are matched.
  • the matching point pairs are determined from image blocks, which makes the distribution of the matching point pairs in the image more uniform and improves the accuracy of determining the relative motion of the two frames of images.
  • FIG. 1 is a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 2 is a schematic diagram of determining adjacent image blocks according to an embodiment of the present application.
  • FIG. 3 is a flowchart of an image processing method according to an embodiment of the present application.
  • FIG. 4 is a schematic diagram of an image processing method according to an embodiment of the present application.
  • FIG. 5 is a schematic diagram of a logical structure of an image processing apparatus according to an embodiment of the present application.
  • FIG. 6 is a schematic diagram of a logical structure of another image processing apparatus according to an embodiment of the present application.
  • an embodiment of the present application provides an image processing method.
  • when determining the relative motion of two frames of images, it is not necessary to extract feature points from the images; instead, multiple image blocks are determined from the two frames of images respectively, the image blocks are matched, matching point pairs are determined based on the matched image blocks, and the matching point pairs are then used to fit the relative motion of the two frames of images. Since there is no need to extract feature points, the amount of calculation can be greatly reduced and the speed of determining the relative motion of the two frames improved, and the method is also suitable for images with weak texture. In addition, determining the matching point pairs from image blocks makes their distribution in the images more uniform, which improves the accuracy of determining the relative motion of the two frames of images.
  • the image processing method in the embodiments of the present application may be executed by any device having an image processing function, and the device may be a notebook computer, a mobile phone, a camera that captures images, or a cloud server, or the like.
  • the current frame in this embodiment of the present application may be an image frame to be processed, and the reference frame may be one or more image frames collected before or after the current frame.
  • the reference frame may be the previous frame or the next frame of the current frame, and may also be the first two frames or the last two frames, which can be set according to actual needs.
  • FIG. 1 it is a flowchart of an image processing method according to an embodiment of the present application, which may specifically include the following steps:
  • S106 Determine the mapping relationship between the pixels on the current frame and the pixels on the reference frame according to the matching point pair.
  • multiple image blocks may be determined from the current frame and the reference frame respectively.
  • the current frame or the reference frame can be directly and evenly divided into multiple image blocks of equal area, or divided into multiple image blocks of unequal area, or multiple image blocks may be determined only from specific locations in the image, for example, only from densely textured areas.
  • the image blocks on the current frame and the reference frame can be matched to obtain multiple pairs of matched image blocks.
  • matching point pairs can be determined according to the matched image blocks, thereby obtaining multiple pairs of matching point pairs. For example, for each pair of matched image blocks, the center pixels of the two image blocks may be taken as a pair of matching points.
  • multiple pairs of matching points can also be determined according to a pair of matching image blocks.
  • the relative motion of the current frame and the reference frame can be determined based on the multiple matching point pairs, that is, the relative motion between the pixels on the current frame and the pixels on the reference frame can be determined.
  • the mapping relationship may be represented by a homography matrix; for example, a homography matrix (H) representing the mapping relationship between the pixels of the two frames of images may be fitted from multiple matching pixel point pairs, and a random consistency (RANSAC) algorithm can be used to determine the homography matrix.
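  • To make the fitting step concrete, the following is a minimal numpy-only sketch (not the patent's implementation; the function names, the direct-linear-transform fit, and the simple random-consistency loop are all illustrative) of fitting a homography from matching point pairs and keeping the model with the most inliers:

```python
import numpy as np

def fit_homography(src, dst):
    # Direct linear transform: build the 2n x 9 system from the point pairs
    # and take the SVD null-space vector as the homography (hypothetical helper).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    # Random-consistency loop: fit on 4 random pairs, count inliers by
    # reprojection error, keep the best model.
    rng = np.random.default_rng(seed)
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    ones = np.ones((len(src), 1))
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(len(src), size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = np.hstack([src, ones]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = int((np.linalg.norm(proj - dst, axis=1) < thresh).sum())
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```

For matching points related by a pure translation, the recovered H reduces to a translation matrix, which is an easy sanity check for the sketch.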
  • the mapping relationship can also be represented by a motion vector; for example, a motion vector (u, v) of the current frame relative to the reference frame can be determined, where u represents the displacement in the horizontal direction and v represents the displacement in the vertical direction.
  • the image collected by the image acquisition device has high definition and a large amount of data, but such high definition may not be needed when matching image blocks. Therefore, in order to reduce the amount of calculation, before the image blocks on the current frame and the image blocks on the reference frame are matched, the current frame and the reference frame may be down-sampled respectively.
  • the bilinear interpolation method can be used, or the average value of multiple adjacent pixels can be directly taken as the pixel value of a pixel after downsampling, which can be set according to actual needs.
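  • A minimal sketch of the neighbour-averaging option, assuming a single-channel image and an integer downsampling factor (the function name is illustrative):

```python
import numpy as np

def downsample_mean(img, factor=2):
    # Average each factor x factor block of pixels into one output pixel.
    h = img.shape[0] - img.shape[0] % factor   # crop to a multiple of factor
    w = img.shape[1] - img.shape[1] % factor
    blocks = img[:h, :w].reshape(h // factor, factor, w // factor, factor)
    return blocks.mean(axis=(1, 3))
```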
  • the similarity between the image blocks can be determined according to the gray value of the image blocks. Therefore, for color images, the color image can be converted into a grayscale image first.
  • the gray values of the images collected by some image acquisition devices have relatively high precision, for example 12 bit or 16 bit, but image block matching may not need that much precision. Therefore, in order to reduce the amount of calculation, in some embodiments, the current frame and the reference frame may first be converted into grayscale images whose gray-value bit width is a target bit width, where the target bit width may be smaller than or equal to the bit width of the image sensor that collects the current frame and the reference frame.
  • the target bit width can be determined according to actual needs; for example, it can be set to 8 bits or 4 bits. For instance, the target bit width may be 8 bits while the bit width of the image sensor is 16 bits.
  • the gray values of the current frame and the reference frame can be shifted directly according to the difference between the bit width of the image sensor and the target bit width, so as to convert the gray values to the target bit width. For example, assuming the target bit width is 8 bits and the image sensor bit width is 12 bits, the gray values of the current frame and the reference frame can be directly shifted right by 4 bits (that is, the gray values are divided by 2^4). However, the bit width of the image sensor does not always represent the real bit width of the captured image; for example, the bit width of the image sensor may be 12 bits while the bit width of the gray values of the captured image does not exceed 10 bits. In that case, shifting the image according to the bit width of the image sensor may not work well, resulting in insufficient gray-value precision and possibly affecting the final matching accuracy. Therefore, in some embodiments, before converting the gray values of the current frame and the reference frame to the target bit width, a first bit width can be determined according to the gray values of the pixels in a target frame, and the gray values of the current frame and the reference frame are then shifted by the difference between the first bit width and the target bit width to obtain grayscale images whose gray values have the target bit width.
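  • The shift-based conversion can be sketched as follows (the helper name, the uint16 output type, and clamping the shift to non-negative values are assumptions):

```python
import numpy as np

def to_target_bit_width(gray, first_bit_width, target_bit_width=8):
    # Right-shift gray values by the difference between the (estimated) real
    # bit width of the image and the target bit width.
    shift = max(first_bit_width - target_bit_width, 0)
    return (gray.astype(np.uint32) >> shift).astype(np.uint16)
```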
  • the target frame may be either the current frame or the reference frame, for example, whichever of the two has larger pixel grayscale values.
  • the first bit width can be the real bit width that reflects the actual gray value of the current frame and the reference frame.
  • the gray value of the pixel with the largest gray value in the image cannot represent the real gray value range of the pixel in the image, because one or several pixels with the largest gray value may be noise.
  • when determining the first bit width according to the grayscale values of the pixels in the target frame, the first bit width may be determined according to the grayscale values of some of the pixels in the target frame. The partial pixels may be the first several pixels of the target frame after the gray values are sorted from large to small, that is, the gray values of the partial pixels are greater than the gray values of the remaining pixels in the target frame.
  • the target gray value can be determined according to the gray value of some pixel points, and then the first bit width can be determined according to the target gray value.
  • the first bit width is determined from the pixels with the largest gray values in the target frame, which can better reflect the actual bit width of the real image.
  • the target gray value may be the average of the gray values of the partial pixels; that is, the average of the several pixels with the largest gray values in the target frame is used to represent the maximum gray value in the target frame.
  • the target gray value may also be the minimum of the gray values of the partial pixels; for example, the partial pixels may be the top 10 pixels by gray value, so the minimum gray value among these 10 pixels is taken as the maximum gray value in the target frame.
  • the proportion of the number of partial pixels to the total number of pixels in the target frame does not exceed 1%.
  • the pixels whose gray values rank in the top 1% of the target frame can well represent the maximum gray value in the target frame, so the target gray value can be determined according to these pixels; the target gray value can be the mean or the minimum of the gray values of the top 1% of pixels.
  • the target gray value may be the minimum value among the gray values of some pixel points.
  • the target gray value may be the gray value of the pixel with the smallest gray value among the pixels whose gray values rank in the top 1% of the target frame, or the gray value of the pixel with the smallest gray value among the pixels whose gray values rank in the top 1.5%. Accordingly, when determining the target gray value, it can be determined according to the mean and variance of the pixels in the target frame.
  • the Gaussian distribution determined by the mean and variance can well reflect the proportion of pixels in the image whose gray values are less than a certain gray value.
  • the mean and variance of the gray values of the pixels in the target frame can be determined, the target gray value is then determined according to the mean and variance, and the bit width corresponding to the target gray value is used as the first bit width.
  • for example, the mean of the gray values of the pixels in the target frame is expressed as mean and the standard deviation as σ. Under a Gaussian assumption, the proportion of pixels in the target frame whose gray value is greater than mean+3σ is very small (about 0.1%), so mean+3σ can be taken as the gray value of the pixel with the smallest gray value among the pixels whose gray values rank in the top 1%, that is, as the target gray value, and the bit width corresponding to mean+3σ is then taken as the first bit width.
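  • A sketch of this statistics-based estimate; the ceil(log2(...)) rounding rule and the function name are assumptions, since the text only says the bit width "corresponding to" mean+3σ:

```python
import numpy as np

def first_bit_width_from_stats(gray):
    # Bit width corresponding to mean + 3*sigma of the gray values,
    # per the Gaussian argument above (values above it are rare).
    mean = float(gray.mean())
    sigma = float(gray.std())
    target_gray = mean + 3.0 * sigma
    return int(np.ceil(np.log2(max(target_gray, 1.0))))
```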
  • when determining the first bit width, in order to minimize the amount of calculation and improve the processing speed, a grayscale histogram may also be determined according to the grayscale values of the pixels in the target frame, and the first bit width is then determined based on the grayscale histogram. Since the grayscale histogram represents the proportion of pixels at different grayscale values, the proportion of pixels below a certain grayscale value can be quickly determined from it.
  • the grayscale histogram determined based on the grayscale values of the pixels of the target frame can be used to represent the proportion of pixels in different grayscale gradients, where the boundary value of each grayscale gradient is 2^k, with k an integer.
  • for example, the abscissa of the grayscale histogram takes the values 128, 256, 512, 1024, 2048, 4096, 8192, 16384, and 32768, and the ordinate of each bar represents, in turn, the proportion of pixels whose grayscale value is less than 128, the proportion of pixels whose grayscale value is between 128 and 256, the proportion between 256 and 512, and so on.
  • the gray gradient to which a target pixel belongs can be determined according to the proportion of pixels in each gray-value gradient of the gray histogram, where the ratio of the number of pixels in the target frame whose gray value is greater than that of the target pixel to the total number of pixels is less than a preset ratio; that is, the target pixel can be the pixel with the smallest gray value among the pixels ranked in the top 1% by gray value. The first bit width can then be determined according to the bit width corresponding to the upper boundary value of the gray gradient to which the target pixel belongs; for example, the bit width corresponding to the upper boundary value can be used directly as the first bit width.
  • for example, suppose the target pixel is the pixel with the smallest gray value among the pixels whose gray values rank in the top 1%, the proportion of pixels with gray values greater than 32768 is 0.1%, the proportion of pixels with gray values between 16384 and 32768 is 0.4%, and the proportion of pixels with gray values between 8192 and 16384 is 0.6%. The target pixel then falls in the 8192–16384 gradient, and the bit width corresponding to the upper boundary 16384 (namely 14 bits) can be used as the first bit width.
  • the first bit width determined in this way is an integer, but the first bit width can also be a decimal. For example, if the proportion of pixels whose grayscale value is greater than 2^10.5 is exactly 1%, then 10.5 can be taken as the first bit width.
  • the grayscale value of the target pixel can first be determined according to the grayscale histogram, where the ratio of the number of pixels in the target frame with a gray value greater than that of the target pixel to the total number of pixels is less than the preset ratio; that is, the target pixel can be the pixel with the smallest gray value among the pixels ranked in the top 1% (or top 2%) by gray value. The bit width corresponding to the gray value of the target pixel is then determined, and the first bit width is determined according to that bit width and the target bit width. For example, assuming the target bit width is 8, if the bit width k corresponding to the smallest gray value among the top 1% of pixels satisfies k > 7, the first bit width is 8; if 6 < k ≤ 7, the first bit width is 7; and so on.
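  • The histogram-based estimate can be sketched as follows, mirroring the worked example above (0.1% of pixels above 32768, 0.4% between 16384 and 32768, 0.6% between 8192 and 16384 puts the target pixel in the gradient whose upper boundary is 16384, i.e. 14 bits); the walk from the highest gradient downward and the function name are illustrative:

```python
import numpy as np

def first_bit_width_from_histogram(gray, max_bits=16, top_ratio=0.01):
    # Histogram over power-of-2 gradient boundaries; walking from the highest
    # gradient down, the gradient in which the cumulative proportion of pixels
    # first reaches top_ratio contains the target pixel, and the bit width j of
    # its upper boundary 2**j is returned.
    edges = np.concatenate([[0], 2 ** np.arange(max_bits + 1)])
    counts, _ = np.histogram(gray, bins=edges)
    cum = 0.0
    for j in range(max_bits, -1, -1):
        cum += counts[j] / gray.size
        if cum >= top_ratio:
            return j
    return 0
```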
  • in order to distribute the matching point pairs determined from the image blocks as evenly as possible in the current frame and the reference frame, so that the mapping relationship between the pixels of the current frame and the reference frame determined from the matching point pairs is more accurate, when determining the image blocks on the current frame and on the reference frame, the current frame and the reference frame can be equally divided into multiple image blocks, and the multiple image blocks are then matched.
  • the number and size of the image blocks can be determined based on the number of matching point pairs that can be obtained. For example, to obtain 300 matching point pairs, the current frame and the reference frame can be divided into 300 image blocks.
  • multiple feature points may also be determined from the current frame and the reference frame, and multiple image blocks are then determined in the current frame and the reference frame centered on these feature points. In this way, it can be ensured that the determined image blocks are not image blocks corresponding to flat areas, the matching accuracy of the image blocks can be improved, only valid image blocks are matched, and the processing speed can also be improved.
  • multiple target image blocks may first be determined from the current frame or the reference frame, where the gray-value difference between each target image block and its adjacent image blocks is greater than a preset gray-value threshold; the pixel with the largest gray-value gradient is then determined in each target image block and used as its feature point.
  • an image block can be determined in the current frame, such as a 3×3 image block, and the image block can then be moved by one image block or multiple pixels in each direction, such as up, down, left, and right, to obtain its adjacent image blocks; the difference between each adjacent image block and the image block is then compared. If the difference is large, the image block is not a flat area, so it can be determined as a target image block.
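  • A sketch of this selection step, assuming a mean-absolute-difference test against the four shifted neighbours and np.gradient as the gradient measure (the function name, threshold, and both choices are illustrative):

```python
import numpy as np

def select_feature_point(img, top, left, size=3, gray_thresh=10.0):
    # Compare the size x size block at (top, left) with its four neighbours
    # shifted by one block; if any difference exceeds the threshold the block
    # is non-flat, and the pixel with the largest gradient magnitude inside
    # the block is returned as its feature point. Otherwise return None.
    block = img[top:top + size, left:left + size].astype(float)
    for dy, dx in ((-size, 0), (size, 0), (0, -size), (0, size)):
        y, x = top + dy, left + dx
        if 0 <= y and 0 <= x and y + size <= img.shape[0] and x + size <= img.shape[1]:
            neighbour = img[y:y + size, x:x + size].astype(float)
            if np.abs(block - neighbour).mean() > gray_thresh:
                gy, gx = np.gradient(block)
                r, c = np.unravel_index(np.argmax(np.hypot(gy, gx)), block.shape)
                return top + r, left + c
    return None  # flat area: discard this block
```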
  • the luminance value of the image block on the current frame or the image block on the reference frame is less than the preset luminance threshold, it means that the image block is too dark, and thus the image block can be discarded.
  • the preset brightness threshold may be determined according to actual conditions or empirical values.
  • the number of feature points of the image block on the current frame or the image block on the reference frame is less than the preset number, it means that the image block is a flat area, and therefore, it can also be discarded.
  • the position change amount of the matching image block may be continuously adjusted based on the grayscale difference between the image block to be matched and the matching image block, and the matching image block of the to-be-matched image block is continuously updated based on the position change; after multiple iterations, the image block that matches the to-be-matched image block can be determined.
  • the difference between the gray values of the image block to be matched and of its initial matching image block in the reference frame can be determined, the position change of the image block to be matched relative to the initial matching image block is then determined based on that difference, a series of transformations such as translation and affine transformation are performed on the initial matching image block based on the position change, and the image block obtained by the transformation replaces the initial matching image block. The above steps are repeated until a preset condition is satisfied and the iteration stops; at that point, the updated initial matching image block is the image block finally matched with the to-be-matched image block.
  • the condition for stopping the iteration may be that the position change amount is less than a preset threshold; a position change amount below the threshold indicates that the image block to be matched and the current initial matching image block are already very similar, so the iterative process can be stopped.
  • the condition for stopping the iteration may also be that the number of iterations reaches a preset number; that is, the steps of determining the position change amount from the gray-value difference between the to-be-matched image block and the initial matching image block, transforming the initial matching image block according to the position change amount, and updating the initial matching image block have been performed a preset number of times.
  • the initial matching image block of the image block to be matched in the reference frame may be an image block at a pixel position corresponding to the image block to be matched in the reference frame.
  • for example, the image block to be matched is the image block composed of the 9 pixels in the first 3 rows and first 3 columns of the current frame; the initial matching image block is then the image block composed of the 9 pixels in the first 3 rows and first 3 columns of the reference frame. The position change amount is determined from the difference between the gray values of the two image blocks, the initial matching image block is transformed accordingly, and the initial matching image block is continuously updated.
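  • The iterative loop can be sketched for the translation-only case (the text also allows affine transforms; the least-squares step against the patch gradients, the function name, and the stopping thresholds are illustrative, and bounds handling is omitted):

```python
import numpy as np

def match_block(cur_block, ref, top, left, max_iters=20, eps=1e-3):
    # Starting from the initial matching position (top, left) in the reference
    # frame, repeatedly: extract the candidate patch, compute the gray-value
    # difference to the block to be matched, estimate a translation update by
    # least squares against the patch gradients, and stop when the position
    # change is tiny or the iteration limit is reached.
    h, w = cur_block.shape
    dy = dx = 0.0
    for _ in range(max_iters):
        y0 = int(round(top + dy))
        x0 = int(round(left + dx))
        patch = ref[y0:y0 + h, x0:x0 + w].astype(float)
        err = cur_block.astype(float) - patch
        gy, gx = np.gradient(patch)
        A = np.stack([gy.ravel(), gx.ravel()], axis=1)
        step, *_ = np.linalg.lstsq(A, err.ravel(), rcond=None)
        dy += step[0]
        dx += step[1]
        if abs(step[0]) < eps and abs(step[1]) < eps:  # position change small
            break
    return int(round(top + dy)), int(round(left + dx))
```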
  • in order to ensure that the initial matching image block is relatively close to the image block to be matched so as to reduce the number of iterations, the initial matching image block can also be determined based on an estimated mapping relationship, i.e. an estimate of the mapping relationship between the pixels of the current frame and the pixels of the reference frame.
  • the estimated mapping relationship can be determined based on the motion state parameters of the image sensor.
  • for example, the image sensor is mounted on a drone or a gimbal, so the motion state parameters of the image sensor can be determined according to the inertial measurement unit (IMU) set on the drone or gimbal, and the relative motion of the current frame with respect to the reference frame is determined according to the motion state parameters of the image sensor, so as to determine the estimated mapping relationship.
  • the estimated mapping relationship may also be determined according to the mapping relationship of at least two image frames acquired before the current frame.
  • for example, the reference frame is the previous frame of the current frame, so the relative motion between the current frame and the reference frame can be estimated according to the relative motion between the reference frame and the frame before it, so as to determine the estimated mapping relationship.
  • the mapping relationship between the current frame and the reference frame may also be preliminarily determined by using some relatively simple algorithms with a small amount of calculation, as the estimated mapping relationship, so as to determine the initial matching image block.
  • the position change amount is continuously determined based on the gray difference of the two image blocks, and a series of transformations such as translation and affine transformation are then performed on the initially matched image block according to the position change amount. It is therefore very likely that, after a series of transformations, the initial matching image block will exceed the boundary of the original image block, that is, the pixel values of some pixels become 0; if this image block continues to be used for further iterations, the matching accuracy will be seriously reduced. Therefore, before performing the above iterative processing, edge-expansion processing may be performed on the initial matching image block, that is, adjacent pixels of the initial matching image block in the reference frame are expanded into the initial matching image block. For example, assuming the initial matching image block is a 3×3 image block, the surrounding 16 or 25 pixels can be expanded into it before the subsequent transformations are performed.
  • in this way, a relatively high matching accuracy can be obtained.
  • a correlation coefficient of each pair of matched image blocks may also be determined, where the correlation coefficient represents the degree of similarity of the pair. Since two frames of images collected by the image sensor at different times will have a certain brightness difference, in order to eliminate the influence of the brightness difference on the matching results, when determining the gray-value difference between the image block to be matched and the initial matching image block, brightness compensation can first be performed on the initial matching image block using predetermined brightness compensation parameters, so as to prevent the brightness change between the two frames from affecting the matching accuracy.
  • matching processing may also be performed on the image blocks of the two frames based on the grayscale histograms of the image blocks. For example, for each to-be-matched image block in the current frame, the following steps may be performed to determine its matching image block: the grayscale histograms in the horizontal and vertical directions of the to-be-matched image block and of each image block of the reference frame are counted separately; each image block in the reference frame is then traversed, the matching error between its grayscale histograms and those of the to-be-matched image block is determined, and the matching image block of the to-be-matched image block is determined from the reference frame according to the matching errors.
  • the image block of the reference frame whose grayscale histogram has the smallest matching error with the grayscale histogram of the image block to be matched can be taken as the matching image block.
  • matching image blocks by grayscale histogram can greatly reduce the amount of calculation and improve the processing speed; compared with the method in the above embodiment of continuously iterating on the grayscale differences of the image blocks to determine the matching image blocks, the processing speed will be faster.
  • the displacement of the matching image block corresponding to the smallest matching error relative to the image block to be matched is an integer number of pixels, that is, the matching accuracy can only reach pixel-level accuracy.
  • the matching errors may first be interpolated to obtain a minimum matching error corresponding to sub-pixel precision, and the matching image block is then determined based on that minimum matching error.
  • the 2 points adjacent to the current minimum matching error can be used for interpolation to re-determine a minimum matching error at the sub-pixel level, or the 4 points adjacent to the current minimum matching error can be used to obtain the minimum matching error at the sub-pixel level.
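  • One common realization of this interpolation (an illustrative choice, not necessarily the one intended here) fits a parabola through the minimum matching error and its two neighbours and takes the vertex as the sub-pixel minimum:

```python
import numpy as np

def subpixel_minimum(errors):
    # Refine the integer-pixel minimum of a 1-D matching-error curve to
    # sub-pixel precision via parabolic interpolation over three points.
    i = int(np.argmin(errors))
    if i == 0 or i == len(errors) - 1:
        return float(i)            # minimum at the border: no refinement
    e0, e1, e2 = errors[i - 1], errors[i], errors[i + 1]
    return i + 0.5 * (e0 - e2) / (e0 - 2.0 * e1 + e2)
```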
  • the image blocks are matched by the difference of their grayscale histograms in the horizontal direction and the vertical direction. Since the grayscale histogram is a one-dimensional histogram, the processing speed will be greatly increased.
  • the application scenario of the current frame and the reference frame may be determined first, and a processing method for matching the image blocks of the current frame against those of the reference frame is then chosen according to that scenario. For example, video image frames have high real-time requirements, i.e. they require a relatively fast processing speed, while their requirements on matching accuracy are not high, so the grayscale-histogram-based matching of the image blocks of the two frames can be used.
  • photo-type image frames have relatively low real-time requirements, i.e. a relatively low requirement on processing speed, but a relatively high requirement on matching accuracy. Therefore, for photo-type image frames, a matching method with higher accuracy (at the cost of processing speed) can be adopted; for example, the position change of the matched image block can be adjusted continuously based on the pixel difference between the to-be-matched image block and the candidate matched image block, and the image block matching the to-be-matched image block is determined after multiple iterations.
  • the confidence of the mapping relationship can also be determined at the same time, and the subsequent image processing of the current frame and the reference frame is then performed in combination with the confidence.
  • the denoising strength may be determined in combination with the confidence of the mapping relationship, or the strength of the filter may be determined in combination with the confidence, so as to fuse the frames before and after.
  • a correlation coefficient representing the similarity of a pair of matched image blocks may be output, and the confidence may thus be determined based on the correlation coefficient.
  • the confidence may be determined based on the number of inlier (interior) point pairs in the random consensus algorithm, the reprojection error of the matching point pairs, and the inlier threshold of the random consensus algorithm.
  • an appropriate matching processing method can be chosen according to the real-time and matching-accuracy requirements of the application scenario.
  • the embodiment of the present application further provides another image processing method, as shown in FIG. 3 , the method includes:
  • the matching processing method includes:
  • Transform processing is performed on the initial matching image block based on the position change amount, and the initial matching image block is replaced with an image block obtained by the transform processing.
  • the matching processing method includes:
  • the matching image block of the image block to be matched is determined from the image blocks of the reference frame based on the matching error.
  • the embodiment of the present application provides a method with high precision, small amount of calculation, and strong resistance to illumination changes, which is used to determine the relative motion of two frames of images.
  • the processing flow of the whole method is shown in Figure 4.
  • the entire image processing module includes a preprocessing module, which performs a series of preprocessing on the input current frame and reference frame, and a motion estimation module, which determines the relative motion of the pixels of the current frame and the reference frame.
  • the specific processing flow is as follows:
  • Bilinear interpolation can be used to downsample the current frame and the reference frame to reduce the image to no more than 1000 pixels on the long side, while reducing noise.
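The preprocessing step above can be roughly sketched as follows: compute an integer reduction factor so that the long side does not exceed 1000 pixels, then average f×f neighbourhoods. Box averaging stands in here for the bilinear interpolation named in the text and likewise suppresses noise; the names are illustrative.

```python
import numpy as np

def downsample_to(img, max_side=1000):
    """Reduce img so its long side is at most max_side, by f x f box averaging."""
    f = int(np.ceil(max(img.shape[:2]) / max_side))
    if f <= 1:
        return img
    h, w = (img.shape[0] // f) * f, (img.shape[1] // f) * f
    # crop to a multiple of f, then average each f x f tile
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))
```

Averaging over f×f tiles also reduces pixel noise by roughly a factor of f, consistent with the "while reducing noise" remark above.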
  • the gray values of the image can be shifted based on the actual bit width of the image and converted into 8-bit values; of course, they can also be converted to images of other bit widths, which can be set according to the actual situation.
  • the bit width corresponding to the smallest pixel value among the top 1% of pixels (by gray value) represents the real bit width of the image.
  • the reason for not simply using the single maximum bit width is to avoid the influence of a very small number of very bright noisy pixels.
  • the following two methods can be used to determine the true bit width of an image.
  • Method 1: The gray values of the image can be converted to 8 bits according to the mean and variance of the gray values of its pixels. Specifically, the mean (mean) and the standard deviation σ of the gray values can be determined; under the assumption of a Gaussian distribution, the proportion of pixels greater than mean+3σ is less than 1%, so the bit width k of mean+3σ can be taken to represent the true bit width of the image. The gray values of the pixels are then shifted right by k-8 bits; for example, if k is 10, the gray values are shifted right by 2 bits, i.e. uniformly divided by 4.
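Method 1 can be sketched as below. This is a minimal illustration, not the patent's exact implementation; σ is taken as the standard deviation, so that mean+3σ bounds roughly 99.7% of pixels under the Gaussian assumption.

```python
import numpy as np

def true_bit_width_gaussian(img):
    """Estimate the real bit width k from mean + 3*sigma of the gray values."""
    bound = img.mean() + 3 * img.std()
    return int(np.ceil(np.log2(bound + 1)))

def to_8bit(img):
    """Shift gray values right by k-8 bits, e.g. k=10 -> shift by 2 (divide by 4)."""
    k = max(8, true_bit_width_gaussian(img))
    return (img.astype(np.uint32) >> (k - 8)).astype(np.uint8)
```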
  • Method 2: The gray values of the image can be converted to 8 bits according to the histogram of the image. The specific steps are as follows:
  • the histogram can be computed with bin boundaries at powers of two, 2^k.
  • the k corresponding to the smallest gray value among the pixels whose gray values rank in the top 1% can then be taken; this is the real bit width of the image.
  • the minimum gray value of the pixels whose gray value ranks in the top 1% is 2000, then the corresponding k is 11, so the real bit width of the image is 11.
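Method 2 amounts to taking the smallest gray value among the brightest 1% of pixels and rounding its bit width up. A minimal sketch with illustrative names, not the patent's implementation:

```python
import numpy as np

def true_bit_width_top1(img):
    """Bit width of the smallest gray value among the brightest 1% of pixels."""
    flat = np.sort(img.ravel())
    n = max(1, flat.size // 100)      # the brightest 1% of pixels
    v = flat[-n]                      # smallest gray value among them
    return int(np.ceil(np.log2(v + 1)))  # e.g. v = 2000 -> k = 11
```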
  • An image can be divided into multiple image blocks in two ways:
  • Method 1 Divide the image into multiple image blocks evenly.
  • the number of image blocks can be determined according to the number of matching point pairs to be obtained; for example, if 300 matching point pairs are needed, the image is divided into 300 image blocks.
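Method 1 (uniform division) can be sketched as below; dividing an image into, say, 300 blocks could use a 20×15 grid. The names are illustrative.

```python
import numpy as np

def split_blocks(img, n_rows, n_cols):
    """Uniformly divide img into n_rows * n_cols equal-area blocks
    (edge pixels beyond the divisible area are dropped)."""
    h, w = img.shape[:2]
    bh, bw = h // n_rows, w // n_cols
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(n_rows) for c in range(n_cols)]
```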
  • Method 2 Multiple feature points can be extracted from the image first, and then multiple image blocks can be determined with the feature points as the center.
  • the initial matching image block can also be determined according to a preliminary estimate of the relative motion of the current frame and the reference frame, where this relative motion can be determined based on the motion state parameters of the image sensor collected by the IMU, by a global motion estimation algorithm with a small amount of calculation, or according to the relative motion of at least two frames of images acquired before the current frame.
  • the correlation coefficient between the two can be determined according to their similarity.
  • the grayscale histograms of the to-be-matched image block in the horizontal and vertical directions can be determined, and then the grayscale histograms of each image block of the reference frame in the horizontal and vertical directions can be determined.
  • interpolation can be performed on the minimum matching error; for example, interpolation can be performed on the minimum error on the matching-error curve and its four adjacent errors to obtain the minimum error at the sub-pixel level, thereby determining the matched image block of the to-be-matched image block.
  • when the ECC algorithm performs image block matching, it needs multiple iterations to obtain the matching result, so its processing speed is slower than that of GMV, but its processing accuracy is higher; the GMV algorithm only needs to match one-dimensional grayscale histograms, so it is faster but less accurate than ECC.
  • whether to use the ECC algorithm to match the image blocks or the GMV algorithm to match the image blocks can be selected according to the application scene of the image. For example, for video image frames, due to their high requirements on real-time performance, they require relatively fast processing speed, and their requirements on matching accuracy are not high, so the GMV algorithm can be used for matching. For photo-type image frames, the requirements for real-time performance are relatively low, that is, the requirements for processing speed are relatively low, but the requirements for matching accuracy are relatively high. Therefore, for photo-type image frames, the ECC algorithm can be used for matching.
  • the center pixel points of each pair of matched image blocks can be taken as a pair of matched point pairs.
  • a random consensus algorithm can be used to determine the homography matrix H, and the number of inlier point pairs and the number of matching point pairs can be output at the same time.
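The patent fits a homography H with a random consensus (RANSAC-style) algorithm and reports the inlier count. A full homography fit is beyond a short sketch, so the toy below runs the same sample-score-refit loop for a pure-translation motion model to illustrate how inlier pairs are counted; the translation-only model and all names are illustrative simplifications, not the patent's method.

```python
import numpy as np

def ransac_translation(src, dst, thresh=1.0, iters=100, seed=0):
    """Fit dst ~ src + t with RANSAC; return the motion vector t and inlier mask."""
    rng = np.random.default_rng(seed)
    best_t, best_inliers = None, np.zeros(len(src), bool)
    for _ in range(iters):
        i = rng.integers(len(src))
        t = dst[i] - src[i]                       # candidate motion vector
        err = np.linalg.norm(src + t - dst, axis=1)
        inliers = err < thresh                    # reprojection error test
        if inliers.sum() > best_inliers.sum():
            best_t, best_inliers = t, inliers
    # refit on the inlier set for the final estimate
    best_t = (dst[best_inliers] - src[best_inliers]).mean(axis=0)
    return best_t, best_inliers
```

In practice the same loop structure is used with a 4-point homography solve instead of the 1-point translation solve shown here.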
  • the confidence of the mapping relationship can also be determined at the same time, and then the current frame and the reference frame are subjected to subsequent image processing in combination with the confidence.
  • the denoising strength may be determined in combination with the confidence of the mapping relationship, or the strength of the filter may be determined in combination with the confidence, so as to fuse the frames before and after.
  • each pair of image blocks outputs a correlation coefficient representing the similarity of the two image blocks, with a value in the [0,1] interval, so the average of the correlation coefficients of all image blocks can be taken as the confidence.
  • formula (2) can be used to determine the confidence of the entire match, where N_inlier represents the number of inliers, E represents the average reprojection error of the point pairs, and T is the inlier threshold of RANSAC, generally 1.
  • the number of inliers N_inlier should be at least 30; when N_inlier > 90, the number of inliers can be considered sufficient and the matching quality good enough. An average reprojection error of less than 0.2 indicates a very good match whose error can be ignored; above 0.2, the larger the error, the worse the matching quality.
  • an embodiment of the present application further provides an image processing apparatus 50 .
  • the apparatus 50 includes a processor 51, a memory 52, and a computer program stored in the memory 52 for execution by the processor 51; the processor 51 implements the following steps when executing the computer program:
  • mapping relationship between the pixel points on the current frame and the pixel points on the reference frame is determined according to the matching point pair.
  • before being configured to match the image blocks of the current frame with those of the reference frame, the processor is further configured to:
  • before being configured to match the image blocks of the current frame with those of the reference frame, the processor is further configured to:
  • the current frame and the reference frame are respectively converted into grayscale images whose bit width is a target bit width, the target bit width being less than or equal to the bit width of the image sensor that collected the current frame and the reference frame.
  • when the processor is configured to convert the current frame and the reference frame into a grayscale image whose gray-value bit width is a target bit width, it is specifically configured to:
  • right-shift processing is performed on the grayscale value of the current frame and the grayscale value of the reference frame to obtain the grayscale image.
  • when the processor is configured to determine the first bit width based on the gray values of the pixels in the target frame, it is specifically configured to:
  • the target gray value is determined based on the gray value of the partial pixel points, and the first bit width is determined according to the target gray value.
  • the target grayscale value is the average value of the grayscale values of the partial pixel points.
  • the target grayscale value is the minimum value among the grayscale values of the partial pixel points.
  • the proportion of the number of the partial pixels to the total number of pixels of the target frame does not exceed 1%.
  • when the target gray value is the minimum value among the gray values of the partial pixels, and the processor is configured to determine the target gray value based on the gray values of the partial pixels and to determine the first bit width according to the target gray value, it is specifically configured to:
  • the target gray value is determined according to the mean value and the variance, and a bit width corresponding to the target gray value is used as the first bit width.
  • when the processor is configured to determine the first bit width based on the gray values of the pixels in the target frame, it is specifically configured to:
  • the first bit width is determined according to the grayscale histogram.
  • the grayscale histogram is used to represent the proportion of pixels in different grayscale gradients, and the boundary value of each grayscale gradient is 2^k, where k is an integer; when the processor is configured to determine the first bit width according to the grayscale histogram, it is specifically configured to:
  • the grayscale gradient to which the target pixel belongs is determined according to the proportion, wherein the ratio of the number of pixels in the target frame whose gray value is greater than that of the target pixel to the total number of pixels in the target frame is less than a preset proportion;
  • the first bit width is determined according to the bit width corresponding to the upper boundary value of the grayscale gradient to which the target pixel belongs.
  • the bit width of the gray values of the grayscale histogram is the target bit width, and when the processor is configured to determine the first bit width according to the grayscale histogram, it is specifically configured to:
  • the bit width corresponding to the gray value of the target pixel is determined according to the grayscale histogram, wherein the ratio of the number of pixels in the target frame whose gray value is greater than that of the target pixel to the total number of pixels in the target frame is less than the preset proportion;
  • the first bit width is determined according to the bit width corresponding to the gray value of the target pixel point and the target bit width.
  • the image blocks on the current frame and the image blocks on the reference frame are determined based on:
  • a plurality of feature points are respectively determined on the current frame and the reference frame, and a plurality of image blocks are determined with the plurality of feature points as the center.
  • the plurality of feature points are determined based on:
  • a plurality of target image blocks are determined from the image blocks of the current frame or of the reference frame, the difference between the gray values of the plurality of target image blocks and the gray values of their adjacent image blocks being greater than a preset grayscale threshold;
  • the pixel points with the largest gray value gradient are respectively determined from the multiple target image blocks as the multiple feature points.
  • before being configured to match the image blocks of the current frame with those of the reference frame, the processor is further configured to:
  • when the processor is configured to determine whether to discard an image block of the current frame based on the brightness or texture of that image block, it is specifically configured to:
  • if the number of feature points of the image block on the current frame is less than a preset number, the image block is discarded.
  • when the processor is configured to match the image blocks of the current frame with those of the reference frame, it is specifically configured to:
  • Transform processing is performed on the initial matching image block based on the position change amount, and the initial matching image block is replaced with an image block obtained by the transform processing.
  • the preset conditions include:
  • the position change amount is less than a preset threshold
  • the step of determining the difference between the grayscale values of the image block to be matched and the initial matching image block of the image block to be matched in the reference frame at the corresponding pixel position is repeated for a preset number of times.
  • before being configured to transform the matched image block based on the position change amount, the processor is further configured to:
  • the adjacent pixels of the matched image block are expanded into the matched image block.
  • the matching image blocks are determined based on the estimated mapping relationship.
  • the estimated mapping relationship is determined based on a motion state parameter of the image sensor.
  • the estimated mapping relationship is determined based on a positional change relationship between at least two image frames acquired before the current frame.
  • performing matching processing on the image block on the current frame and the image block on the reference frame includes:
  • a matching image block of the to-be-matched image block is determined from the image blocks of the reference frame based on the matching error.
  • before being configured to determine the matched image block of the to-be-matched image block from the image blocks of the reference frame based on the matching error, the processor is further configured to:
  • the processor is further configured to:
  • when the processor is configured to match the image blocks of the current frame with those of the reference frame, it is specifically configured to:
  • a processing method for matching the image blocks of the current frame against those of the reference frame is determined, and the image blocks of the current frame are matched against those of the reference frame based on the determined processing method.
  • the mapping relationship is characterized by a homography matrix or a motion vector.
  • the processor is further configured to:
  • Image processing is performed on the current frame or the reference frame based on the confidence.
  • the confidence is determined based on a correlation coefficient used to characterize the similarity of the matched image blocks.
  • the mapping relationship is determined by applying a random consensus algorithm to the matching point pairs, and the confidence is determined based on the number of inlier point pairs, the reprojection error of the matching point pairs, and the inlier threshold of the random consensus algorithm.
  • an embodiment of the present application further provides another image processing apparatus.
  • the apparatus 60 includes a processor 61, a memory 62, and a computer program stored in the memory 62 for execution by the processor 61; the processor 61 implements the following steps when executing the computer program:
  • the mapping relationship between the pixels on the current frame and the pixels on the reference frame is determined according to the matching point pairs.
  • the matching processing method includes:
  • Transform processing is performed on the initial matching image block based on the position change amount, and the initial matching image block is replaced with an image block obtained by the transform processing.
  • the matching processing method includes:
  • the matching image block of the image block to be matched is determined from the image blocks of the reference frame based on the matching error.
  • an embodiment of the present specification further provides a computer storage medium, where a program is stored in the storage medium, and when the program is executed by a processor, the image processing method in any of the foregoing embodiments is implemented.
  • Embodiments of the present specification may take the form of a computer program product embodied on one or more storage media having program code embodied therein, including but not limited to disk storage, CD-ROM, optical storage, and the like.
  • Computer-usable storage media includes permanent and non-permanent, removable and non-removable media, and storage of information can be accomplished by any method or technology.
  • Information may be computer readable instructions, data structures, modules of programs, or other data.
  • Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read only memory (ROM), Electrically Erasable Programmable Read Only Memory (EEPROM), Flash Memory or other memory technology, Compact Disc Read Only Memory (CD-ROM), Digital Versatile Disc (DVD) or other optical storage, Magnetic tape cassettes, magnetic tape magnetic disk storage or other magnetic storage devices or any other non-transmission medium that can be used to store information that can be accessed by a computing device.


Abstract

An image processing method and apparatus, the method comprising: acquiring a current frame and a reference frame of the current frame (S102); performing matching processing on image blocks of the current frame and image blocks of the reference frame to obtain matched image blocks (S104); determining matching point pairs based on the matched image blocks (S106); and determining, according to the matching point pairs, the mapping relationship between the pixels of the current frame and the pixels of the reference frame (S108). Since no feature points need to be extracted, the amount of calculation can be greatly reduced and the speed of determining the relative motion of the two frames improved; in addition, determining the matching point pairs via image blocks makes their distribution in the image more uniform, improving the accuracy of the determined relative motion of the two frames.

Description

Image processing method and apparatus — Technical Field
The present application relates to the technical field of image processing, and in particular to an image processing method and apparatus.
Background
When processing images, it is usually necessary to determine the relative motion of two image frames and, based on it, to perform subsequent processing such as fusion, denoising and stabilization on the two frames. When performing motion estimation on two frames, the related art either involves a large amount of calculation, which affects the image processing speed, or determines an inaccurate relative motion, which affects the image processing effect. It is therefore necessary to provide a solution that accurately determines the relative motion of two image frames with a small amount of calculation.
Summary of the Invention
In view of this, the present application provides an image processing method and apparatus.
According to a first aspect of the present application, an image processing method is provided, the method comprising:
acquiring a current frame and a reference frame of the current frame;
performing matching processing on image blocks of the current frame and image blocks of the reference frame to obtain matched image blocks;
determining matching point pairs based on the matched image blocks;
determining, according to the matching point pairs, the mapping relationship between the pixels of the current frame and the pixels of the reference frame.
According to a second aspect of the present application, an image processing method is provided, the method comprising:
acquiring a current frame and a reference frame of the current frame;
determining a matching processing method based on the application scenario of the current frame or the reference frame;
performing matching processing on image blocks of the current frame and image blocks of the reference frame based on the matching processing method, to obtain matched image blocks;
determining matching point pairs based on the matched image blocks;
determining, according to the matching point pairs, the mapping relationship between the pixels of the current frame and the pixels of the reference frame.
According to a third aspect of the present application, an image processing apparatus is provided, the apparatus comprising a processor, a memory, and a computer program stored in the memory for execution by the processor, the processor implementing the following steps when executing the computer program:
acquiring a current frame and a reference frame of the current frame;
performing matching processing on image blocks of the current frame and image blocks of the reference frame to obtain matched image blocks;
determining matching point pairs based on the matched image blocks;
determining, according to the matching point pairs, the mapping relationship between the pixels of the current frame and the pixels of the reference frame.
According to a fourth aspect of the present application, an image processing apparatus is provided, the apparatus comprising a processor, a memory, and a computer program stored in the memory for execution by the processor, the processor implementing the following steps when executing the computer program:
acquiring a current frame and a reference frame of the current frame;
determining a matching processing method based on the application scenario of the current frame or the reference frame;
performing matching processing on image blocks of the current frame and image blocks of the reference frame based on the matching processing method, to obtain matched image blocks;
determining matching point pairs based on the matched image blocks;
determining, according to the matching point pairs, the mapping relationship between the pixels of the current frame and the pixels of the reference frame.
According to a fifth aspect of the present application, a computer-readable storage medium is provided, on which computer program instructions are stored; when the instructions are executed by a processor, the image processing method mentioned in the first aspect above can be implemented.
By applying the solution provided by the present application, when determining the relative motion of two image frames, a plurality of image blocks can be determined from the current frame and the reference frame respectively; matching processing is then performed on the image blocks of the current frame and those of the reference frame to obtain matched image blocks, a plurality of matching point pairs are determined from the matched pairs of image blocks, and the mapping relationship between the pixels of the current frame and those of the reference frame is determined from the matching point pairs. Since no feature points need to be extracted, the amount of calculation can be greatly reduced and the speed of determining the relative motion of the two frames improved; in addition, determining the matching point pairs via image blocks makes their distribution in the image more uniform, improving the accuracy of the determined relative motion of the two frames.
Brief Description of the Drawings
In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings required in the description of the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 2 is a schematic diagram of determining adjacent image blocks according to an embodiment of the present application.
Fig. 3 is a flowchart of an image processing method according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an image processing method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of the logical structure of an image processing apparatus according to an embodiment of the present application.
Fig. 6 is a schematic diagram of the logical structure of another image processing apparatus according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application rather than all of them. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
When processing images, it is usually necessary to determine the relative motion of two image frames and, based on it, to perform subsequent processing such as fusion, denoising and stabilization on the two frames. Existing methods for determining the relative motion of two frames are mainly based on keypoint description, such as the SIFT (Scale-Invariant Feature Transform) algorithm and ORB. These methods either lack precision, affecting subsequent image processing, as with the ORB (Oriented FAST and Rotated BRIEF) algorithm, or involve too much calculation, affecting the efficiency of image processing, as with the SIFT algorithm. A high-precision, low-computation scheme for determining the relative motion of two image frames is therefore urgently needed.
On this basis, an embodiment of the present application provides an image processing method. When determining the relative motion of two image frames, no feature points need to be extracted from the images; instead, a plurality of image blocks are determined from each of the two frames, the image blocks of the two frames are matched, matching point pairs are determined from the matched image blocks, and the relative motion of the two frames is then fitted from the matching point pairs. Since no feature points need to be extracted, the amount of calculation is greatly reduced and the speed of determining the relative motion improved, and the method is also applicable to images with weak texture; in addition, determining matching point pairs via image blocks makes their distribution in the image more uniform, improving the accuracy of the determined relative motion of the two frames.
The image processing method in the embodiments of the present application can be executed by any device with an image processing function, such as a laptop, a mobile phone, a camera that collects the images, or a cloud server.
The current frame in the embodiments of the present application may be the image frame to be processed, and the reference frame may be one or more image frames collected before or after the current frame. For example, the reference frame may be the frame immediately before or after the current frame, or the frame two positions before or after, which can be set according to actual needs.
As shown in Fig. 1, a flowchart of the image processing method of an embodiment of the present application, the method may specifically include the following steps:
S102: acquiring a current frame and a reference frame of the current frame;
S104: performing matching processing on image blocks of the current frame and image blocks of the reference frame to obtain matched image blocks;
S106: determining matching point pairs based on the matched image blocks;
S108: determining, according to the matching point pairs, the mapping relationship between the pixels of the current frame and the pixels of the reference frame.
After the current frame and its reference frame are acquired, a plurality of image blocks can be determined from each of them. When doing so, the current frame or the reference frame can be evenly divided into a plurality of image blocks of equal area, or into a plurality of blocks of unequal area, or image blocks can be determined only at specific positions in the image, for example only where texture is dense. After the image blocks of the current frame and the reference frame are determined, they can be matched to obtain a plurality of pairs of matched image blocks, from which matching point pairs are then determined. For example, for each pair of matched image blocks, the center pixels of the two blocks can be taken as one matching point pair; of course, multiple matching point pairs can also be determined from one pair of matched image blocks.
After the matching point pairs of the current frame and the reference frame are obtained, the relative motion of the two frames, i.e. the mapping relationship between the pixels of the current frame and those of the reference frame, can be determined from them. In some embodiments, the mapping relationship can be characterized by a homography matrix; for example, a homography matrix (H) characterizing the mapping between the pixels of the two frames can be fitted from the matching point pairs, and to ensure the accuracy of the fitted homography matrix a random consensus algorithm can be used to determine it. In some embodiments, the mapping relationship can also be characterized by a motion vector; for example, the motion vector (u, v) of the current frame relative to the reference frame can be determined, where u represents the displacement in the horizontal direction and v the displacement in the vertical direction.
The images collected by an image acquisition device usually have high resolution and a large amount of data, while such high resolution is not needed when matching image blocks. Therefore, to reduce the amount of calculation, the current frame and the reference frame can each be downsampled before their image blocks are matched. Downsampling can use bilinear interpolation, or the mean of several adjacent pixels can be taken directly as the value of one downsampled pixel, which can be set according to actual needs.
When matching the image blocks of the current frame against those of the reference frame, the similarity between image blocks can be determined from their gray values. Therefore, a color image can first be converted into a grayscale image. The gray values of images collected by some image acquisition devices have high precision, e.g. 12-bit or 16-bit, while such precision is not needed for block matching. Thus, to reduce the amount of calculation, in some embodiments the current frame and the reference frame can first be converted into grayscale images whose gray-value bit width is a target bit width, which may be less than or equal to the bit width of the image sensor that collected them. The target bit width can be determined according to actual needs: if 8-bit gray values already give sufficiently accurate matching results, the target bit width can be set to 8 bits; if 4 bits suffice, it can be set to 4 bits. In general, to guarantee matching accuracy without excessive calculation, the target bit width can be 8 bits. For example, if the bit width of the image sensor is 16 bits, the gray values of the pixels of the current frame and the reference frame can be converted to 8 bits after the frames are acquired, greatly reducing the amount of calculation.
When converting the gray values of the current frame and the reference frame to the target bit width, they can be shifted directly according to the difference between the bit width of the image sensor and the target bit width. For example, if the target bit width is 8 bits and the sensor bit width is 12 bits, the gray values of both frames can be shifted right by 4 bits (i.e. divided by 2^4). However, the sensor bit width does not necessarily represent the real bit width of the collected image: the sensor may be 12-bit while the gray values of the collected image do not exceed 10 bits. Shifting the image by the sensor bit width may then give poor results, with insufficient gray-value precision, which may affect the final matching accuracy. Therefore, in some embodiments, before converting the gray values of the two frames to the target bit width, a first bit width can be determined from the gray values of the pixels of a target frame, and the gray values of the current frame and the reference frame are then shifted according to the difference between the first bit width and the target bit width to obtain grayscale images at the target bit width. The target frame may be either the current frame or the reference frame, for example the one with the larger pixel gray values. The first bit width may be the real bit width reflecting the actual gray values of the two frames; by determining the real bit width from the pixel gray values and then shifting the image, the amount of calculation is reduced while the block-matching accuracy is preserved.
Usually the gray value of the brightest pixel does not represent the real gray-value range of the image either, because the one or few brightest pixels may be noise. To determine a first bit width that accurately represents the real bit width of the image, in some embodiments the first bit width can be determined from the gray values of a subset of the pixels of the target frame. This subset may be the top-ranked pixels after sorting the pixels of the target frame by gray value in descending order, i.e. their gray values are greater than those of the remaining pixels. A target gray value can be determined from the gray values of this subset, and the first bit width then determined from the target gray value. Determining the first bit width from the brightest subset of pixels reflects the actual bit width of the image more faithfully.
In some embodiments, the target gray value may be the average of the gray values of the subset, the mean of the several brightest pixels representing the maximum gray value of the target frame. Alternatively, in some embodiments, the target gray value may be the minimum of the gray values of the subset; for example, if the subset consists of the 10 brightest pixels, the gray value of the 10th pixel can be taken as the maximum gray value of the target frame.
In some embodiments, the proportion of the number of pixels in the subset to the total number of pixels of the target frame does not exceed 1%. Usually the top 1% of pixels by gray value represent the maximum gray value of the target frame well, so the target gray value can be determined from them, as either the mean or the minimum of their gray values.
In some embodiments, the target gray value may be the minimum among the gray values of the subset, for example the gray value of the darkest pixel among the top 1% (or top 1.5%) of pixels by gray value. When determining the target gray value, it can therefore be determined from the mean and variance of the pixels of the target frame: a Gaussian distribution determined by the mean and variance reflects well the proportion of pixels whose gray value is below a particular value, e.g. the gray value bounding the top 1% of pixels. Thus the mean and variance of the gray values of the target frame can be determined, the target gray value determined from them, and the bit width corresponding to the target gray value taken as the first bit width. For example, if the mean of the gray values is denoted mean and the standard deviation σ, then under the Gaussian assumption the proportion of pixels greater than mean+3σ is less than 1%, so mean+3σ can be taken as the gray value of the darkest pixel among the top 1% and used as the target gray value, and the bit width corresponding to mean+3σ taken as the first bit width.
In some embodiments, to minimize calculation and increase processing speed when determining the first bit width, a grayscale histogram can first be determined from the gray values of the pixels of the target frame, and the first bit width then determined from the histogram. Since the histogram represents the proportion of pixels at different gray values, the proportion of pixels below a given gray value can be determined from it quickly.
In some embodiments, the grayscale histogram determined from the pixel gray values of the target frame can represent the proportion of pixels within different grayscale gradients, with the boundary value of each gradient being 2^k, where k is an integer. For example, the abscissae of the histogram may be 128, 256, 512, 1024, 2048, 4096, 8192, 16384 and 32768, with the ordinates representing the proportion of pixels with gray value below 128, between 128 and 256, between 256 and 512, and so on. When determining the first bit width from the gray values, the grayscale gradient to which a target pixel belongs can be determined from the per-gradient proportions in the histogram, where the ratio of the number of pixels of the target frame whose gray value exceeds that of the target pixel to the total number of pixels is less than a preset proportion, i.e. the target pixel may be the darkest pixel among the top 1% (or top 2%) of pixels by gray value. The first bit width can then be determined from the bit width corresponding to the upper boundary value of that gradient, for example taken directly as that bit width. Suppose the target pixel is the darkest pixel among the top 1% of pixels by gray value, pixels above 32768 account for 0.1%, pixels in 16384–32768 account for 0.4%, and pixels in 8192–16384 account for 0.6%; then the minimum gray value among the top 1% lies in the gradient 8192–16384, so the bit width corresponding to 16384 can be taken as the first bit width. Of course, a first bit width determined in this way is always an integer; in some embodiments it may also be fractional, e.g. if the proportion of pixels with gray value below 2^10.5 is exactly 1%, then 10.5 can be taken as the first bit width.
In some embodiments, if the bit width of the gray values of the grayscale histogram is itself the target bit width, then when determining the first bit width from the histogram, the gray value of the target pixel can first be determined from the histogram, where the ratio of the number of pixels of the target frame whose gray value exceeds that of the target pixel to the total number of pixels is less than the preset proportion, i.e. the target pixel may be the darkest pixel among the top 1% (or top 2%) of pixels by gray value. The bit width corresponding to the gray value of the target pixel is then determined, and the first bit width determined from that bit width and the target bit width. For example, with a target bit width of 8: if the bit width k corresponding to the minimum gray value among the top 1% of pixels satisfies k > 7, the first bit width is 8; if 6 < k < 7, the first bit width is 7; and so on.
In some embodiments, so that the matching point pairs determined from the image blocks are distributed as evenly as possible over the current frame and the reference frame, making the mapping relationship between their pixels determined from the matching point pairs more accurate, the current frame and the reference frame can each be divided evenly into a plurality of image blocks, which are then matched. The number and size of the blocks can be determined from the desired number of matching point pairs; for example, to obtain 300 matching point pairs, the current frame and the reference frame can each be divided into 300 image blocks.
Of course, some regions of an image are flat, and the matching accuracy of image blocks corresponding to flat regions is often poor. Therefore, in some embodiments, when determining the image blocks, a plurality of feature points can first be determined in the current frame and the reference frame, and a plurality of image blocks then determined centered on these feature points in each frame. In this way, none of the determined image blocks corresponds to a flat region, which improves matching accuracy; and since only the effective image blocks are matched, processing speed is also improved.
In some implementations, when determining the feature points in the current frame and the reference frame, a plurality of target image blocks can first be determined from the current frame or the reference frame, the difference between the gray values of these target image blocks and those of their adjacent image blocks being greater than a preset grayscale threshold; the pixel with the largest gray-value gradient is then determined in each target image block and these pixels are taken as the feature points. Taking the determination of target image blocks in the current frame as an example, as shown in Fig. 2, an image block, e.g. a 3×3 block, can be determined in the current frame and then shifted by one or more pixels in each direction (up, down, left, right) to obtain its adjacent blocks; the adjacent blocks are compared with the block, and if the difference is large, the block is not in a flat region and can be determined as a target image block.
Of course, due to over- or under-exposure during image acquisition, some image blocks of the current frame and the reference frame are too dark or too bright. If such blocks are used for matching, the matching accuracy will be poor and the computed relative motion of the two frames inaccurate. Therefore, in some embodiments, before matching the image blocks of the two frames, whether to discard an image block can be decided based on its brightness. In addition, the matching accuracy of blocks corresponding to flat regions is often poor, so before matching, whether to discard a block can also be decided based on its texture.
In some embodiments, if the brightness of an image block of the current frame or the reference frame is below a preset brightness threshold, the block is too dark and can be discarded; the preset brightness threshold can be determined from the actual situation or empirical values. In some embodiments, if the number of feature points of an image block of the current frame or the reference frame is less than a preset number, the block is a flat region and can likewise be discarded.
In some embodiments, when matching the image blocks of the current frame against those of the reference frame, the position change of the matched image block can be adjusted continuously based on the grayscale difference between the to-be-matched block and the matched block, the matched block being updated continuously from the position change; after multiple iterations, the block matching the to-be-matched block can be determined. For example, for each to-be-matched image block of the current frame, the difference between its gray values and those of its initial matching image block in the reference frame can be determined; the position change of the to-be-matched block relative to the initial matching block is determined from this difference; the initial matching block is then subjected to a series of transformations such as translation and affine transformation based on the position change, and replaced by the transformed block. The above steps are repeated until a preset condition is satisfied, at which point the iteration stops and the updated initial matching block is the block finally paired with the to-be-matched block.
In some embodiments, the condition for stopping the iteration may be that the position change is smaller than a preset threshold, which indicates that the to-be-matched block and the current initial matching block are already very similar, so the iteration can stop. In some embodiments, the condition may also be that the number of iterations reaches a preset number, i.e. the steps of determining the difference between the to-be-matched block and the initial matching block, determining the position change from that difference, transforming the initial matching block according to the position change and updating it have been repeated a preset number of times.
In some embodiments, the initial matching image block of a to-be-matched block may be the image block at the corresponding pixel positions in the reference frame. For example, if the to-be-matched block consists of the 9 pixels in the first 3 rows and first 3 columns of the current frame, its initial matching block consists of the 9 pixels in the first 3 rows and first 3 columns of the reference frame; the position change is then determined from the grayscale difference of the two blocks, the initial matching block is transformed accordingly and continuously updated. In some embodiments, to ensure that the initial matching block is close to the to-be-matched block and thus reduce the number of iterations, the initial matching block can also be determined from an estimated mapping relationship, i.e. an estimate of the mapping between the pixels of the current frame and those of the reference frame. In some embodiments, this estimated mapping can be determined from the motion state parameters of the image sensor; for example, when the image sensor is mounted on a drone or gimbal, its motion state parameters can be determined from the inertial measurement unit (IMU) on the drone or gimbal, the relative motion of the current frame with respect to the reference frame determined from those parameters, and the estimated mapping thus obtained. In some embodiments, the estimated mapping can also be determined from the mapping relationships of at least two image frames collected before the current frame; for example, if the reference frame is the frame preceding the current frame, the relative motion between the current frame and the reference frame can be estimated from the relative motion between the reference frame and its own preceding frame, and the estimated mapping thus determined. In some embodiments, a simple, low-computation algorithm can also be used first to determine a preliminary mapping between the current frame and the reference frame as the estimated mapping, in order to determine the initial matching block. By preliminarily estimating the relative motion of the two frames and then determining the initial matching block of each to-be-matched block in the reference frame, the number of subsequent iterations can be reduced and the processing speed greatly improved.
In some embodiments, since the position change is determined repeatedly from the grayscale difference of the two blocks during the iteration, and the initial matching block undergoes a series of translations, affine transformations and other transformations according to the position change, the transformed block may well exceed the boundary of the original block, i.e. the values of some pixels become 0; continuing to iterate with such a block severely reduces matching accuracy. Therefore, before the above iteration, the initial matching block can first be expanded, i.e. the adjacent pixels of the initial matching block in the reference frame are expanded into the initial matching block. For example, if the initial matching block is a 3×3 block, the surrounding 16 or 25 pixels can be expanded into it as the initial matching block before the subsequent transformations.
By iterating multiple times on the grayscale differences of the image blocks of the two frames to determine the paired blocks, relatively high matching accuracy can be obtained. In some embodiments, after the pairing is completed by the above method, the correlation coefficient of each pair of matched blocks, which represents their degree of similarity, can also be determined. Since two frames collected by an image sensor at different times have a certain brightness difference, in order to eliminate its influence on the matching result, brightness compensation with a predetermined brightness compensation parameter can first be applied to the initial matching block when determining the grayscale difference between the to-be-matched block and the initial matching block, thereby eliminating the influence of the brightness change between the two frames on the matching accuracy.
在一些实施例中,在对当前帧上的图像块和参考帧上的图像块进行匹配处理时,也可以基于图像块的灰度直方图对两帧图像上的图像块进行匹配处理。比如,针对当前帧中的每一个待匹配图像块,可以分别执行以下步骤以确定与其匹配的图像块:可以分别统计待匹配图像块和参考帧的各图像块在水平方向和竖直方向上的灰度直方图,然后遍历参考帧中的各图像块,确定各图像块的灰度直方图与待匹配图像块的灰度直方图的匹配误差,然后根据匹配误差从参考帧的各图像块中确定待匹配图像块的匹配图像块。通常而言,可取参考帧中灰度直方图与待匹配图像块灰度直方图匹配误差最小的图像块作为匹配图像块。通过灰度直方图对图像进行匹配,可以大大减小计算量,提高处理速度。相比于上述实施例中采用不断对图像块的灰度差异进行迭代以确定匹配图像块的方式,其处理速度会更快。
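基于水平、竖直方向一维灰度分布的匹配过程可以用如下示意代码说明(示例中将图像块按列、行求和得到两个方向上的一维分布并做零均值归一化;搜索范围、块大小均为假设值):

```python
import numpy as np

def projections(block):
    """图像块在水平、竖直方向上的一维灰度分布(按列/行求和), 并零均值归一化。"""
    h = block.sum(axis=0).astype(np.float64)   # 水平方向
    v = block.sum(axis=1).astype(np.float64)   # 竖直方向
    return h - h.mean(), v - v.mean()

def gmv_match(target, ref, top, left, search=4):
    """在参考帧(top, left)附近, 按一维分布的匹配误差搜索最优整数位移。"""
    size = target.shape[0]
    th, tv = projections(target)
    best = best_dy = best_dx = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > ref.shape[0] or x + size > ref.shape[1]:
                continue
            ch, cv = projections(ref[y:y + size, x:x + size])
            err = np.abs(th - ch).sum() + np.abs(tv - cv).sum()
            if best is None or err < best:
                best, best_dy, best_dx = err, dy, dx
    return best_dy, best_dx

rng = np.random.default_rng(7)
ref = rng.integers(0, 256, (64, 64))
target = ref[20:36, 22:38]             # 相对搜索起点(18, 20)的真实位移为(2, 2)
print(gmv_match(target, ref, 18, 20))  # (2, 2)
```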
由于参考帧上的图像块都是通过移动整数个像素得到的,因而,最小的匹配误差对应的匹配图像块相对于待匹配图像块的位移量都为整数像素,即匹配精度只能达到像素级别的精度。在一些实施例中,在根据匹配误差确定待匹配图像块的匹配图像块时,为了实现亚像素级别的匹配,使得匹配精度达到亚像素级别的精度,可以先对匹配误差进行插值处理,得到对应于亚像素精度的最小的匹配误差,然后再基于对应于亚像素精度的最小的匹配误差确定匹配图像块。其中,对匹配误差进行插值时,可以采用当前的最小匹配误差邻近的2个点进行插值,重新确定一个亚像素级别的最小的匹配误差,也可以采用当前的最小匹配误差邻近的4个点进行插值,以得到亚像素级别的最小匹配误差。
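对匹配误差做亚像素插值的一种常见做法是抛物线插值,可以用如下示意代码表达(利用最小误差点及其左右相邻两点拟合抛物线,属于示意性实现):

```python
def subpixel_offset(e_left, e_center, e_right):
    """用最小匹配误差及其左右相邻两点做抛物线插值, 得到亚像素位移修正量。

    返回值为相对整数最小误差位置的偏移, 取值范围约为(-0.5, 0.5)。
    """
    denom = e_left - 2.0 * e_center + e_right
    if denom == 0:
        return 0.0
    return 0.5 * (e_left - e_right) / denom

# 误差曲线 e(x) = (x - 0.3)^2 在 x = -1, 0, 1 处的取值, 插值应恢复出0.3
print(subpixel_offset(1.69, 0.09, 0.49))  # 约为0.3
```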
由于当前帧和参考帧的亮度存在一定的差异,为了消除两帧图像的亮度差异对运动估计带来的影响,在对两帧图像上的图像块的灰度直方图进行匹配处理之前,可以先对两帧图像上的图像块的灰度直方图进行零均值归一化处理,然后再进行匹配,从而可以消除亮度变化的影响。
通过图像块在水平方向和竖直方向上的灰度直方图的差异对图像块进行匹配,由于灰度直方图为一维的直方图,其处理速度会大大增加。
在对当前帧上的图像块和参考帧上的图像块进行匹配处理时,由于不同的匹配处理方式的匹配精度和处理速度均不一样,而不同的场景对处理速度和匹配精度的要求也不一样。所以,在一些实施例中,可以先确定当前帧和参考帧的应用场景,然后根据应用场景确定对当前帧上的图像块和参考帧上的图像块进行匹配处理的处理方式。举个例子,对于视频类的图像帧,由于其对实时性要求比较高,因而要求比较快的处理速度,并且其对匹配精度要求不高,因而可以采用根据图像块的灰度直方图对两帧图像上的图像块进行匹配处理的方式。对于照片类的图像帧,其对实时性要求比较低,即对处理速度要求比较低,但是其对匹配精度要求比较高,所以,针对照片类的图像帧,可以采用处理速度较慢但匹配精度高的匹配处理方式,比如,可以基于待匹配图像块和匹配图像块的灰度差异不断调整匹配图像块的位置变化量,经过多次迭代确定出与待匹配图像块匹配的图像块。
在一些实施例中,在确定当前帧的像素点和参考帧的像素点之间的映射关系后,还可以同时确定该映射关系的置信度,然后结合置信度对当前帧和参考帧进行后续图像处理。比如,在对图像进行去噪处理时,可以结合映射关系的置信度确定去噪强度,或者结合置信度确定滤波器的强度,以对前后帧进行融合。
在一些实施例中,在对当前帧的图像块和参考帧的图像块进行匹配时,可以输出用于表征匹配的一对图像块的相似性的相关系数,因而置信度可以基于相关系数确定。在一些实施例中,如果映射关系通过对匹配点对采用随机一致性算法确定,那么置信度可以基于随机一致性算法中合群点(inlier,也叫内点)对的数量、匹配点对的重投影误差以及随机一致性算法的合群点对的阈值确定。
当然,由于不同的应用场景,在确定两帧图像的相对运动时,其对图像块的匹配精度和匹配处理的速度有不同的需求,因而,可以根据应用场景对实时性的需求和对匹配精度的需求选择合适的匹配处理方式。
基于此,本申请实施例还提供另一种图像处理方法,如图3所示,所述方法包括:
S302、获取当前帧以及所述当前帧的参考帧;
S304、基于所述当前帧或所述参考帧的应用场景确定匹配处理方式;
S306、基于所述匹配处理方式对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,得到匹配的图像块;
S308、基于所述匹配的图像块确定匹配点对;
S310、根据所述匹配点对确定所述当前帧上的像素点与所述参考帧上的像素点的映射关系。
在一些实施例中,所述匹配处理方式包括:
针对所述当前帧中的每一个待匹配图像块,重复执行以下步骤直至满足预设条件:
确定所述待匹配图像块的灰度值与所述待匹配图像块在所述参考帧中的初始匹配图像块的灰度值的差异;
根据所述差异确定所述待匹配图像块相对于所述初始匹配图像块的位置变化量;
基于所述位置变化量对所述初始匹配图像块进行变换处理,采用变换处理得到的图像块替换所述初始匹配图像块。
在一些实施例中,所述匹配处理方式包括:
针对所述当前帧中的每一个待匹配图像块,分别执行以下步骤:
分别统计所述待匹配图像块和所述参考帧的各图像块在水平方向和竖直方向上的灰度直方图;
遍历所述参考帧中的各图像块,确定所述各图像块的所述灰度直方图与所述待匹配图像块的所述灰度直方图的匹配误差;
基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块。
其中,具体的处理细节可参考上述实施例中的描述,在此不再赘述。
为了进一步解释本申请实施例的图像处理方法,以下结合一个具体的实施例加以解释。
在对图像进行去噪、融合等处理时,通常需要对两帧图像进行运动估计,以确定两帧图像的相对运动。相关技术中在确定两帧图像的相对运动时,要么计算量大、处理速度慢,要么精度较低,要么无法消除两帧图像亮度变化的影响。本申请实施例提供了一种高精度、计算量小、抗光照变化能力强的方法,用于确定两帧图像的相对运动。整个方法的处理流程如图4所示,整个图像处理模块包括预处理模块和运动估计模块:预处理模块用于对输入的当前帧和参考帧进行一系列的预处理;运动估计模块用于确定当前帧和参考帧的像素点相对运动。具体处理流程如下:
1、转化为灰度图
如果当前帧和参考帧为彩色图像,则将其转化为灰度图像。
2、下采样处理
可以采用双线性插值方式对当前帧和参考帧进行下采样,将图像缩小到长边不超过1000像素,同时减小噪声。
3、将图像灰度值转化为8bit
在将图像转化为指定位宽的图像时,可以基于图像真实位宽对图像的灰度值进行移位处理,将图像的灰度值转化为8bit,当然也可以转化成其他位宽的图像,具体可根据实际情况设置。
具体的实现原理:找到将图像中像素点灰度值按从大到小排列后,排在前1%的像素点中的最小像素值对应的位宽来代表图像的真实位宽。之所以不用单一最大值的位宽,是为了避免极少数很亮的噪声的影响。可以采用以下两种方法确定图像的真实位宽。
方法一:可以根据图像中像素点灰度值的均值和标准差,将图像的灰度值转化为8bit。具体的,可以确定图像中像素点的灰度值的均值mean,以及灰度值的标准差σ,按照高斯分布的假设,大于mean+3σ的像素比例小于1%,因此可以认为mean+3σ对应的位宽代表了图像的真实位宽。可以确定图像像素点灰度值分布中mean+3σ对应的比特位宽k,具体如公式(1):
k=⌈log2(mean+3σ)⌉    (1)
如果k>8,则将图像中像素点的灰度值右移k-8比特。比如k为10,则将图像中像素点的灰度值右移2比特,即将图像中像素点的灰度值统一除以4。
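方法一的移位过程可以用如下示意代码表达(其中对位宽取整后做右移,示例数值为构造的假设值):

```python
import math
import numpy as np

def to_8bit_by_stats(img):
    """按灰度均值和标准差估计真实位宽k, 并将图像灰度右移到8bit。"""
    mean = float(img.mean())
    sigma = float(img.std())
    k = math.ceil(math.log2(mean + 3 * sigma))  # 公式(1), 取整到比特位宽
    if k > 8:
        img = img >> (k - 8)                    # 右移k-8比特, 相当于除以2^(k-8)
    return img, k

img = np.full((4, 4), 600, dtype=np.uint16)     # mean=600, sigma=0, k=10
out, k = to_8bit_by_stats(img)
print(k, out.max())  # 10 150
```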
方法二:可以根据图像的直方图,将图像的灰度值转化为8bit。具体步骤如下:
首先,可以按2^k为阈值统计直方图,比如灰度直方图的阈值可以分别是:int thresh[8]={256,512,1024,2048,4096,8192,16384,32768};
可以取出灰度值排在前1%的像素中最小灰度值对应的k,即为图像的真实位宽。比如,灰度值排在前1%的像素中最小灰度值为2000,则其对应的k为11,因而图像的真实位宽为11。
在一些场景中,如果可以获取图像AE(Automatic Exposure,自动曝光)时统计的灰度直方图(假设为8比特直方图),则直接用AE的灰度直方图。仍然是取灰度值排在前1%的像素点中的最小灰度值对应的位宽k,如果k>7,则图像真实位宽=sensor位宽,否则如果k>6,则图像真实位宽=sensor位宽-1,以此类推。
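上述“取灰度值排在前1%的像素中的最小灰度值对应的位宽”的思路,可以用如下示意代码表达(为保持简短,示例中直接对灰度值排序,而非统计直方图):

```python
import numpy as np

def true_bit_width(img, top_ratio=0.01):
    """取灰度值排在前top_ratio的像素中的最小灰度值, 其位宽即为图像真实位宽。"""
    flat = np.sort(img.ravel())[::-1]   # 灰度值从大到小排列
    n = max(1, int(len(flat) * top_ratio))
    v = int(flat[n - 1])                # 前1%像素中的最小灰度值
    return v.bit_length()               # 例如2000对应的位宽为11

img = np.full((10, 10), 100, dtype=np.uint16)
img[0, 0] = 2000                        # 前1%(此处为1个像素)的最小值为2000
print(true_bit_width(img))  # 11
```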
4、将当前帧和参考帧划分为多个图像块
可以采用以下两种方式将图像划分成多个图像块:
方法一:直接将图像平均划分成多个图像块,图像块的数量可以根据想要得到的匹配点对的数量确定,比如,需要得到300对匹配点对,则将图像划分成300个图像块。
方法二:可以先从图像中提取多个特征点,然后以特征点为中心确定多个图像块。
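上述两种划分方式可以用如下示意代码表达(块数、块大小均为假设值,特征点检测本身不在示例范围内):

```python
import numpy as np

def split_evenly(img, rows, cols):
    """方法一: 将图像平均划分成rows*cols个图像块。"""
    h, w = img.shape
    bh, bw = h // rows, w // cols
    return [img[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw]
            for r in range(rows) for c in range(cols)]

def blocks_around_points(img, points, size=8):
    """方法二: 以特征点为中心确定图像块(越出边界的特征点被跳过)。"""
    half = size // 2
    out = []
    for y, x in points:
        if half <= y < img.shape[0] - half and half <= x < img.shape[1] - half:
            out.append(img[y - half:y + half, x - half:x + half])
    return out

img = np.zeros((60, 80), dtype=np.uint8)
print(len(split_evenly(img, 5, 6)))                                # 30
print(len(blocks_around_points(img, [(10, 10), (2, 2)], size=8)))  # 1
```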
5、对图像块进行匹配处理,得到匹配的图像块。
在对图像块进行匹配处理时,可以采用两种自行设计的算法,以下分别称为ECC(Enhanced Correlation Coefficient,增强相关系数)算法和GMV算法,两种算法的实现原理如下:
ECC算法具体流程如下:
(1)针对每个图像块,如果其灰度值的均值或方差小于预设值,或者其特征点数量小于预设数量(纹理太弱),则将其舍弃。
(2)确定当前帧中待匹配图像块在参考帧中的初始匹配图像块,其中,初始匹配图像块,可以是参考帧中待匹配图像块对应像素位置的图像块。初始匹配图像块也可以根据初步估计的当前帧和参考帧的相对运动确定,其中,当前帧和参考帧的相对运动可以基于IMU采集的图像传感器的运动状态参数确定,也可以通过一些计算量小的全局运动估计算法确定,或者也可以根据当前帧之前采集的至少两帧图像的相对运动确定。
(3)确定对初始匹配图像块进行亮度补偿的亮度补偿参数,以根据亮度补偿参数对初始匹配图像块的灰度值进行补偿,从而消除两帧图像的亮度变化对匹配精度的影响。
(4)对初始匹配图像块进行扩边处理,将参考帧中初始匹配图像块四周的一些像素点扩充到初始匹配图像块中,以避免对初始匹配图像块进行变换处理后,像素点像素值变成0,影响匹配精度。
(5)确定待匹配图像块和补偿后的初始匹配图像块之间的灰度差异,根据灰度差异确定对初始匹配图像块进行变换处理的变换矩阵。
(6)根据变换矩阵对初始匹配图像块进行一系列的平移、仿射等变换处理,得到变换处理后的图像块。
(7)利用变换处理后的图像块更新初始匹配图像块,然后重复执行上述步骤,直至迭代次数达到预设次数,或者两个图像块的灰度差异小于预设阈值,从而确定待匹配图像块的匹配图像块。
(8)针对每一对匹配得到的图像块,可以根据其相似程度确定两者的相关系数。
GMV算法的具体流程如下:
(1)针对每个图像块,如果其灰度值的均值或方差小于预设值,或者其特征点数量小于预设数量(纹理太弱),则将其舍弃。
(2)针对当前帧上的待匹配图像块,可以确定待匹配图像块在水平方向上和竖直方向上的灰度直方图,然后确定参考帧中的各图像块在水平方向上和竖直方向上的灰度直方图。
(3)为了消除两帧图像的亮度变化(即光照)对匹配精度的影响,可以对各图像块的灰度直方图进行零均值归一化处理。
(4)统计待匹配图像块在水平方向上和竖直方向上的灰度直方图与参考帧中的各图像块在水平方向上和竖直方向上的灰度直方图的匹配误差,得到误差曲线,其中,误差曲线的横坐标可以表示参考图像上的各图像块与待匹配图像块的运动向量(比如,平移1个像素、平移2个像素),纵坐标表示匹配误差的大小。
(5)由于上述方法得到的匹配误差是在整像素级别下的匹配误差,其精度为整像素上的精度。为了进一步提高匹配精度,可以对最小匹配误差进行插值处理,比如,可以根据匹配误差曲线上的最小误差与其邻近的四个其他误差进行插值处理,得到亚像素点级别上的最小误差,从而确定待匹配图像块的匹配图像块。
ECC算法在进行图像块的匹配时,由于其需要经过多次迭代以获得匹配结果,因而其处理速度相比于GMV要慢一些,但是其处理精度要高一些。GMV算法由于只需对一维的灰度直方图进行匹配,因而其处理速度会快一些,处理精度相比ECC会差一些。
实际应用中,可以根据图像的应用场景选择是采用ECC算法对图像块进行匹配,还是GMV算法对图像块进行匹配。比如,对于视频类的图像帧,由于其对实时性要求比较高,因而要求比较快的处理速度,并且其对匹配精度要求不高,因而可以采用GMV算法进行匹配。对于照片类的图像帧,其对实时性要求比较低,即对处理速度要求比较低,但是其对匹配精度要求比较高,所以,针对照片类的图像帧,可以采用ECC算法进行匹配。
6、根据匹配的图像块确定匹配点对。
可以取每对匹配图像块的中心像素点,作为一对匹配点对。
7、根据匹配点对确定单应性矩阵H。
可以采用随机一致性算法确定单应性矩阵H,同时还可以输出合群点对的数量和匹配点对的数量。
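由匹配点对求解单应性矩阵H的直接线性变换(DLT)可以用如下示意代码表达(为保持简短,示例中省略了随机一致性算法的采样与外点剔除,直接用全部点对做最小二乘求解):

```python
import numpy as np

def homography_dlt(src, dst):
    """由匹配点对用直接线性变换(DLT)求单应性矩阵H的示意实现。

    构造系数矩阵A, 取其SVD最小奇异值对应的右奇异向量作为H的9个元素;
    实际流程中可在此基础上结合随机一致性算法剔除外点。
    """
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]                      # 归一化使H[2,2]=1

src = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
dst = [(x + 2, y + 1) for x, y in src]      # 构造纯平移的匹配点对
H = homography_dlt(src, dst)
print(np.round(H, 3))                       # 接近 [[1,0,2],[0,1,1],[0,0,1]]
```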
8、置信度的计算
在确定当前帧的像素点和参考帧的像素点之间的映射关系H后,还可以同时确定该映射关系的置信度,然后结合置信度对当前帧和参考帧进行后续图像处理。比如,在对图像进行去噪处理时,可以结合映射关系的置信度确定去噪强度,或者结合置信度确定滤波器的强度,以对前后帧进行融合。
针对ECC算法,由于每对图像块都会输出一个相关系数,用于表征两个图像块的相似性,其值在[0,1]区间,因而可以将所有的图像块的相关系数取均值,作为置信度。
针对GMV算法,可以采用公式(2)来确定整个匹配的置信度ρ。
ρ的具体表达式见公式(2)(原文中以图片形式给出,此处无法复现)。
其中N_inlier表示内点数,E表示点对的平均重投影误差,T是RANSAC的内点阈值,一般取1。
内点数N_inlier至少要30个,当N_inlier>90时可以认为内点数已经足够多,匹配的质量已经足够好了;平均重投影误差小于0.2是非常好的匹配,误差是可以忽略的。当大于0.2时,误差越大则匹配的质量越差。
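由于公式(2)在原文中以图片形式给出,以下仅按文中给出的经验阈值(内点数30~90、平均重投影误差0.2、内点阈值T)构造一个假设性的置信度组合方式,具体形式以原文公式为准:

```python
def gmv_confidence(n_inlier, reproj_err, t=1.0):
    """按内点数与平均重投影误差组合置信度的假设性实现(非原文公式)。

    n_inlier: 随机一致性算法的合群点(内点)对数量
    reproj_err: 匹配点对的平均重投影误差
    t: RANSAC内点阈值, 一般取1
    """
    if n_inlier < 30:                            # 内点数不足30, 认为匹配不可信
        return 0.0
    s_inlier = min((n_inlier - 30) / 60.0, 1.0)  # 90个以上视为足够多
    if reproj_err <= 0.2:                        # 误差小于0.2视为非常好的匹配
        s_err = 1.0
    else:
        s_err = max(0.0, 1.0 - (reproj_err - 0.2) / t)
    return s_inlier * s_err

print(gmv_confidence(120, 0.1))  # 1.0
print(gmv_confidence(20, 0.1))   # 0.0
```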
相应的,本申请实施例还提供一种图像处理装置50,如图5所示,所述装置50包括处理器51、存储器52、以及存储于所述存储器52中并可供所述处理器51执行的计算机程序,所述处理器51执行所述计算机程序时实现以下步骤:
获取当前帧以及所述当前帧的参考帧;
对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,得到匹配的图像块;
基于所述匹配的图像块确定匹配点对;
根据所述匹配点对确定所述当前帧上的像素点与所述参考帧上的像素点的映射关系。
在一些实施例中,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配之前,还用于:
分别对所述当前帧和所述参考帧进行下采样处理。
在一些实施例中,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配之前,还用于:
分别将所述当前帧和所述参考帧转化为灰度值的位宽为目标位宽的灰度图,所述目标位宽小于或等于采集所述当前帧和所述参考帧的图像传感器的位宽。
在一些实施例中,所述处理器用于分别将所述当前帧和所述参考帧转化为灰度值的位宽为目标位宽的灰度图时,具体用于:
基于所述目标帧中像素点的灰度值确定第一位宽,所述目标帧为所述当前帧或所述参考帧;
根据所述第一位宽与所述目标位宽的差值分别对所述当前帧的灰度值和所述参考帧的灰度值进行右移位处理,得到所述灰度图。
在一些实施例中,所述处理器用于基于所述目标帧中像素点的灰度值确定第一位宽时,具体用于:
从所述目标帧中的像素点选出部分像素点,所述部分像素点的灰度值大于所述目标帧中其余像素点的灰度值;
基于所述部分像素点的灰度值确定目标灰度值,并根据所述目标灰度值确定所述第一位宽。
在一些实施例中,所述目标灰度值为所述部分像素点的灰度值的均值;或
所述目标灰度值为所述部分像素点的灰度值中的最小值。
在一些实施例中,所述部分像素点的数量与所述目标帧的像素点总数量的占比不超过1%。
在一些实施例中,所述目标灰度值为所述部分像素点的灰度值中的最小值,所述处理器用于基于所述部分像素点的灰度值确定目标灰度值,并根据所述目标灰度值确定所述第一位宽时,具体用于:
确定所述目标帧的各像素点的灰度值的均值和方差;
根据所述均值和所述方差确定所述目标灰度值,并以所述目标灰度值对应的位宽作为所述第一位宽。
在一些实施例中,所述处理器用于基于所述目标帧中像素点的灰度值确定第一位宽时,具体用于:
基于所述目标帧中像素点的灰度值确定灰度直方图;
根据所述灰度直方图确定所述第一位宽。
在一些实施例中,所述灰度直方图用于表征不同灰度梯度内的像素点的占比,每个所述灰度梯度的边界值为2^k,其中k为整数;所述处理器用于根据所述灰度直方图确定所述第一位宽时,具体用于:
根据所述占比确定目标像素点所属的灰度梯度,其中,所述目标帧中灰度值大于所述目标像素点的灰度值的像素点数量与所述目标帧中像素点总数量的占比小于预设占比;
根据所述目标像素点所属的灰度梯度的上限边界值对应的位宽确定所述第一位宽。
在一些实施例中,所述灰度直方图的灰度值的位宽为所述目标位宽,所述处理器用于根据所述灰度直方图确定所述第一位宽时,具体用于:
根据所述灰度直方图确定目标像素点的灰度值对应的位宽,其中,所述目标帧中灰度值大于所述目标像素点的灰度值的像素点数量与所述目标帧中像素点总数量的占比小于预设占比;
根据所述目标像素点的灰度值对应的位宽和所述目标位宽确定所述第一位宽。
在一些实施例中,所述当前帧上的图像块和所述参考帧上的图像块基于以下方式确定:
分别将所述当前帧和所述参考帧平均分割成多个图像块;或
分别在所述当前帧和所述参考帧上确定多个特征点,以所述多个特征点为中心确定多个图像块。
在一些实施例中,所述多个特征点基于以下方式确定:
从所述当前帧的图像块或所述参考帧的图像块中确定多个目标图像块,所述多个目标图像块的灰度值与所述多个目标图像块的邻近图像块的灰度值差异大于预设灰度阈值;
分别从所述多个目标图像块中确定灰度值梯度最大的像素点,作为所述多个特征点。
在一些实施例中,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理之前,还用于:
基于所述当前帧上的图像块的亮度或纹理确定是否舍弃所述当前帧上的图像块;以及基于所述参考帧上的图像块的亮度或纹理确定是否舍弃所述参考帧上的图像块。
在一些实施例中,所述处理器用于基于所述当前帧上的图像块的亮度或纹理确定是否舍弃所述当前帧上的图像块时,具体用于:
若所述当前帧上的图像块的亮度的平均值小于预设亮度阈值,或所述当前帧上的图像块的亮度的方差小于预设方差,则舍弃所述当前帧上的图像块;或
若所述当前帧上的图像块中的特征点的数量小于预设数量,则舍弃所述图像块。
在一些实施例中,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理时,具体用于:
针对所述当前帧中的每一个待匹配图像块,重复执行以下步骤直至满足预设条件:
确定所述待匹配图像块的灰度值与所述待匹配图像块在所述参考帧中的初始匹配图像块的灰度值的差异;
根据所述差异确定所述待匹配图像块相对于所述初始匹配图像块的位置变化量;
基于所述位置变化量对所述初始匹配图像块进行变换处理,采用变换处理得到的图像块替换所述初始匹配图像块。
在一些实施例中,所述预设条件包括:
所述位置变化量小于预设阈值;或
重复执行确定所述待匹配图像块与所述待匹配图像块在所述参考帧中的初始匹配图像块在对应像素位置上的灰度值的差值的步骤的次数达到预设次数。
在一些实施例中,所述处理器用于基于所述位置变化量对所述初始匹配图像块进行变换处理之前,还用于:
将所述初始匹配图像块的邻近像素点扩充到所述初始匹配图像块中。
在一些实施例中,所述初始匹配图像块基于预估的映射关系确定。
在一些实施例中,所述预估的映射关系基于所述图像传感器的运动状态参数确定;或
所述预估的映射关系基于在所述当前帧之前采集的至少两帧图像帧之间的位置变化关系确定。
在一些实施例中,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,包括:
针对所述当前帧中的每一个待匹配图像块,分别执行以下步骤:
分别统计所述待匹配图像块和所述参考帧的各图像块在水平方向和竖直方向上的灰度直方图;
遍历所述参考帧中的各图像块,确定所述各图像块的所述灰度直方图与所述待匹配图像块的所述灰度直方图的匹配误差;
基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块。
在一些实施例中,所述处理器用于基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块之前,还用于:
对所述匹配误差进行插值处理,以使所述待匹配图像块和所述匹配图像块的匹配精度为亚像素级别的精度。
在一些实施例中,所述处理器还用于:
对所述灰度直方图进行零均值归一化处理。
在一些实施例中,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理时,具体用于:
确定所述当前帧和所述参考帧的应用场景;
基于所述应用场景的实时性需求确定对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理的处理方式,并基于确定的处理方式对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理。
在一些实施例中,所述映射关系通过单应性矩阵或运动向量表征。
在一些实施例中,所述处理器还用于:
确定所述映射关系的置信度;
基于所述置信度对所述当前帧或所述参考帧进行图像处理。
在一些实施例中,所述置信度基于相关系数确定,所述相关系数用于表征所述匹配的图像块的相似性;或
所述映射关系通过对所述匹配点对采用随机一致性算法确定,所述置信度基于合群点对的数量、所述匹配点对的重投影误差以及所述随机一致性算法的合群点对阈值确定。
相应的,本申请实施例还提供另一种图像处理装置,如图6所示,所述装置60包括处理器61、存储器62、以及存储于所述存储器62中并可供所述处理器61执行的计算机程序,所述处理器61执行所述计算机程序时实现以下步骤:
获取当前帧以及所述当前帧的参考帧;
基于所述当前帧或所述参考帧的应用场景确定匹配处理方式;
基于所述匹配处理方式对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,得到匹配的图像块;
基于所述匹配的图像块确定匹配点对;
根据所述匹配点对确定所述当前帧上的像素点与所述参考帧上的像素点的映射关系。
在一些实施例中,所述匹配处理方式包括:
针对所述当前帧中的每一个待匹配图像块,重复执行以下步骤直至满足预设条件:
确定所述待匹配图像块的灰度值与所述待匹配图像块在所述参考帧中的初始匹配图像块的灰度值的差异;
根据所述差异确定所述待匹配图像块相对于所述初始匹配图像块的位置变化量;
基于所述位置变化量对所述初始匹配图像块进行变换处理,采用变换处理得到的图像块替换所述初始匹配图像块。
在一些实施例中,所述匹配处理方式包括:
针对所述当前帧中的每一个待匹配图像块,分别执行以下步骤:
分别统计所述待匹配图像块和所述参考帧的各图像块在水平方向和竖直方向上的灰度直方图;
遍历所述参考帧中的各图像块,确定所述各图像块的所述灰度直方图与所述待匹配图像块的所述灰度直方图的匹配误差;
基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块。
相应地,本说明书实施例还提供一种计算机存储介质,所述存储介质中存储有程序,所述程序被处理器执行时实现上述任一实施例中图像处理方法。
本说明书实施例可采用在一个或多个其中包含有程序代码的存储介质(包括但不限于磁盘存储器、CD-ROM、光学存储器等)上实施的计算机程序产品的形式。计算机可用存储介质包括永久性和非永久性、可移动和非可移动媒体,可以由任何方法或技术来实现信息存储。信息可以是计算机可读指令、数据结构、程序的模块或其他数据。计算机的存储介质的例子包括但不限于:相变内存(PRAM)、静态随机存取存储器(SRAM)、动态随机存取存储器(DRAM)、其他类型的随机存取存储器(RAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、快闪记忆体或其他内存技术、只读光盘只读存储器(CD-ROM)、数字多功能光盘(DVD)或其他光学存储、磁盒式磁带,磁带磁磁盘存储或其他磁性存储设备或任何其他非传输介质,可用于存储可以被计算设备访问的信息。
对于装置实施例而言,由于其基本对应于方法实施例,所以相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
需要说明的是,在本文中,诸如第一和第二等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有 明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上对本发明实施例所提供的方法和装置进行了详细介绍,本文中应用了具体个例对本发明的原理及实施方式进行了阐述,以上实施例的说明只是用于帮助理解本发明的方法及其核心思想;同时,对于本领域的一般技术人员,依据本发明的思想,在具体实施方式及应用范围上均会有改变之处,综上所述,本说明书内容不应理解为对本发明的限制。

Claims (60)

  1. 一种图像处理方法,其特征在于,所述方法包括:
    获取当前帧以及所述当前帧的参考帧;
    对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,得到匹配的图像块;
    基于所述匹配的图像块确定匹配点对;
    根据所述匹配点对确定所述当前帧上的像素点与所述参考帧上的像素点的映射关系。
  2. 根据权利要求1所述的方法,其特征在于,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配之前,还包括:
    分别对所述当前帧和所述参考帧进行下采样处理。
  3. 根据权利要求1或2所述的方法,其特征在于,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配之前,还包括:
    分别将所述当前帧和所述参考帧转化为灰度值的位宽为目标位宽的灰度图,所述目标位宽小于或等于采集所述当前帧和所述参考帧的图像传感器的位宽。
  4. 根据权利要求3所述的方法,其特征在于,分别将所述当前帧和所述参考帧转化为灰度值的位宽为目标位宽的灰度图,包括:
    基于所述目标帧中像素点的灰度值确定第一位宽,所述目标帧为所述当前帧或所述参考帧;
    根据所述第一位宽与所述目标位宽的差值分别对所述当前帧的灰度值和所述参考帧的灰度值进行右移位处理,得到所述灰度图。
  5. 根据权利要求4所述的方法,其特征在于,基于所述目标帧中像素点的灰度值确定第一位宽,包括:
    从所述目标帧中的像素点选出部分像素点,所述部分像素点的灰度值大于所述目标帧中其余像素点的灰度值;
    基于所述部分像素点的灰度值确定目标灰度值,并根据所述目标灰度值确定所述第一位宽。
  6. 根据权利要求5所述的方法,其特征在于,所述目标灰度值为所述部分像素点的灰度值的均值;或
    所述目标灰度值为所述部分像素点的灰度值中的最小值。
  7. 根据权利要求5或6所述的方法,其特征在于,所述部分像素点的数量与所述目标帧的像素点总数量的占比不超过1%。
  8. 根据权利要求5或7所述的方法,其特征在于,所述目标灰度值为所述部分像素点的灰度值中的最小值,基于所述部分像素点的灰度值确定目标灰度值,并根据所述目标灰度值确定所述第一位宽,包括:
    确定所述目标帧的各像素点的灰度值的均值和方差;
    根据所述均值和所述方差确定所述目标灰度值,并以所述目标灰度值对应的位宽作为所述第一位宽。
  9. 根据权利要求4所述的方法,其特征在于,基于所述目标帧中像素点的灰度值确定第一位宽,包括:
    基于所述目标帧中像素点的灰度值确定灰度直方图;
    根据所述灰度直方图确定所述第一位宽。
  10. 根据权利要求9所述的方法,其特征在于,所述灰度直方图用于表征不同灰度梯度内的像素点的占比,每个所述灰度梯度的边界值为2^k,其中k为整数;根据所述灰度直方图确定所述第一位宽,包括:
    根据所述占比确定目标像素点所属的灰度梯度,其中,所述目标帧中灰度值大于所述目标像素点的灰度值的像素点数量与所述目标帧中像素点总数量的占比小于预设占比;
    将所述目标像素点所属的灰度梯度的上限边界值对应的位宽作为所述第一位宽。
  11. 根据权利要求9所述的方法,其特征在于,所述灰度直方图的灰度值的位宽为所述目标位宽,根据所述灰度直方图确定所述第一位宽,包括:
    根据所述灰度直方图确定目标像素点的灰度值对应的位宽,其中,所述目标帧中灰度值大于所述目标像素点的灰度值的像素点数量与所述目标帧中像素点总数量的占比小于预设占比;
    根据所述目标像素点的灰度值对应的位宽和所述目标位宽确定所述第一位宽。
  12. 根据权利要求1-11任一项所述的方法,其特征在于,所述当前帧上的图像块和所述参考帧上的图像块基于以下方式:
    分别将所述当前帧和所述参考帧平均分割成多个图像块;或
    分别在所述当前帧和所述参考帧上确定多个特征点,以所述多个特征点为中心确定多个图像块。
  13. 根据权利要求12所述的方法,其特征在于,所述多个特征点基于以下方式确定:
    从所述当前帧或所述参考帧确定多个目标图像块,所述多个目标图像块的灰度值与所述多个目标图像块的邻近图像块的灰度值差异大于预设灰度阈值;
    分别从所述多个目标图像块中确定灰度值梯度最大的像素点,作为所述多个特征点。
  14. 根据权利要求1-13任一项所述的方法,其特征在于,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理之前,还包括:
    基于所述当前帧上的图像块的亮度或纹理确定是否舍弃所述当前帧上的图像块;以及基于所述参考帧上的图像块的亮度或纹理确定是否舍弃所述参考帧上的图像块。
  15. 根据权利要求14所述的方法,其特征在于,基于所述当前帧上的图像块的亮度或纹理确定是否舍弃所述当前帧上的图像块,包括:
    若所述当前帧上的图像块的亮度的平均值小于预设亮度阈值,或所述当前帧上的图像块的亮度的方差小于预设方差,则舍弃所述当前帧上的图像块;或
    若所述当前帧上的图像块中的特征点的数量小于预设数量,则舍弃所述图像块。
  16. 根据权利要求1-15任一项所述的方法,其特征在于,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,包括:
    针对所述当前帧中的每一个待匹配图像块,重复执行以下步骤直至满足预设条件:
    确定所述待匹配图像块的灰度值与所述待匹配图像块在所述参考帧中的初始匹配图像块的灰度值的差异;
    根据所述差异确定所述待匹配图像块相对于所述初始匹配图像块的位置变化量;
    基于所述位置变化量对所述初始匹配图像块进行变换处理,采用变换处理得到的图像块替换所述初始匹配图像块。
  17. 根据权利要求16所述的方法,其特征在于,所述预设条件包括:
    所述位置变化量小于预设阈值;或
    重复执行确定所述待匹配图像块与所述待匹配图像块在所述参考帧中的初始匹配图像块在对应像素位置上的灰度值的差值的步骤的次数达到预设次数。
  18. 根据权利要求16或17所述的方法,其特征在于,基于所述位置变化量对所述初始匹配图像块进行变换处理之前,还包括:
    将所述初始匹配图像块的邻近像素点扩充到所述初始匹配图像块中。
  19. 根据权利要求16-18任一项所述的方法,其特征在于,所述初始匹配图像块基于预估的映射关系确定。
  20. 根据权利要求19所述的方法,其特征在于,所述预估的映射关系基于所述图像传感器的运动状态参数确定;或
    所述预估的映射关系基于在所述当前帧之前采集的至少两帧图像帧之间的位置变化关系确定。
  21. 根据权利要求1-15任一项所述的方法,其特征在于,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,包括:
    针对所述当前帧中的每一个待匹配图像块,分别执行以下步骤:
    分别统计所述待匹配图像块和所述参考帧的各图像块在水平方向和竖直方向上的灰度直方图;
    遍历所述参考帧中的各图像块,确定所述各图像块的所述灰度直方图与所述待匹配图像块的所述灰度直方图的匹配误差;
    基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块。
  22. 根据权利要求21所述的方法,其特征在于,基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块之前,还包括:
    对所述匹配误差进行插值处理,以使所述待匹配图像块和所述匹配图像块的匹配精度为亚像素级别的精度。
  23. 根据权利要求21或22所述的方法,其特征在于,所述方法还包括:
    对所述灰度直方图进行零均值归一化处理。
  24. 根据权利要求1-23任一项所述的方法,其特征在于,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,包括:
    确定所述当前帧和所述参考帧的应用场景;
    基于所述应用场景确定对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理的处理方式,并基于确定的处理方式对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理。
  25. 根据权利要求1-24任一项所述的方法,其特征在于,所述映射关系通过单应性矩阵或运动向量表征。
  26. 根据权利要求1-25任一项所述的方法,其特征在于,所述方法还包括:
    确定所述映射关系的置信度;
    基于所述置信度对所述当前帧或所述参考帧进行图像处理。
  27. 根据权利要求26所述的方法,其特征在于,所述置信度基于相关系数确定,所述相关系数用于表征所述匹配的图像块的相似性;或
    所述映射关系通过对所述匹配点对采用随机一致性算法确定,所述置信度基于合群点对的数量、所述匹配点对的重投影误差以及所述随机一致性算法的合群点对阈值确定。
  28. 一种图像处理方法,其特征在于,所述方法包括:
    获取当前帧以及所述当前帧的参考帧;
    基于所述当前帧或所述参考帧的应用场景确定匹配处理方式;
    基于所述匹配处理方式对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,得到匹配的图像块;
    基于所述匹配的图像块确定匹配点对;
    根据所述匹配点对确定所述当前帧上的像素点与所述参考帧上的像素点的映射关系。
  29. 根据权利要求28所述的方法,其特征在于,所述匹配处理方式包括:
    针对所述当前帧中的每一个待匹配图像块,重复执行以下步骤直至满足预设条件:
    确定所述待匹配图像块的灰度值与所述待匹配图像块在所述参考帧中的初始匹配图像块的灰度值的差异;
    根据所述差异确定所述待匹配图像块相对于所述初始匹配图像块的位置变化量;
    基于所述位置变化量对所述初始匹配图像块进行变换处理,采用变换处理后的图像块替换所述初始匹配图像块。
  30. 根据权利要求28或29所述的方法,其特征在于,所述匹配处理方式包括:
    针对所述当前帧中的每一个待匹配图像块,分别执行以下步骤:
    分别统计所述待匹配图像块和所述参考帧的各图像块在水平方向和竖直方向上的灰度直方图;
    遍历所述参考帧中的各图像块,确定所述各图像块的所述灰度直方图与所述待匹配图像块的所述灰度直方图的匹配误差;
    基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块。
  31. 一种图像处理装置,其特征在于,所述装置包括处理器、存储器、存储于所述存储器可供所述处理器执行的计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
    获取当前帧以及所述当前帧的参考帧;
    对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,得到匹配的图像块;
    基于所述匹配的图像块确定匹配点对;
    根据所述匹配点对确定所述当前帧上的像素点与所述参考帧上的像素点的映射关系。
  32. 根据权利要求31所述的装置,其特征在于,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配之前,还用于:
    分别对所述当前帧和所述参考帧进行下采样处理。
  33. 根据权利要求31或32所述的装置,其特征在于,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配之前,还用于:
    分别将所述当前帧和所述参考帧转化为灰度值的位宽为目标位宽的灰度图,所述目标位宽小于或等于采集所述当前帧和所述参考帧的图像传感器的位宽。
  34. 根据权利要求33所述的装置,其特征在于,所述处理器用于分别将所述当前帧和所述参考帧转化为灰度值的位宽为目标位宽的灰度图时,具体用于:
    基于所述目标帧中像素点的灰度值确定第一位宽,所述目标帧为所述当前帧或所述参考帧;
    根据所述第一位宽与所述目标位宽的差值分别对所述当前帧的灰度值和所述参考帧的灰度值进行右移位处理,得到所述灰度图。
  35. 根据权利要求34所述的装置,其特征在于,所述处理器用于基于所述目标帧中像素点的灰度值确定第一位宽时,具体用于:
    从所述目标帧中的像素点选出部分像素点,所述部分像素点的灰度值大于所述目标帧中其余像素点的灰度值;
    基于所述部分像素点的灰度值确定目标灰度值,并根据所述目标灰度值确定所述第一位宽。
  36. 根据权利要求35所述的装置,其特征在于,所述目标灰度值为所述部分像素点的灰度值的均值;或
    所述目标灰度值为所述部分像素点的灰度值中的最小值。
  37. 根据权利要求35或36所述的装置,其特征在于,所述部分像素点的数量与所述目标帧的像素点总数量的占比不超过1%。
  38. 根据权利要求35或37所述的装置,其特征在于,所述目标灰度值为所述部分像素点的灰度值中的最小值,所述处理器用于基于所述部分像素点的灰度值确定目标灰度值,并根据所述目标灰度值确定所述第一位宽时,具体用于:
    确定所述目标帧的各像素点的灰度值的均值和方差;
    根据所述均值和所述方差确定所述目标灰度值,并以所述目标灰度值对应的位宽作为所述第一位宽。
  39. 根据权利要求34所述的装置,其特征在于,所述处理器用于基于所述目标帧中像素点的灰度值确定第一位宽时,具体用于:
    基于所述目标帧中像素点的灰度值确定灰度直方图;
    根据所述灰度直方图确定所述第一位宽。
  40. 根据权利要求39所述的装置,其特征在于,所述灰度直方图用于表征不同灰度梯度内的像素点的占比,每个所述灰度梯度的边界值为2^k,其中k为整数;所述处理器用于根据所述灰度直方图确定所述第一位宽时,具体用于:
    根据所述占比确定目标像素点所属的灰度梯度,其中,所述目标帧中灰度值大于所述目标像素点的灰度值的像素点数量与所述目标帧中像素点总数量的占比小于预设占比;
    根据所述目标像素点所属的灰度梯度的上限边界值对应的位宽确定所述第一位宽。
  41. 根据权利要求39所述的装置,其特征在于,所述灰度直方图的灰度值的位宽为所述目标位宽,所述处理器用于根据所述灰度直方图确定所述第一位宽时,具体用于:
    根据所述灰度直方图确定目标像素点的灰度值对应的位宽,其中,所述目标帧中灰度值大于所述目标像素点的灰度值的像素点数量与所述目标帧中像素点总数量的占比小于预设占比;
    根据所述目标像素点的灰度值对应的位宽和所述目标位宽确定所述第一位宽。
  42. 根据权利要求31-41任一项所述的装置,其特征在于,所述当前帧上的图像块和所述参考帧上的图像块基于以下方式:
    分别将所述当前帧和所述参考帧平均分割成多个图像块;或
    分别在所述当前帧和所述参考帧上确定多个特征点,以所述多个特征点为中心确定多个图像块。
  43. 根据权利要求42所述的装置,其特征在于,所述多个特征点基于以下方式确定:
    从所述当前帧或所述参考帧中确定多个目标图像块,所述多个目标图像块的灰度值与所述多个目标图像块的邻近图像块的灰度值差异大于预设灰度阈值;
    分别从所述多个目标图像块中确定灰度值梯度最大的像素点,作为所述多个特征点。
  44. 根据权利要求31-43任一项所述的装置,其特征在于,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理之前,还用于:
    基于所述当前帧上的图像块的亮度或纹理确定是否舍弃所述当前帧上的图像块;以及基于所述参考帧上的图像块的亮度或纹理确定是否舍弃所述参考帧上的图像块。
  45. 根据权利要求44所述的装置,其特征在于,所述处理器用于基于所述当前帧上的图像块的亮度或纹理确定是否舍弃所述当前帧上的图像块时,具体用于:
    若所述当前帧上的图像块的亮度的平均值小于预设亮度阈值,或所述当前帧上的图像块的亮度的方差小于预设方差,则舍弃所述当前帧上的图像块;或
    若所述当前帧上的图像块中的特征点的数量小于预设数量,则舍弃所述图像块。
  46. 根据权利要求31-45任一项所述的装置,其特征在于,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理时,具体用于:
    针对所述当前帧中的每一个待匹配图像块,重复执行以下步骤直至满足预设条件:
    确定所述待匹配图像块的灰度值与所述待匹配图像块在所述参考帧中的初始匹配图像块的灰度值的差异;
    根据所述差异确定所述待匹配图像块相对于所述初始匹配图像块的位置变化量;
    基于所述位置变化量对所述初始匹配图像块进行变换处理,采用变换处理得到的图像块替换所述初始匹配图像块。
  47. 根据权利要求46所述的装置,其特征在于,所述预设条件包括:
    所述位置变化量小于预设阈值;或
    重复执行确定所述待匹配图像块与所述待匹配图像块在所述参考帧中的初始匹配图像块在对应像素位置上的灰度值的差值的步骤的次数达到预设次数。
  48. 根据权利要求46所述的装置,其特征在于,所述处理器用于基于所述位置变化量对所述初始匹配图像块进行变换处理之前,还用于:
    将所述初始匹配图像块的邻近像素点扩充到所述初始匹配图像块中。
  49. 根据权利要求46-48任一项所述的装置,其特征在于,所述初始匹配图像块基于预估的映射关系确定。
  50. 根据权利要求49所述的装置,其特征在于,所述预估的映射关系基于所述图像传感器的运动状态参数确定;或
    所述预估的映射关系基于在所述当前帧之前采集的至少两帧图像帧之间的位置变化关系确定。
  51. 根据权利要求31-45任一项所述的装置,其特征在于,对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,包括:
    针对所述当前帧中的每一个待匹配图像块,分别执行以下步骤:
    分别统计所述待匹配图像块和所述参考帧的各图像块在水平方向和竖直方向上的灰度直方图;
    遍历所述参考帧中的各图像块,确定所述各图像块的所述灰度直方图与所述待匹配图像块的所述灰度直方图的匹配误差;
    基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块。
  52. 根据权利要求51所述的装置,其特征在于,所述处理器用于基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块之前,还用于:
    对所述匹配误差进行插值处理,以使所述待匹配图像块和所述匹配图像块的匹配精度为亚像素级别的精度。
  53. 根据权利要求51或52所述的装置,其特征在于,所述处理器还用于:
    对所述灰度直方图进行零均值归一化处理。
  54. 根据权利要求31-53任一项所述的装置,其特征在于,所述处理器用于对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理时,具体用于:
    确定所述当前帧和所述参考帧的应用场景;
    基于所述应用场景确定对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理的处理方式,并基于确定的处理方式对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理。
  55. 根据权利要求31-54任一项所述的装置,其特征在于,所述映射关系通过单应性矩阵或运动向量表征。
  56. 根据权利要求31-55任一项所述的装置,其特征在于,所述处理器还用于:
    确定所述映射关系的置信度;
    基于所述置信度对所述当前帧或所述参考帧进行图像处理。
  57. 根据权利要求56所述的装置,其特征在于,所述置信度基于相关系数确定,所述相关系数用于表征所述匹配的图像块的相似性;或
    所述映射关系通过对所述匹配点对采用随机一致性算法确定,所述置信度基于合群点对的数量、所述匹配点对的重投影误差以及所述随机一致性算法的合群点对阈值确定。
  58. 一种图像处理装置,其特征在于,所述装置包括处理器、存储器、存储于所述存储器可供所述处理器执行的计算机程序,所述处理器执行所述计算机程序时实现以下步骤:
    获取当前帧以及所述当前帧的参考帧;
    基于所述当前帧或所述参考帧的应用场景确定匹配处理方式;
    基于所述匹配处理方式对所述当前帧上的图像块和所述参考帧上的图像块进行匹配处理,得到匹配的图像块;
    基于所述匹配的图像块确定匹配点对;
    根据所述匹配点对确定所述当前帧上的像素点与所述参考帧上的像素点的映射关系。
  59. 根据权利要求58所述的装置,其特征在于,所述匹配处理方式包括:
    针对所述当前帧中的每一个待匹配图像块,重复执行以下步骤直至满足预设条件:
    确定所述待匹配图像块的灰度值与所述待匹配图像块在所述参考帧中的初始匹配图像块的灰度值的差异;
    根据所述差异确定所述待匹配图像块相对于所述初始匹配图像块的位置变化量;
    基于所述位置变化量对所述初始匹配图像块进行变换处理,采用变换处理得到的图像块替换所述初始匹配图像块。
  60. 根据权利要求58或59所述的装置,其特征在于,所述匹配处理方式包括:
    针对所述当前帧中的每一个待匹配图像块,分别执行以下步骤:
    分别统计所述待匹配图像块和所述参考帧的各图像块在水平方向和竖直方向上的灰度直方图;
    遍历所述参考帧中的各图像块,确定所述各图像块的所述灰度直方图与所述待匹配图像块的所述灰度直方图的匹配误差;
    基于所述匹配误差从所述参考帧的各图像块中确定所述待匹配图像块的匹配图像块。
PCT/CN2020/141338 2020-12-30 2020-12-30 图像处理方法及装置 WO2022141178A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141338 WO2022141178A1 (zh) 2020-12-30 2020-12-30 图像处理方法及装置

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2020/141338 WO2022141178A1 (zh) 2020-12-30 2020-12-30 图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2022141178A1 true WO2022141178A1 (zh) 2022-07-07

Family

ID=82258800

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/141338 WO2022141178A1 (zh) 2020-12-30 2020-12-30 图像处理方法及装置

Country Status (1)

Country Link
WO (1) WO2022141178A1 (zh)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115560419A (zh) * 2022-09-23 2023-01-03 冠奕达防爆电器有限公司 一种防爆分析小屋及其运行方法
CN116309672A (zh) * 2023-05-23 2023-06-23 武汉地震工程研究院有限公司 一种基于led标靶的夜间桥梁动挠度测量方法与装置
CN116363129A (zh) * 2023-05-31 2023-06-30 山东辰欣佛都药业股份有限公司 一种滴眼剂生产用智能灯检系统
CN116721099A (zh) * 2023-08-09 2023-09-08 山东奥洛瑞医疗科技有限公司 一种基于聚类的肝脏ct影像的图像分割方法
CN116884236A (zh) * 2023-06-26 2023-10-13 中关村科学城城市大脑股份有限公司 交通流量采集设备和交通流量采集方法
CN117115152A (zh) * 2023-10-23 2023-11-24 汉中禹龙科技新材料有限公司 基于图像处理的钢绞线生产监测方法
CN117237245A (zh) * 2023-11-16 2023-12-15 湖南云箭智能科技有限公司 一种基于人工智能与物联网的工业物料质量监测方法
CN117408929A (zh) * 2023-12-12 2024-01-16 日照天一生物医疗科技有限公司 基于图像特征的肿瘤ct图像区域动态增强方法
CN117456428A (zh) * 2023-12-22 2024-01-26 杭州臻善信息技术有限公司 基于视频图像特征分析的垃圾投放行为检测方法
CN117911956A (zh) * 2024-03-19 2024-04-19 洋县阿拉丁生物工程有限责任公司 用于食品加工设备的加工环境动态监测方法及系统
CN117974656A (zh) * 2024-03-29 2024-05-03 深圳市众翔奕精密科技有限公司 基于电子辅料加工下的材料切片方法及系统
CN117911956B (zh) * 2024-03-19 2024-05-31 洋县阿拉丁生物工程有限责任公司 用于食品加工设备的加工环境动态监测方法及系统

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080069421A1 (en) * 2006-09-14 2008-03-20 Siemens Medical Solutions Usa Inc. Efficient Border Extraction Of Image Feature
CN103428408A (zh) * 2013-07-18 2013-12-04 北京理工大学 一种适用于帧间的图像稳像方法
CN107506795A (zh) * 2017-08-23 2017-12-22 国家计算机网络与信息安全管理中心 一种面向图像匹配的局部灰度直方图特征描述子建立方法和图像匹配方法
CN108257155A (zh) * 2018-01-17 2018-07-06 中国科学院光电技术研究所 一种基于局部和全局耦合的扩展目标稳定跟踪点提取方法



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20967520

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20967520

Country of ref document: EP

Kind code of ref document: A1