CN113516684A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN113516684A
CN113516684A
Authority
CN
China
Prior art keywords
image
information
projection
target
motion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110765388.4A
Other languages
Chinese (zh)
Inventor
王�义
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Hangzhou Co Ltd
Original Assignee
Vivo Mobile Communication Hangzhou Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Hangzhou Co Ltd filed Critical Vivo Mobile Communication Hangzhou Co Ltd
Priority to CN202110765388.4A priority Critical patent/CN113516684A/en
Publication of CN113516684A publication Critical patent/CN113516684A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
                • G06T3/00 Geometric image transformations in the plane of the image
                    • G06T3/02 Affine transformations
                • G06T5/00 Image enhancement or restoration
                    • G06T5/70 Denoising; Smoothing
                • G06T7/00 Image analysis
                    • G06T7/10 Segmentation; Edge detection
                        • G06T7/11 Region-based segmentation
                    • G06T7/20 Analysis of motion
                        • G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
                • G06T2207/00 Indexing scheme for image analysis or image enhancement
                    • G06T2207/10 Image acquisition modality
                        • G06T2207/10004 Still image; Photographic image
                        • G06T2207/10016 Video; Image sequence
                    • G06T2207/20 Special algorithmic details
                        • G06T2207/20021 Dividing image into blocks, subimages or windows
                        • G06T2207/20068 Projection on vertical or horizontal image axis

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an image processing device and a storage medium, belonging to the field of communication technologies. The method mainly comprises: acquiring a first image, a second image and rotational motion information of the electronic device, where the first image and the second image are adjacent frame images; performing an affine transformation on the first image according to the rotational motion information to obtain a third image, where the similarity between the third image and the second image is higher than a first preset similarity; determining displacement vector information of the third image relative to the second image according to first target image projection information of the second image and second target image projection information of the third image; and generating motion estimation confidence information according to the second image, the third image and the displacement vector information, where the motion estimation confidence information is used to evaluate, in the second image, a confidence value of the motion vector information of each pixel block in the first image.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present application belongs to the field of communication technologies, and in particular, to an image processing method, apparatus, device, and storage medium.
Background
With the development of electronic device technology, the functions of electronic devices have become increasingly rich and diversified. For example, a user can record a video or shoot images with an electronic device and have them processed, meeting the user's needs for recording videos or shooting images.
Currently, the processing of video or images includes motion estimation, the process of determining motion vector information that describes a transformation between two-dimensional images. However, conventional motion estimation methods, whether based on optical flow or on block matching, cannot cope with complex motion of the electronic device, so the determined motion vector information is inaccurate, image registration fails, and the imaging quality of the final image suffers.
Disclosure of Invention
An object of the embodiments of the present application is to provide an image processing method, an image processing apparatus, an image processing device, and a storage medium, which can solve the problem that currently determined motion vector information is inaccurate.
In a first aspect, an embodiment of the present application provides an image processing method, which is applied to an electronic device, and the method may include:
acquiring a first image, a second image and the rotation motion information of the electronic equipment, wherein the first image and the second image are adjacent frame images;
carrying out affine transformation on the first image according to the rotation motion information to obtain a third image, wherein the similarity between the third image and the second image is higher than a first preset similarity;
determining displacement vector information of the third image relative to the second image according to the first target image projection information of the second image and the second target image projection information of the third image;
and generating motion estimation confidence information according to the second image, the third image and the displacement vector information, wherein the motion estimation confidence information is used for evaluating the confidence value of the motion vector information of each pixel block in the first image in the second image.
In a second aspect, an embodiment of the present application provides an image processing apparatus, which is applied to an electronic device, and the apparatus may include:
an acquisition module, configured to acquire a first image, a second image and rotational motion information of the electronic device, where the first image and the second image are adjacent frame images;
the processing module is used for carrying out affine transformation on the first image according to the rotation motion information to obtain a third image, and the similarity between the third image and the second image is higher than the first preset similarity;
the determining module is used for determining displacement vector information of the third image relative to the second image according to the first target image projection information of the second image and the second target image projection information of the third image;
and a generating module, configured to generate motion estimation confidence information according to the second image, the third image and the displacement vector information, where the motion estimation confidence information is used to evaluate the confidence value of the motion vector information of each pixel block in the first image in the second image.
In a third aspect, the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the image processing method as shown in the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the image processing method as shown in the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the steps of the image processing method according to the first aspect.
In the embodiment of the application, after the first image undergoes the affine transformation given by the rotational motion information, the resulting third image and the second image can be regarded as a pair of adjacent frames that no longer contain axial camera rotation, better conforming to a motion state containing only horizontal and vertical in-plane motion. Global motion estimation is then performed using the first target image projection information of the second image and the second target image projection information of the third image, and the displacement vector information of the third image relative to the second image is calculated.
In addition, the image processing method provided by the embodiment of the application further generates, from the second image, the third image and the displacement vector information, motion estimation confidence information used to evaluate the confidence value of the motion vector information of each pixel block of the first image in the second image, so that subsequent algorithms can operate more flexibly and use the motion estimation result more robustly, improving the final denoising, super-resolution and HDR (High Dynamic Range) effects of the image. Therefore, the image processing method provided by the embodiment of the application can improve the accuracy of determining motion vector information while also denoising the image, thereby improving its imaging quality.
Drawings
Fig. 1 is a schematic diagram of a processing architecture according to an embodiment of the present application;
fig. 2 is a second schematic diagram of a processing architecture according to an embodiment of the present application;
fig. 3 is a flowchart of an image processing method according to an embodiment of the present application;
fig. 4 is a schematic diagram of obtaining a three-axis rotational motion vector according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure;
fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second" and the like in the description and in the claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It will be appreciated that the data so used may be interchanged under appropriate circumstances, so that embodiments of the application may be practiced in sequences other than those illustrated or described herein. The terms "first", "second" and the like are generally used in a generic sense and do not limit the number of objects; for example, the first object can be one or more than one. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the preceding and succeeding objects.
Motion estimation is a process for determining motion vectors describing a two-dimensional image transformation. Motion estimation is an important component of image processing technologies such as video noise reduction, multi-frame fusion, sub-pixel super-resolution, High-Dynamic Range (HDR), video image stabilization, and the like.
Conventional motion estimation can be classified into optical-flow-based motion estimation and block-matching-based motion estimation. Taking block matching as an example, its main process is to divide each frame of the image sequence into a number of non-overlapping macroblocks and to assume that all pixels within a macroblock have the same displacement. Then, for each macroblock, within a given search range in the reference frame, the block most similar to the current block under a chosen matching criterion, i.e., the matching block, is found; the relative displacement between the matching block and the current block is the motion vector. When video is compressed, the current block can be completely restored by storing only the motion vector and the residual data. However, conventional motion estimation, whether optical-flow-based or block-matching-based, cannot cope with complex motion of the electronic device: its ability to estimate axial rotation of the shooting device body is weak, and such axial rotational motion cannot be described by simple two-dimensional horizontal or vertical estimation. The resulting lack of effective global motion estimation makes the determined motion vector information inaccurate, causes subsequent registration to fail, degrades the sharpness of the processed image and introduces noise and trailing artifacts, reducing the imaging quality of the final image.
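The block-matching procedure described above can be sketched as follows. This is a minimal exhaustive search using the sum of absolute differences (SAD) criterion; the block size, search range, and SAD itself are illustrative choices, not specifics fixed by the patent.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each non-overlapping block of `cur`,
    find the offset (dy, dx) within +/-`search` pixels in `ref` that
    minimizes the sum of absolute differences (SAD)."""
    h, w = cur.shape
    vectors = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(np.int32)
            best, best_sad = (0, 0), None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block would leave the frame
                    cand = ref[y:y + block, x:x + block].astype(np.int32)
                    sad = np.abs(cur_blk - cand).sum()
                    if best_sad is None or sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            vectors[(by, bx)] = best  # motion vector of this macroblock
    return vectors
```

Storing only these vectors plus the per-block residuals is what allows a codec to reconstruct the current frame from the reference frame.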
Based on the above problems, embodiments of the present application provide an image processing method based on motion estimation, which can determine the rotational motion information of an electronic device more accurately by obtaining the three-axis rotational motion vector of the electronic device, improving the accuracy of determining motion vector information. In addition, before global motion estimation, i.e., before the motion estimation confidence information is generated, an affine transformation can be performed on the first image according to the rotational motion information to obtain a third image, and the displacement vector information of the third image relative to the second image can be determined according to the first target image projection information of the second image and the second target image projection information of the third image. This removes the influence of the axial rotational motion of the electronic device on the global motion estimation, allows subsequent algorithms to operate more flexibly and to use the motion estimation result more robustly, and improves the final denoising, super-resolution and HDR (High Dynamic Range) effects of the image. Therefore, the image processing method provided by the embodiment of the application can improve the accuracy of determining motion vector information while also denoising the image, thereby improving its imaging quality.
Based on this, the image processing method provided by the embodiment of the present application is described in detail below with reference to fig. 1 to 4 through a specific embodiment and an application scenario thereof.
An embodiment of the present application provides a processing architecture, which may include an electronic device, as shown in fig. 1. The electronic device 10 may include a photographing device and an angular motion detection device such as a gyroscope, i.e., a device that uses the moment of momentum of a high-speed rotor to sense angular motion of its housing, relative to inertial space, about one or two axes orthogonal to the rotation axis. The photographing device, such as a front camera or a rear camera of the electronic device, is used to capture the first image and the second image. The angular motion detection device is used to acquire the three-axis rotational motion vector of the electronic device, so that the image processing apparatus in the electronic device can determine the rotational motion information of the electronic device from the three-axis rotational motion vector collected by the gyroscope.
Based on the processing architecture, an application scenario of the image processing method provided by the embodiment of the present application is described.
As shown in fig. 2, during the recording of the video by the user through the electronic device 10, the electronic device obtains at least two adjacent frame images, i.e., a first image and a second image, in the video recorded by the shooting device, and obtains a three-axis rotation motion vector of the electronic device measured by the gyroscope. And calculating a rotational motion affine matrix based on the first image, the second image and the three-axis rotational motion vector, and determining the rotational motion affine matrix as rotational motion information.
Then, an affine transformation is performed on the first image according to the rotational motion information to obtain a third image, where the similarity between the third image and the second image is higher than the first preset similarity; that is, the third image is closer to the second image, so that the displacement vector information of the third image relative to the second image can subsequently be calculated to denoise the image.
Moreover, the gray projection can be adopted to estimate the global motion vector, namely the displacement vector information, corresponding to the image after affine transformation, and the motion estimation confidence information, namely the reference item of the subsequent motion estimation, can be determined. Here, the gray projection in the embodiment of the present application may include image mapping, projection filtering, and correlation calculation, and the processes included in the gray projection are described in detail below.
And image mapping, namely respectively carrying out image mapping on the second image and the third image to obtain a first projection sequence corresponding to the second image and a second projection sequence corresponding to the third image.
And projection filtering, namely calculating a first image projection value corresponding to the first projection sequence and a second image projection value corresponding to the second projection sequence. And filtering the first image projection value and the second image projection value respectively to obtain first target image projection information corresponding to the first image projection value and second target image projection information corresponding to the second image projection value.
Correlation calculation: in the case that the first target image projection information includes a first row projection curve and the second target image projection information includes a second row projection curve, cross-correlation calculation is performed on the first row projection curve and the second row projection curve to obtain the curve peak of their correlation curve, and the curve peak of the correlation curve is determined as the displacement vector information of the third image relative to the second image.
Then, to allow subsequent algorithms to operate more flexibly and to use the motion estimation result more robustly, improving the final denoising, super-resolution and HDR effects of the image, the embodiment of the present application may construct a motion estimation confidence system from the global motion vector and use it as a reference item for subsequent image processing. That is, motion estimation confidence information may be generated according to the second image, the third image and the displacement vector information, where the motion estimation confidence information is used to evaluate the confidence value of the motion vector information of each pixel block in the first image in the second image.
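The patent does not give an explicit confidence formula, so the sketch below shows one plausible construction under stated assumptions: each pixel block's confidence is derived from the residual left after compensating the third image by the global displacement vector. The inverse-residual mapping and the modelling of the global displacement as an integer shift are the author's illustrative assumptions, not the patent's method.

```python
import numpy as np

def confidence_map(second, third, disp, block=8):
    """Hypothetical per-block confidence: shift `third` by the global
    displacement `disp` (dy, dx), then map each block's mean absolute
    residual against `second` to a score in (0, 1]. A low residual means
    the global motion explains the block well, so confidence is high."""
    comp = np.roll(third.astype(np.float64), disp, axis=(0, 1))
    h, w = second.shape
    conf = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            res = np.abs(second[by:by + block, bx:bx + block].astype(np.float64)
                         - comp[by:by + block, bx:bx + block]).mean()
            conf[(by, bx)] = 1.0 / (1.0 + res)  # residual 0 -> confidence 1
    return conf
```

A downstream fusion or denoising stage could then weight each block's motion vector by this score instead of trusting all vectors equally.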
It should be noted that the image processing method provided in the embodiment of the present application may be applied to the above scene, in which images of a video being recorded (or photos being shot continuously) are processed in real time during recording, and may also be applied to scenes in which an already recorded video (or already captured continuous photos) is processed. In short, it may be applied to any scene in which at least two images, whether from a video recorded by the electronic device or from continuously shot images, are processed.
Therefore, in the embodiment of the application, after the first image undergoes the affine transformation given by the rotational motion information, the resulting third image and the second image can be regarded as a pair of adjacent frames that no longer contain axial camera rotation, better conforming to a motion state containing only horizontal and vertical in-plane motion. Global motion estimation is then performed using the first target image projection information of the second image and the second target image projection information of the third image, and the displacement vector information of the third image relative to the second image is calculated.
In addition, the image processing method provided by the embodiment of the application further generates, from the second image, the third image and the displacement vector information, motion estimation confidence information used to evaluate the confidence value of the motion vector information of each pixel block of the first image in the second image, so that subsequent algorithms can operate more flexibly and use the motion estimation result more robustly, improving the final denoising, super-resolution and HDR (High Dynamic Range) effects of the image. Therefore, the image processing method provided by the embodiment of the application can improve the accuracy of determining motion vector information while also denoising the image, thereby improving its imaging quality.
According to the application scenario, the following describes in detail the image processing method provided by the embodiment of the present application with reference to fig. 3.
Fig. 3 is a flowchart of an image processing method according to an embodiment of the present application.
As shown in fig. 3, the image processing method may be applied to the electronic devices shown in fig. 1 and fig. 2, and based on this, may specifically include the following steps:
step 310, a first image, a second image and rotation motion information of the electronic device are obtained, wherein the first image and the second image are adjacent frame images. And 320, performing affine transformation on the first image according to the rotation motion information to obtain a third image, wherein the similarity between the third image and the second image is higher than the first preset similarity. Step 330, determining displacement vector information of the third image relative to the second image according to the first target image projection information of the second image and the second target image projection information of the third image. Step 340, generating motion estimation confidence information according to the second image, the third image and the displacement vector information, wherein the motion estimation confidence information is used for evaluating the confidence value of the motion vector information of each pixel block in the first image in the second image.
Therefore, after the first image undergoes the affine transformation given by the rotational motion information, the resulting third image and the second image can be regarded as a pair of adjacent frames that no longer contain axial camera rotation, better conforming to a motion state containing only horizontal and vertical in-plane motion. Global motion estimation is then performed using the first target image projection information of the second image and the second target image projection information of the third image, and the displacement vector information of the third image relative to the second image is calculated.
In addition, the image processing method provided by the embodiment of the application further generates, from the second image, the third image and the displacement vector information, motion estimation confidence information used to evaluate the confidence value of the motion vector information of each pixel block of the first image in the second image, so that subsequent algorithms can operate more flexibly and use the motion estimation result more robustly, improving the final denoising, super-resolution and HDR (High Dynamic Range) effects of the image.
Therefore, the image processing method provided by the embodiment of the application can improve the accuracy of determining motion vector information while also denoising the image, thereby improving its imaging quality.
The above steps are described in detail below, specifically as follows:
referring first to step 310, in one or more alternative embodiments, prior to step 310, the image processing method may further include steps 3101-3103, as described below.
Step 3101, three-axis rotational motion vectors of the electronic device are obtained.
Step 3102, a rotational motion affine matrix is calculated based on the first image, the second image and the three-axis rotational motion vectors.
At step 3103, a rotational motion affine matrix is determined as the rotational motion information.
For example, as shown in fig. 4, in the embodiment of the present application, the three-axis rotational motion vector of the electronic device measured by the gyroscope may be acquired, i.e., an X value for the horizontal direction, a Y value for the vertical direction, and a Z value for the direction perpendicular to the plane of the other two. Based on the three-axis rotational motion vector, a rotational motion affine matrix of the electronic device is calculated by a direct method. The rotational motion affine matrix can be used to represent the attitude information of the electronic device, which may cover the cases where the position is unchanged and only the shooting height and direction change, where both the position and the shooting height and direction change, and, of course, where neither the position nor the shooting height and direction change.
Thus, if a change T in the attitude of the electronic device causes the relative position of pixel block a in the first image to shift to that of pixel block a1 in the second image, a T1 satisfying the preset change can be selected from the candidate values of T so as to minimize the error between the gray values of the pixel blocks. This T1 is then the attitude transformation matrix produced by the photographing device in the electronic device, i.e., the rotational motion affine matrix.
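The direct-method selection of T1 described above can be sketched as follows. To keep the example minimal, the candidate transforms are modelled as integer translations standing in for full rotational affine matrices; this simplification is the author's assumption, not the patent's formulation.

```python
import numpy as np

def best_transform(first, second, candidates):
    """Toy 'direct method': among candidate transforms (here integer
    (dy, dx) shifts), pick the one that minimizes the mean squared
    gray-value error between the warped first image and the second
    image. The winner plays the role of T1 in the text."""
    best, best_err = None, np.inf
    for t in candidates:
        warped = np.roll(first.astype(np.float64), t, axis=(0, 1))
        err = np.mean((warped - second.astype(np.float64)) ** 2)
        if err < best_err:
            best, best_err = t, err
    return best, best_err
```

A real implementation would parameterize T by the gyroscope's rotation angles and minimize the photometric error with a gradient-based solver rather than an exhaustive scan.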
Referring to step 320, in one or more alternative embodiments, the affine transformation is performed on the first image according to the rotation motion information to obtain a third image, and a similarity between the third image and the second image is higher than a first preset similarity.
The affine transformation in the embodiment of the present application refers, in geometry, to a transformation in which one vector space undergoes a linear transformation followed by a translation into another vector space. Here, the affine transformation turns the first image into a third image that is closer to the second image, so that the transformed pair of frames can be regarded as no longer containing axial camera rotation. This better matches the motion state assumed by the gray projection involved in step 330, which contains only horizontal and vertical in-plane motion, and allows subsequent algorithms to operate more flexibly.
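A minimal affine-warp sketch in NumPy, using inverse mapping with nearest-neighbour sampling; production code would typically use an optimized library routine with proper interpolation, so this is only an illustration of the transformation itself.

```python
import numpy as np

def warp_affine(img, M):
    """Apply a 2x3 affine matrix M = [A | t] to `img` via inverse
    mapping: each output pixel (x, y) is sampled from the input
    location A^{-1} @ ((x, y) - t); out-of-range samples become 0."""
    h, w = img.shape
    A, t = M[:, :2], M[:, 2]
    Ainv = np.linalg.inv(A)
    ys, xs = np.mgrid[0:h, 0:w]
    # inverse-map every output coordinate back into the source image
    src = np.tensordot(Ainv, np.stack([xs - t[0], ys - t[1]]), axes=1)
    sx = np.rint(src[0]).astype(int)
    sy = np.rint(src[1]).astype(int)
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    out[valid] = img[sy[valid], sx[valid]]
    return out
```

With M built from the rotational motion information, applying `warp_affine` to the first image yields the third image discussed in the text.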
Next, step 330 is involved to estimate the global motion vector corresponding to the affine transformed image by using gray projection, and determine the motion estimation confidence information as the reference item of the subsequent motion estimation. Here, the gray projection in the embodiment of the present application may include image mapping, projection filtering, and correlation calculation, and the processes included in the gray projection are described in detail below. Before step 330, the image processing method may further include steps 3301-3303, as described below.
Image mapping, i.e., step 3301, performs image mapping on the second image and the third image respectively to obtain a first projection sequence corresponding to the second image and a second projection sequence corresponding to the third image.
Illustratively, the image mapping maps the two-dimensional gray information of each input frame, i.e., the second image and the third image, into two independent one-dimensional projection sequences, which can be calculated by the following formula (1).
G_k(i) = Σ_j G_k(i, j),  G_k(j) = Σ_i G_k(i, j)    (1)
where G_k(i) is the gray value of the i-th row of the k-th frame image, G_k(j) is the gray value of the j-th column of the k-th frame image, and G_k(i, j) is the gray value of the pixel at location (i, j) of the k-th frame image.
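Formula (1) amounts to summing the image along each axis, and can be computed directly:

```python
import numpy as np

def gray_projection(img):
    """Map a 2-D gray image into two 1-D projection sequences as in
    formula (1): G_k(i) sums row i over all columns j, and G_k(j)
    sums column j over all rows i."""
    img = img.astype(np.int64)   # avoid overflow when summing uint8
    row_proj = img.sum(axis=1)   # G_k(i): one value per row
    col_proj = img.sum(axis=0)   # G_k(j): one value per column
    return row_proj, col_proj
```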
Projection filtering, step 3302, calculates a first image projection value corresponding to the first projection sequence and a second image projection value corresponding to the second projection sequence.
Further, the sequence values corresponding to each column in the first projection sequence are added to obtain the first image projection value corresponding to the first projection sequence, and the sequence values corresponding to each column in the second projection sequence are added to obtain the second image projection value corresponding to the second projection sequence.
And 3303, filtering the first image projection value and the second image projection value, respectively, to obtain first target image projection information corresponding to the first image projection value and second target image projection information corresponding to the second image projection value.
For example, projection filtering filters the image projection values because, when the offset is large, edge information adversely affects the peak of the cross-correlation in the cross-correlation operation, so the projection values at the edges must be removed. Cosine filtering is usually adopted: it preserves the projection values of the middle region while attenuating the information amplitude of the edge region, thereby ensuring the correctness of the subsequent correlation calculation and improving the correction precision.
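A minimal sketch of such cosine filtering, using a raised-cosine (Hann-shaped) window to keep the middle of the projection curve and suppress its edges; the exact window shape used in the embodiment is not specified, so this particular window is an assumption:

```python
import numpy as np

def cosine_filter(proj: np.ndarray) -> np.ndarray:
    """Apply a raised-cosine (Hann-shaped) window to a projection curve,
    preserving the middle region and attenuating the edge region."""
    n = proj.size
    # window is 0 at both edges and 1 at the center of the curve
    window = 0.5 - 0.5 * np.cos(2.0 * np.pi * np.arange(n) / (n - 1))
    return proj * window
```

Applied to a flat curve, the edge samples are driven to zero while the center sample is preserved, which is exactly the behavior the cross-correlation step relies on.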
Based on this, in one or more alternative embodiments, in the case that the first target image projection information includes a first row projection curve and the second target image projection information includes a second row projection curve, the step 330 may specifically include:
performing cross-correlation calculation on the first row projection curve and the second row projection curve to obtain a curve peak value of a correlation curve of the first row projection curve and the second row projection curve;
and determining the peak value of the correlation curve as the displacement vector information of the third image relative to the second image.
Illustratively, the correlation calculation performs cross-correlation on the row and column projection curves of the k-th frame image, such as the third image, and those of the reference frame image, such as the second image, and determines the row and column displacement vector of the current frame image relative to the reference frame image from the peaks of the two correlation curves.
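For one dimension, this correlation step can be sketched as follows: the lag at which the cross-correlation curve peaks gives the displacement of the current curve relative to the reference curve. The mean subtraction is an added detail to stabilize the peak, not stated in the text:

```python
import numpy as np

def projection_shift(cur: np.ndarray, ref: np.ndarray) -> int:
    """Return the 1-D displacement of the current frame's projection curve
    relative to the reference frame's, taken as the lag at the peak of
    their cross-correlation curve."""
    corr = np.correlate(cur - cur.mean(), ref - ref.mean(), mode="full")
    # index (ref.size - 1) of the 'full' output corresponds to zero lag
    return int(np.argmax(corr)) - (ref.size - 1)
```

The same function applied to the row curves and the column curves yields the two components of the displacement vector information.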
Then, referring to step 340, in one or more alternative embodiments, step 340 may specifically include steps 3401-3403, as shown below.
and 3401, segmenting the third image based on the image sequence to obtain a plurality of non-overlapping first pixel blocks, wherein the plurality of first pixel blocks comprise a first target pixel block.
And 3402, determining a matching pixel block in the second image according to the first target pixel block and the displacement vector information.
Further, a target area is determined on the second image by taking the first target pixel block as a base point and combining the displacement vector information, where the target area includes a plurality of second target pixel blocks, and the offset between each of the plurality of second target pixel blocks and the first target pixel block satisfies a preset offset threshold;
and determining pixel blocks meeting preset matching conditions in the second target pixel blocks as matching pixel blocks.
And 3403, generating motion estimation confidence information according to the first target pixel block and the matching pixel block.
Illustratively, the third image is divided into a plurality of non-overlapping pixel blocks in raster-scan order, and the size of the pixel blocks is typically fixed. Then, taking a first target pixel block in the third image as a base point, a plurality of second target pixel blocks are searched within the range given by the displacement vector information of the first target pixel block, i.e., the relative offset vector, provided that range satisfies the range corresponding to the preset offset threshold, and the best matching pixel block is determined among the plurality of second target pixel blocks according to the preset matching condition. In this way, when the selected first target pixel block is small, the computational complexity is reduced.
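A sketch of this block-matching search, using the global motion vector as the search center and a SAD (sum of absolute differences) criterion as a stand-in for the unspecified "preset matching condition"; all names and the search-window shape are illustrative assumptions:

```python
import numpy as np

def match_block(ref: np.ndarray, block: np.ndarray, top: int, left: int,
                gmv: tuple, search: int = 2):
    """Find the best-matching block in the reference frame around the
    position predicted by the global motion vector (gmv = (dy, dx)),
    scoring candidates by SAD (sum of absolute differences)."""
    bh, bw = block.shape
    cy, cx = top + gmv[0], left + gmv[1]  # search center from the global vector
    best, best_pos = None, None
    for dy in range(-search, search + 1):      # offsets within the threshold
        for dx in range(-search, search + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + bh > ref.shape[0] or x + bw > ref.shape[1]:
                continue  # candidate window falls outside the reference frame
            sad = float(np.abs(ref[y:y + bh, x:x + bw] - block).sum())
            if best is None or sad < best:
                best, best_pos = sad, (y, x)
    return best_pos, best
```

Because the search window is only (2*search + 1)^2 positions around the globally predicted location, the cost stays small, which is the complexity advantage described above.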
In addition, since video sequences generally have relatively strong temporal correlation, the motion vector field has strong spatial correlation. Thus, in the embodiment of the present application, after the affine transformation of the first image, the front and rear frames can be regarded as no longer containing axial camera rotational motion, which better conforms to the motion state assumed by the gray projection in step 330, containing only horizontal and vertical plane motion. At this point, the global motion vector, i.e., the displacement vector information calculated in step 330, is used as the initial value for local motion estimation, which can improve the accuracy of determining the motion vector information.
Based on this, after step 3403, the image processing method may further include:
detecting the range within which the target motion estimation confidence value included in the motion estimation confidence information falls;
under the condition that the target motion estimation confidence value falls within a first confidence value range, determining that the confidence value evaluation of the motion vector information of each pixel block of the first image in the second image is untrustworthy, and not denoising the first image using the matching pixel block and the displacement vector information;
and under the condition that the target motion estimation confidence value falls within a second confidence value range, determining that the confidence value evaluation of the motion vector information of each pixel block of the first image in the second image is trustworthy, and denoising the first image using the matching pixel block and the displacement vector information.
For example, a motion estimation confidence value for assisting subsequent modules may be generated according to the global motion estimation result, and its range in the embodiment of the present application may be set to [0, 1]. If the target motion estimation confidence value falls within the first confidence value range, e.g., 0 to 0.45, the displacement vector information is regarded as untrustworthy, so the first image is not denoised using the matching pixel block and the displacement vector information. Conversely, if it falls within the second confidence value range, e.g., 0.5 to 1, the displacement vector information is regarded as trustworthy; this value can effectively help subsequent modules use the motion estimation result flexibly and improve overall system performance, so the first image can be denoised using the matching pixel block and the displacement vector information.
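The gating logic just described can be sketched as follows. The function name is hypothetical, and the handling of the unspecified gap between 0.45 and 0.5 is an assumption (treated conservatively as untrusted here):

```python
def should_use_motion_estimate(confidence: float) -> bool:
    """Decide whether temporal denoising may use the matched blocks and
    displacement vector, based on the motion-estimation confidence value
    in [0, 1].  Ranges follow the example in the text."""
    if confidence <= 0.45:
        return False  # first range: displacement vector untrusted, skip temporal denoising
    if confidence >= 0.5:
        return True   # second range: trusted, denoise with matched blocks + displacement
    # the text leaves (0.45, 0.5) unspecified; fall back to the safe choice
    return False
```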
Therefore, in the embodiment of the application, after the first image is subjected to the affine transformation given by the rotational motion information, the obtained third image and the second image can be regarded as front and rear frame images that no longer contain axial camera rotational motion, which better conforms to a motion state containing only horizontal and vertical plane motion. Then, global motion estimation is performed using the first target image projection information of the second image and the second target image projection information of the third image, and the displacement vector information of the third image relative to the second image is calculated.
In addition, the image processing method provided by the embodiment of the application further generates, according to the second image, the third image and the displacement vector information, motion estimation confidence information used for evaluating the confidence value of the motion vector information of each pixel block of the first image in the second image, so that subsequent algorithm operations are more flexible and the motion estimation result is used more robustly, improving the final noise reduction, super-resolution, HDR (high dynamic range) effects and the like of the image. Therefore, the image processing method provided by the embodiment of the application can improve the accuracy of determining the motion vector information while performing noise reduction processing on the image, thereby improving the imaging quality of the image.
It should be noted that, in the image processing method provided in the embodiment of the present application, the execution subject may be an image processing apparatus, or a control module for executing the method of image processing in the image processing apparatus. In the embodiment of the present application, an image processing apparatus executes an image processing method as an example, and an apparatus for image processing provided in the embodiment of the present application is described.
Based on the same inventive concept, the application also provides an image processing device. The details are described with reference to fig. 5.
Fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 5, the image processing apparatus 50 is applied to the electronic devices shown in fig. 1-2, and may specifically include:
an obtaining module 501, configured to obtain a first image, a second image, and rotational motion information of an electronic device, where the first image and the second image are adjacent frame images;
the processing module 502 is configured to perform affine transformation on the first image according to the rotational motion information to obtain a third image, where a similarity between the third image and the second image is higher than a first preset similarity;
a determining module 503, configured to determine displacement vector information of the third image relative to the second image according to the first target image projection information of the second image and the second target image projection information of the third image;
a generating module 504, configured to generate motion estimation confidence information according to the second image, the third image and the displacement vector information, where the motion estimation confidence information is used to evaluate a confidence value of the motion vector information of each pixel block in the first image in the second image.
The image processing apparatus 50 will be described in detail below, specifically as follows:
in one or more possible embodiments, the image processing apparatus 50 may further include a first calculation module; wherein,
the obtaining module 501 may also be configured to obtain three-axis rotational motion vectors of the electronic device. And the first calculation module is used for calculating a rotational motion affine matrix based on the first image, the second image and the three-axis rotational motion vector. Based on this, the determining module 503 may also be configured to determine a rotational motion affine matrix as the rotational motion information.
In another or more possible embodiments, the image processing apparatus 50 may further include a mapping module, a second calculation module, and a filtering module; wherein,
the mapping module is used for respectively carrying out image mapping on the second image and the third image to obtain a first projection sequence corresponding to the second image and a second projection sequence corresponding to the third image;
the second calculation module is used for calculating a first image projection value corresponding to the first projection sequence and a second image projection value corresponding to the second projection sequence;
and the filtering module is used for respectively filtering the first image projection value and the second image projection value to obtain first target image projection information corresponding to the first image projection value and second target image projection information corresponding to the second image projection value.
In yet another or more possible embodiments, the second calculation module may be specifically configured to add the sequence values corresponding to each column in the first projection sequence to obtain a first image projection value corresponding to the first projection sequence; and adding the sequence values corresponding to each column in the second projection sequence to obtain a second image projection value corresponding to the second projection sequence.
In still another or more possible embodiments, the determining module 503 may be specifically configured to, when the first target image projection information includes a first row projection curve and the second target image projection information includes a second row projection curve, perform a cross-correlation calculation on the first row projection curve and the second row projection curve to obtain a curve peak of a correlation curve of the first row projection curve and the second row projection curve;
and determining the peak value of the correlation curve as the displacement vector information of the third image relative to the second image.
In one or more possible embodiments, the generating module 504 may be specifically configured to segment the third image based on the image sequence to obtain a plurality of non-overlapping first pixel blocks, where the plurality of first pixel blocks include a first target pixel block;
determining a matching pixel block in the second image according to the first target pixel block and the displacement vector information;
and generating motion estimation confidence information according to the first target pixel block and the matching pixel block.
Further, the generating module 504 may be specifically configured to determine a target region on the second image by taking the first target pixel block as a base point and combining the displacement vector information, where the target region includes a plurality of second target pixel blocks, and the offset between each of the plurality of second target pixel blocks and the first target pixel block satisfies a preset offset threshold;
and determining pixel blocks meeting preset matching conditions in the second target pixel blocks as matching pixel blocks.
In yet one or more possible embodiments, the image processing apparatus 50 may further include a detection module for detecting the range within which the target motion estimation confidence value included in the motion estimation confidence information falls. Based on this, the determining module 503 may be further configured to, in a case that the target motion estimation confidence value falls within the first confidence value range, determine that the confidence value evaluation of the motion vector information of each pixel block of the first image in the second image is untrustworthy and not denoise the first image using the matching pixel block and the displacement vector information; and in a case that the target motion estimation confidence value falls within the second confidence value range, determine that the confidence value evaluation of the motion vector information of each pixel block of the first image in the second image is trustworthy and denoise the first image using the matching pixel block and the displacement vector information.
The image processing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in an electronic device. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The image processing apparatus in the embodiment of the present application may be an apparatus having an operating system. The operating system may be an Android (Android) operating system, an ios operating system, or other possible operating systems, and embodiments of the present application are not limited specifically.
The image processing apparatus provided in the embodiment of the present application can implement each process implemented by the method embodiments of fig. 1 to fig. 4, and is not described herein again to avoid repetition.
In the embodiment of the application, after the first image is subjected to the affine transformation given by the rotational motion information, the obtained third image and the second image can be regarded as front and rear frame images that no longer contain axial camera rotational motion, which better conforms to a motion state containing only horizontal and vertical plane motion. Then, global motion estimation is performed using the first target image projection information of the second image and the second target image projection information of the third image, and the displacement vector information of the third image relative to the second image is calculated.
In addition, the image processing method provided by the embodiment of the application further generates, according to the second image, the third image and the displacement vector information, motion estimation confidence information used for evaluating the confidence value of the motion vector information of each pixel block of the first image in the second image, so that subsequent algorithm operations are more flexible and the motion estimation result is used more robustly, improving the final noise reduction, super-resolution, HDR (high dynamic range) effects and the like of the image.
Therefore, the image processing method provided by the embodiment of the application can improve the accuracy of determining the motion vector information, and simultaneously carry out noise reduction processing on the image, thereby improving the imaging quality of the image.
Optionally, as shown in fig. 6, an electronic device 60 is further provided in this embodiment of the present application, and includes a processor 601, a memory 602, and a program or an instruction stored in the memory 602 and executable on the processor 601, where the program or the instruction is executed by the processor 601 to implement each process of the foregoing embodiment of the image processing method, and can achieve the same technical effect, and no further description is provided here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic device and the non-mobile electronic device described above.
Fig. 7 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, a processor 710, and a radio 711.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, such that the functions of managing charging, discharging, and power consumption may be performed via the power management system. The electronic device structure shown in fig. 7 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 710 in this embodiment of the application is configured to obtain a first image, a second image and rotational motion information of an electronic device, where the first image and the second image are adjacent frame images; carrying out affine transformation on the first image according to the rotation motion information to obtain a third image, wherein the similarity between the third image and the second image is higher than a first preset similarity; determining displacement vector information of the third image relative to the second image according to the first target image projection information of the second image and the second target image projection information of the third image; and generating motion estimation confidence information according to the second image, the third image and the displacement vector information, wherein the motion estimation confidence information is used for evaluating the confidence value of the motion vector information of each pixel block in the first image in the second image.
The electronic device 700 is described in detail below, specifically as follows:
in one or more possible embodiments, processor 710 may also be configured to obtain a three-axis rotational motion vector for the electronic device; calculating a rotational motion affine matrix based on the first image, the second image and the three-axis rotational motion vector; and determining the rotational motion affine matrix as the rotational motion information.
In another or more possible embodiments, the processor 710 may be further configured to perform image mapping on the second image and the third image, respectively, to obtain a first projection sequence corresponding to the second image and a second projection sequence corresponding to the third image; calculating a first image projection value corresponding to the first projection sequence and a second image projection value corresponding to the second projection sequence; and filtering the first image projection value and the second image projection value respectively to obtain first target image projection information corresponding to the first image projection value and second target image projection information corresponding to the second image projection value.
In one or more possible embodiments, the processor 710 may be specifically configured to add the sequence values corresponding to each column in the first projection sequence to obtain a first image projection value corresponding to the first projection sequence; and adding the sequence values corresponding to each column in the second projection sequence to obtain a second image projection value corresponding to the second projection sequence.
In one or more possible embodiments, the processor 710 may be specifically configured to, in a case that the first target image projection information includes a first row projection curve and the second target image projection information includes a second row projection curve, perform a cross-correlation calculation on the first row projection curve and the second row projection curve to obtain a curve peak of a correlation curve of the first row projection curve and the second row projection curve; and determining a curve peak of the correlation curve as the displacement vector information of the third image relative to the second image.
In one or more possible embodiments, the processor 710 may be specifically configured to segment the third image based on the image sequence to obtain a plurality of non-overlapping first pixel blocks, where the plurality of first pixel blocks include a first target pixel block; determining a matching pixel block in the second image according to the first target pixel block and the displacement vector information; and generating motion estimation confidence information according to the first target pixel block and the matching pixel block.
Further, the processor 710 may be specifically configured to determine a target region on the second image by taking the first target pixel block as a base point and combining the displacement vector information, where the target region includes a plurality of second target pixel blocks, and the offset between each of the plurality of second target pixel blocks and the first target pixel block satisfies a preset offset threshold; and determine a pixel block which meets a preset matching condition among the plurality of second target pixel blocks as the matching pixel block.
In yet another or more possible embodiments, the processor 710 may be further configured to detect the range within which the target motion estimation confidence value included in the motion estimation confidence information falls; in a case that the target motion estimation confidence value falls within the first confidence value range, determine that the confidence value evaluation of the motion vector information of each pixel block of the first image in the second image is untrustworthy, and not denoise the first image using the matching pixel block and the displacement vector information; and in a case that the target motion estimation confidence value falls within the second confidence value range, determine that the confidence value evaluation of the motion vector information of each pixel block of the first image in the second image is trustworthy, and denoise the first image using the matching pixel block and the displacement vector information.
It is to be understood that the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics processor 7041 processes image data of a still image or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein. Memory 709 may be used to store software programs as well as various data, including but not limited to applications and operating systems. Processor 710 may integrate an application processor, which primarily handles operating systems, user interfaces, applications, etc., and a modem processor, which primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the embodiment of the image processing method, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device in the above embodiment. The readable storage medium includes a computer-readable storage medium, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
In addition, an embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to execute a program or an instruction to implement each process of the above-mentioned embodiment of the image processing method, and the same technical effect can be achieved.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, or a network device) to execute the method of the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (11)

1. An image processing method, comprising:
acquiring a first image, a second image and rotation motion information of electronic equipment, wherein the first image and the second image are adjacent frame images;
performing affine transformation on the first image according to the rotation motion information to obtain a third image, wherein the similarity between the third image and the second image is higher than a first preset similarity;
determining displacement vector information of the third image relative to the second image according to first target image projection information of the second image and second target image projection information of the third image;
and generating motion estimation confidence information according to the second image, the third image and the displacement vector information, wherein the motion estimation confidence information is used for evaluating the confidence value of the motion vector information of each pixel block in the first image in the second image.
2. The method of claim 1, wherein prior to obtaining the first image, the second image, and the rotational motion information of the electronic device, the method further comprises:
acquiring a three-axis rotation motion vector of the electronic equipment;
computing a rotational motion affine matrix based on the first image, the second image, and the three-axis rotational motion vectors;
determining the rotational motion affine matrix as the rotational motion information.
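As an illustrative sketch of claim 2 (not the claimed construction, which the patent leaves unspecified): a three-axis rotation vector, e.g. integrated gyroscope readings, can be converted to an image-plane warp via Rodrigues' formula conjugated by the camera intrinsics. The pinhole-camera model and the helper name `rotation_affine_matrix` are assumptions.

```python
import numpy as np

def rotation_affine_matrix(rot_vec, focal_len, cx, cy):
    """Map a three-axis rotation vector (radians) to a 3x3 image-plane
    warp, assuming a pinhole camera with focal length focal_len and
    principal point (cx, cy). Hypothetical helper for claim 2."""
    theta = np.linalg.norm(rot_vec)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        # Rodrigues' formula: axis-angle -> 3x3 rotation matrix
        k = np.asarray(rot_vec, dtype=float) / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    # Conjugate by the intrinsics to express the rotation in pixel space
    Kmat = np.array([[focal_len, 0.0, cx],
                     [0.0, focal_len, cy],
                     [0.0, 0.0, 1.0]])
    return Kmat @ R @ np.linalg.inv(Kmat)
```

A pure rotation about the optical axis leaves the principal point fixed, which gives a quick sanity check on the construction.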
3. The method of claim 1 or 2, wherein before determining displacement vector information of the third image relative to the second image based on the first target image projection information of the second image and the second target image projection information of the third image, the method further comprises:
respectively carrying out image mapping on the second image and the third image to obtain a first projection sequence corresponding to the second image and a second projection sequence corresponding to the third image;
calculating a first image projection value corresponding to the first projection sequence and a second image projection value corresponding to the second projection sequence;
and respectively filtering the first image projection value and the second image projection value to obtain first target image projection information corresponding to the first image projection value and second target image projection information corresponding to the second image projection value.
4. The method of claim 3, wherein the computing a first image projection value corresponding to the first projection sequence and a second image projection value corresponding to the second projection sequence comprises:
adding sequence values corresponding to each column in the first projection sequence to obtain a first image projection value corresponding to the first projection sequence; and adding the sequence values corresponding to each column in the second projection sequence to obtain a second image projection value corresponding to the second projection sequence.
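One way to read claims 3-4, as a hedged sketch: map each image to per-column sums, then smooth them to obtain the "target image projection information". The moving-average filter below is an illustrative stand-in; the claims do not specify the filter.

```python
import numpy as np

def column_projection(img):
    """Add the values in each column of a grayscale image to obtain a
    1-D projection sequence (claim 4: one projection value per column)."""
    return np.asarray(img, dtype=np.float64).sum(axis=0)

def filter_projection(proj, k=5):
    """Moving-average smoothing standing in for the unspecified filtering
    step of claim 3 that yields the target image projection information."""
    kernel = np.ones(k) / k
    return np.convolve(proj, kernel, mode='same')
```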
5. The method of claim 1, wherein the first target image projection information comprises a first line projection curve and the second target image projection information comprises a second line projection curve; the determining displacement vector information of the third image relative to the second image according to the first target image projection information of the second image and the second target image projection information of the third image comprises:
performing cross-correlation calculation on the first row projection curve and the second row projection curve to obtain a curve peak value of a correlation curve of the first row projection curve and the second row projection curve;
determining a curve peak of the correlation curve as displacement vector information of the third image relative to the second image.
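The cross-correlation step of claim 5 can be sketched as follows (integer-lag only; sub-pixel refinement and normalization, which a real implementation would likely add, are omitted, and the function name is an assumption):

```python
import numpy as np

def projection_shift(curve_a, curve_b):
    """Estimate the shift of curve_a relative to curve_b from the peak
    of their cross-correlation curve, as in claim 5."""
    a = curve_a - curve_a.mean()
    b = curve_b - curve_b.mean()
    corr = np.correlate(a, b, mode='full')
    # The peak index relative to the zero-lag position is the displacement
    return int(np.argmax(corr)) - (len(b) - 1)
```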
6. The method of claim 1, wherein generating motion estimation confidence information from the second image, the third image, and the displacement vector information comprises:
dividing the third image to obtain a plurality of non-overlapping first pixel blocks, wherein the plurality of first pixel blocks comprise a first target pixel block;
determining a matching pixel block in the second image according to the first target pixel block and the displacement vector information;
and generating motion estimation confidence information according to the first target pixel block and the matching pixel block.
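A minimal sketch of claim 6: divide the warped image into non-overlapping blocks, compare each block with its displacement-compensated counterpart in the reference image, and turn the residual into a per-block confidence. The mean-absolute-difference measure and the 1/(1+MAD) mapping are assumptions, not taken from the patent.

```python
import numpy as np

def block_confidence(img_ref, img_warp, dx, dy, block=8):
    """For each non-overlapping block of img_warp (the third image),
    fetch the block of img_ref (the second image) offset by the global
    displacement (dx, dy) and map the mean absolute difference to a
    confidence in (0, 1]. Illustrative only."""
    h, w = img_warp.shape
    conf = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ry, rx = y + dy, x + dx
            if 0 <= ry and ry + block <= h and 0 <= rx and rx + block <= w:
                a = img_warp[y:y+block, x:x+block].astype(float)
                b = img_ref[ry:ry+block, rx:rx+block].astype(float)
                mad = np.abs(a - b).mean()
                conf[(y, x)] = 1.0 / (1.0 + mad)
    return conf
```

Identical images with zero displacement give confidence 1.0 everywhere; larger residuals drive the confidence toward 0.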
7. The method of claim 6, wherein said determining a matching block of pixels in the second image based on the first target block of pixels and the displacement vector information comprises:
determining a target area on the second image by taking the first target pixel block as a base point and combining the displacement vector information, wherein the target area comprises a plurality of second target pixel blocks, and a displacement between each second target pixel block in the plurality of second target pixel blocks and the first target pixel block meets a preset offset threshold;
and determining a pixel block which meets a preset matching condition in the plurality of second target pixel blocks as the matching pixel block.
8. The method of claim 6 or 7, wherein after generating motion estimation confidence information based on the first target pixel block and the matching pixel block, the method further comprises:
detecting a range of a target motion estimation confidence value included in the motion estimation confidence information;
in a case that the range of the target motion estimation confidence value is a first confidence value range, determining that the motion vector information of each pixel block in the first image in the second image is not credible, and denoising the first image without using the matching pixel block and the displacement vector information;
and in a case that the range of the target motion estimation confidence value is a second confidence value range, determining that the motion vector information of each pixel block in the first image in the second image is credible, and denoising the first image by using the matching pixel block and the displacement vector information.
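The confidence gating of claim 8 can be sketched as a simple decision rule. The mean-based aggregation and the 0.5 threshold are illustrative assumptions; the patent only requires that one confidence range disables motion-compensated denoising and another enables it.

```python
def should_use_motion_denoise(confidences, threshold=0.5):
    """Decide whether the per-block motion vectors are trustworthy enough
    to drive temporal denoising (claim 8 sketch). Returns False when the
    vectors should be ignored and the frame denoised without them."""
    if not confidences:
        return False
    mean_conf = sum(confidences) / len(confidences)
    return mean_conf >= threshold
```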
9. An image processing apparatus characterized by comprising:
an acquisition module, configured to acquire a first image, a second image and rotation motion information of an electronic device, wherein the first image and the second image are adjacent frame images;
a processing module, configured to perform affine transformation on the first image according to the rotation motion information to obtain a third image, wherein the similarity between the third image and the second image is higher than a first preset similarity;
a determining module, configured to determine displacement vector information of the third image relative to the second image according to first target image projection information of the second image and second target image projection information of the third image;
a generating module, configured to generate motion estimation confidence information according to the second image, the third image, and the displacement vector information, where the motion estimation confidence information is used to evaluate a confidence value of motion vector information of each pixel block in the first image in the second image.
10. An electronic device, comprising: a processor, a memory and a program or instructions stored on the memory and executable on the processor, which when executed by the processor, implement the steps of the image processing method according to any one of claims 1 to 8.
11. A readable storage medium, characterized in that it stores thereon a program or instructions which, when executed by a processor, implement the steps of the image processing method according to any one of claims 1 to 8.
CN202110765388.4A 2021-07-06 2021-07-06 Image processing method, device, equipment and storage medium Pending CN113516684A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110765388.4A CN113516684A (en) 2021-07-06 2021-07-06 Image processing method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN113516684A true CN113516684A (en) 2021-10-19

Family

ID=78066885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110765388.4A Pending CN113516684A (en) 2021-07-06 2021-07-06 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113516684A (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103402045A (en) * 2013-08-20 2013-11-20 长沙超创电子科技有限公司 Image de-spin and stabilization method based on subarea matching and affine model
US20140152862A1 (en) * 2012-11-30 2014-06-05 Canon Kabushiki Kaisha Image processing apparatus, image pickup apparatus, image pickup system, image processing method, and non-transitory computer-readable storage medium
CN111583211A (en) * 2020-04-29 2020-08-25 广东利元亨智能装备股份有限公司 Defect detection method and device and electronic equipment


Non-Patent Citations (1)

Title
Cui Zhigao; Li Aihua; Wang Tao; Li Hui: "Object segmentation algorithm based on motion saliency map and optical flow vector analysis", Chinese Journal of Scientific Instrument, no. 07, 15 July 2017 (2017-07-15) *

Similar Documents

Publication Publication Date Title
CN108805917B (en) Method, medium, apparatus and computing device for spatial localization
KR101722803B1 (en) Method, computer program, and device for hybrid tracking of real-time representations of objects in image sequence
Klein et al. Parallel tracking and mapping on a camera phone
US8290212B2 (en) Super-resolving moving vehicles in an unregistered set of video frames
US11138709B2 (en) Image fusion processing module
US10853927B2 (en) Image fusion architecture
WO2015017539A1 (en) Rolling sequential bundle adjustment
CN112506340B (en) Equipment control method, device, electronic equipment and storage medium
CN112348889B (en) Visual positioning method, and related device and equipment
CN113556464A (en) Shooting method and device and electronic equipment
CN111273772A (en) Augmented reality interaction method and device based on slam mapping method
CN105809664B (en) Method and device for generating three-dimensional image
Hannuksela et al. Vision-based motion estimation for interaction with mobile devices
CN113489909B (en) Shooting parameter determining method and device and electronic equipment
CN109509261B (en) Augmented reality method, device and computer storage medium
CN115705651A (en) Video motion estimation method, device, equipment and computer readable storage medium
CN103413326A (en) Method and device for detecting feature points in Fast approximated SIFT algorithm
CN113516684A (en) Image processing method, device, equipment and storage medium
CN114565777A (en) Data processing method and device
Zhen et al. Multi-image motion deblurring aided by inertial sensors
CN116266356A (en) Panoramic video transition rendering method and device and computer equipment
CN114049473A (en) Image processing method and device
CN114241127A (en) Panoramic image generation method and device, electronic equipment and medium
CN111684489A (en) Image processing method and device
CN115278071B (en) Image processing method, device, electronic equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination