CN110766611A - Image processing method, image processing device, storage medium and electronic equipment

Info

Publication number
CN110766611A
Authority
CN
China
Prior art keywords
image
area
region
target
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911056817.XA
Other languages
Chinese (zh)
Inventor
Zhang Yang (张阳)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Wodong Tianjun Information Technology Co Ltd
Original Assignee
Beijing Wodong Tianjun Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Wodong Tianjun Information Technology Co Ltd filed Critical Beijing Wodong Tianjun Information Technology Co Ltd
Priority to CN201911056817.XA priority Critical patent/CN110766611A/en
Publication of CN110766611A publication Critical patent/CN110766611A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20172 - Image enhancement details
    • G06T2207/20192 - Edge enhancement; Edge preservation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides an image processing method, an image processing apparatus, a storage medium, and an electronic device. The method includes the following steps: determining a first image and a second image from a plurality of images; determining a calibration area based on an area circled by a user in the first image, where the calibration area includes a target region and a fusion region; extracting scale features from the fusion region and the second image and matching them; converting the calibration area and the second image into the same plane space according to the matching result; and fusing the target region with the second image to obtain a fused image. This improves the flexibility of image fusion. Because the scale features are matched and the calibration area and the second image are converted into the same plane space according to the matching result, stitching errors caused by the phone shaking or moving while the user shoots are avoided, which improves the quality and smoothness of the fused image and achieves accurate fusion.

Description

Image processing method, image processing device, storage medium and electronic equipment
Technical Field
The present invention relates to the field of image processing and computer technologies, and in particular, to an image processing method and apparatus, a storage medium, and an electronic device.
Background
In everyday use, images often need to be processed. For example, in an object-erasing scenario, passers-by or unwanted objects often intrude into the viewfinder while a photo is being taken, and an object in a certain area must be removed so that the real background of the scene is restored. In a target-adding scenario, an object from one picture must be added to another picture with a good fusion effect.
In the course of making the present invention, the inventor found that the related art has at least the following problems when processing images using multiple frames of a video or an image sequence:
1) Restoration across multiple frames relies only on copying pixels between images; inaccurate matching or color differences between the images then produce unnatural, poorly fused results.
2) The last frame of the sequence is always taken as the result frame, which does not match actual shooting scenarios.
3) The target frame is selected by contour comparison. Because the motion trajectory and contour shape of a moving object are not fixed, this introduces angular error in practice, and only the moving parts of a picture can be detected, so not all user needs can be met.
4) Image stitching considers only horizontal and vertical offsets, while in practice the images may also exhibit rotation, offset along the Z axis, and so on.
5) The extraction position in the target frame is taken to be the same coordinate area as in the result frame, which does not hold when the whole image sequence shakes or rotates.
Therefore, a new image processing method, device, storage medium and electronic device are needed, which can realize accurate fusion between images.
The above information disclosed in this background section is only for enhancement of understanding of the background of the invention and therefore it may contain information that does not constitute prior art that is already known to a person of ordinary skill in the art.
Disclosure of Invention
In view of this, the present invention provides an image processing method, an image processing apparatus, a storage medium, and an electronic device, which can improve the smoothness of fused images and achieve accurate fusion.
Additional features and advantages of the invention will be set forth in the detailed description which follows, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided an image processing method, wherein the method includes: determining a first image and a second image from a plurality of images; determining a calibration area based on an area circled by a user in the first image; the calibration area includes: a target region and a fusion region; extracting scale features of the fusion region and the second image, and matching; converting the calibration area and the second image into the same plane space according to a matching result; and fusing the target area and the second image to obtain a fused image.
According to some embodiments, when the image processing mode is erasing, converting the calibration area and the second image into the same plane space according to the matching result includes: converting the second image into the plane space of the calibration area. Fusing the target region with the second image to obtain a fused image includes: fusing the region of the second image corresponding to the target region into the first image to obtain the fused image.
According to some embodiments, when the image processing mode is adding, converting the calibration area and the second image into the same plane space according to the matching result includes: converting the calibration area into the plane space of the second image. Fusing the target region with the second image to obtain a fused image includes: fusing the target region into the region of the second image corresponding to the target region to obtain the fused image.
According to some embodiments, the method includes fusing the target region with the second image using a Poisson fusion technique to obtain the fused image.
According to some embodiments, the method further includes restoring boundary pixels in the fused image using a pixel restoration algorithm.
According to some embodiments, determining a calibration area based on an area circled by a user in the first image includes: determining a target region based on the area circled by the user in the first image; expanding the target region to determine the calibration area; and taking the calibration area with the target region removed as the fusion region.
According to some embodiments, determining a target region based on the area circled by the user in the first image includes: optimizing the segmentation boundary of the circled area using an image segmentation technique; and determining the target region from the area enclosed by the segmentation boundary.
According to another aspect of the present invention, there is provided an image processing apparatus, wherein the apparatus comprises: a first determining module configured to determine a first image and a second image from a plurality of images; the second determination module is configured to determine a calibration area based on an area circled in the first image by a user; the calibration area includes: a target region and a fusion region; the matching module is configured to extract the scale features of the fusion area and the second image and perform matching; the conversion module is configured to convert the calibration area and the second image into the same plane space according to a matching result; and the acquisition module is configured to fuse the target area and the second image to acquire a fused image.
According to some embodiments, when the image processing mode is erasing, the conversion module is configured to convert the second image into the plane space of the calibration area, and the acquisition module is configured to fuse the region of the second image corresponding to the target region into the first image to obtain the fused image.
According to some embodiments, when the image processing mode is adding, the conversion module is configured to convert the calibration area into the plane space of the second image, and the acquisition module is configured to fuse the target region into the region of the second image corresponding to the target region to obtain the fused image.
According to some embodiments, the acquiring module is configured to fuse the target region with the second image using a poisson fusion technique to acquire a fused image.
According to some embodiments, the apparatus further comprises: a restoration module configured to restore boundary pixels in the fused image using a pixel restoration algorithm.
According to some embodiments, the second determination module is configured to determine a target region based on the area circled by the user in the first image, expand the target region to determine the calibration area, and take the calibration area with the target region removed as the fusion region.
According to some embodiments, the second determination module is configured to optimize the segmentation boundary of the circled area using an image segmentation technique, and to determine the target region from the area enclosed by the segmentation boundary.
According to a further aspect of the invention, a computer-readable storage medium is provided, on which a computer program is stored, wherein the program realizes the above-mentioned method steps when executed by a processor.
According to still another aspect of the present invention, there is provided an electronic apparatus, including: one or more processors; storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the above-mentioned method steps.
In the embodiment of the invention, a first image and a second image are determined from a plurality of images; a calibration area is determined based on an area circled by a user in the first image, the calibration area including a target region and a fusion region; scale features are extracted from the fusion region and the second image and matched; the calibration area and the second image are converted into the same plane space according to the matching result; and the target region is fused with the second image to obtain a fused image. This improves the flexibility of image fusion. Because the scale features are matched and the calibration area and the second image are converted into the same plane space according to the matching result, stitching errors caused by the phone shaking or moving while the user shoots are avoided, which improves the quality and smoothness of the fused image and achieves accurate fusion.
Compared with the prior art, which directly takes the last frame as the result frame, the embodiment of the invention adds a step that determines the result frame and the target frame. This improves the flexibility of image fusion and suits various application scenarios, such as user-interface flow design and automatic selection algorithms for the result frame and the target frame.
In the embodiment of the invention, Poisson fusion is used for seamless blending, which effectively eliminates the unnatural seams caused by boundaries and color differences during image fusion and improves the fusion result.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment;
FIG. 2 is a schematic diagram illustrating a calibration area according to an embodiment of the present invention;
FIG. 3 is a graph illustrating the effects of Poisson fusion, according to an embodiment of the present invention;
FIG. 4 is a diagram illustrating an effect of image fusion according to an embodiment of the present invention;
FIG. 5 is a flow diagram illustrating another method of image processing according to an exemplary embodiment;
FIG. 6 is a flow diagram illustrating yet another image processing method according to an exemplary embodiment;
FIG. 7 is a schematic diagram illustrating a configuration of an image processing apparatus according to an exemplary embodiment;
FIG. 8 is a schematic structural diagram of an electronic device according to an exemplary embodiment.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The same reference numerals denote the same or similar parts in the drawings, and thus, a repetitive description thereof will be omitted.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
The block diagrams shown in the figures are functional entities only and do not necessarily correspond to physically separate entities. That is, these functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flow charts shown in the drawings are merely illustrative and do not necessarily include all of the contents and operations/steps, nor do they necessarily have to be performed in the order described. For example, some operations/steps may be decomposed, and some operations/steps may be combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
The following describes the image processing method according to the embodiment of the present invention in detail with reference to specific embodiments. It should be noted that the image processing method provided by the embodiment of the present invention may be executed by any device with computing processing capability, such as a server and/or a terminal device, and the present invention is not limited thereto.
It should be noted that in the embodiment of the present invention, a plurality of frames in a video or image sequence are taken as an example for description.
FIG. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment. As shown in FIG. 1, the method may include, but is not limited to, the following steps:
In S110, a first image and a second image are determined from the plurality of images.
According to the embodiment of the invention, the plurality of images can be acquired in the camera's burst mode or video mode. These images are correlated with one another; for example, they may differ only in the movement of part of an object.
According to the embodiment of the invention, after the plurality of images is acquired, the images are displayed to the user automatically or on request, the user performs a selection operation, and the first image and the second image are determined from the plurality of images in response to that operation.
In the embodiment of the present invention, the first image and the second image may be designated by the user after browsing all of the burst images or the video, or they may be determined automatically by an algorithm. Such an algorithm can be based on image sharpness; in a burst it tends to pick the clearest picture. Many sharpness measures exist, the common ones being gradient-based, such as the Brenner gradient, the Tenengrad gradient, the Laplacian gradient, the variance method, and the energy gradient.
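As an illustration only (not prescribed by the patent), the following Python sketch shows how such automatic frame selection could be implemented with OpenCV, using the variance-of-Laplacian and Tenengrad scores named above; the helper names are our own.

```python
import cv2
import numpy as np

def laplacian_sharpness(image_bgr):
    """Variance of the Laplacian response: higher means sharper."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def tenengrad_sharpness(image_bgr, ksize=3):
    """Tenengrad score: mean squared Sobel gradient magnitude."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=ksize)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=ksize)
    return float(np.mean(gx * gx + gy * gy))

def pick_sharpest(frames, score=laplacian_sharpness):
    """Pick the clearest frame of a burst or video as the candidate result frame."""
    return max(frames, key=score)
```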
Compared with the prior art, which directly takes the last frame as the result frame, the embodiment of the invention adds a step that determines the result frame and the target frame. This improves the flexibility of image fusion and suits various application scenarios, such as user-interface flow design and automatic selection algorithms for the result frame and the target frame.
In S120, a calibration region is determined based on a region circled by a user in the first image, wherein the calibration region includes: a target region and a fusion region.
According to the embodiment of the invention, after the first image and the second image are determined, a target region is determined based on the area circled by the user in the first image, the target region is expanded to determine a calibration area, and the calibration area with the target region removed is taken as the fusion region.
According to the embodiment of the present invention, the area circled by the user in the first image may be taken directly as the target region, or the segmentation boundary of the circled area may be optimized using an image segmentation technique, with the region enclosed by the optimized boundary taken as the target region.
In the embodiment of the invention, the GrabCut image segmentation technique may be used to optimize the segmentation boundary and segment out the target region; GrabCut segments an object based on a boundary supplied by the user.
In the embodiment of the present invention, after the target region is determined, it may be expanded outward by a sufficient margin into a region of interest (ROI) that serves as the calibration area. Note that the calibration area may be a rectangle.
In the embodiment of the invention, the calibration area contains the target region, and the part of the calibration area outside the target region is the fusion region. FIG. 2 is a schematic diagram of a calibration area according to an embodiment of the present invention. As shown in FIG. 2, the calibration area includes a target region and a fusion region.
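A minimal sketch of deriving the target and calibration regions with OpenCV's GrabCut. The rectangle from the user's circling and the fixed expansion margin are assumptions, not details from the patent:

```python
import cv2
import numpy as np

def segment_target(image, user_rect, iters=5):
    """Refine the user-circled rectangle into a target mask with GrabCut."""
    mask = np.zeros(image.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)  # background GMM buffer
    fgd = np.zeros((1, 65), np.float64)  # foreground GMM buffer
    cv2.grabCut(image, mask, user_rect, bgd, fgd, iters, cv2.GC_INIT_WITH_RECT)
    # Definite and probable foreground pixels form the target region.
    target = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return target.astype(np.uint8)

def calibration_rect(target_mask, margin=50):
    """Expand the target's bounding box outward into a rectangular ROI.
    The fusion region is this rectangle minus the target mask."""
    x, y, w, h = cv2.boundingRect(target_mask)
    rows, cols = target_mask.shape
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1, y1 = min(x + w + margin, cols), min(y + h + margin, rows)
    return x0, y0, x1, y1
```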
In S130, the scale features of the fusion region and the second image are extracted and matched.
In an embodiment of the present invention, the scale feature may be a speeded-up robust feature (SURF).
According to the embodiment of the invention, scale features can be extracted from the fusion region and the second image by a feature extraction algorithm such as the scale-invariant feature transform (SIFT) or SURF (speeded-up robust features), which is scale- and rotation-invariant. Both SURF and SIFT are mature algorithms in the open-source computer vision library OpenCV, which provides application programming interfaces (APIs) for them.
OpenCV is a cross-platform computer vision library released under the BSD (Berkeley Software Distribution) open-source license; it runs on Linux, Windows, Android, and Mac OS. The library is lightweight and efficient: it consists of a collection of C functions and a small number of C++ classes, provides interfaces for languages such as Python, Ruby, and MATLAB, and implements many general-purpose algorithms for image processing and computer vision.
In the embodiment of the invention, after the scale-invariant features are extracted with the SURF and/or SIFT algorithm, feature matching is performed between the fusion region and the second image; the matching can likewise use a method from OpenCV, such as the keypoint-descriptor matcher FlannBasedMatcher.
Note that the target region in the first image is either a region to be added to the second image or a region to be erased, so the target region differs between the first image and the second image. If feature matching were performed between the target region (or the whole calibration area) and the second image, matching could fail or give inaccurate results; therefore, in the embodiment of the present invention, feature matching uses the fusion region and the second image.
According to the embodiment of the invention, when feature matching is performed, similarity can be computed between the scale features extracted from the fusion region and those extracted from the second image, yielding the closest scale features in the second image.
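The patent names SIFT/SURF and FlannBasedMatcher; here is a hedged sketch of that step. SIFT is used because SURF is non-free in standard OpenCV builds, and the ratio-test threshold is an assumption:

```python
import cv2

def match_scale_features(fusion_region, second_image, ratio=0.7):
    """Extract scale-invariant features from both images and match them."""
    sift = cv2.SIFT_create()
    gray1 = cv2.cvtColor(fusion_region, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(second_image, cv2.COLOR_BGR2GRAY)
    kp1, des1 = sift.detectAndCompute(gray1, None)
    kp2, des2 = sift.detectAndCompute(gray2, None)
    # FLANN with a KD-tree index, as in OpenCV's FlannBasedMatcher examples.
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
    # Lowe's ratio test keeps a match only if it is clearly closer
    # than the second-best candidate.
    good = [m for m, n in flann.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    return kp1, kp2, good
```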
In S140, the calibration region and the second image are converted into the same plane space according to the matching result.
According to the embodiment of the invention, feature matching yields the scale features closest to those of the fusion region, from which the homography matrix between the two images is determined; the region corresponding to the calibration area (in the first image) can then be located in the second image. For example, suppose 5 scale features A1-A5 are extracted from the fusion region and 20 scale features B1-B20 from the second image, and computation determines B1-B5 to be the 5 features most similar to A1-A5. The homography between the calibration area (or the first image) and the second image is determined from the correspondence between B1-B5 and A1-A5, and according to this homography, the calibration area can be converted into the plane space of the second image, or the second image into the plane space of the calibration area (the first image). Note that converting the two images into the same plane space means aligning them and determining the mapping between their pixels.
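Continuing the sketch, the homography can be estimated from the matched keypoints with RANSAC and used to warp the second image into the plane of the first. The point ordering assumes the keypoints are in full-image coordinates (if extracted from a fusion-region crop, they must first be offset back), and the 5-pixel reprojection threshold is a guess:

```python
import cv2
import numpy as np

def warp_second_to_first(kp1, kp2, good, second_image, out_size_wh):
    """Estimate the homography mapping the second image onto the first
    image's plane and warp it (the erase-mode direction)."""
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC rejects residual mismatches that survived the ratio test.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return cv2.warpPerspective(second_image, H, out_size_wh)
```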
In an embodiment of the present invention, the image processing modes may include erasing and adding. When the image processing mode is erasing, the second image can be converted into the plane space of the calibration area. When the image processing mode is adding, the calibration area can be converted into the plane space of the second image.
Note that when the image processing mode is erasing, the first image may be the result frame and the second image the target frame; to erase a target region in the result frame, the target frame is converted into the plane space of the result frame. When the image processing mode is adding, the first image may be the target frame and the second image the result frame; to add the target region of the target frame into the result frame, the target region is converted into the plane space of the result frame.
In S150, the target region and the second image are fused to obtain a fused image.
According to the embodiment of the invention, when the image processing mode is erasing, the first image may be the result frame and the second image the target frame; to erase the target region in the result frame, the region of the second image corresponding to the target region is fused into the first image to obtain the fused image. When the image processing mode is adding, the first image may be the target frame and the second image the result frame; to add the target region of the target frame into the result frame, the target region is fused into the corresponding region of the second image to obtain the fused image.
According to the embodiment of the invention, the target area and the second image can be fused by using a Poisson fusion technology to obtain a fused image.
In the embodiment of the invention, Poisson fusion is used for seamless blending, which effectively eliminates the unnatural seams caused by boundaries and color differences during image fusion and improves the fusion result. FIG. 3 shows the effect of Poisson fusion according to the embodiment of the present invention. As shown in FIG. 3, the left side is the result of merely copying pixels between the two images, where an obvious color difference is visible along the central rectangular boundary; the right side is the result of Poisson fusion, where the color difference along the rectangular boundary is clearly eliminated.
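OpenCV exposes Poisson image editing as seamlessClone; a sketch of the erase-mode blend follows. The bounding-box cropping and the NORMAL_CLONE flag are our assumptions:

```python
import cv2
import numpy as np

def poisson_fuse(aligned_second, first_image, target_mask):
    """Blend the aligned second image's target-area pixels into the first
    image with Poisson (gradient-domain) fusion."""
    x, y, w, h = cv2.boundingRect(target_mask)
    patch = aligned_second[y:y + h, x:x + w]
    mask = (target_mask[y:y + h, x:x + w] * 255).astype(np.uint8)
    center = (x + w // 2, y + h // 2)  # where the patch lands in the destination
    return cv2.seamlessClone(patch, first_image, mask, center, cv2.NORMAL_CLONE)
```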
According to the embodiment of the invention, after image fusion is carried out, the boundary pixels in the fused image can be further recovered by using a pixel recovery algorithm.
In the embodiment of the present invention, the few boundary pixels remaining in the fused image can be restored with a single-frame pixel-restoration (inpainting) algorithm. This may be a method available in OpenCV, "An Image Inpainting Technique Based on the Fast Marching Method", which restores pixels over small areas; if only a few residual pixels remain, this method can be used for further restoration.
Note that when the pixel restoration algorithm is applied, the restoration can be repeated several times for a better effect, improving the quality of the fused image.
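A sketch of this cleanup pass using OpenCV's fast-marching inpainting (cv2.INPAINT_TELEA implements the cited method). Deriving the boundary mask from the edge of the target mask is our assumption:

```python
import cv2
import numpy as np

def restore_boundary(fused_image, target_mask, passes=2, radius=3):
    """Repair residual boundary pixels; repeating the pass, as the text
    above suggests, tends to improve the result."""
    # Morphological gradient of the mask marks a thin ring along the seam.
    ring = cv2.morphologyEx(target_mask * 255, cv2.MORPH_GRADIENT,
                            np.ones((3, 3), np.uint8))
    out = fused_image
    for _ in range(passes):
        out = cv2.inpaint(out, ring, radius, cv2.INPAINT_TELEA)
    return out
```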
Note that after the fused image is obtained, it may be shown to the user; if the user is not satisfied with the fusion result, S120-S150 may be executed again to refine it further. As shown in FIG. 4, the left side is the original image, the middle is the result after the first erasure, and the right side is the result after the second erasure according to the embodiment of the present invention; repeated rounds of fusion yield a better result.
In the embodiment of the invention, a first image and a second image are determined from a plurality of images; a calibration area is determined based on an area circled by a user in the first image, the calibration area including a target region and a fusion region; scale features are extracted from the fusion region and the second image and matched; the calibration area and the second image are converted into the same plane space according to the matching result; and the target region is fused with the second image to obtain a fused image. This improves the flexibility of image fusion. Because the scale features are matched and the calibration area and the second image are converted into the same plane space according to the matching result, stitching errors caused by the phone shaking or moving while the user shoots are avoided, which improves the quality and smoothness of the fused image and achieves accurate fusion.
The following describes the image processing method provided in the embodiment of the present invention in detail with reference to specific application scenarios.
FIG. 5 is a flowchart illustrating another image processing method according to an exemplary embodiment, described with the image processing mode set to erasing. As shown in FIG. 5, the method may include, but is not limited to, the following steps (an end-to-end code sketch follows the step list):
In S510, a result frame and a target frame are determined from the plurality of images.
In S520, a calibration area is determined based on the area circled by the user in the result frame; the calibration area includes: a target region and a fusion region.
In S530, the scale features of the fusion region in the result frame and the target frame are extracted and matched.
In S540, the target frame is converted into the same plane space of the result frame according to the matching result.
In S550, a region corresponding to the target region in the target frame is fused with the result frame, and a fused image is obtained.
In S560, the boundary pixels in the fused image are restored using a pixel restoration algorithm.
In the embodiment of the invention, the flexibility of image fusion is improved. By matching scale features and converting the calibration area and the second image into the same plane space according to the matching result, stitching errors caused by the phone shaking or moving while the user shoots are avoided, which improves the quality and smoothness of the fused image and achieves accurate fusion.
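Putting S510-S560 together, a hedged end-to-end sketch of the erase flow, reusing the illustrative helpers from the earlier sketches (pick_sharpest, segment_target, calibration_rect, match_scale_features, poisson_fuse, restore_boundary); the target-frame choice and the blanking of target pixels before matching are simplifications of ours:

```python
import cv2
import numpy as np

def erase(frames, user_rect):
    """End-to-end erase-mode sketch (S510-S560)."""
    result = pick_sharpest(frames)                              # S510: result frame
    target_frame = next(f for f in frames if f is not result)  # S510 (simplified)
    target_mask = segment_target(result, user_rect)             # S520
    x0, y0, x1, y1 = calibration_rect(target_mask)
    fusion = result[y0:y1, x0:x1].copy()
    fusion[target_mask[y0:y1, x0:x1] == 1] = 0  # hide target pixels from matching
    kp1, kp2, good = match_scale_features(fusion, target_frame)  # S530
    # Offset crop-local keypoints back to full-image coordinates.
    dst = np.float32([(kp1[m.queryIdx].pt[0] + x0, kp1[m.queryIdx].pt[1] + y0)
                      for m in good]).reshape(-1, 1, 2)
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)         # S540
    rows, cols = result.shape[:2]
    aligned = cv2.warpPerspective(target_frame, H, (cols, rows))
    fused = poisson_fuse(aligned, result, target_mask)           # S550
    return restore_boundary(fused, target_mask)                  # S560
```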
FIG. 6 is a flowchart illustrating yet another image processing method according to an exemplary embodiment, described with the image processing mode set to adding. As shown in FIG. 6, the method may include, but is not limited to, the following steps:
In S610, a result frame and a target frame are determined from the plurality of images.
In S620, a calibration area is determined based on an area circled by the user in the target frame; the calibration area includes: a target region and a fusion region.
In S630, the scale features of the fusion region in the target frame and the result frame are extracted and matched.
In S640, the target frame is converted into the same plane space of the result frame according to the matching result.
In S650, a region corresponding to the target region in the target frame is fused with the result frame, and a fused image is obtained.
In S660, the boundary pixels in the fused image are restored using a pixel restoration algorithm.
In the embodiment of the invention, the flexibility of image fusion is improved. By matching scale features and converting the calibration area and the second image into the same plane space according to the matching result, stitching errors caused by the phone shaking or moving while the user shoots are avoided, which improves the quality and smoothness of the fused image and achieves accurate fusion.
It should be clearly understood that the present disclosure describes how to make and use particular examples, but the principles of the present disclosure are not limited to any details of these examples. Rather, these principles can be applied to many other embodiments based on the teachings of the present disclosure.
The following are embodiments of the apparatus of the present invention that may be used to perform embodiments of the method of the present invention. In the following description of the apparatus, the same parts as those of the foregoing method will not be described again.
FIG. 7 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment. The apparatus 700 includes a first determination module 710, a second determination module 720, a matching module 730, a conversion module 740, and an acquisition module 750. Specifically:
a first determination module 710 configured to determine a first image and a second image from a plurality of images.
A second determining module 720, configured to determine a calibration region based on a region circled by a user in the first image; the calibration area includes: a target region and a fusion region.
And the matching module 730 is configured to extract and match the scale features of the fusion region and the second image.
The conversion module 740 is configured to convert the calibration region and the second image into a same plane space according to the matching result.
An acquisition module 750 configured to fuse the target region with the second image to obtain the fused image.
In the embodiment of the invention, the flexibility of image fusion is improved. By matching scale features and converting the calibration area and the second image into the same plane space according to the matching result, stitching errors caused by the phone shaking or moving while the user shoots are avoided, which improves the quality and smoothness of the fused image and achieves accurate fusion.
As another aspect, the present application also provides a computer-readable medium, which may be contained in the apparatus described in the above embodiments; or may be separate and not incorporated into the device. The computer readable medium carries one or more programs which, when executed by a device, cause the device to perform: determining a first image and a second image from a plurality of images; determining a calibration area based on an area circled by a user in the first image; the calibration area includes: a target region and a fusion region; extracting scale features of the fusion region and the second image, and matching; converting the calibration area and the second image into the same plane space according to a matching result; and fusing the target area and the second image to obtain a fused image.
Fig. 8 is a schematic structural diagram of an electronic device according to an exemplary embodiment. It should be noted that the electronic device shown in fig. 8 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present application.
As shown in FIG. 8, the computer system 800 includes a central processing unit (CPU) 801 that can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage section 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data necessary for the operation of the system 800. The CPU 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
The following components are connected to the I/O interface 805: an input section 806 including a keyboard, a mouse, and the like; an output section 807 including a display such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 808 including a hard disk and the like; and a communication section 809 including a network interface card such as a LAN card or a modem. The communication section 809 performs communication processing via a network such as the Internet. A drive 810 is also connected to the I/O interface 805 as necessary. A removable medium 811, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 810 as necessary, so that a computer program read from it can be installed into the storage section 808 as needed.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer readable medium, the computer program comprising program code for performing the method illustrated in the flow chart. In such an embodiment, the computer program can be downloaded and installed from a network through the communication section 809 and/or installed from the removable medium 811. The computer program executes the above-described functions defined in the terminal of the present application when executed by the Central Processing Unit (CPU) 801.
It should be noted that the computer readable medium shown in the present application may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present application, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In this application, however, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: wireless, wire, fiber optic cable, RF, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor, and may be described as: a processor including a first determination module, a second determination module, a matching module, a conversion module, and an acquisition module. The names of these modules do not, in some cases, limit the modules themselves.
Exemplary embodiments of the present invention are specifically illustrated and described above. It is to be understood that the invention is not limited to the precise construction, arrangements, or instrumentalities described herein; on the contrary, the invention is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.

Claims (10)

1. An image processing method, characterized in that the method comprises:
determining a first image and a second image from a plurality of images;
determining a calibration area based on an area circled by a user in the first image; the calibration area includes: a target region and a fusion region;
extracting scale features of the fusion region and the second image, and matching;
converting the calibration area and the second image into the same plane space according to a matching result;
and fusing the target area and the second image to obtain a fused image.
2. The method of claim 1, wherein when the image processing mode is erase,
according to the matching result, the step of converting the calibration area and the second image into the same plane space comprises the following steps:
converting the second image into the same plane space of the calibration area;
fusing the target area with the second image to obtain a fused image, comprising:
and fusing a region corresponding to the target region in the second image to the first image to obtain a fused image.
3. The method of claim 1, wherein when the image processing mode is Add,
according to the matching result, the step of converting the calibration area and the second image into the same plane space comprises the following steps:
converting the calibration area into the same plane space of the second image;
fusing the target area with the second image to obtain a fused image, comprising:
and fusing the target area to the area corresponding to the target area in the second image to obtain a fused image.
4. A method according to any one of claims 1-3, characterized in that the method comprises:
and fusing the target area and the second image by using a Poisson fusion technology to obtain a fused image.
5. The method of any one of claims 1-3, further comprising:
and restoring the boundary pixels in the fused image by using a pixel restoring algorithm.
6. The method of claim 1, wherein determining a calibration region based on a region circled by a user in the first image comprises:
determining a target area based on an area circled by a user in the first image;
expanding the target area to determine a calibration area;
and determining a fusion area based on the target area removed from the calibration area.
7. The method of claim 6, wherein determining a target region based on a region circled by a user in the first image comprises:
optimizing a segmentation boundary of a region circled in the first image by utilizing an image segmentation technology based on the region circled in the first image by a user;
and determining a target area based on the area corresponding to the segmentation boundary.
8. An image processing apparatus, characterized in that the apparatus comprises:
a first determining module configured to determine a first image and a second image from a plurality of images;
the second determination module is configured to determine a calibration area based on an area circled in the first image by a user; the calibration area includes: a target region and a fusion region;
the matching module is configured to extract the scale features of the fusion area and the second image and perform matching;
the conversion module is configured to convert the calibration area and the second image into the same plane space according to a matching result;
and the acquisition module is configured to fuse the target area and the second image to acquire a fused image.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method steps of any one of claims 1 to 7.
10. An electronic device, comprising: one or more processors;
storage means for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to carry out the method steps of any of claims 1-7.
CN201911056817.XA 2019-10-31 2019-10-31 Image processing method, image processing device, storage medium and electronic equipment Pending CN110766611A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911056817.XA CN110766611A (en) 2019-10-31 2019-10-31 Image processing method, image processing device, storage medium and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911056817.XA CN110766611A (en) 2019-10-31 2019-10-31 Image processing method, image processing device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN110766611A true CN110766611A (en) 2020-02-07

Family

ID=69335011

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911056817.XA Pending CN110766611A (en) 2019-10-31 2019-10-31 Image processing method, image processing device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN110766611A (en)

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101646022A (en) * 2009-09-04 2010-02-10 深圳华为通信技术有限公司 Image splicing method and system thereof
CN102006425A (en) * 2010-12-13 2011-04-06 交通运输部公路科学研究所 Method for splicing video in real time based on multiple cameras
CN103679749A (en) * 2013-11-22 2014-03-26 北京奇虎科技有限公司 Moving target tracking based image processing method and device
WO2017161544A1 (en) * 2016-03-25 2017-09-28 深圳大学 Single-camera video sequence matching based vehicle speed measurement method and system
CN107018336A (en) * 2017-04-11 2017-08-04 腾讯科技(深圳)有限公司 The method and apparatus of image procossing and the method and apparatus of Video processing
CN108230245A (en) * 2017-12-26 2018-06-29 中国科学院深圳先进技术研究院 Image split-joint method, image splicing device and electronic equipment
CN108093221A (en) * 2017-12-27 2018-05-29 南京大学 A kind of real-time video joining method based on suture
CN108198130A (en) * 2017-12-28 2018-06-22 广东欧珀移动通信有限公司 Image processing method, device, storage medium and electronic equipment
CN108197623A (en) * 2018-01-19 2018-06-22 百度在线网络技术(北京)有限公司 For detecting the method and apparatus of target
CN109166077A (en) * 2018-08-17 2019-01-08 广州视源电子科技股份有限公司 Image alignment method, apparatus, readable storage medium storing program for executing and computer equipment
CN109409335A (en) * 2018-11-30 2019-03-01 腾讯科技(深圳)有限公司 Image processing method, device, computer-readable medium and electronic equipment
CN110084754A (en) * 2019-06-25 2019-08-02 江苏德劭信息科技有限公司 A kind of image superimposing method based on improvement SIFT feature point matching algorithm
CN110390640A (en) * 2019-07-29 2019-10-29 齐鲁工业大学 Graph cut image split-joint method, system, equipment and medium based on template

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115661584A (en) * 2022-11-18 2023-01-31 浙江莲荷科技有限公司 Model training method, open domain target detection method and related device

Similar Documents

Publication Publication Date Title
US9400939B2 (en) System and method for relating corresponding points in images with different viewing angles
US9014470B2 (en) Non-rigid dense correspondence
US10212410B2 (en) Systems and methods of fusing multi-angle view HD images based on epipolar geometry and matrix completion
CN108171677B (en) Image processing method and related equipment
Zhang et al. Robust metric reconstruction from challenging video sequences
CN112802033B (en) Image processing method and device, computer readable storage medium and electronic equipment
CN111612696B (en) Image stitching method, device, medium and electronic equipment
CN110855957B (en) Image processing method and device, storage medium and electronic equipment
CN112767295A (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN112906492A (en) Video scene processing method, device, equipment and medium
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN113283319A (en) Method and device for evaluating face ambiguity, medium and electronic equipment
CN110766611A (en) Image processing method, image processing device, storage medium and electronic equipment
CN111494947B (en) Method and device for determining movement track of camera, electronic equipment and storage medium
CN113724143A (en) Method and device for image restoration
CN116188535A (en) Video tracking method, device, equipment and storage medium based on optical flow estimation
Van Vo et al. High dynamic range video synthesis using superpixel-based illuminance-invariant motion estimation
CN116848547A (en) Image processing method and system
CN113259698A (en) Method, apparatus, storage medium, and program product for replacing background in picture
CN115761389A (en) Image sample amplification method and device, electronic device and storage medium
CN113284077A (en) Image processing method, image processing device, communication equipment and readable storage medium
CN112308809A (en) Image synthesis method and device, computer equipment and storage medium
EP3711017A1 (en) A method for processing a light field video based on the use of a super-rays representation
WO2023056833A1 (en) Background picture generation method and apparatus, image fusion method and apparatus, and electronic device and readable medium
CN114972517B (en) Self-supervision depth estimation method based on RAFT

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination