WO2019127269A1 - Image stitching method, image stitching device and electronic device - Google Patents

Image stitching method, image stitching device and electronic device

Info

Publication number
WO2019127269A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
region
processed
moving object
moving target
Application number
PCT/CN2017/119565
Other languages
French (fr)
Chinese (zh)
Inventor
程俊
郝洛莹
Original Assignee
中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Application filed by 中国科学院深圳先进技术研究院 (Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences)
Publication of WO2019127269A1 publication Critical patent/WO2019127269A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformation in the plane of the image
    • G06T3/40 - Scaling the whole image or part thereof
    • G06T3/4038 - Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/269 - Analysis of motion using gradient-based methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/30 - Determination of transform parameters for the alignment of images, i.e. image registration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10016 - Video; Image sequence
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • the present application belongs to the field of image processing technologies, and in particular, to an image stitching method, an image stitching apparatus, an electronic device, and a computer readable storage medium.
  • the acquisition of panoramic images is an emerging research area and a hot topic in computer vision.
  • at present, panoramic images are mainly obtained in the following two ways: 1. directly using a dedicated wide-angle imaging device (a non-linear optical imaging device such as a fisheye lens or a convex reflective optical lens) to capture an image with a sufficiently large horizontal angle in a single shot; however, such devices are expensive, resolution and viewing angle are difficult to balance, and the image is severely distorted; 2. using image stitching technology to stitch a set of low-resolution or narrow-view images with overlapping regions into a new high-resolution, wide-view image. Since the second approach places low demands on the device and preserves the detail of the originally captured images, the image stitching technique is very important for the acquisition of panoramic images.
  • images used for image stitching have moving objects in addition to static objects, and the misalignment and superposition of moving objects are the main causes of ghosting in the stitched images. How to eliminate ghosting in image stitching is one of the most difficult problems in the industry.
  • the brightness, color or texture structure of the corresponding pixel in the overlapping area of the image to be spliced is used to determine the position of the moving object, and the moving object is selectively shielded during the image splicing process.
  • this method is susceptible to exposure differences, interfering pixels, and the like; relying only on pixel differences for ghost elimination, it is prone to erroneous operations in most slightly complicated scenes, so the ghost elimination effect is not obvious.
  • the present application provides an image splicing method, an image splicing device, an electronic device, and a computer readable storage medium, which are beneficial to improving ghost elimination effects.
  • a first aspect of the embodiments of the present application provides an image stitching method, including:
  • the first processed image and the second processed image are image-fused to obtain a stitched image.
  • a moving target exists in a dynamic background in the image sequence
  • the extracting the first image and the second image to be spliced from the image sequence includes:
  • the image fusion of the first processed image and the second processed image is performed based on a result of the detection:
  • the image processing further includes: morphological processing
  • the performing image processing on the first image and the second image includes:
  • the image-registered first image and the second image are subjected to morphological processing to refine the moving object in the first image and the second image.
  • the second aspect of the present application provides an image splicing apparatus, including:
  • An acquiring unit configured to obtain a sequence of images obtained by continuous shooting
  • An optical flow detecting unit configured to detect a moving target in a dynamic background in the image sequence based on an optical flow method
  • Extracting unit configured to extract a first image and a second image to be stitched from the image sequence
  • An image processing unit configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, where the image processing includes: image registration;
  • an image fusion unit configured to perform image fusion on the first processed image and the second processed image based on a result of the detection by the optical flow detecting unit to obtain a stitched image.
  • a moving target exists in a dynamic background in the image sequence
  • the extracting unit is specifically configured to: extract a first image and a second image to be stitched from the image sequence, such that the area in the first image that overlaps the background portion of the second image contains a complete image of the moving target.
  • the image fusion unit specifically includes:
  • a determining unit configured to determine a first region in the first processed image and a second region in the second processed image based on a result of the detection, wherein the background portions of the first region and the second region overlap;
  • a sub-fusion unit configured to, when there is no image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and perform image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position;
  • when there is a partial image of the moving target in the second region, the image portion corresponding to the moving target in the first region is replaced with the background portion of the corresponding position in the second region, and the portions of the first processed image other than the image portion corresponding to the moving target are image-fused with the portions of the second processed image other than the background portion of the corresponding position and the partial image of the moving target;
  • the image processing further includes: morphological processing
  • the image processing unit is specifically configured to: perform image registration on the first image and the second image; and perform, based on the detection result, morphological processing on the image-registered first image and second image to refine the moving target in the first image and the second image.
  • a third aspect of the present application provides an electronic device including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the image stitching method mentioned in the first aspect or any possible implementation of the first aspect.
  • a fourth aspect of the present application provides a computer readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the image stitching method mentioned in the first aspect or any possible implementation of the first aspect.
  • a fifth aspect of the present application provides a computer program product comprising a computer program, where the computer program, when executed by one or more processors, implements the image stitching method mentioned in the first aspect or any possible implementation of the first aspect.
  • the solution of the present application acquires the image sequence obtained by continuous shooting and detects the moving target in the dynamic background of the image sequence based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, image processing is performed on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detection, the first processed image and the second processed image are image-fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and, compared with schemes that judge the moving target in an image based on pixel differences, the optical flow method can effectively detect the moving target even in complicated scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, the solution of the present application can effectively improve the ghost elimination effect.
  • FIG. 1-a is a schematic flowchart of an embodiment of an image stitching method provided by the present application;
  • FIG. 1-b is a schematic diagram of an image fusion process that can be applied to the embodiment shown in FIG. 1-a according to the present application;
  • FIG. 2-a is a schematic diagram of a first processed image in an application scenario provided by the present application;
  • FIG. 2-b is a schematic diagram of a second processed image in an application scenario provided by the present application;
  • Figure 2-c is a schematic diagram of a first area determined based on the first processed image and the second processed image shown in Figures 2-a and 2-b;
  • Figure 2-d is a schematic diagram of a second region determined based on the first processed image and the second processed image illustrated in Figures 2-a and 2-b;
  • FIG. 2-e is a schematic diagram of an image obtained by image fusion based on the first region and the second region illustrated in FIG. 2-c and FIG. 2-d;
  • FIG. 3 is a schematic structural diagram of an embodiment of an image splicing device provided by the present application.
  • FIG. 4 is a schematic structural diagram of an embodiment of an electronic device provided by the present application.
  • the image stitching method can be applied to an image stitching device, and the image stitching device can be an independent device, or the image stitching device can also be integrated in an electronic device (for example, a smart phone, a tablet computer, a computer, a wearable device, etc.).
  • the operating system of the device or the electronic device integrated with the image stitching device may be an iOS system, an Android system, a Windows system, or another operating system, which is not limited herein.
  • the image splicing method in the embodiment of the present application may include:
  • Step 101 Acquire an image sequence obtained by continuous shooting
  • the image sequence may be a set of consecutive frame images taken by a single camera.
  • for example, the camera can be rotated slowly about a vertical axis (i.e., with the angular velocity of the camera rotation below an angular velocity threshold, for example 50 degrees/sec) while capturing images, to obtain the image sequence.
  • the image sequence may be captured in real time, or a pre-stored image sequence may be obtained from a database, which is not limited herein.
  • Step 102 Detecting a moving target in a dynamic background in the image sequence based on an optical flow method
  • the optical flow field represents the instantaneous velocity field of the pixel-level change trend in an image. The principle of detecting the moving target in the dynamic background of the image sequence based on the optical flow method is as follows: the relevant motion information (for example, the moving speed and the moving direction) of the moving target in each image is calculated from the change in the pixel distribution between adjacent frames in the image sequence. The motion in the image sequence is jointly produced by the motion of the moving target itself and the motion of the camera; since there is relative motion between the moving target and the background, the motion vectors of the pixels where the moving target is located differ from the motion vectors of the background, so the moving target in the dynamic background of the image sequence can be detected based on the optical flow method.
  • for example, the background and the moving target can be distinguished by setting a corresponding threshold based on the motion vector of the background.
  • the optical flow method can be divided into sparse optical flow methods (such as the LK optical flow method, i.e., the Lucas-Kanade method) and dense optical flow methods (such as the Gunnar Farneback optical flow method).
  • unlike the sparse optical flow method, which computes offsets only for a number of feature points on the image, the dense optical flow method calculates the offsets of all points on the image, thus forming a dense optical flow field; it therefore better improves the detection of the moving target, and the subsequent ghost elimination effect is better.
  • the moving object in the dynamic background in the image sequence may be detected based on the dense optical flow method.
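  • For illustration only (not part of the application's disclosure), a minimal sketch of dense optical flow detection of a moving target, assuming OpenCV's Farneback implementation; the function name detect_moving_target, the parameter values, and the median-flow estimate of the background motion are illustrative assumptions rather than steps prescribed by the application:

    import cv2
    import numpy as np

    def detect_moving_target(prev_gray, curr_gray, flow_thresh=2.0):
        # Dense (Farneback) optical flow between two consecutive grayscale frames;
        # positional arguments: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        magnitude, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
        # Treat the median flow magnitude as the camera/background motion and mark
        # pixels whose motion deviates strongly from it as the moving target.
        background_mag = np.median(magnitude)
        moving_mask = np.abs(magnitude - background_mag) > flow_thresh
        return moving_mask.astype(np.uint8) * 255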
  • Step 103 Extract a first image and a second image to be spliced from the image sequence.
  • the first image and the second image to be spliced may be automatically extracted from the image sequence, or the first image and the second image to be spliced may be manually selected by the user from the image sequence. In step 103, the first image and the second image are extracted based on the user's selection.
  • optionally, step 103 may specifically be: extracting the first image and the second image to be stitched from the image sequence such that, in the first image, the area overlapping the background portion of the second image contains a complete image of the moving target. For convenience of description, the area where the first image overlaps the background portion of the second image is denoted as area S1 (area S1 belongs to the image area of the first image), and the area where the second image overlaps the background portion of the first image is denoted as area S2 (area S2 belongs to the image area of the second image). Area S1 contains a complete image of the moving target, while area S2 may contain a complete image or a partial image of the moving target, or may not contain the moving target at all. In this way, the first image is guaranteed to contain the entire moving target, and at the same time the second image is extended as much as possible relative to the background of the first image (or the first image is extended relative to the background of the second image), so that the field of view of the subsequently obtained stitched image is expanded.
  • optionally, step 103 may include: extracting, based on the detection result of step 102, an image that contains a complete image of the moving target from the image sequence as the first image; and extracting, from the image sequence, the image separated from the first image by a preset number of frames as the second image, thereby achieving automatic extraction of the first image and the second image. For example, if the image sequence includes 10 images obtained by continuous shooting, the preset number of frames is 3, and the first image in the sequence contains a complete image of the moving target, then the first image in the image sequence may be extracted as the first image, and the fifth image in the image sequence may be extracted as the second image (the fifth image is separated from the first image by 3 frames).
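  • As a hedged sketch of this automatic extraction, assuming the per-frame moving-target masks from the optical flow step are available; the border test used as a proxy for "contains a complete image of the moving target", the helper name extract_pair, and the default interval are assumptions made only for illustration:

    import numpy as np

    def extract_pair(frames, moving_masks, frame_interval=3, border=10):
        # Pick the first frame whose moving-target mask lies fully inside the frame,
        # plus the frame a preset number of frames later, as the pair to be stitched.
        for i, mask in enumerate(moving_masks):
            ys, xs = np.nonzero(mask)
            if xs.size == 0:
                continue
            h, w = mask.shape
            fully_inside = (xs.min() > border and xs.max() < w - border and
                            ys.min() > border and ys.max() < h - border)
            if fully_inside and i + frame_interval < len(frames):
                return frames[i], frames[i + frame_interval]
        return None, None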
  • Step 104 Perform image processing on the first image and the second image to obtain a first processed image and a second processed image.
  • in step 104, image processing is performed on the first image and the second image, where performing image processing on the first image and the second image includes performing image registration on the first image and the second image.
  • the process of performing image registration on the first image and the second image may be as follows: performing feature extraction and matching on the first image and the second image to find matching feature point pairs; determining the coordinate transformation parameters between the first image and the second image based on the matched feature point pairs; and finally performing image registration on the first image and the second image based on the coordinate transformation parameters.
  • the step 104 may perform image registration on the first image and the second image based on the SURF algorithm, and the specific process of performing image registration on the two images may be implemented by referring to the prior art, and details are not described herein again.
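  • For reference, a minimal sketch of SURF-based registration with OpenCV; note that SURF lives in the opencv-contrib module (cv2.xfeatures2d) and may be unavailable in some builds, in which case another detector such as ORB could be substituted. The Hessian threshold, ratio-test value, and output canvas size are illustrative assumptions:

    import cv2
    import numpy as np

    def register_pair(img1, img2):
        # Detect and describe SURF features on grayscale versions of both images.
        surf = cv2.xfeatures2d.SURF_create(400)
        kp1, des1 = surf.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
        kp2, des2 = surf.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
        # Match descriptors and keep pairs passing Lowe's ratio test.
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        good = [m for m, n in matches if m.distance < 0.7 * n.distance]
        src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        # Homography (coordinate transformation parameters) mapping image 2 into image 1's frame.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        warped = cv2.warpPerspective(img2, H, (img1.shape[1] + img2.shape[1], img1.shape[0]))
        return H, warped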
  • in addition, the first image and the second image may be further subjected to morphological processing (for example, erosion, dilation, etc.).
  • specifically, step 104 may include: performing image registration on the first image and the second image; and performing morphological processing on the image-registered first image and second image based on the result of the detection, to refine the moving target in the first image and the second image.
  • taking the first image as an example, the contour of the moving target in the first image may be extracted and the first image subjected to morphological processing: image erosion is used to remove stray points from the contour, image dilation is used to fill broken parts of the contour, and the interior of the contour is then filled, so as to obtain the area where the moving target in the first image is located.
  • the process of performing morphological processing on an image may be implemented by referring to the prior art, and details are not described herein again.
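  • A minimal sketch of this erosion / dilation / contour-filling refinement, assuming OpenCV; the kernel size, iteration counts, and the OpenCV 4.x findContours return signature are assumptions:

    import cv2
    import numpy as np

    def refine_target_mask(raw_mask, kernel_size=5):
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        cleaned = cv2.erode(raw_mask, kernel, iterations=1)   # remove stray noise points
        closed = cv2.dilate(cleaned, kernel, iterations=2)    # reconnect broken contour parts
        # Fill the interior of the external contours to obtain the moving-target area.
        contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        filled = np.zeros_like(raw_mask)
        cv2.drawContours(filled, contours, -1, 255, thickness=cv2.FILLED)
        return filled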
  • the moving target in the image may also be locked by using a connected region (for example, a rectangular frame) to determine the image portion corresponding to the moving target. For example, if the moving target in an image is locked by a rectangular frame, the portion framed by the rectangular frame is the image portion corresponding to the moving target in that image.
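  • A small companion sketch of locking the moving target with a rectangular connected region, assuming the refined binary mask produced by the previous sketch:

    import cv2

    def lock_target_rect(mask):
        # Bounding rectangle (x, y, w, h) of the largest connected region in the mask,
        # used as the image portion corresponding to the moving target.
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if not contours:
            return None
        return cv2.boundingRect(max(contours, key=cv2.contourArea))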
  • Step 105 Perform image fusion on the first processed image and the second processed image based on the result of the detecting, to obtain a stitched image;
  • image fusion refers to the process of processing a plurality of images by computer-based image processing and the like, extracting the favorable information in each image to the greatest extent, and finally integrating it into a single high-quality image.
  • in step 105, based on the result of step 102, the moving target contained in each image of the image sequence can be identified, and after the image processing of step 104, the first processed image and the second processed image are image-fused to obtain the stitched image.
  • step 105 may include:
  • Step 1051 Determine, according to a result of the foregoing detection, a first area in the first processed image and a second area in the second processed image;
  • the background regions of the first region and the second region overlap.
  • in step 1051, based on the result of step 102, the background portions and the image portions corresponding to the moving target in the first processed image and the second processed image can be distinguished, and the area of the first processed image whose background partially overlaps the second processed image (i.e., the first region) and the area of the second processed image whose background partially overlaps the first processed image (i.e., the second region) can then be determined based on a similarity measure.
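  • One way to obtain such overlapping regions, sketched here purely geometrically from the registration homography rather than from a similarity measure (an assumed simplification; the function name and interface are illustrative):

    import cv2
    import numpy as np

    def overlap_regions(shape1, shape2, H):
        # H maps coordinates of image 2 into image 1's frame (from registration).
        h1, w1 = shape1[:2]
        h2, w2 = shape2[:2]
        ones1 = np.ones((h1, w1), dtype=np.uint8)
        ones2 = np.ones((h2, w2), dtype=np.uint8)
        # Footprint of image 2 inside image 1's frame -> first region.
        region1 = cv2.warpPerspective(ones2, H, (w1, h1)) > 0
        # Footprint of image 1 inside image 2's frame -> second region.
        region2 = cv2.warpPerspective(ones1, np.linalg.inv(H), (w2, h2)) > 0
        return region1, region2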
  • Step 1052: When there is no image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and image-fuse the portions of the first processed image other than the image portion corresponding to the moving target with the portions of the second processed image other than the background portion of the corresponding position;
  • in this embodiment, the first region contains a complete image of the moving target, and for the second region there are the following three situations: 1. no image of the moving target exists in the second region; 2. a partial image of the moving target exists in the second region; 3. a complete image of the moving target exists in the second region.
  • in step 1052, the image portion corresponding to the moving target in the first region is replaced with the background portion of the corresponding position in the second region, and the portions of the first processed image other than the image portion corresponding to the moving target are image-fused with the portions of the second processed image other than the background portion of the corresponding position.
  • Step 1053: When there is a partial image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and image-fuse the portions of the first processed image other than the image portion corresponding to the moving target with the portions of the second processed image other than the background portion of the corresponding position and the partial image of the moving target;
  • specifically, in step 1053, the image portion corresponding to the moving target in the first region may be replaced with the background portion of the corresponding position in the second region, and the portions of the first processed image other than the image portion corresponding to the moving target may be image-fused with the portions of the second processed image other than the background portion of the corresponding position and the partial image of the moving target.
  • for example, suppose FIG. 2-a and FIG. 2-b are the first processed image and the second processed image, respectively, where in FIG. 2-a and FIG. 2-b the person is the moving target and everything other than the person is the background portion. Based on step 1051, it can be determined that FIG. 2-c shows the first region and FIG. 2-d shows the second region. Since there is a partial image of the moving target in the second region shown in FIG. 2-d, in step 1053 the image portion corresponding to the moving target in FIG. 2-c is replaced with the background portion of the corresponding position in FIG. 2-d, and the portion of FIG. 2-a other than the image portion corresponding to the moving target is image-fused with the portion of FIG. 2-b other than the background portion of the corresponding position and the partial image of the moving target, so that an image as shown in FIG. 2-e (i.e., the stitched image) can be obtained. As can be seen from FIG. 2-e, the image portion corresponding to the moving target in the second region shown in FIG. 2-d has been replaced by the background portion of the first region shown in FIG. 2-c, and FIG. 2-e has a wider field of view than FIG. 2-c and FIG. 2-d.
  • Step 1054: When there is a complete image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and image-fuse the portions of the first processed image other than the image portion corresponding to the moving target with the portions of the second processed image other than the background portion of the corresponding position and the complete image of the moving target;
  • specifically, in step 1054, the image portion corresponding to the moving target in the first region may be replaced with the background portion of the corresponding position in the second region, and the portions of the first processed image other than the image portion corresponding to the moving target may be image-fused with the portions of the second processed image other than the background portion of the corresponding position and the complete image of the moving target; alternatively, in other embodiments, when there is a complete image of the moving target in the second region, the image portion corresponding to the moving target in the second region may be replaced with the background portion of the corresponding position in the first region, and the portions of the second processed image other than the image portion corresponding to the moving target may be image-fused with the portions of the first processed image other than the background portion of the corresponding position and the complete image of the moving target.
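  • A hedged sketch of the replace-and-fuse step used in steps 1052 to 1054, assuming both processed images have already been registered into a common coordinate frame and that the moving-target mask of the first region is available; the feathered alpha blend is an illustrative fusion choice rather than the fusion method mandated by the application:

    import cv2
    import numpy as np

    def fuse_without_ghost(proc1, proc2, target_mask1, feather=15):
        # Soft (feathered) mask of the moving-target pixels in the first processed image.
        mask = (target_mask1 > 0).astype(np.float32)
        mask = cv2.GaussianBlur(mask, (feather, feather), 0)[..., None]
        # Inside the target region take the background of the second processed image,
        # elsewhere keep the first processed image, blending smoothly across the seam.
        fused = proc1.astype(np.float32) * (1.0 - mask) + proc2.astype(np.float32) * mask
        return np.clip(fused, 0, 255).astype(np.uint8)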
  • as can be seen from the above, in this embodiment the image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background of the image sequence is detected based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, image processing is performed on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detection, the first processed image and the second processed image are image-fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and, compared with schemes that judge the moving target in an image based on pixel differences, the optical flow method can effectively detect the moving target even in complicated scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, this embodiment can effectively improve the ghost elimination effect.
  • the embodiment of the present application provides an image splicing device.
  • the image splicing device 300 in the embodiment of the present application includes:
  • the acquiring unit 301 is configured to acquire a sequence of images obtained by continuous shooting
  • the optical flow detecting unit 302 is configured to detect a moving target in a dynamic background in the image sequence based on an optical flow method
  • the extracting unit 303 is configured to extract, from the image sequence, a first image and a second image to be stitched;
  • the image processing unit 304 is configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, where the image processing includes: image registration;
  • the image fusion unit 305 is configured to perform image fusion on the first processed image and the second processed image based on the result detected by the optical flow detecting unit 302 to obtain a stitched image.
  • the extracting unit 303 is specifically configured to: extract a first image and a second image to be stitched from the image sequence, such that the area in the first image that overlaps the background portion of the second image contains a complete image of the moving target.
  • the image fusion unit 305 specifically includes:
  • a determining unit configured to determine a first region in the first processed image and a second region in the second processed image based on a result of the detection, wherein the background portions of the first region and the second region overlap;
  • a sub-fusion unit configured to, when there is no image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and perform image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position;
  • when there is a partial image of the moving target in the second region, the image portion corresponding to the moving target in the first region is replaced with the background portion of the corresponding position in the second region, and the portions of the first processed image other than the image portion corresponding to the moving target are image-fused with the portions of the second processed image other than the background portion of the corresponding position and the partial image of the moving target;
  • the image processing further includes: morphological processing.
  • the image processing unit 304 is specifically configured to: perform image registration on the first image and the second image; and perform, based on the result of the detection, morphological processing on the image-registered first image and second image to refine the moving target in the first image and the second image.
  • the image splicing device in the embodiment of the present application may be an independent device, or alternatively, the image splicing device may be integrated into an electronic device (such as a smart phone, a tablet computer, a computer, a wearable device, etc.).
  • the operating system of the device or the electronic device integrated with the image stitching device may be an iOS system, an Android system, a Windows system, or another operating system, which is not limited herein.
  • as can be seen from the above, in this embodiment the image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background of the image sequence is detected based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, image processing is performed on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detection, the first processed image and the second processed image are image-fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and, compared with schemes that judge the moving target in an image based on pixel differences, the optical flow method can effectively detect the moving target even in complicated scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, this embodiment can effectively improve the ghost elimination effect.
  • the electronic device in the embodiment of the present application includes: a memory 401, one or more processors 402 (only one is shown in FIG. 4), and a computer program stored in the memory 401 and executable on the processor 402.
  • the memory 401 is used to store software programs and modules
  • the processor 402 executes various functional applications and data processing by running software programs and units stored in the memory 401.
  • the processor 402 implements the following steps by running the above computer program stored in the memory 401:
  • the first processed image and the second processed image are image-fused to obtain a stitched image.
  • a moving target exists in a dynamic background in the image sequence
  • the extracting of the first image and the second image to be stitched from the image sequence includes:
  • the performing image fusion on the first processed image and the second processed image based on the result of the detecting includes:
  • the performing image processing on the first image and the second image includes:
  • the image-registered first image and the second image are subjected to morphological processing to refine the moving object in the first image and the second image.
  • the foregoing electronic device may further include: one or more input devices 403 (only one is shown in FIG. 4) and one or more output devices 404 (only one is shown in FIG. 4).
  • the memory 401, the processor 402, the input device 403, and the output device 404 are connected by a bus 405.
  • the so-called processor 402 may be a central processing unit (CPU), or may be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
  • the input device 403 can include a keyboard, a touchpad, a fingerprint sensor (for collecting fingerprint information of the user and direction information of the fingerprint), a microphone, etc.
  • the output device 404 can include a display, a speaker, and the like.
  • The memory 401 may include read-only memory and random access memory, and provides instructions and data to the processor 402. Some or all of the memory 401 may also include non-volatile random access memory. For example, the memory 401 may also store information about the device type.
  • as can be seen from the above, in this embodiment the image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background of the image sequence is detected based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, image processing is performed on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detection, the first processed image and the second processed image are image-fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and, compared with schemes that judge the moving target in an image based on pixel differences, the optical flow method can effectively detect the moving target even in complicated scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, this embodiment can effectively improve the ghost elimination effect.
  • each functional unit and module in the foregoing system may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
  • the specific names of the respective functional units and modules are only for the purpose of facilitating mutual differentiation, and are not intended to limit the scope of protection of the present application.
  • the disclosed apparatus and method may be implemented in other manners.
  • the system embodiments described above are merely illustrative.
  • the division of the above modules or units is only a logical functional division; in actual implementation there may be another division manner, for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in electrical, mechanical or other form.
  • the units described above as separate components may or may not be physically separated.
  • the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • the above-described integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such an understanding, all or part of the processes in the methods of the above embodiments of the present application may also be implemented by a computer program instructing related hardware.
  • the computer program may be stored in a computer readable storage medium.
  • the steps of the various method embodiments described above may be implemented when executed by a processor.
  • the above computer program comprises computer program code
  • the computer program code may be in the form of source code, object code form, executable file or some intermediate form.
  • the computer readable medium may include: any entity or device capable of carrying the above computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like.
  • it should be noted that the contents contained in the above computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunications signals.

Abstract

Provided in the present application are an image stitching method, an image stitching device, an electronic device, and a computer-readable storage medium, the image stitching method comprising: acquiring a sequence of images obtained by continuous photographing; detecting a moving target within the sequence of images in a dynamic background on the basis of an optical flow method; extracting from the sequence of images a first image and second image to be stitched; processing the first image and the second image to obtain a first processed image and a second processed image, wherein the image processing comprises: image registration; and fusing the first processed image and the second processed image on the basis of the result of detection, so as to obtain a stitched image. The technical solution according to the present application is beneficial to improving the effect of eliminating ghost images.

Description

Image stitching method, image stitching device and electronic device
Technical field
The present application belongs to the field of image processing technologies, and in particular, to an image stitching method, an image stitching apparatus, an electronic device, and a computer readable storage medium.
Background
The acquisition of panoramic images is an emerging research area and a hot topic in computer vision. At present, panoramic images are mainly obtained in the following two ways: 1. directly using a dedicated wide-angle imaging device (a non-linear optical imaging device such as a fisheye lens or a convex reflective optical lens) to capture an image with a sufficiently large horizontal angle in a single shot; however, such devices are expensive, resolution and viewing angle are difficult to balance, and the image is severely distorted; 2. using image stitching technology to stitch a set of low-resolution or narrow-view images with overlapping regions into a new high-resolution, wide-view image. Since the second approach places low demands on the device and preserves the detail of the originally captured images, the image stitching technique is very important for the acquisition of panoramic images.
However, in general, the images used for image stitching contain moving objects in addition to static objects, and the misalignment and superposition of moving objects are the main causes of ghosting in the stitched images. How to eliminate ghosting in image stitching is one of the key and difficult problems the industry needs to solve.
In the prior art, during image stitching, the brightness, color, or texture structure of corresponding pixels in the overlapping area of the images to be stitched is used to determine the position of the moving object, and the moving object is selectively shielded during the stitching process, so as to reduce ghosting. However, this method is susceptible to exposure differences, interfering pixels, and the like; relying only on pixel differences for ghost elimination, it is prone to erroneous operations in most slightly complicated scenes, so the ghost elimination effect is not obvious.
Technical problem
In view of this, the present application provides an image stitching method, an image stitching device, an electronic device, and a computer readable storage medium, which help improve the ghost elimination effect.
Technical solution
A first aspect of the embodiments of the present application provides an image stitching method, including:
acquiring a sequence of images obtained by continuous shooting;
detecting a moving target in a dynamic background in the image sequence based on an optical flow method;
extracting a first image and a second image to be stitched from the image sequence;
performing image processing on the first image and the second image to obtain a first processed image and a second processed image, wherein the image processing includes: image registration; and
performing image fusion on the first processed image and the second processed image based on the result of the detection, to obtain a stitched image.
Based on the first aspect of the present application, in a first possible implementation, a moving target exists in the dynamic background of the image sequence;
the extracting of the first image and the second image to be stitched from the image sequence includes:
extracting the first image and the second image to be stitched from the image sequence such that, in the first image, the area overlapping the background portion of the second image contains a complete image of the moving target.
Based on the first possible implementation of the first aspect of the present application, in a second possible implementation, the performing of image fusion on the first processed image and the second processed image based on the result of the detection includes:
determining, based on the result of the detection, a first region in the first processed image and a second region in the second processed image, wherein the background portions of the first region and the second region overlap;
when there is no image of the moving target in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and performing image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position;
when there is a partial image of the moving target in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and performing image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position and the partial image of the moving target;
when there is a complete image of the moving target in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and performing image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position and the complete image of the moving target; or, replacing the image portion corresponding to the moving target in the second region with the background portion of the corresponding position in the first region, and performing image fusion between the portions of the second processed image other than the image portion corresponding to the moving target and the portions of the first processed image other than the background portion of the corresponding position and the complete image of the moving target.
Based on the first aspect of the present application, or the first possible implementation of the first aspect, or the second possible implementation of the first aspect, in a third possible implementation, the image processing further includes: morphological processing;
the performing of image processing on the first image and the second image includes:
performing image registration on the first image and the second image;
performing, based on the result of the detection, morphological processing on the image-registered first image and second image to refine the moving target in the first image and the second image.
A second aspect of the present application provides an image stitching apparatus, including:
an acquiring unit, configured to acquire a sequence of images obtained by continuous shooting;
an optical flow detecting unit, configured to detect a moving target in a dynamic background in the image sequence based on an optical flow method;
an extracting unit, configured to extract a first image and a second image to be stitched from the image sequence;
an image processing unit, configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, wherein the image processing includes: image registration; and
an image fusion unit, configured to perform image fusion on the first processed image and the second processed image based on the result detected by the optical flow detecting unit, to obtain a stitched image.
Based on the second aspect of the present application, in a first possible implementation, a moving target exists in the dynamic background of the image sequence;
the extracting unit is specifically configured to: extract the first image and the second image to be stitched from the image sequence such that, in the first image, the area overlapping the background portion of the second image contains a complete image of the moving target.
Based on the first possible implementation of the second aspect of the present application, in a second possible implementation, the image fusion unit specifically includes:
a determining unit, configured to determine, based on the result of the detection, a first region in the first processed image and a second region in the second processed image, wherein the background portions of the first region and the second region overlap;
a sub-fusion unit, configured to: when there is no image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and perform image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position; when there is a partial image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and perform image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position and the partial image of the moving target; when there is a complete image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion of the corresponding position in the second region, and perform image fusion between the portions of the first processed image other than the image portion corresponding to the moving target and the portions of the second processed image other than the background portion of the corresponding position and the complete image of the moving target, or replace the image portion corresponding to the moving target in the second region with the background portion of the corresponding position in the first region, and perform image fusion between the portions of the second processed image other than the image portion corresponding to the moving target and the portions of the first processed image other than the background portion of the corresponding position and the complete image of the moving target.
Based on the second aspect of the present application, or the first possible implementation of the second aspect, or the second possible implementation of the second aspect, in a third possible implementation, the image processing further includes: morphological processing;
the image processing unit is specifically configured to: perform image registration on the first image and the second image; and perform, based on the result of the detection, morphological processing on the image-registered first image and second image to refine the moving target in the first image and the second image.
A third aspect of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the image stitching method mentioned in the first aspect or any possible implementation of the first aspect.
A fourth aspect of the present application provides a computer readable storage medium on which a computer program is stored, where the computer program, when executed by a processor, implements the image stitching method mentioned in the first aspect or any possible implementation of the first aspect.
A fifth aspect of the present application provides a computer program product, where the computer program product includes a computer program, and the computer program, when executed by one or more processors, implements the image stitching method mentioned in the first aspect or any possible implementation of the first aspect.
Beneficial effects
It can be seen from the above that the solution of the present application acquires the image sequence obtained by continuous shooting and detects the moving target in the dynamic background of the image sequence based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, image processing is performed on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detection, the first processed image and the second processed image are image-fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and, compared with schemes that judge the moving target in an image based on pixel differences, the optical flow method can effectively detect the moving target even in complicated scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, the solution of the present application can effectively improve the ghost elimination effect.
Description of the Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by a person of ordinary skill in the art without creative effort.
FIG. 1-a is a schematic flowchart of an embodiment of the image stitching method provided by the present application;
图1-b为本申请提供的可应用于图1-a所示实施例中的图像融合流程示意图;FIG. 1-b is a schematic diagram of an image fusion process that can be applied to the embodiment shown in FIG. 1-a according to the present application;
图2-a为本申请提供的一种应用场景的第一处理图像示意图;FIG. 2-a is a schematic diagram of a first processing image of an application scenario provided by the present application;
图2-b为本申请提供的一种应用场景的第二处理图像示意图;FIG. 2-b is a schematic diagram of a second processing image of an application scenario provided by the present application;
图2-c为基于图2-a和图2-b所示的第一处理图像和第二处理图像确定出的第一区域示意图;Figure 2-c is a schematic diagram of a first area determined based on the first processed image and the second processed image shown in Figures 2-a and 2-b;
图2-d为基于图2-a和图2-b所示的第一处理图像和第二处理图像确定出的第二区域示意图;Figure 2-d is a schematic diagram of a second region determined based on the first processed image and the second processed image illustrated in Figures 2-a and 2-b;
FIG. 2-e is a schematic diagram of an image obtained by image fusion based on the first region and the second region shown in FIG. 2-c and FIG. 2-d;
图3为本申请提供的图像拼接装置一个实施例结构示意图;3 is a schematic structural diagram of an embodiment of an image splicing device provided by the present application;
图4为本申请提供的电子设备一个实施例结构示意图。FIG. 4 is a schematic structural diagram of an embodiment of an electronic device provided by the present application.
本发明的实施方式Embodiments of the invention
In the following description, specific details such as particular system structures and techniques are set forth for the purpose of illustration rather than limitation, so as to provide a thorough understanding of the embodiments of the present application. However, it will be apparent to those skilled in the art that the present application may also be practiced in other embodiments without these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits and methods are omitted so that unnecessary detail does not obscure the description of the present application.
It should be understood that the sequence numbers of the steps in the following method embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation of the embodiments.
为了说明本申请所述的技术方案,下面通过具体实施例来进行说明。In order to explain the technical solutions described in the present application, the following description will be made by way of specific embodiments.
实施例一Embodiment 1
An embodiment of the present application provides an image stitching method. The image stitching method may be applied to an image stitching apparatus, and the image stitching apparatus may be an independent device, or may be integrated in an electronic device (for example, a smart phone, a tablet computer, a computer, a wearable device, and the like). Optionally, the operating system run by the device or electronic device integrating the image stitching apparatus may be an iOS system, an Android system, a Windows system or another operating system, which is not limited herein.
请参阅图1-a,本申请实施例中的图像拼接方法可包括:Referring to FIG. 1-a, the image splicing method in the embodiment of the present application may include:
步骤101、获取连续拍摄所得的图像序列;Step 101: Acquire an image sequence obtained by continuous shooting;
In the embodiment of the present application, the image sequence may be a set of consecutive frame images captured by a single camera. In order to ensure the clarity of each image in the image sequence and to extend the field of view of the stitched image, in the embodiment of the present application the camera may be rotated slowly about a vertical axis (that is, the angular velocity of the camera rotation is kept below an angular velocity threshold, for example 50 degrees per second) while shooting, so as to obtain the image sequence.
在步骤101中,该图像序列可以实时拍摄得到,或者,也可以从数据库获取预先存储的图像序列,此处不做限定。In step 101, the image sequence may be captured in real time, or a pre-stored image sequence may be obtained from a database, which is not limited herein.
步骤102、基于光流法对上述图像序列中动态背景下的运动目标进行检测;Step 102: Detecting a moving target in a dynamic background in the image sequence based on an optical flow method;
The optical flow field is an instantaneous velocity field used to characterize the trend of change of the pixels in an image. The principle of detecting moving targets in the dynamic background of the image sequence based on the optical flow method is as follows: based on the changes of the pixel distribution between adjacent frames of the image sequence, the motion information of the moving target in each image (for example, moving speed and moving direction) is calculated. In the embodiment of the present application, if a moving target exists in the image sequence, the motion in the image sequence is jointly produced by the motion of the moving target itself and the motion of the camera. Since there is relative motion between the moving target and the background, the motion vectors of the pixels belonging to the moving target differ from the motion vectors of the background, so the moving target in the dynamic background of the image sequence can be detected based on the optical flow method. For example, a corresponding threshold can be set with respect to the motion vector of the background, so as to distinguish the background from the moving target in the image.
Optical flow methods can be divided into sparse optical flow methods (for example the LK optical flow method, that is, the Lucas-Kanade method) and dense optical flow methods (for example the Gunnar Farneback optical flow method). A sparse optical flow method requires a number of feature points (for example corner points) to be specified before the moving target is detected, and for parts of the moving target with little texture (for example a human hand) the sparse optical flow is easily lost. Unlike sparse optical flow, which only considers a number of feature points in the image, a dense optical flow method calculates the offset of every point in the image, thereby forming a dense optical flow field. Therefore, in order to better improve the detection of the moving target and make the subsequent ghost removal more effective, in the embodiment of the present application the moving target in the dynamic background of the image sequence is preferably detected based on a dense optical flow method.
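As an illustrative sketch only (not the implementation of the present application), dense optical flow detection of a moving target under a dynamic background could look like the following Python/OpenCV fragment: the Gunnar Farneback flow is computed between two adjacent frames, the camera-induced background motion is approximated by the median flow vector, and pixels whose motion deviates from it beyond a threshold are marked as the moving target. The function name, the median-as-background approximation and the threshold value are assumptions made for the sketch.

```python
import cv2
import numpy as np

def detect_moving_target(prev_bgr, curr_bgr, deviation_thresh=3.0):
    """Return a binary mask (255 = moving target) for one pair of adjacent frames."""
    prev_gray = cv2.cvtColor(prev_bgr, cv2.COLOR_BGR2GRAY)
    curr_gray = cv2.cvtColor(curr_bgr, cv2.COLOR_BGR2GRAY)

    # Dense Gunnar Farneback flow: one (dx, dy) vector per pixel.
    # Positional args: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    # Approximate the camera-induced background motion by the median flow vector.
    background_motion = np.median(flow.reshape(-1, 2), axis=0)

    # Pixels whose motion deviates strongly from the background motion are
    # treated as belonging to the moving target.
    deviation = np.linalg.norm(flow - background_motion, axis=2)
    return np.where(deviation > deviation_thresh, 255, 0).astype(np.uint8)
```

Running this over every adjacent pair of the sequence yields a per-frame mask of the moving target that the later steps can reuse.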
步骤103、从上述图像序列中抽取待拼接的第一图像和第二图像;Step 103: Extract a first image and a second image to be spliced from the image sequence.
在步骤103中,可以自动从上述图像序列中抽取待拼接的第一图像和第二图像,或者,也可以由用户手动从上述图像序列中选择待拼接的第一图像和第二图像,以便在步骤103中基于用户的选择抽取出第一图像和第二图像。In step 103, the first image and the second image to be spliced may be automatically extracted from the image sequence, or the first image and the second image to be spliced may be manually selected by the user from the image sequence. In step 103, the first image and the second image are extracted based on the user's selection.
For a scene where a moving target exists in the dynamic background of the image sequence, the displacement of the moving target may cause ghosting in the stitched image. Therefore, in this scene, step 103 may specifically be: extracting the first image and the second image to be stitched from the image sequence such that, in the first image, the region whose background overlaps with the second image contains the complete image of the moving target. That is, let the region of the first image whose background overlaps with the second image be region S1 (region S1 belongs to the image area of the first image), and let the region of the second image whose background overlaps with the first image be region S2 (region S2 belongs to the image area of the second image); then region S1 contains the complete image of the moving target, while region S2 may contain the complete image or a partial image of the moving target, or may contain no moving target at all. Further, on the basis of ensuring that the region of the first image overlapping the background of the second image contains the complete image of the moving target, it can also be ensured as far as possible that the second image extends beyond the background of the first image (or the first image extends beyond the background of the second image), so that the field of view of the subsequently obtained stitched image is enlarged.
Optionally, step 103 may include: based on the detection result of step 102, extracting from the image sequence an image that contains the complete image of the moving target as the first image, and extracting from the image sequence the image separated from the first image by a preset number of frames as the second image, thereby realizing automatic extraction of the first image and the second image. For example, suppose the image sequence contains 10 consecutively captured images, the preset number of frames is 3, and the 1st image contains the complete image of the moving target; then the 1st image of the sequence can be extracted as the first image, and the 5th image of the sequence can be extracted as the second image (the 5th image is separated from the 1st image by 3 frames).
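A minimal sketch of this automatic selection is given below, assuming the per-frame target masks produced by the detection step are available; the completeness test (the target blob staying clear of the image border) and the indexing convention are illustrative assumptions rather than criteria taken from the present application.

```python
import numpy as np

def contains_complete_target(mask, margin=2):
    """Illustrative completeness test: the target blob stays away from the border."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return False
    h, w = mask.shape
    return (xs.min() >= margin and ys.min() >= margin and
            xs.max() < w - margin and ys.max() < h - margin)

def extract_pair(frames, target_masks, preset_gap=3):
    """Pick the first frame whose mask shows a complete moving target as the first
    image, and the frame separated from it by preset_gap frames as the second
    image (e.g. the 1st and the 5th frame for a gap of 3)."""
    for i, mask in enumerate(target_masks):
        j = i + preset_gap + 1
        if j < len(frames) and contains_complete_target(mask):
            return frames[i], frames[j]
    return None
```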
步骤104、对上述第一图像和上述第二图像进行图像处理,得到第一处理图像和第二处理图像;Step 104: Perform image processing on the first image and the second image to obtain a first processed image and a second processed image.
Considering that translation, rotation, scaling and the like may exist between the first image and the second image, image processing is performed on the first image and the second image in step 104, where performing image processing on the first image and the second image in step 104 includes: performing image registration on the first image and the second image. The process of registering the first image and the second image may be as follows: perform feature extraction and matching on the first image and the second image to find matched feature point pairs; determine the coordinate transformation parameters between the first image and the second image based on the matched feature point pairs; and finally register the first image and the second image based on the coordinate transformation parameters. Optionally, step 104 may register the first image and the second image based on the SURF algorithm, and the specific process of registering two images may be implemented with reference to the prior art, which is not repeated here.
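A hedged sketch of such a registration step is shown below. The application mentions the SURF algorithm, which in current OpenCV builds is only available through the optional xfeatures2d contrib module; ORB is used here purely as a freely available stand-in, and the match-count threshold and RANSAC reprojection error are illustrative parameters.

```python
import cv2
import numpy as np

def register_pair(img1, img2, min_matches=10):
    """Estimate the homography H mapping img2 into img1's coordinate frame
    (feature extraction, matching, RANSAC). ORB stands in for SURF here."""
    gray1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    gray2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(gray1, None)
    kp2, des2 = orb.detectAndCompute(gray2, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)
    if len(matches) < min_matches:
        raise RuntimeError("not enough matched feature point pairs")

    # queryIdx indexes kp2 (second image), trainIdx indexes kp1 (first image).
    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)
    return H
```

The returned 3x3 matrix H plays the role of the coordinate transformation parameters: it maps points of the second image into the coordinate frame of the first image, and the fusion sketches further below assume a homography in exactly this direction.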
In order to eliminate the influence of interfering pixels on the moving target in the image, in step 104 the first image and the second image may further be subjected to morphological processing (for example erosion, dilation and the like). Step 104 may then specifically include: performing image registration on the first image and the second image; and, based on the result of the detection, performing morphological processing on the registered first image and second image, so as to refine the moving target in the first image and the second image. Taking the case where the first image contains the moving target as an example: based on the result of the detection in step 102, the contour of the moving target in the first image can be extracted; by performing morphological processing on the first image, stray points in the contour are removed with image erosion, breaks in the contour are filled with image dilation, and the interior of the contour is then filled, so that the region where the moving target is located in the first image is obtained. Specifically, the process of performing morphological processing on an image may be implemented with reference to the prior art, which is not repeated here.
进一步,在对第一图像和第二图像进行形态学处理后,还可以利用连通性区域(例如矩形框)锁定图像中的运动目标,以便确定运动目标对应的图像部分。例如,通过矩形框锁定图像中的运动目标,则矩形框所框选的部分即为图像中运动目标对应的图像部分。Further, after the morphological processing is performed on the first image and the second image, the moving object in the image may also be locked by using a connectivity area (for example, a rectangular frame) to determine an image portion corresponding to the moving target. For example, if a moving object in an image is locked by a rectangular frame, the portion framed by the rectangular frame is the image portion corresponding to the moving target in the image.
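The morphological clean-up and connectivity-region locking described above might be sketched as follows; the elliptical kernel, its size, and the choice of the largest contour as the target are assumptions made for the example.

```python
import cv2
import numpy as np

def refine_target_mask(raw_mask, kernel_size=5):
    """Remove stray pixels, bridge breaks in the target outline, fill the outline,
    and lock the target with a bounding rectangle (the connectivity region)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    mask = cv2.morphologyEx(raw_mask, cv2.MORPH_OPEN, kernel)   # erosion, then dilation
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # dilation, then erosion

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return mask, None
    target = max(contours, key=cv2.contourArea)        # assume the largest blob is the target
    filled = np.zeros_like(mask)
    cv2.drawContours(filled, [target], -1, 255, thickness=cv2.FILLED)
    bbox = cv2.boundingRect(target)                     # (x, y, w, h) rectangle locking the target
    return filled, bbox
```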
步骤105、基于上述检测的结果,将上述第一处理图像和上述第二处理图像进行图像融合,得到拼接后的图像;Step 105: Perform image fusion on the first processed image and the second processed image based on the result of the detecting, to obtain a stitched image;
In the embodiment of the present application, image fusion refers to the process of processing multiple images by computer (for example, by image processing), extracting the useful information in each image to the greatest extent, and finally synthesizing a high-quality image. In step 105, based on the result of the detection in step 102, the moving target contained in each image of the image sequence can be extracted, and after the image processing of step 104, the first processed image and the second processed image are fused to obtain the stitched image.
When a moving target exists in the dynamic background of the image sequence, in order to prevent the background overlap regions of both the first processed image and the second processed image from containing the moving target and thereby producing an overlapped ghost of the moving target in that region of the stitched image, in this scene, as shown in FIG. 1-b, step 105 may include:
步骤1051、基于上述检测的结果,确定上述第一处理图像中的第一区域和上述第二处理图像中的第二区域;Step 1051: Determine, according to a result of the foregoing detection, a first area in the first processed image and a second area in the second processed image;
其中,上述第一区域和上述第二区域的背景部分重叠。The background regions of the first region and the second region overlap.
In step 1051, based on the result of the detection in step 102, the background portion and the image portion corresponding to the moving target in the first processed image and in the second processed image can be distinguished; further, based on a similarity measure, the region of the first processed image whose background overlaps with the second processed image (that is, the first region) and the region of the second processed image whose background overlaps with the first processed image (that is, the second region) can be determined.
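One simple way to approximate these overlapping regions, assuming the registration homography H from step 104 is available, is to warp each image's footprint into the other image's coordinate frame; this footprint intersection is only a stand-in for the similarity-measure-based determination mentioned above.

```python
import cv2
import numpy as np

def overlap_regions(img1, img2, H):
    """Return boolean masks of the mutually overlapping areas: the part of img1
    covered by the warped img2 (first region) and the part of img2 that lands
    inside img1 (second region)."""
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]

    footprint2 = cv2.warpPerspective(np.full((h2, w2), 255, np.uint8), H, (w1, h1))
    first_region = footprint2 > 0

    footprint1 = cv2.warpPerspective(np.full((h1, w1), 255, np.uint8),
                                     np.linalg.inv(H), (w2, h2))
    second_region = footprint1 > 0
    return first_region, second_region
```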
Step 1052: when no image of the moving target exists in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position;
In step 1052, the first region contains the complete image of the moving target; for the second region, there are the following three cases: 1. no image of the moving target exists in the second region; 2. a partial image of the moving target exists in the second region; 3. the complete image of the moving target exists in the second region.
For the first case above, in step 1052 the image portion corresponding to the moving target in the first region is replaced with the background portion at the corresponding position in the second region, and the portion of the first processed image other than the image portion corresponding to the moving target is fused with the portion of the second processed image other than the background portion at the corresponding position.
Step 1053: when a partial image of the moving target exists in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target;
For the second case mentioned in step 1052, in step 1053 the image portion corresponding to the moving target in the first region may be replaced with the background portion at the corresponding position in the second region, and the portion of the first processed image other than the image portion corresponding to the moving target may be fused with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target.
As an example, FIG. 2-a and FIG. 2-b show the first processed image and the second processed image respectively, where the person in FIG. 2-a and FIG. 2-b is the moving target and everything other than the person is the background portion. Based on step 1051, FIG. 2-c can be determined as the first region and FIG. 2-d as the second region. Since a partial image of the moving target exists in the first region shown in FIG. 2-c, in step 1053 the image portion corresponding to the moving target in FIG. 2-c is replaced with the background portion at the corresponding position in FIG. 2-d, and the portion of FIG. 2-a other than the image portion corresponding to the moving target is fused with the portion of FIG. 2-c other than the background portion at the corresponding position and the partial image of the moving target, so that the image shown in FIG. 2-e (that is, the stitched image) is obtained. As can be seen from FIG. 2-e, the image portion corresponding to the moving target in the second region shown in FIG. 2-d is replaced by the background portion of the first region shown in FIG. 2-c, and FIG. 2-e has a wider field of view than FIG. 2-c and FIG. 2-d.
Step 1054: when the complete image of the moving target exists in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving target;
For the third case mentioned in step 1052, in step 1054 the image portion corresponding to the moving target in the first region may be replaced with the background portion at the corresponding position in the second region, and the portion of the first processed image other than the image portion corresponding to the moving target may be fused with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving target; or, in other embodiments, when the complete image of the moving target exists in the second region, the image portion corresponding to the moving target in the second region may instead be replaced with the background portion at the corresponding position in the first region, and the portion of the second processed image other than the image portion corresponding to the moving target may be fused with the portion of the first processed image other than the background portion at the corresponding position and the complete image of the moving target.
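A sketch of the simplest of these cases (step 1052, no moving target in the second region) is given below: the second image is warped into the first image's coordinate frame, the moving-target pixels of the first image are overwritten with the warped background of the second image, and the first image is kept everywhere else inside its own extent. The canvas sizing and the plain overwrite (instead of the weighted or seam-based blending a production system would use) are simplifying assumptions.

```python
import cv2
import numpy as np

def fuse_without_ghost(img1, img2, H, target_mask1, canvas_width=None):
    """Case of step 1052: no moving target in the second region. img2 is warped
    into img1's frame with H; img1's moving-target pixels are overwritten by the
    warped background of img2, and img1 is kept elsewhere inside its extent."""
    h1, w1 = img1.shape[:2]
    if canvas_width is None:
        canvas_width = w1 * 2                           # illustrative canvas size
    size = (canvas_width, h1)

    warped2 = cv2.warpPerspective(img2, H, size)
    covered2 = cv2.warpPerspective(np.full(img2.shape[:2], 255, np.uint8),
                                   H, size) > 0

    result = warped2.copy()                             # start from the warped second image
    target1 = target_mask1 > 0
    use_img1 = np.zeros((h1, canvas_width), dtype=bool)
    # Keep img1 everywhere inside its extent, except where its moving-target
    # pixels can be replaced by background pixels from the warped img2.
    use_img1[:, :w1] = ~(target1 & covered2[:, :w1])
    ys, xs = np.nonzero(use_img1)
    result[ys, xs] = img1[ys, xs]
    return result
```

The other two cases differ only in which pixels are additionally excluded from the second processed image before this overwrite, so the same warping machinery can be reused.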
It can be seen from the above that, in the embodiment of the present application, an image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background of the image sequence is detected based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, and image processing is performed on the first image and the second image to obtain a first processed image and a second processed image; based on the result of the detection, the first processed image and the second processed image are fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and compared with schemes that judge moving targets based on pixel differences, the optical flow method can effectively detect moving targets even in complex scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, the solution of the present application can effectively improve the ghost removal effect.
实施例二Embodiment 2
本申请实施例提供一种图像拼接装置,如图3所示,本申请实施例中的图像拼接装置300包括:The embodiment of the present application provides an image splicing device. As shown in FIG. 3, the image splicing device 300 in the embodiment of the present application includes:
获取单元301,用于获取连续拍摄所得的图像序列;The acquiring unit 301 is configured to acquire a sequence of images obtained by continuous shooting;
光流检测单元302,用于基于光流法对所述图像序列中动态背景下的运动目标进行检测;The optical flow detecting unit 302 is configured to detect a moving target in a dynamic background in the image sequence based on an optical flow method;
抽取单元303,用于从所述图像序列中抽取待拼接的第一图像和第二图像;The extracting unit 303 is configured to extract, from the image sequence, a first image and a second image to be stitched;
图像处理单元304,用于对所述第一图像和所述第二图像进行图像处理,得到第一处理图像和第二处理图像,其中,所述图像处理包括:图像配准;The image processing unit 304 is configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, where the image processing includes: image registration;
图像融合单元305,用于基于光流检测单元302检测的结果,将所述第一处理图像和所述第二处理图像进行图像融合,得到拼接后的图像。The image fusion unit 305 is configured to perform image fusion on the first processed image and the second processed image based on the result detected by the optical flow detecting unit 302 to obtain a stitched image.
Optionally, a moving target exists in the dynamic background of the image sequence. The extracting unit 303 is specifically configured to: extract the first image and the second image to be stitched from the image sequence such that, in the first image, the region whose background overlaps with the second image contains the complete image of the moving target.
可选的,图像融合单元305具体包括:Optionally, the image fusion unit 305 specifically includes:
a determining unit, configured to determine, based on the result of the detection, the first region in the first processed image and the second region in the second processed image, wherein the backgrounds of the first region and the second region partially overlap;
a sub-fusion unit, configured to: when no image of the moving target exists in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position; when a partial image of the moving target exists in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target; when the complete image of the moving target exists in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving target, or, when the complete image of the moving target exists in the second region, replace the image portion corresponding to the moving target in the second region with the background portion at the corresponding position in the first region, and fuse the portion of the second processed image other than the image portion corresponding to the moving target with the portion of the first processed image other than the background portion at the corresponding position and the complete image of the moving target.
Optionally, the image processing further includes: morphological processing. The image processing unit 304 is specifically configured to: perform image registration on the first image and the second image; and, based on the result of the detection, perform morphological processing on the registered first image and second image, so as to refine the moving target in the first image and the second image.
It should be noted that the image stitching apparatus in the embodiment of the present application may be an independent device, or may be integrated in an electronic device (for example, a smart phone, a tablet computer, a computer, a wearable device, and the like). Optionally, the operating system run by the device or electronic device integrating the image stitching apparatus may be an iOS system, an Android system, a Windows system or another operating system, which is not limited herein.
It can be seen from the above that, in the embodiment of the present application, an image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background of the image sequence is detected based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, and image processing is performed on the first image and the second image to obtain a first processed image and a second processed image; based on the result of the detection, the first processed image and the second processed image are fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and compared with schemes that judge moving targets based on pixel differences, the optical flow method can effectively detect moving targets even in complex scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, the solution of the present application can effectively improve the ghost removal effect.
实施例三Embodiment 3
An embodiment of the present application provides an electronic device. Referring to FIG. 4, the electronic device in the embodiment of the present application includes: a memory 401, one or more processors 402 (only one is shown in FIG. 4), and a computer program stored in the memory 401 and executable on the processors. The memory 401 is configured to store software programs and modules, and the processor 402 executes various functional applications and data processing by running the software programs and units stored in the memory 401. Specifically, the processor 402 implements the following steps by running the above computer program stored in the memory 401:
获取连续拍摄所得的图像序列;Obtaining a sequence of images obtained by continuous shooting;
基于光流法对所述图像序列中动态背景下的运动目标进行检测;Detecting a moving target in a dynamic background in the image sequence based on an optical flow method;
从所述图像序列中抽取待拼接的第一图像和第二图像;Extracting a first image and a second image to be stitched from the image sequence;
对所述第一图像和所述第二图像进行图像处理,得到第一处理图像和第二处理图像,其中,所述图像处理包括:图像配准;Performing image processing on the first image and the second image to obtain a first processed image and a second processed image, wherein the image processing includes: image registration;
基于所述检测的结果,将所述第一处理图像和所述第二处理图像进行图像融合,得到拼接后的图像。And based on the result of the detecting, the first processed image and the second processed image are image-fused to obtain a stitched image.
假设上述为第一种可能的实施方式,则在第一种可能的实施方式作为基础而提供的第二种可能的实施方式中,所述图像序列中动态背景下存在运动目标;Assuming that the foregoing is a first possible implementation manner, in a second possible implementation manner provided by the first possible implementation manner, a moving target exists in a dynamic background in the image sequence;
所述从所述图像序列中抽取待拼接的第一图像和第二图像为:Extracting the first image and the second image to be spliced from the image sequence is:
从所述图像序列中抽取待拼接的第一图像和第二图像,并使得所述第一图像中,与所述第二图像的背景部分重叠的区域内包含运动目标的完整图像。Extracting the first image and the second image to be spliced from the sequence of images, and causing a complete image of the moving object to be included in an area overlapping the background portion of the second image in the first image.
在上述第二种可能的实现方式作为基础而提供的第三种可能的实施方式中,所述基于所述检测的结果,将所述第一处理图像和第二处理图像进行图像融合包括:In a third possible implementation manner provided by the foregoing second possible implementation manner, the performing image fusion on the first processed image and the second processed image based on the result of the detecting includes:
基于所述检测的结果,确定所述第一处理图像中的第一区域和所述第二处理图像中的第二区域,其中,所述第一区域和所述第二区域的背景部分重叠;Determining, in a result of the detecting, a first region in the first processed image and a second region in the second processed image, wherein a background portion of the first region and the second region overlap;
when no image of the moving target exists in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position; when a partial image of the moving target exists in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target; when the complete image of the moving target exists in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving target, or, when the complete image of the moving target exists in the second region, replacing the image portion corresponding to the moving target in the second region with the background portion at the corresponding position in the first region, and fusing the portion of the second processed image other than the image portion corresponding to the moving target with the portion of the first processed image other than the background portion at the corresponding position and the complete image of the moving target.
In a fourth possible implementation manner provided on the basis of the first possible implementation manner, the second possible implementation manner or the third possible implementation manner, the image processing further includes: morphological processing;
所述对所述第一图像和所述第二图像进行图像处理包括:The performing image processing on the first image and the second image includes:
对所述第一图像和所述第二图像进行图像配准;Performing image registration on the first image and the second image;
基于所述检测的结果,对经图像配准后的第一图像和第二图像进行形态学处理,以精确化所述第一图像和所述第二图像中的运动目标。Based on the result of the detecting, the image-registered first image and the second image are subjected to morphological processing to refine the moving object in the first image and the second image.
可选的,如图4所示,上述电子设备还可包括:一个或多个输入设备403(图4中仅示出一个)和一个或多个输出设备404(图4中仅示出一个)。存储器401、处理器402、输入设备403和输出设备404通过总线405连接。Optionally, as shown in FIG. 4, the foregoing electronic device may further include: one or more input devices 403 (only one is shown in FIG. 4) and one or more output devices 404 (only one is shown in FIG. 4). . The memory 401, the processor 402, the input device 403, and the output device 404 are connected by a bus 405.
It should be understood that, in the embodiment of the present application, the processor 402 may be a central processing unit (CPU), and may also be another general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
输入设备403可以包括键盘、触控板、指纹采传感器(用于采集用户的指纹信息和指纹的方向信息)、麦克风等,输出设备404可以包括显示器、扬声器等。The input device 403 can include a keyboard, a touchpad, a fingerprint sensor (for collecting fingerprint information of the user and direction information of the fingerprint), a microphone, etc., and the output device 404 can include a display, a speaker, and the like.
The memory 401 may include a read-only memory and a random access memory, and provides instructions and data to the processor 402. Part or all of the memory 401 may also include a non-volatile random access memory. For example, the memory 401 may also store information on the device type.
It can be seen from the above that, in the embodiment of the present application, an image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background of the image sequence is detected based on the optical flow method. The first image and the second image to be stitched are then extracted from the image sequence, and image processing is performed on the first image and the second image to obtain a first processed image and a second processed image; based on the result of the detection, the first processed image and the second processed image are fused to obtain the stitched image. Ghosting is mostly caused by moving targets contained in the images to be stitched, and compared with schemes that judge moving targets based on pixel differences, the optical flow method can effectively detect moving targets even in complex scenes. Therefore, by detecting the moving target in the dynamic background of the image sequence with the optical flow method and then fusing the first processed image and the second processed image based on the detection result, the solution of the present application can effectively improve the ghost removal effect.
所属领域的技术人员可以清楚地了解到,为了描述的方便和简洁,仅以上述各功能单元、模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能单元、模块完成,即将上述装置的内部结构划分成不同的功能单元或模块,以完成以上描述的全部或者部分功能。实施例中的各功能单元、模块可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中,上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。另外,各功能单元、模块的具体名称也只是为了便于相互区分,并不用于限制本申请的保护范围。上述系统中单元、模块的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division of each functional unit and module described above is exemplified. In practical applications, the above functions may be assigned to different functional units as needed. The module is completed by dividing the internal structure of the above device into different functional units or modules to perform all or part of the functions described above. Each functional unit and module in the embodiment may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit, and the integrated unit may be hardware. Formal implementation can also be implemented in the form of software functional units. In addition, the specific names of the respective functional units and modules are only for the purpose of facilitating mutual differentiation, and are not intended to limit the scope of protection of the present application. For the specific working process of the unit and the module in the foregoing system, reference may be made to the corresponding process in the foregoing method embodiment, and details are not described herein again.
在上述实施例中,对各个实施例的描述都各有侧重,某个实施例中没有详述或记载的部分,可以参见其它实施例的相关描述。In the above embodiments, the descriptions of the various embodiments are different, and the parts that are not detailed or described in a certain embodiment can be referred to the related descriptions of other embodiments.
本领域普通技术人员可以意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,能够以电子硬件、或者计算机软件和电子硬件的结合来实现。这些功能究竟以硬件还是软件方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请的范围。Those of ordinary skill in the art will appreciate that the elements and algorithm steps of the various examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware or a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and design constraints of the solution. A person skilled in the art can use different methods to implement the described functions for each particular application, but such implementation should not be considered to be beyond the scope of the present application.
在本申请所提供的实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的系统实施例仅仅是示意性的,例如,上述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个系统,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通讯连接可以是通过一些接口,装置或单元的间接耦合或通讯连接,可以是电性,机械或其它的形式。In the embodiments provided by the present application, it should be understood that the disclosed apparatus and method may be implemented in other manners. For example, the system embodiments described above are merely illustrative. For example, the division of the above modules or units is only a logical function division. In actual implementation, there may be another division manner, for example, multiple units or components may be combined. Or it can be integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in electrical, mechanical or other form.
上述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。The units described above as separate components may or may not be physically separated. The components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The integrated unit, if implemented in the form of a software functional unit and sold or used as an independent product, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the processes in the above method embodiments by instructing the relevant hardware through a computer program. The computer program may be stored in a computer readable storage medium, and when executed by a processor, it can implement the steps of the above method embodiments. The computer program includes computer program code, and the computer program code may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
以上上述实施例仅用以说明本申请的技术方案,而非对其限制;尽管参照前述实施例对本申请进行了详细的说明,本领域的普通技术人员应当理解:其依然可以对前述各实施例所记载的技术方案进行修改,或者对其中部分技术特征进行等同替换;而这些修改或者替换,并不使相应技术方案的本质脱离本申请各实施例技术方案的精神和范围,均应包含在本申请的保护范围之内。The above embodiments are only used to explain the technical solutions of the present application, and are not limited thereto; although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that the foregoing embodiments can still be The technical solutions are modified, or some of the technical features are replaced by equivalents; and the modifications or substitutions do not deviate from the spirit and scope of the technical solutions of the embodiments of the present application, and should be included in the present disclosure. Within the scope of protection of the application.

Claims (10)

  1. 一种图像拼接方法,其特征在于,包括:An image mosaic method, comprising:
    获取连续拍摄所得的图像序列;Obtaining a sequence of images obtained by continuous shooting;
    基于光流法对所述图像序列中动态背景下的运动目标进行检测;Detecting a moving target in a dynamic background in the image sequence based on an optical flow method;
    从所述图像序列中抽取待拼接的第一图像和第二图像;Extracting a first image and a second image to be stitched from the image sequence;
    对所述第一图像和所述第二图像进行图像处理,得到第一处理图像和第二处理图像,其中,所述图像处理包括:图像配准;Performing image processing on the first image and the second image to obtain a first processed image and a second processed image, wherein the image processing includes: image registration;
    基于所述检测的结果,将所述第一处理图像和所述第二处理图像进行图像融合,得到拼接后的图像。And based on the result of the detecting, the first processed image and the second processed image are image-fused to obtain a stitched image.
  2. 根据权利要求1所述的图像拼接方法,其特征在于,所述图像序列中动态背景下存在运动目标;The image stitching method according to claim 1, wherein a moving object exists in a dynamic background in the image sequence;
    所述从所述图像序列中抽取待拼接的第一图像和第二图像为:Extracting the first image and the second image to be spliced from the image sequence is:
    从所述图像序列中抽取待拼接的第一图像和第二图像,并使得所述第一图像中,与所述第二图像的背景部分重叠的区域内包含运动目标的完整图像。Extracting the first image and the second image to be spliced from the sequence of images, and causing a complete image of the moving object to be included in an area overlapping the background portion of the second image in the first image.
  3. 根据权利要求2所述的图像拼接方法,其特征在于,所述基于所述检测的结果,将所述第一处理图像和第二处理图像进行图像融合包括:The image splicing method according to claim 2, wherein the image fusion of the first processed image and the second processed image based on the result of the detecting comprises:
    基于所述检测的结果,确定所述第一处理图像中的第一区域和所述第二处理图像中的第二区域,其中,所述第一区域和所述第二区域的背景部分重叠;Determining, in a result of the detecting, a first region in the first processed image and a second region in the second processed image, wherein a background portion of the first region and the second region overlap;
    when no image of the moving target exists in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position;
    when a partial image of the moving target exists in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target; when the complete image of the moving target exists in the second region, replacing the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving target; or, replacing the image portion corresponding to the moving target in the second region with the background portion at the corresponding position in the first region, and fusing the portion of the second processed image other than the image portion corresponding to the moving target with the portion of the first processed image other than the background portion at the corresponding position and the complete image of the moving target.
  4. 根据权利要求1至3任一项所述的图像拼接方法,其特征在于,所述图像处理还包括:形态学处理;The image splicing method according to any one of claims 1 to 3, wherein the image processing further comprises: morphological processing;
    所述对所述第一图像和所述第二图像进行图像处理包括:The performing image processing on the first image and the second image includes:
    对所述第一图像和所述第二图像进行图像配准;Performing image registration on the first image and the second image;
    基于所述检测的结果,对经图像配准后的第一图像和第二图像进行形态学处理,以精确化所述第一图像和所述第二图像中的运动目标。Based on the result of the detecting, the image-registered first image and the second image are subjected to morphological processing to refine the moving object in the first image and the second image.
  5. 一种图像拼接装置,其特征在于,包括:An image splicing device, comprising:
    获取单元,用于获取连续拍摄所得的图像序列;An acquiring unit, configured to obtain a sequence of images obtained by continuous shooting;
    光流检测单元,用于基于光流法对所述图像序列中动态背景下的运动目标进行检测;An optical flow detecting unit, configured to detect a moving target in a dynamic background in the image sequence based on an optical flow method;
    抽取单元,用于从所述图像序列中抽取待拼接的第一图像和第二图像;Extracting unit, configured to extract a first image and a second image to be stitched from the image sequence;
    图像处理单元,用于对所述第一图像和所述第二图像进行图像处理,得到第一处理图像和第二处理图像,其中,所述图像处理包括:图像配准;An image processing unit, configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, where the image processing includes: image registration;
    图像融合单元,用于基于所述光流检测单元检测的结果,将所述第一处理图像和所述第二处理图像进行图像融合,得到拼接后的图像。And an image fusion unit configured to perform image fusion on the first processed image and the second processed image based on a result of the detection by the optical flow detecting unit to obtain a stitched image.
  6. 根据权利要求5所述的图像拼接装置,其特征在于,所述图像序列中动态背景下存在运动目标;The image splicing apparatus according to claim 5, wherein a moving target exists in a dynamic background in the image sequence;
    the extracting unit is specifically configured to: extract the first image and the second image to be stitched from the image sequence such that, in the first image, the region whose background overlaps with the second image contains the complete image of the moving target.
  7. 根据权利要求6所述的图像拼接装置,其特征在于,所述图像融合单元具体包括:The image splicing apparatus according to claim 6, wherein the image merging unit specifically comprises:
    确定单元,用于基于所述检测的结果,确定所述第一处理图像中的第一区域和所述第二处理图像中的第二区域,其中,所述第一区域和所述第二区域的背景部分重叠;a determining unit, configured to determine a first region in the first processed image and a second region in the second processed image based on a result of the detecting, wherein the first region and the second region The background overlaps partially;
    子融合单元,用于当所述第二区域中不存在运动目标的图像时,将所述第一区域中所述运动目标对应的图像部分替换为所述第二区域中相应位置的背景部分,并将所述第一处理图像中除所述运动目标对应的图像部分外的其它部分,与所述第二处理图像中除所述相应位置的背景部分外的其它部分进行图像融合;当所述第二区域中存在运动目标的部分图像时,将所述第一区域中所述运动目标对应的图像部分替换为所述第二区域中相应位置的背景部分,并将所述第一处理图像中除所述运动目标对应的图像部分外的其它部分,与所述第二处理图像中除所述相应位置的背景部分和所述运动目标的部分图像外的其它部分进行图像融合;当所述第二区域中存在所述运动目标的完整图像时,将所述第一区域中所述运动目标对应的图像部分替换为所述第二区域中相应位置的背景部分,并将所述第一处理图像中除所述运动目标对应的图像部分外的其它部分,与所述第二处理图像中除所述相应位置的背景部分和所述运动目标的完整图像外的其它部分进行图像融合,或者,当所述第二区域中存在所述运动目标的完整图像时,将所述第二区域中所述运动目标对应的图像部分替换为所述第一区域中相应位置的背景部分,并将所述第二处理图像中除所述运动目标对应的图像部分外的其它部分,与所述第一处理图像中除所述相应位置的背景部分和所述运动目标的完整图像外的其它部分进行图像融合。a sub-fusion unit, configured to replace an image portion corresponding to the moving object in the first region with a background portion of a corresponding position in the second region when there is no image of the moving target in the second region, And performing image fusion with other portions of the first processed image other than the image portion corresponding to the moving object, and other portions of the second processed image other than the background portion of the corresponding position; When there is a partial image of the moving target in the second region, the image portion corresponding to the moving target in the first region is replaced with the background portion of the corresponding position in the second region, and the first processed image is Performing image fusion with other portions of the second processed image other than the background portion of the corresponding position and the partial image of the moving target; When a complete image of the moving object exists in the two regions, replacing an image portion corresponding to the moving target in the first region with a corresponding one in the second region a background portion, and other portions of the first processed image other than the image portion corresponding to the moving object, and a background portion of the second processed image other than the corresponding position and the moving target Other portions of the entire image are image-fused, or when a complete image of the moving object exists in the second region, the image portion corresponding to the moving target in the second region is replaced with the first a background portion of a corresponding position in the region, and a portion of the second processed image other than the image portion corresponding to the moving object, and a background portion of the first processed image other than the corresponding position Image fusion is performed on other parts of the moving image beyond the complete image.
  8. 根据权利要求5至7任一项所述的图像拼接装置,其特征在于,所述图像处理还包括:形态学处理;The image splicing apparatus according to any one of claims 5 to 7, wherein the image processing further comprises: morphological processing;
    所述图像处理单元具体用于:对所述第一图像和所述第二图像进行图像配准;基于所述检测的结果,对经图像配准后的第一图像和第二图像进行形态学处理,以精确化所述第一图像和所述第二图像中的运动目标。The image processing unit is specifically configured to: perform image registration on the first image and the second image; and perform morphology on the image-registered first image and the second image based on the detection result Processing to refine the moving objects in the first image and the second image.
  9. 一种电子设备,包括存储器、处理器以及存储在所述存储器中并可在所述处理器上运行的计算机程序,其特征在于,所述处理器执行所述计算机程序时实现如权利要求1至4任一项所述方法的步骤。An electronic device comprising a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor executes the computer program as claimed in claim 1 4 The steps of any of the methods described.
  10. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,其特征在于,所述计算机程序被处理器执行时实现如权利要求1至4任一项所述方法的步骤。A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method of any one of claims 1 to 4.
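To make the pipeline recited in claims 1, 4, 5 and 8 concrete, the following is a minimal Python/OpenCV sketch, not the patented implementation: it assumes dense Farneback optical flow with a simple magnitude threshold as the "optical flow method", ORB feature matching with RANSAC as the "image registration", and opening/closing as the "morphological processing". The function names, parameter values, and the median-flow compensation for the dynamic background are all illustrative assumptions.

```python
import cv2
import numpy as np

def detect_moving_object(prev_gray, curr_gray, mag_thresh=2.0):
    """Binary mask of the moving object between two consecutive grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    # Subtract the median magnitude so global (camera) motion of the dynamic
    # background does not dominate the mask -- an illustrative simplification,
    # not the compensation scheme described in the patent.
    mag = np.abs(mag - np.median(mag))
    return (mag > mag_thresh).astype(np.uint8) * 255

def refine_mask(mask, kernel_size=5):
    """Morphological opening then closing to tidy the detected object (claims 4 and 8)."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    return cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

def register(first_img, second_img):
    """Homography mapping second_img onto first_img via ORB features and RANSAC."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(first_img, None)
    k2, d2 = orb.detectAndCompute(second_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```

With the homography from register(), cv2.warpPerspective would place the second image on the first image's canvas, yielding the "first processed image" and "second processed image" that the fusion step consumes.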
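The replace-then-fuse rule of claims 3 and 7 can be sketched as follows, under strong simplifying assumptions: both processed images are already registered onto a common canvas of equal size, mask1 marks the complete moving object in the first processed image, mask2 marks whatever copy (none, partial, or complete) appears in the second, and a plain 50/50 average stands in for the unspecified fusion. How the claims intend the excluded object pixels to be treated during fusion is not fully spelled out, so keeping one copy verbatim is one plausible reading; all names are illustrative, not from the patent.

```python
import numpy as np

def fuse_overlap(first_proc, second_proc, mask1, mask2, remove_from="first"):
    """Suppress one copy of the moving object, keep the other, blend the rest.

    remove_from="first": the object pixels of the first processed image are
    overwritten with the second image's content at the same positions (the
    "background portion at the corresponding position"), and the second
    image's object pixels, if any, are kept verbatim rather than blended.
    remove_from="second" mirrors the alternative branch of claim 3.
    """
    first = first_proc.astype(np.float32)
    second = second_proc.astype(np.float32)
    m1, m2 = mask1.astype(bool), mask2.astype(bool)

    if remove_from == "first":
        first[m1] = second[m1]            # drop the first image's copy of the object
        keep_mask, keep_src = m2, second  # protect the second image's copy
    else:
        second[m2] = first[m2]            # drop the second image's copy instead
        keep_mask, keep_src = m1, first

    fused = 0.5 * (first + second)        # naive average for the remaining overlap
    fused[keep_mask] = keep_src[keep_mask]
    return fused.astype(first_proc.dtype)
```

Calling fuse_overlap(first_proc, second_proc, refined_mask1, refined_mask2) corresponds to the branches in which the first region's object is replaced; passing remove_from="second" corresponds to the alternative branch when the second region contains the complete object.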
PCT/CN2017/119565 2017-12-26 2017-12-28 Image stitching method, image stitching device and electronic device WO2019127269A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711431274.6A CN108230245B (en) 2017-12-26 2017-12-26 Image splicing method, image splicing device and electronic equipment
CN201711431274.6 2017-12-26

Publications (1)

Publication Number Publication Date
WO2019127269A1 (en)

Family

ID=62648814

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/119565 WO2019127269A1 (en) 2017-12-26 2017-12-28 Image stitching method, image stitching device and electronic device

Country Status (2)

Country Link
CN (1) CN108230245B (en)
WO (1) WO2019127269A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108989751B (en) * 2018-07-17 2020-07-14 上海交通大学 Video splicing method based on optical flow
CN111757146B (en) * 2019-03-29 2022-11-15 杭州萤石软件有限公司 Method, system and storage medium for video splicing
CN110298826A (en) * 2019-06-18 2019-10-01 合肥联宝信息技术有限公司 A kind of image processing method and device
CN110619652B (en) * 2019-08-19 2022-03-18 浙江大学 Image registration ghost elimination method based on optical flow mapping repeated area detection
CN110501344A (en) * 2019-08-30 2019-11-26 无锡先导智能装备股份有限公司 Battery material online test method
TWI749365B (en) 2019-09-06 2021-12-11 瑞昱半導體股份有限公司 Motion image integration method and motion image integration system
CN112511764A (en) * 2019-09-16 2021-03-16 瑞昱半导体股份有限公司 Mobile image integration method and mobile image integration system
CN110766611A (en) * 2019-10-31 2020-02-07 北京沃东天骏信息技术有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN111047628B (en) * 2019-12-16 2020-10-02 中国水利水电科学研究院 Night light satellite image registration method and device
WO2021168755A1 (en) * 2020-02-27 2021-09-02 Oppo广东移动通信有限公司 Image processing method and apparatus, and device
CN111429354B (en) * 2020-03-27 2022-01-21 贝壳找房(北京)科技有限公司 Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2010025309A1 (en) * 2008-08-28 2010-03-04 Zoran Corporation Robust fast panorama stitching in mobile phones or cameras
CN101859433A (en) * 2009-04-10 2010-10-13 日电(中国)有限公司 Image mosaic device and method
CN103366351A (en) * 2012-03-29 2013-10-23 华晶科技股份有限公司 Method for generating panoramic image and image acquisition device thereof
CN106296570A (en) * 2016-07-28 2017-01-04 北京小米移动软件有限公司 Image processing method and device
CN106709868A (en) * 2016-12-14 2017-05-24 云南电网有限责任公司电力科学研究院 Image stitching method and apparatus
CN106851045A (en) * 2015-12-07 2017-06-13 北京航天长峰科技工业集团有限公司 A kind of image mosaic overlapping region moving target processing method
CN107133972A (en) * 2017-05-11 2017-09-05 南宁市正祥科技有限公司 A kind of video moving object detection method

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100455266C (en) * 2005-03-29 2009-01-28 深圳迈瑞生物医疗电子股份有限公司 Broad image processing method
JP5510012B2 (en) * 2010-04-09 2014-06-04 ソニー株式会社 Image processing apparatus and method, and program
CN101901481B (en) * 2010-08-11 2012-11-21 深圳市蓝韵实业有限公司 Image mosaic method
JP2012075088A (en) * 2010-09-03 2012-04-12 Pentax Ricoh Imaging Co Ltd Image processing system and image processing method
CN103581562A (en) * 2013-11-19 2014-02-12 宇龙计算机通信科技(深圳)有限公司 Panoramic shooting method and panoramic shooting device
CN104361584B (en) * 2014-10-29 2017-09-26 中国科学院深圳先进技术研究院 The detection method and detecting system of a kind of display foreground
CN106909911B (en) * 2017-03-09 2020-07-10 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, and electronic apparatus

Also Published As

Publication number Publication date
CN108230245A (en) 2018-06-29
CN108230245B (en) 2021-06-11

Similar Documents

Publication Publication Date Title
WO2019127269A1 (en) Image stitching method, image stitching device and electronic device
CN111145238B (en) Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
Ren et al. Video deblurring via semantic segmentation and pixel-wise non-linear kernel
WO2018214365A1 (en) Image correction method, apparatus, device, and system, camera device, and display device
US9325899B1 (en) Image capturing device and digital zooming method thereof
WO2020259271A1 (en) Image distortion correction method and apparatus
US10915998B2 (en) Image processing method and device
CN110717942B (en) Image processing method and device, electronic equipment and computer readable storage medium
US20160301868A1 (en) Automated generation of panning shots
Lee et al. Simultaneous localization, mapping and deblurring
WO2017088533A1 (en) Method and apparatus for merging images
CN105957015A (en) Thread bucket interior wall image 360 DEG panorama mosaicing method and system
CN107798702B (en) Real-time image superposition method and device for augmented reality
Kim et al. Fisheye lens camera based surveillance system for wide field of view monitoring
WO2017091927A1 (en) Image processing method and dual-camera system
US20190340732A1 (en) Picture Processing Method and Apparatus
US11620730B2 (en) Method for merging multiple images and post-processing of panorama
WO2021063245A1 (en) Image processing method and image processing apparatus, and electronic device applying same
CN110766706A (en) Image fusion method and device, terminal equipment and storage medium
WO2022160857A1 (en) Image processing method and apparatus, and computer-readable storage medium and electronic device
TW201947536A (en) Image processing method and image processing device
TW201203130A (en) System for correcting image and correcting method thereof
CN114926514B (en) Registration method and device of event image and RGB image
CN115883988A (en) Video image splicing method and system, electronic equipment and storage medium
US11734877B2 (en) Method and device for restoring image obtained from array camera

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17936852

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 17936852

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.11.2020)
