WO2019127269A1 - Image stitching method, image stitching device and electronic device - Google Patents
Image stitching method, image stitching device and electronic device Download PDFInfo
- Publication number
- WO2019127269A1 (PCT/CN2017/119565)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- region
- processed
- moving object
- moving target
- Prior art date
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/269—Analysis of motion using gradient-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Definitions
- the present application belongs to the field of image processing technologies, and in particular, to an image stitching method, an image stitching apparatus, an electronic device, and a computer readable storage medium.
- the acquisition of panoramic images is an emerging research field and hotspot of computer vision.
- the panoramic image is obtained mainly by the following two methods: 1. directly using a dedicated wide-angle imaging device (such as a fisheye optical lens or a convex optical lens) to take a sufficiently large horizontal-angle image at a time; however, the cost is high, the resolution and viewing angle are difficult to balance, and the image will be severely distorted; 2. using image stitching technology, in which a set of low-resolution or small-view images with overlapping regions is stitched into a new high-resolution, wide-view image. Since the second method has low requirements on the device and can retain the detailed information of the original captured images, the image stitching technique is very important for the acquisition of panoramic images.
- images used for image stitching have moving objects in addition to static objects, and the misalignment and superposition of moving objects are the main causes of ghosting in the stitched images. How to eliminate ghosting in image stitching is one of the most difficult problems in the industry.
- in the related art, the brightness, color, or texture structure of corresponding pixels in the overlapping area of the images to be spliced is used to determine the position of the moving object, and the moving object is selectively shielded during the image splicing process.
- however, this method is susceptible to exposure differences, interfering pixel points, and the like, and relies only on pixel differences for ghost elimination; in most slightly complicated scenes, misjudgments are apt to occur, resulting in a poor ghost-elimination effect.
- the present application provides an image splicing method, an image splicing device, an electronic device, and a computer readable storage medium, which are beneficial to improving ghost elimination effects.
- a first aspect of the embodiments of the present application provides an image stitching method, including: acquiring an image sequence obtained by continuous shooting; detecting a moving target in a dynamic background in the image sequence based on an optical flow method; extracting a first image and a second image to be stitched from the image sequence; performing image processing on the first image and the second image to obtain a first processed image and a second processed image, where the image processing includes image registration; and performing image fusion on the first processed image and the second processed image based on a result of the detecting, to obtain a stitched image.
- when a moving target exists in the dynamic background in the image sequence, the extracting of the first image and the second image to be spliced from the image sequence includes: extracting a first image and a second image such that the area in the first image that overlaps with the background portion of the second image contains a complete image of the moving target.
- the image fusion of the first processed image and the second processed image is performed based on a result of the detecting:
- the image processing further includes: morphological processing.
- the performing image processing on the first image and the second image includes:
- the image-registered first image and the second image are subjected to morphological processing to refine the moving object in the first image and the second image.
- the second aspect of the present application provides an image splicing apparatus, including:
- An acquiring unit configured to obtain a sequence of images obtained by continuous shooting
- An optical flow detecting unit configured to detect a moving target in a dynamic background in the image sequence based on an optical flow method
- an extracting unit configured to extract a first image and a second image to be stitched from the image sequence;
- An image processing unit configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, where the image processing includes: image registration;
- an image fusion unit configured to perform image fusion on the first processed image and the second processed image based on a result of the detection by the optical flow detecting unit to obtain a stitched image.
- when a moving target exists in the dynamic background in the image sequence, the extracting unit is specifically configured to: extract a first image and a second image to be spliced from the image sequence, where the area in the first image that overlaps with the background portion of the second image contains a complete image of the moving target.
- the image fusion unit specifically includes:
- a determining unit configured to determine a first region in the first processed image and a second region in the second processed image based on a result of the detecting, wherein the background portions of the first region and the second region overlap;
- a sub-fusion unit configured to: when there is no image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and perform image fusion on the portion of the first processed image other than the image portion corresponding to the moving target and the portion of the second processed image other than the background portion at the corresponding position;
- the sub-fusion unit is further configured to: when there is a partial image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and perform image fusion on the portion of the first processed image other than the image portion corresponding to the moving target and the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target;
- the image processing further includes: morphological processing.
- the image processing unit is specifically configured to: perform image registration on the first image and the second image; and perform morphological processing on the image-registered first image and second image based on the detection result, to refine the moving target in the first image and the second image.
- a third aspect of the present application provides an electronic device including a memory, a processor, and a computer program stored on the memory and operable on the processor, where the processor, when executing the computer program, implements the image stitching method mentioned in the first aspect or any of the possible implementations of the first aspect.
- a fourth aspect of the present application provides a computer readable storage medium having a computer program stored thereon, where the computer program, when executed by a processor, implements the image stitching method mentioned in the first aspect or any of the possible implementations of the first aspect.
- a fifth aspect of the present application provides a computer program product including a computer program, the computer program being executed by one or more processors to implement the image stitching method mentioned in the first aspect or any of the possible implementations of the first aspect.
- the solution of the present application acquires the image sequence obtained by continuous shooting and detects the moving target in the dynamic background in the image sequence based on the optical flow method; it then extracts the first image and the second image to be spliced from the image sequence, performs image processing on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detecting, performs image fusion on the first processed image and the second processed image to obtain a stitched image. Ghosting is mostly caused by the moving target in the images to be stitched, and, compared with schemes that judge the moving target in the image based on pixel differences alone, the optical flow method can effectively detect the moving target even in a complicated scene.
- therefore, by detecting the moving target in the dynamic background in the image sequence with the optical flow method and then performing image fusion on the first processed image and the second processed image based on the detected result, the solution of the present application effectively improves the ghost-elimination effect.
- FIG. 1-a is a schematic flowchart of an embodiment of the image splicing method provided by the present application;
- FIG. 1-b is a schematic diagram of an image fusion process that can be applied to the embodiment shown in FIG. 1-a according to the present application;
- FIG. 2-a is a schematic diagram of a first processed image of an application scenario provided by the present application;
- FIG. 2-b is a schematic diagram of a second processed image of an application scenario provided by the present application;
- Figure 2-c is a schematic diagram of a first area determined based on the first processed image and the second processed image shown in Figures 2-a and 2-b;
- Figure 2-d is a schematic diagram of a second region determined based on the first processed image and the second processed image illustrated in Figures 2-a and 2-b;
- FIG. 2-e is a schematic diagram of an image obtained by image fusion based on the first region and the second region illustrated in FIG. 2-c and FIG. 2-d;
- FIG. 3 is a schematic structural diagram of an embodiment of an image splicing device provided by the present application.
- FIG. 4 is a schematic structural diagram of an embodiment of an electronic device provided by the present application.
- the image splicing method can be applied to an image splicing device, and the image splicing device can be an independent device, or the image splicing device can also be integrated in an electronic device (for example, a smart phone, a tablet computer, a computer, a wearable device, etc.).
- the operating system of the device or the electronic device integrated with the image splicing device may be an iOS system, an Android system, a Windows system, or another operating system, which is not limited herein.
- the image splicing method in the embodiment of the present application may include:
- Step 101 Acquire an image sequence obtained by continuous shooting
- the image sequence may be a set of consecutive frame images taken by a single camera.
- the camera can be slowly rotated about a vertical axis (i.e., the angular velocity of the camera rotation is less than an angular velocity threshold, for example, 50 degrees/sec) while shooting, to obtain the image sequence.
- the image sequence may be captured in real time, or a pre-stored image sequence may be obtained from a database, which is not limited herein.
- Step 102 Detecting a moving target in a dynamic background in the image sequence based on an optical flow method
- the optical flow field is used to represent the instantaneous velocity field of the pixel-point change trend in the image. The principle of detecting the moving target in the dynamic background in the image sequence based on the optical flow method is as follows: the relevant motion information (for example, the moving speed and the moving direction) of the moving object in each image is calculated based on the change in the distribution of pixel points between adjacent frames in the image sequence. The motion in the image sequence is jointly generated by the motion of the moving target itself and the motion of the camera. Since there is relative motion between the moving target and the background, the motion vector of the pixels where the moving target is located is different from the motion vector of the background, so the moving target in the dynamic background in the image sequence can be detected based on the optical flow method.
- the background and the moving target can be distinguished by setting a corresponding threshold based on the motion vector of the background.
- the optical flow method can be divided into the sparse optical flow method (such as the LK optical flow method, i.e., the Lucas-Kanade method) and the dense optical flow method (such as the Gunnar Farneback optical flow method).
- the dense optical flow method is different from the sparse optical flow method: the sparse method calculates offsets only for a number of feature points on the image, whereas the dense method calculates the offset of every point on the image, thus forming a dense optical flow field. The dense method therefore better improves the detection of the moving target, so that the subsequent ghost-elimination effect is better.
- the moving object in the dynamic background in the image sequence may be detected based on the dense optical flow method.
- Step 103 Extract a first image and a second image to be spliced from the image sequence.
- the first image and the second image to be spliced may be automatically extracted from the image sequence, or the first image and the second image to be spliced may be manually selected by the user from the image sequence, in which case step 103 extracts the first image and the second image based on the user's selection.
- step 103 may specifically be: extracting, from the image sequence, a first image and a second image such that the area where the first image overlaps the background portion of the second image is an area S1 (the area S1 belongs to the image area in the first image), and the area where the second image overlaps the background portion of the first image is an area S2 (the area S2 belongs to the image area in the second image). The area S1 contains a complete image of the moving target, while the area S2 may contain a complete image or a partial image of the moving target, or may not contain the moving target at all. In this way, the first image is ensured to contain the entire image of the moving target, and the second image is also ensured, as far as possible, to be extended relative to the background of the first image (or the first image extended relative to the background of the second image), so that the field of view of the subsequently obtained stitched image is expanded.
- the step 103 may include: extracting, according to the detection result of step 102, an image containing a complete image of the moving target from the image sequence as the first image; and extracting, from the image sequence, the image separated from the first image by a preset number of frames as the second image, thereby achieving automatic extraction of the first image and the second image. For example, if the image sequence includes 10 images obtained by continuous shooting, the preset number of frames is 3, and the first image in the sequence contains a complete image of the moving target, the first image in the image sequence may be extracted as the first image, and the fifth image in the image sequence may be extracted as the second image (the number of frames separating the fifth image from the first image is 3).
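The frame-selection example above can be sketched as a small helper; `has_complete_target` is a hypothetical predicate standing in for the optical-flow detection result of step 102, and the default interval of 3 frames mirrors the example.

```python
def extract_pair(frames, has_complete_target, interval=3):
    """Return (first_image, second_image): the earliest frame containing
    a complete moving target, and the frame separated from it by
    `interval` intermediate frames."""
    for i in range(len(frames)):
        j = i + interval + 1  # e.g. the 1st and 5th frames when interval == 3
        if j < len(frames) and has_complete_target(frames[i]):
            return frames[i], frames[j]
    raise ValueError("no suitable frame pair found in the sequence")
```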
- Step 104 Perform image processing on the first image and the second image to obtain a first processed image and a second processed image.
- in step 104, image processing is performed on the first image and the second image, where the image processing includes performing image registration on the first image and the second image.
- the process of performing image registration on the first image and the second image may be as follows: performing feature extraction and matching on the first image and the second image to find matching feature point pairs; determining coordinate transformation parameters of the first image and the second image based on the matched feature point pairs; and finally performing image registration on the first image and the second image based on the coordinate transformation parameters.
- the step 104 may perform image registration on the first image and the second image based on the SURF algorithm, and the specific process of performing image registration on the two images may be implemented by referring to the prior art, and details are not described herein again.
- the first image and the second image may further be subjected to morphological processing (for example, erosion, dilation, etc.).
- the step 104 may specifically include: performing image registration on the first image and the second image; and performing morphological processing on the image-registered first image and second image based on the result of the detecting, to accurately refine the moving target in the first image and the second image.
- for example, when performing morphological processing on the first image, the contour of the moving target in the first image may be extracted, image erosion may be used to remove noise points in the contour, image dilation may be used to fill the broken parts of the contour, and the interior of the contour may then be filled to obtain the area where the moving target in the first image is located.
- the process of performing morphological processing on an image may be implemented by referring to the prior art, and details are not described herein again.
- the moving object in the image may also be locked by using a connectivity area (for example, a rectangular frame) to determine an image portion corresponding to the moving target. For example, if a moving object in an image is locked by a rectangular frame, the portion framed by the rectangular frame is the image portion corresponding to the moving target in the image.
- Step 105 Perform image fusion on the first processed image and the second processed image based on the result of the detecting, to obtain a stitched image;
- image fusion refers to the process of processing a plurality of images by means of computer image processing, extracting the favorable information in each image to the greatest extent, and finally integrating it into one high-quality image.
- in step 105, based on the result of step 102, the moving target contained in each image in the image sequence can be identified; after the image processing of step 104, the first processed image and the second processed image are image-fused to obtain the stitched image.
- step 105 may include:
- Step 1051 Determine, according to a result of the foregoing detection, a first area in the first processed image and a second area in the second processed image;
- the background regions of the first region and the second region overlap.
- in step 1051, based on the result of step 102, the background portion and the image portion corresponding to the moving target in the first processed image and the second processed image can be distinguished, and, further, the area in the first processed image that overlaps with the second processed image in the background portion (i.e., the first area) and the area in the second processed image that overlaps with the first processed image in the background portion (i.e., the second area) can be determined based on a similarity measure.
- Step 1052 When there is no image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and image-fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position;
- the first area contains a complete image of the moving target, and for the second area there are the following three situations: 1. no image of the moving target exists in the second area; 2. a partial image of the moving target exists in the second area; 3. a complete image of the moving target exists in the second area.
- in step 1052, the image portion corresponding to the moving target in the first region is replaced with the background portion at the corresponding position in the second region, and the portion of the first processed image other than the image portion corresponding to the moving target is image-fused with the portion of the second processed image other than the background portion at the corresponding position.
- Step 1053 When there is a partial image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and image-fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target;
- specifically, the image portion corresponding to the moving target in the first region may be replaced with the background portion at the corresponding position in the second region, and the portion of the first processed image other than the image portion corresponding to the moving target may be image-fused with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target.
- FIG. 2-a and FIG. 2-b show the first processed image and the second processed image, respectively, where the person in FIG. 2-a and FIG. 2-b is the moving target and everything other than the person is the background portion. Based on step 1051, it can be determined that FIG. 2-c shows the first region and FIG. 2-d shows the second region. Since there is a partial image of the moving target in the second region shown in FIG. 2-d, in step 1053 the image portion corresponding to the moving target in FIG. 2-c is replaced with the background portion at the corresponding position in FIG. 2-d, and the portion of the image in FIG. 2-a other than the image portion corresponding to the moving target is fused with the portion of FIG. 2-b other than the background portion at the corresponding position and the partial image of the moving target, so that an image as shown in FIG. 2-e (i.e., the stitched image) is obtained. As can be seen from FIG. 2-e, the image portion corresponding to the moving target in the second region shown in FIG. 2-d is replaced by the background portion in the first region shown in FIG. 2-c, and FIG. 2-e has a wider field of view relative to FIG. 2-c and FIG. 2-d.
- Step 1054 When there is a complete image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and image-fuse the portion of the first processed image other than the image portion corresponding to the moving target with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving target;
- specifically, the image portion corresponding to the moving target in the first region may be replaced with the background portion at the corresponding position in the second region, and the portion of the first processed image other than the image portion corresponding to the moving target may be image-fused with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving target; alternatively, in other embodiments, when there is a complete image of the moving target in the second region, the image portion corresponding to the moving target in the second region may be replaced with the background portion at the corresponding position in the first region, and the portion of the second processed image other than the image portion corresponding to the moving target may be image-fused with the portion of the first processed image other than the background portion at the corresponding position and the complete image of the moving target.
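The replacement step shared by the three cases above can be sketched as below, assuming the two processed images have already been registered to a common coordinate frame and `target_mask` marks the moving target inside the first region; the names are illustrative, not from the original disclosure.

```python
import numpy as np

def replace_target_with_background(first_region, second_region, target_mask):
    """Overwrite the moving-target pixels of the first region with the
    background pixels at the corresponding positions of the second
    region (both regions registered to the same coordinate frame)."""
    out = first_region.copy()
    m = target_mask.astype(bool)
    # corresponding positions: identical coordinates after registration
    out[m] = second_region[m]
    return out
```

The remaining portions of the two processed images can then be blended (for example, with simple feathering over the overlap) to produce the stitched image.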
- in summary, the image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background in the image sequence is detected based on the optical flow method; the first image and the second image to be spliced are then extracted from the image sequence, image processing is performed on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detecting, image fusion is performed on the first processed image and the second processed image to obtain a stitched image. Ghosting is mostly caused by the moving target in the images to be stitched, and, compared with schemes that judge the moving target in the image based on pixel differences, the optical flow method can effectively detect the moving target even in a complicated scene.
- therefore, by detecting the moving target in the dynamic background in the image sequence with the optical flow method and then performing image fusion on the first processed image and the second processed image based on the detected result, the solution of the present application effectively improves the ghost-elimination effect.
- the embodiment of the present application provides an image splicing device.
- the image splicing device 300 in the embodiment of the present application includes:
- the acquiring unit 301 is configured to acquire a sequence of images obtained by continuous shooting
- the optical flow detecting unit 302 is configured to detect a moving target in a dynamic background in the image sequence based on an optical flow method
- the extracting unit 303 is configured to extract, from the image sequence, a first image and a second image to be stitched;
- the image processing unit 304 is configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, where the image processing includes: image registration;
- the image fusion unit 305 is configured to perform image fusion on the first processed image and the second processed image based on the result detected by the optical flow detecting unit 302 to obtain a stitched image.
- the extracting unit 303 is specifically configured to: extract a first image and a second image to be spliced from the image sequence, where the area in the first image that overlaps with the background portion of the second image contains a complete image of the moving target.
- the image fusion unit 305 specifically includes:
- a determining unit configured to determine a first region in the first processed image and a second region in the second processed image based on a result of the detecting, wherein the background portions of the first region and the second region overlap;
- a sub-fusion unit configured to: when there is no image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and perform image fusion on the portion of the first processed image other than the image portion corresponding to the moving target and the portion of the second processed image other than the background portion at the corresponding position;
- the sub-fusion unit is further configured to: when there is a partial image of the moving target in the second region, replace the image portion corresponding to the moving target in the first region with the background portion at the corresponding position in the second region, and perform image fusion on the portion of the first processed image other than the image portion corresponding to the moving target and the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving target;
- the image processing further includes: morphological processing.
- the image processing unit 304 is specifically configured to: perform image registration on the first image and the second image; and perform morphological processing on the image-registered first image and second image based on the result of the detecting, to refine the moving target in the first image and the second image.
- the image splicing device in the embodiment of the present application may be an independent device, or alternatively, the image splicing device may be integrated into an electronic device (such as a smart phone, a tablet computer, a computer, a wearable device, etc.).
- the operating system of the device or the electronic device integrated with the image splicing device may be an iOS system, an Android system, a Windows system, or another operating system, which is not limited herein.
- in summary, the image sequence obtained by continuous shooting is acquired, and the moving target in the dynamic background in the image sequence is detected based on the optical flow method; the first image and the second image to be spliced are then extracted from the image sequence, image processing is performed on the first image and the second image to obtain the first processed image and the second processed image, and, based on the result of the detecting, image fusion is performed on the first processed image and the second processed image to obtain a stitched image. Ghosting is mostly caused by the moving target in the images to be stitched, and, compared with schemes that judge the moving target in the image based on pixel differences, the optical flow method can effectively detect the moving target even in a complicated scene.
- therefore, by detecting the moving target in the dynamic background in the image sequence with the optical flow method and then performing image fusion on the first processed image and the second processed image based on the detected result, the solution of the present application effectively improves the ghost-elimination effect.
- the electronic device in the embodiment of the present application includes: a memory 401, one or more processors 402 (only one is shown in FIG. 4), and a computer program stored in the memory 401 and operable on the processor 402.
- the memory 401 is used to store software programs and modules
- the processor 402 executes various functional applications and data processing by running the software programs and modules stored in the memory 401.
- the processor 402 implements the following steps by running the above computer program stored in the memory 401:
- the first processed image and the second processed image are image-fused to obtain a stitched image.
- a moving target exists in a dynamic background in the image sequence
- Extracting the first image and the second image to be spliced from the image sequence is:
- the performing image fusion on the first processed image and the second processed image based on the result of the detecting includes:
- the performing image processing on the first image and the second image includes:
- the image-registered first image and the second image are subjected to morphological processing to refine the moving object in the first image and the second image.
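For illustration only, the morphological refinement of the detected moving-object mask can be sketched with plain NumPy: closing fills small holes inside the object, and opening removes speckle noise. The 3x3 structuring element and the example sizes are assumptions, not taken from the application.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation via the union of shifted copies (no SciPy)."""
    padded = np.pad(mask, 1)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= padded[1 + dy:1 + dy + mask.shape[0],
                          1 + dx:1 + dx + mask.shape[1]]
    return out

def erode(mask):
    """3x3 binary erosion: complement of dilating the complement."""
    return ~dilate(~mask)

def refine(mask):
    """Closing (dilate then erode) fills small holes; opening
    (erode then dilate) removes isolated noise pixels."""
    closed = erode(dilate(mask))
    return dilate(erode(closed))

mask = np.zeros((12, 12), dtype=bool)
mask[3:9, 3:9] = True      # the detected moving object
mask[5, 6] = False         # a small hole inside it
mask[0, 11] = True         # an isolated noise pixel
refined = refine(mask)
print(refined.sum())  # 36: hole filled, noise removed
```
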
- the foregoing electronic device may further include: one or more input devices 403 (only one is shown in FIG. 4) and one or more output devices 404 (only one is shown in FIG. 4).
- the memory 401, the processor 402, the input device 403, and the output device 404 are connected by a bus 405.
- the so-called processor 402 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
- the general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
- the input device 403 can include a keyboard, a touchpad, a fingerprint sensor (for collecting fingerprint information of the user and direction information of the fingerprint), a microphone, etc.
- the output device 404 can include a display, a speaker, and the like.
- Memory 401 can include read-only memory and random access memory and provides instructions and data to the processor 402. Some or all of the memory 401 may also include non-volatile random access memory. For example, the memory 401 can also store information on the device type.
- each functional unit and module in the foregoing system may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
- the specific names of the respective functional units and modules are only for the purpose of facilitating mutual differentiation, and are not intended to limit the scope of protection of the present application.
- the disclosed apparatus and method may be implemented in other manners.
- the system embodiments described above are merely illustrative.
- the division of the above modules or units is only a logical function division.
- in actual implementation there may be another division manner; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed.
- the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in electrical, mechanical or other form.
- the units described above as separate components may or may not be physically separated.
- the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
- the above-described integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such an understanding, all or part of the processes in the methods of the above embodiments of the present application may also be completed by a computer program instructing related hardware.
- the computer program may be stored in a computer readable storage medium.
- when the computer program is executed by a processor, the steps of the various method embodiments described above may be implemented.
- the above computer program comprises computer program code
- the computer program code may be in the form of source code, object code, an executable file, or some intermediate form.
- the computer readable medium may include any entity or device capable of carrying the above computer program code, such as a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, or a software distribution medium.
- the contents of the above computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in jurisdictions. For example, in some jurisdictions, according to legislation and patent practice, computer
Abstract
Description
Claims (10)
- An image stitching method, comprising: acquiring an image sequence obtained by continuous shooting; detecting a moving object in a dynamic background in the image sequence based on an optical flow method; extracting a first image and a second image to be stitched from the image sequence; performing image processing on the first image and the second image to obtain a first processed image and a second processed image, wherein the image processing comprises image registration; and, based on the result of the detection, fusing the first processed image and the second processed image to obtain a stitched image.
- The image stitching method according to claim 1, wherein a moving object exists in the dynamic background of the image sequence; and extracting the first image and the second image to be stitched from the image sequence comprises: extracting the first image and the second image to be stitched from the image sequence such that, in the first image, the region whose background overlaps that of the second image contains a complete image of the moving object.
- The image stitching method according to claim 2, wherein fusing the first processed image and the second processed image based on the result of the detection comprises: determining, based on the result of the detection, a first region in the first processed image and a second region in the second processed image, wherein the backgrounds of the first region and the second region overlap; when no image of the moving object exists in the second region, replacing the image portion corresponding to the moving object in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving object with the portion of the second processed image other than the background portion at the corresponding position; when a partial image of the moving object exists in the second region, replacing the image portion corresponding to the moving object in the first region with the background portion at the corresponding position in the second region, and fusing the portion of the first processed image other than the image portion corresponding to the moving object with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving object; and when a complete image of the moving object exists in the second region, replacing the image portion corresponding to the moving object in the first region with the background portion at the corresponding position in the second region and fusing the portion of the first processed image other than the image portion corresponding to the moving object with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving object, or replacing the image portion corresponding to the moving object in the second region with the background portion at the corresponding position in the first region and fusing the portion of the second processed image other than the image portion corresponding to the moving object with the portion of the first processed image other than the background portion at the corresponding position and the complete image of the moving object.
- The image stitching method according to any one of claims 1 to 3, wherein the image processing further comprises morphological processing; and performing image processing on the first image and the second image comprises: performing image registration on the first image and the second image; and, based on the result of the detection, performing morphological processing on the registered first image and second image to refine the moving object in the first image and the second image.
- An image stitching apparatus, comprising: an acquiring unit, configured to acquire an image sequence obtained by continuous shooting; an optical flow detecting unit, configured to detect a moving object in a dynamic background in the image sequence based on an optical flow method; an extracting unit, configured to extract a first image and a second image to be stitched from the image sequence; an image processing unit, configured to perform image processing on the first image and the second image to obtain a first processed image and a second processed image, wherein the image processing comprises image registration; and an image fusion unit, configured to fuse the first processed image and the second processed image based on the result of the detection by the optical flow detecting unit to obtain a stitched image.
- The image stitching apparatus according to claim 5, wherein a moving object exists in the dynamic background of the image sequence; and the extracting unit is specifically configured to extract the first image and the second image to be stitched from the image sequence such that, in the first image, the region whose background overlaps that of the second image contains a complete image of the moving object.
- The image stitching apparatus according to claim 6, wherein the image fusion unit specifically comprises: a determining unit, configured to determine, based on the result of the detection, a first region in the first processed image and a second region in the second processed image, wherein the backgrounds of the first region and the second region overlap; and a sub-fusion unit, configured to: when no image of the moving object exists in the second region, replace the image portion corresponding to the moving object in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving object with the portion of the second processed image other than the background portion at the corresponding position; when a partial image of the moving object exists in the second region, replace the image portion corresponding to the moving object in the first region with the background portion at the corresponding position in the second region, and fuse the portion of the first processed image other than the image portion corresponding to the moving object with the portion of the second processed image other than the background portion at the corresponding position and the partial image of the moving object; and when a complete image of the moving object exists in the second region, replace the image portion corresponding to the moving object in the first region with the background portion at the corresponding position in the second region and fuse the portion of the first processed image other than the image portion corresponding to the moving object with the portion of the second processed image other than the background portion at the corresponding position and the complete image of the moving object, or replace the image portion corresponding to the moving object in the second region with the background portion at the corresponding position in the first region and fuse the portion of the second processed image other than the image portion corresponding to the moving object with the portion of the first processed image other than the background portion at the corresponding position and the complete image of the moving object.
- The image stitching apparatus according to any one of claims 5 to 7, wherein the image processing further comprises morphological processing; and the image processing unit is specifically configured to: perform image registration on the first image and the second image; and, based on the result of the detection, perform morphological processing on the registered first image and second image to refine the moving object in the first image and the second image.
- An electronic device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 4.
- A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 4.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711431274.6A CN108230245B (en) | 2017-12-26 | 2017-12-26 | Image splicing method, image splicing device and electronic equipment |
CN201711431274.6 | 2017-12-26 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019127269A1 true WO2019127269A1 (en) | 2019-07-04 |
Family
ID=62648814
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2017/119565 WO2019127269A1 (en) | 2017-12-26 | 2017-12-28 | Image stitching method, image stitching device and electronic device |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN108230245B (en) |
WO (1) | WO2019127269A1 (en) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108989751B (en) * | 2018-07-17 | 2020-07-14 | 上海交通大学 | Video splicing method based on optical flow |
CN111757146B (en) * | 2019-03-29 | 2022-11-15 | 杭州萤石软件有限公司 | Method, system and storage medium for video splicing |
CN110298826A (en) * | 2019-06-18 | 2019-10-01 | 合肥联宝信息技术有限公司 | A kind of image processing method and device |
CN110619652B (en) * | 2019-08-19 | 2022-03-18 | 浙江大学 | Image registration ghost elimination method based on optical flow mapping repeated area detection |
CN110501344A (en) * | 2019-08-30 | 2019-11-26 | 无锡先导智能装备股份有限公司 | Battery material online test method |
TWI749365B (en) | 2019-09-06 | 2021-12-11 | 瑞昱半導體股份有限公司 | Motion image integration method and motion image integration system |
CN112511764A (en) * | 2019-09-16 | 2021-03-16 | 瑞昱半导体股份有限公司 | Mobile image integration method and mobile image integration system |
CN110766611A (en) * | 2019-10-31 | 2020-02-07 | 北京沃东天骏信息技术有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN111047628B (en) * | 2019-12-16 | 2020-10-02 | 中国水利水电科学研究院 | Night light satellite image registration method and device |
WO2021168755A1 (en) * | 2020-02-27 | 2021-09-02 | Oppo广东移动通信有限公司 | Image processing method and apparatus, and device |
CN111429354B (en) * | 2020-03-27 | 2022-01-21 | 贝壳找房(北京)科技有限公司 | Image splicing method and device, panorama splicing method and device, storage medium and electronic equipment |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010025309A1 (en) * | 2008-08-28 | 2010-03-04 | Zoran Corporation | Robust fast panorama stitching in mobile phones or cameras |
CN101859433A (en) * | 2009-04-10 | 2010-10-13 | 日电(中国)有限公司 | Image mosaic device and method |
CN103366351A (en) * | 2012-03-29 | 2013-10-23 | 华晶科技股份有限公司 | Method for generating panoramic image and image acquisition device thereof |
CN106296570A (en) * | 2016-07-28 | 2017-01-04 | 北京小米移动软件有限公司 | Image processing method and device |
CN106709868A (en) * | 2016-12-14 | 2017-05-24 | 云南电网有限责任公司电力科学研究院 | Image stitching method and apparatus |
CN106851045A (en) * | 2015-12-07 | 2017-06-13 | 北京航天长峰科技工业集团有限公司 | A kind of image mosaic overlapping region moving target processing method |
CN107133972A (en) * | 2017-05-11 | 2017-09-05 | 南宁市正祥科技有限公司 | A kind of video moving object detection method |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100455266C (en) * | 2005-03-29 | 2009-01-28 | 深圳迈瑞生物医疗电子股份有限公司 | Broad image processing method |
JP5510012B2 (en) * | 2010-04-09 | 2014-06-04 | ソニー株式会社 | Image processing apparatus and method, and program |
CN101901481B (en) * | 2010-08-11 | 2012-11-21 | 深圳市蓝韵实业有限公司 | Image mosaic method |
JP2012075088A (en) * | 2010-09-03 | 2012-04-12 | Pentax Ricoh Imaging Co Ltd | Image processing system and image processing method |
CN103581562A (en) * | 2013-11-19 | 2014-02-12 | 宇龙计算机通信科技(深圳)有限公司 | Panoramic shooting method and panoramic shooting device |
CN104361584B (en) * | 2014-10-29 | 2017-09-26 | 中国科学院深圳先进技术研究院 | The detection method and detecting system of a kind of display foreground |
CN106909911B (en) * | 2017-03-09 | 2020-07-10 | Oppo广东移动通信有限公司 | Image processing method, image processing apparatus, and electronic apparatus |
- 2017-12-26 CN CN201711431274.6A patent/CN108230245B/en active Active
- 2017-12-28 WO PCT/CN2017/119565 patent/WO2019127269A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN108230245A (en) | 2018-06-29 |
CN108230245B (en) | 2021-06-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019127269A1 (en) | Image stitching method, image stitching device and electronic device | |
CN111145238B (en) | Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment | |
Ren et al. | Video deblurring via semantic segmentation and pixel-wise non-linear kernel | |
WO2018214365A1 (en) | Image correction method, apparatus, device, and system, camera device, and display device | |
US9325899B1 (en) | Image capturing device and digital zooming method thereof | |
WO2020259271A1 (en) | Image distortion correction method and apparatus | |
US10915998B2 (en) | Image processing method and device | |
CN110717942B (en) | Image processing method and device, electronic equipment and computer readable storage medium | |
US20160301868A1 (en) | Automated generation of panning shots | |
Lee et al. | Simultaneous localization, mapping and deblurring | |
WO2017088533A1 (en) | Method and apparatus for merging images | |
CN105957015A (en) | Thread bucket interior wall image 360 DEG panorama mosaicing method and system | |
CN107798702B (en) | Real-time image superposition method and device for augmented reality | |
Kim et al. | Fisheye lens camera based surveillance system for wide field of view monitoring | |
WO2017091927A1 (en) | Image processing method and dual-camera system | |
US20190340732A1 (en) | Picture Processing Method and Apparatus | |
US11620730B2 (en) | Method for merging multiple images and post-processing of panorama | |
WO2021063245A1 (en) | Image processing method and image processing apparatus, and electronic device applying same | |
CN110766706A (en) | Image fusion method and device, terminal equipment and storage medium | |
WO2022160857A1 (en) | Image processing method and apparatus, and computer-readable storage medium and electronic device | |
TW201947536A (en) | Image processing method and image processing device | |
TW201203130A (en) | System for correcting image and correcting method thereof | |
CN114926514B (en) | Registration method and device of event image and RGB image | |
CN115883988A (en) | Video image splicing method and system, electronic equipment and storage medium | |
US11734877B2 (en) | Method and device for restoring image obtained from array camera |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 17936852 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 17936852 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 09.11.2020) |