CN106683051A - Image stitching method and apparatus - Google Patents

Image stitching method and apparatus

Info

Publication number
CN106683051A
CN106683051A (application CN201611248742.1A)
Authority
CN
China
Prior art keywords
target
image
moving target
area
moving
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611248742.1A
Other languages
Chinese (zh)
Inventor
王琳
王西颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd
Priority to CN201611248742.1A priority Critical patent/CN106683051A/en
Publication of CN106683051A publication Critical patent/CN106683051A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/0068 Geometric image transformation in the plane of the image for image registration, e.g. elastic snapping
    • G06T3/0081 Geometric image transformation in the plane of the image for image registration, by elastic snapping
    • G PHYSICS
    • G06 COMPUTING; CALCULATING; COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformation in the plane of the image
    • G06T3/40 Scaling the whole image or part thereof
    • G06T3/4038 Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images

Abstract

The embodiment of the invention provides an image stitching method and apparatus. The method includes the following steps: obtaining multiple frames of images to be stitched; determining the moving targets of the multiple frames of images to be stitched and the background areas other than the moving targets; determining a background model of all the images to be stitched on the basis of each background area; calculating the overlap-area ratio of every two adjacent moving targets, and selecting the moving targets whose overlap-area ratio meets a preset condition as target moving regions; and stitching all the target moving regions and the background model into a target image. The stitched image obtained through the embodiment clearly presents the full motion of the moving targets, improving the image stitching effect and the user experience.

Description

Image stitching method and apparatus
Technical field
The present invention relates to the technical field of image processing, and in particular to an image stitching method and apparatus.
Background art
With the popularization of mobile-terminal cameras, people increasingly enjoy sharing pictures on social networking sites, and camera functions continue to multiply. A user sometimes wants to record the whole course of an action, or a single moment, in one picture. Fig. 1, which records the course of a skier's run, is such a picture; image stitching technology is needed to obtain it.
Image stitching, an important branch of image processing, refers to the process of taking a series of mutually overlapping images of the same scene captured by a camera and, according to the features of the different images, matching and aligning them with various processing algorithms so that they are stitched into one wide-angle image.
In the course of making the present invention, the inventors found at least the following problems in the prior art:
When existing image stitching technology stitches images shot in the same scene into a single picture, a slowly moving object leaves large overlaps between adjacent images and appears blurred in the result. Without any selection method, the stitched image can record the full course of the target's motion and widen the viewing angle, but the overlapping parts of the multi-frame moving target are also rendered in the picture; a clear stitched image of the moving target cannot be obtained, and the stitching effect is poor.
Summary of the invention
The purpose of the embodiments of the present invention is to provide an image stitching method and apparatus that can obtain a clear stitched image of a moving target and thus improve the stitching effect. The concrete technical scheme is as follows:
The invention discloses an image stitching method, including:
obtaining multiple frames of images to be stitched;
determining the moving target of the multiple frames of images to be stitched and the background area other than the moving target;
determining a background model of all the images to be stitched from each frame's background area;
calculating the overlap-area ratio of the moving targets of every two adjacent frames, and selecting the moving targets whose overlap-area ratio meets a preset condition as target moving regions;
stitching all the target moving regions and the background model into a target image.
Specifically, calculating the overlap-area ratio of the moving targets of two adjacent frames includes:
calculating the overlap area of the two adjacent frames' moving targets;
calculating the total area of the two adjacent frames' moving targets;
calculating the overlap-area ratio, which is the ratio of the overlap area to the merged area of the two adjacent frames' moving targets, where the merged area is the difference between the total area and the overlap area.
Specifically, selecting the moving targets whose overlap-area ratio meets the preset condition as target moving regions includes:
selecting as target moving regions the moving targets whose overlap-area ratio is less than or equal to a preset threshold range.
Specifically, determining the moving target of the multiple frames of images to be stitched and the background area other than the moving target includes:
determining, by frame differencing, the moving target of the multiple frames of images to be stitched and the background area other than the moving target.
Specifically, determining the background model of all the images to be stitched from each frame's background area includes:
applying frame differencing to each frame's background area to obtain the background model of all the images to be stitched.
Specifically, stitching all the target moving regions and the background model into the target image includes:
when stitching all the target moving regions onto the background model, blurring the edges of the seams by feathering to obtain the target image.
Specifically, obtaining the multiple frames of images to be stitched includes:
obtaining multiple frames of images to be stitched that share the same background region.
The invention also discloses an image stitching apparatus, including:
an acquisition module, for obtaining multiple frames of images to be stitched;
an image processing module, for determining the moving target of the multiple frames of images to be stitched and the background area other than the moving target, and for determining a background model of all the images to be stitched from each frame's background area;
a computing module, for calculating the overlap-area ratio of the moving targets of two adjacent frames;
a selection module, for selecting the moving targets whose overlap-area ratio meets a preset condition as target moving regions;
a stitching module, for stitching all the target moving regions and the background model into a target image.
Specifically, the computing module includes:
a first calculating submodule, for calculating the overlap area of two adjacent frames' moving targets;
a second calculating submodule, for calculating the total area of the two adjacent frames' moving targets;
a third calculating submodule, for calculating the ratio of the overlap area to the merged area of the two adjacent frames' moving targets, where the merged area is the difference between the total area and the overlap area.
Specifically, the selection module is configured to select as target moving regions the moving targets whose overlap-area ratio is less than or equal to a preset threshold range.
Specifically, the image processing module includes:
a first image processing submodule, for determining, by frame differencing, the moving target of the multiple frames of images to be stitched and the background area other than the moving target;
a second image processing submodule, for applying frame differencing to each frame's background area to obtain the background model of all the images to be stitched.
Specifically, the stitching module is configured to blur the edges of the seams by feathering when stitching all the target moving regions onto the background model, obtaining the target image.
In the image stitching method and apparatus provided by the embodiments of the present invention, the overlap-area ratio of two frames' moving targets is calculated, the moving targets whose ratio meets a preset condition are selected as target moving regions, and all the target moving regions are stitched onto the background model. The stitched image thus obtained takes the overlap between images into account: by selecting only the moving targets whose overlap-area ratio meets the preset condition to form the stitched image, it solves the prior-art problems of heavy overlap and blurring, yields a clear stitched image of the moving target, and improves the stitching effect. Of course, any product or method embodying the invention need not achieve all of the above advantages simultaneously.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the invention, and those of ordinary skill in the art can derive other drawings from them without creative effort.
Fig. 1 is a stitching effect diagram in the prior art;
Fig. 2 is a flowchart of the image stitching method of an embodiment of the present invention;
Fig. 3 is a flowchart of calculating the overlap-area ratio of two adjacent frames' moving targets in an embodiment of the present invention;
Fig. 4 is a flowchart of selecting the moving targets whose overlap-area ratio meets the preset condition as target moving regions in an embodiment of the present invention;
Fig. 5 is a structural diagram of the image stitching apparatus of an embodiment of the present invention;
Fig. 6 is a structural diagram of the computing module of an embodiment of the present invention;
Fig. 7 is a structural diagram of the image processing module of an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art on the basis of the embodiments of the present invention without creative effort fall within the protection scope of the invention.
As mobile-terminal cameras spread and camera functions multiply, people sometimes want to record the whole course of an action, or a single moment, in one picture such as that shown in Fig. 1, which records the course of a skier's run; image stitching technology is needed to obtain it.
When existing image stitching technology stitches images shot in the same scene into a single picture, a slowly moving object leaves large overlaps between the moving targets of adjacent frames, and the object appears blurred in the stitched image. Without any selection method, the stitched image can record the full course of the object's motion and widen the viewing angle, but the overlapping parts of the multi-frame moving object are also rendered in the picture; a clear stitched image of the moving target cannot be obtained, and the stitching effect is poor.
To solve the prior-art problems of heavy overlap and blurring in stitched images, to obtain a clear stitched image of the moving target, and to improve the stitching effect, the invention provides an image stitching method and apparatus.
The image stitching method provided by the present invention is described in detail first.
Fig. 2 is a flowchart of the image stitching method provided by the present invention, which comprises the following steps:
Step 201: obtain multiple frames of images to be stitched.
In this step, to realize the image stitching method provided by the present invention, multiple frames of images to be stitched must be obtained in advance, so that they can be processed into one stitched image presenting the full motion of the moving target. When obtaining the images, their background areas fall into two cases: identical backgrounds and differing backgrounds. When the backgrounds are identical, the background's influence on the subsequent processing of the single moving object is small. When the backgrounds are not entirely identical, the background of the images changes with time and position, and the background area must later be obtained by some image processing technique. When the backgrounds differ but share an overlapping region, that overlapping region can be defined as the background area, or the background area the user needs can be obtained; image processing techniques can be used to extract all the overlapping regions to form the background area.
Specifically, when obtaining the multiple frames of images to be stitched, multiple frames sharing the same background region are obtained.
In practice, to obtain a stitched image that highlights the moving target, multiple frames with the same background region are easily obtained by continuous shooting in the same scene. For example, during burst shooting with a mobile-phone camera, keeping the phone still will yield images with the same background region.
Step 202: determine the moving target of the multiple frames of images to be stitched and the background area other than the moving target.
In this step, to obtain the motion trajectory of the desired moving target, the multiple frames must be processed so that the moving target and the background area other than it are separated, which facilitates the subsequent stitching of the moving target. The moving target can vary with the user's needs: it can be a person or an object in motion, a single object, or two or more objects. The embodiments of the present invention use a single moving target as an example for clarity of explanation. The part other than the moving target is the background area; the background area and the moving target together constitute one image to be stitched.
The moving target and the background area of an image to be stitched can be determined by various methods, for example matting. Matting, one of the most frequently used operations in image processing, is a method of accurately extracting the needed part of an image from the picture; specific matting methods include lasso matting, magic-wand matting, path matting, channel matting, and so on. All of these can separate the moving target of an image to be stitched from the background area, but they are interactive and require manual work to complete. For the present invention, from the perspective of improving user experience, an automated frame-differencing method is a better way to obtain the moving target and the background area.
Specifically, the moving target of the multiple frames of images to be stitched and the background area other than the moving target are determined by frame differencing.
Frame differencing is a method of obtaining the moving target by taking the difference of two or three temporally consecutive frames in an image sequence. The moving target differs obviously between the two frames, so subtracting them yields the absolute value of the brightness difference; judging whether this absolute value exceeds a certain threshold analyses the motion characteristics of the sequence and determines whether there is object motion in it. The threshold is usually chosen by a user-defined method. For example, let the first and second frames of the images to be stitched be f1 and f2, with the grey values of corresponding pixels denoted f1(x, y) and f2(x, y); subtracting the grey values of corresponding pixels and taking the absolute value yields a difference image D2. A preset threshold T is set, chosen freely according to user or industry needs. In the embodiment of the present invention, the difference image D2 is binarized pixel by pixel into a binary image D2', in which points with grey value 255 are moving-target points and points with grey value 0 are background points. Applying frame differencing to all the images to be stitched in this way yields the moving target and the background area other than it. When determining the moving target and background area by frame differencing, a background area that meets the conditions can be flexibly selected by setting the threshold.
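The two-frame differencing and binarization described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: frames are assumed to be small nested lists of 8-bit grey values, and the threshold value is an arbitrary choice.

```python
# Sketch of two-frame differencing: pixels where |f1 - f2| exceeds a preset
# threshold T are marked 255 (moving-target points), the rest 0 (background).
def frame_difference(f1, f2, threshold=30):
    """Return the binarized difference image D2' of two grey frames."""
    mask = []
    for row1, row2 in zip(f1, f2):
        mask.append([255 if abs(a - b) > threshold else 0
                     for a, b in zip(row1, row2)])
    return mask

# Two tiny 2x3 "frames": the bright moving target shifts one pixel right.
f1 = [[10, 200, 10], [10, 10, 10]]
f2 = [[10, 10, 200], [10, 10, 10]]
print(frame_difference(f1, f2))
# -> [[0, 255, 255], [0, 0, 0]]
```

Both the old and new positions of the target light up in the mask, which is why the patent separates the target from the background per frame before stitching.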
Step 203: determine the background model of all the images to be stitched from each frame's background area.
After the moving target and the background area are separated in step 202, every background area contains a blank space where the target was removed. To obtain one clean and complete background-model image, all the background areas must be processed; this benefits the completeness of the stitching in subsequent operations. The background model can be determined by different methods; a common one is statistics, characterizing the background with statistical values to establish the background model. The background model can also be determined from the result of frame differencing, and the whole background model can then be built incrementally by repeatedly applying frame differencing.
Specifically, frame differencing is applied to each frame's background area to obtain the background model of all the images to be stitched.
When processing all the background areas, frame differencing can be used; it is in effect a kind of statistical filtering, taking the difference of every two adjacent frames among the collected frames. An example follows.
Suppose that in practice 10 frames of images to be stitched are chosen, with f1, f2, f3, ..., f10 denoting the RGB (primary colour model) values of each frame. The background model of the 10 frames is obtained as follows: the difference of every two adjacent frames is taken, giving 9 difference images; in these 9 images, the regions whose difference values are small or zero are background areas. There are various ways of generating the RGB vector value of each pixel of the background model from the RGB values of the same pixel across these background areas; averaging is the most common. Multi-frame averaging simply adds the RGB values of the collected frames and takes their mean.
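The multi-frame averaging just described can be sketched as below. This is an illustrative toy, assuming each "frame" is a nested list of single-channel values (a real implementation would average per RGB channel and skip pixels occluded by the moving target).

```python
# Sketch of building the background model by multi-frame averaging:
# each pixel of the model is the mean of the co-located pixels of all frames.
def mean_background(frames):
    """Average co-located pixel values over all frames."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    model = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            model[y][x] = sum(f[y][x] for f in frames) / n
    return model

frames = [[[10, 20], [30, 40]],
          [[14, 20], [30, 44]]]
print(mean_background(frames))
# -> [[12.0, 20.0], [30.0, 42.0]]
```

Averaging suppresses the small residual differences between background areas while filling the blanks left where each frame's moving target was cut out.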
Step 204: calculate the overlap-area ratio of the moving targets of two adjacent frames, and select the moving targets whose overlap-area ratio meets a preset condition as target moving regions.
In this step, the moving targets already determined from the multiple frames are processed to calculate the overlap-area ratio of every two adjacent frames' moving targets. To select suitable moving targets, this ratio can be determined by different methods: the area can be computed by matting, for example, or with computer image processing software, calculating the ratio programmatically. After the overlap-area ratio is obtained, the moving targets meeting the preset condition are taken as target moving regions; the preset condition can vary with the user's needs.
Specifically, the moving targets whose overlap-area ratio is less than or equal to a preset threshold range are selected as target moving regions.
The preset threshold range can be set freely according to user or industry needs. In the embodiment of the present invention it is 3% to 5%, a range that meets the embodiment's requirements: when the overlap-area ratio is less than or equal to 5%, the stitching effect is good; when it is less than or equal to 3%, the effect is better; within the 3%-5% range the resulting stitched image can meet the user's needs.
Step 205: stitch all the target moving regions and the background model into the target image.
To obtain one complete, clear stitched image containing the full motion of the moving target, all the target moving regions must be stitched to the corresponding positions of the background model. During stitching, to obtain a more natural result, image stitching techniques can be used to make the edge transitions smoother.
Specifically, when stitching all the target moving regions onto the background model, the edges of the seams are blurred by feathering to obtain the target image.
When all the target moving regions are stitched onto the background model, the seam edges may be unsmooth, with overly hard lines. To obtain a more naturally smooth stitching result, the seam edges can be blurred by feathering, producing a translucent transition effect. Methods such as adding nodes and layer masks can also be used to blur the seam edges.
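Feathering at a seam can be sketched in one dimension as a linear cross-fade. This is an assumption-laden toy, not the patent's operation: `left` and `right` stand for the two sides meeting at a mid-row seam, and the feather width is arbitrary.

```python
# Sketch of feathering: within a band of `feather` pixels around the seam,
# the two sides are linearly cross-faded instead of hard-switched.
def feather_row(left, right, feather=4):
    """Blend two equal-length rows with a linear ramp centred on the seam."""
    n = len(left)
    seam = n // 2
    out = []
    for x in range(n):
        # alpha ramps from 1 (pure left) to 0 (pure right) across the band
        alpha = min(1.0, max(0.0, 0.5 + (seam - x) / feather))
        out.append(alpha * left[x] + (1 - alpha) * right[x])
    return out

row = feather_row([100] * 8, [200] * 8, feather=4)
print([round(v) for v in row])
# -> [100, 100, 100, 125, 150, 175, 200, 200]
```

Instead of a hard 100-to-200 jump at the seam, the values step gradually, which is the "translucent transition effect" the description mentions.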
Fig. 3 is a flowchart of calculating the overlap-area ratio of two adjacent frames' moving targets in the present invention.
Step 301: calculate the overlap area of the two adjacent frames' moving targets.
In this step, there are various ways to calculate the overlap area. For example, the pixel count of the overlapping part of the two moving targets can be counted and used as a measure of the overlap area. The area of a moving target can be computed automatically with digital image processing techniques: each pixel represents a certain real area, so if the pixel count of the overlapping part of frames f1 and f2 is a, the real area represented by a is the overlap area of f1 and f2.
The overlap area can also be calculated by image segmentation and edge detection, first detecting and marking the contour of each figure and then rescanning the new image to compute the area; or the moving targets can be represented by distinct grey values, directly scanning the image and incrementing a counter to derive the overlap area of the two adjacent frames' moving targets.
Step 302: calculate the total area of the two adjacent frames' moving targets.
In this step, the method is the same as in step 301: the total pixel count of the two adjacent frames' moving targets is obtained by adding the pixel counts of the two frames. Each pixel represents a certain real area, so if the pixel counts of frames f1 and f2 are A and B respectively, the real area represented by A plus B is the total area of f1 and f2.
As in step 301, the total area can also be calculated by image segmentation, edge detection, or grey-value counting.
Step 303: calculate the ratio of the overlap area of the two adjacent frames' moving targets to their merged area.
The merged area of the two adjacent frames' moving targets is the difference between their total area and their overlap area; taking the difference of the total area from step 302 and the overlap area from step 301 gives the merged area.
Dividing the overlap area by the merged area gives the overlap-area ratio of the two adjacent frames' moving targets. This ratio is calculated in order to select suitable target moving regions: as described in step 204, the ratio affects the stitching effect of the image, and selecting moving targets that meet the preset condition improves it.
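Steps 301-303 amount to an intersection-over-union computation and can be sketched as follows, assuming the two moving targets are given as binary masks (1 = target pixel); the masks here are illustrative.

```python
# Sketch of steps 301-303: overlap area = pixels set in both masks,
# total area = sum of both pixel counts, merged (union) area = total - overlap,
# and the overlap-area ratio = overlap / merged.
def overlap_ratio(mask1, mask2):
    overlap = sum(a and b for r1, r2 in zip(mask1, mask2)
                  for a, b in zip(r1, r2))              # step 301
    total = sum(map(sum, mask1)) + sum(map(sum, mask2))  # step 302
    merged = total - overlap                             # merged area
    return overlap / merged                              # step 303

m1 = [[1, 1, 0], [1, 1, 0]]
m2 = [[0, 1, 1], [0, 1, 1]]
print(overlap_ratio(m1, m2))  # overlap=2, total=8, merged=6 -> 0.333...
```

Note that "merged area" is the union, so the ratio equals 1 only when the two targets coincide exactly and 0 when they are disjoint, matching the thresholding in step 204.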
Fig. 4 is a flowchart of selecting the target moving regions in the present invention.
Step 401: choose two adjacent frames' moving targets.
After the moving target of the multiple frames of images to be stitched and the background area other than the moving target are determined, two adjacent frames' moving targets are chosen for calculation.
Step 402: calculate the overlap-area ratio of the two adjacent frames' moving targets.
This step calculates the overlap-area ratio according to the flowchart shown in Fig. 3.
Step 403: judge whether the overlap-area ratio meets the preset condition.
The calculated overlap-area ratio is compared with the preset threshold range of the preset condition. This step further includes:
Step 403a: if the overlap-area ratio meets the preset condition, choose the corresponding two frames' moving targets as target moving regions.
Specifically, the moving targets whose overlap-area ratio is less than or equal to the preset threshold range are chosen as target moving regions.
Step 403b: if the overlap-area ratio does not meet the preset condition, eliminate either one of the corresponding two frames' moving targets and choose another adjacent moving target.
For moving targets whose overlap-area ratio does not meet the preset condition, either frame can be eliminated; an adjacent remaining frame's moving target is then chosen, and steps 402 and 403 are repeated until all the selected moving targets meet the preset condition and serve as target moving regions.
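The Fig. 4 loop can be sketched as a greedy scan over the frame sequence. Everything here is an illustrative assumption: `masks` stands for the per-frame moving targets in time order, `ratio_fn` for the step-301/303 ratio (stubbed below from 1-D positions purely for brevity), and the 5% bound echoes the embodiment's threshold range.

```python
# Sketch of steps 401-403: keep a frame's target only if its overlap-area
# ratio with the last kept target is within the threshold; otherwise drop it
# (step 403b) and try the next frame.
def select_targets(masks, ratio_fn, max_ratio=0.05):
    kept = [masks[0]]
    for m in masks[1:]:
        if ratio_fn(kept[-1], m) <= max_ratio:  # step 403a: keep
            kept.append(m)
        # step 403b: otherwise eliminate m and continue with the next frame
    return kept

# Toy stand-in: each "mask" is an x-position; nearby positions overlap a lot.
ratio = lambda a, b: max(0.0, 1.0 - abs(a - b) / 10)
print(select_targets([0, 1, 2, 12, 23], ratio, max_ratio=0.05))
# -> [0, 12, 23]
```

Frames 1 and 2 overlap the first target too much and are discarded, so the stitched image keeps only well-separated poses of the moving target.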
The image stitching method disclosed by the present invention calculates the overlap-area ratio of two frames' moving targets, selects the moving targets that meet the preset condition as target moving regions, and stitches all the target moving regions into the background model. A stitched image that presents the full motion of the moving target can thus be obtained, with small overlap between target moving regions. By taking into account the overlap between the moving targets in the images and selecting only the moving targets whose overlap-area ratio meets the preset condition, the method solves the prior-art problems of heavy overlap and blurring in stitched images, yields a clear stitched image of the moving target, and improves the stitching effect.
The present invention also provides an image stitching apparatus, described in detail below with reference to the drawings. Fig. 5 is a structural diagram of the image stitching apparatus of an embodiment of the present invention; the apparatus includes: an acquisition module 501, an image processing module 502, a computing module 503, a selection module 504, and a stitching module 505.
The acquisition module 501 is configured to obtain multiple frames of images to be stitched.
The image processing module 502 is configured to determine the moving target of the multiple frames of images to be stitched and the background area other than the moving target, and to determine the background model of all the images to be stitched from each frame's background area.
The computing module 503 is configured to calculate the overlap-area ratio of two adjacent frames' moving targets.
The selection module 504 is configured to select the moving targets whose overlap-area ratio meets the preset condition as target moving regions.
Specifically, the selection module 504 is configured to select as target moving regions the moving targets whose overlap-area ratio is less than or equal to the preset threshold range.
The stitching module 505 is configured to stitch all the target moving regions and the background model into the target image.
The image stitching apparatus disclosed by the present invention calculates the overlapping area ratio value of the moving targets in two frames, selects the moving targets satisfying the preset condition as target moving regions, and stitches all the target moving regions into the background model. Not only is a stitched image representing the full motion of the moving target obtained, but the overlapping area between target moving regions is also small. Considering the prior-art problem that overlap between images easily blurs the stitching result, forming the stitched image only from moving targets whose overlapping area ratio value falls within the preset condition yields a single clear stitched image and improves the image stitching effect.
It should be noted that the apparatus of the embodiment of the present invention applies the above-described image stitching method; all embodiments of the image stitching method therefore apply to the apparatus and achieve the same or similar beneficial effects.
Fig. 6 is a structural diagram of the computing module according to an embodiment of the present invention. The computing module includes: a first calculating submodule 601, a second calculating submodule 602, and a third calculating submodule 603.
The first calculating submodule 601 is configured to calculate the overlapping area of the moving targets in two adjacent frames.
The second calculating submodule 602 is configured to calculate the total area of the moving targets in the two adjacent frames.
The third calculating submodule 603 is configured to calculate the ratio of the overlapping area of the moving targets in the two adjacent frames to the merged area of the moving targets in the two adjacent frames, wherein the merged area of the moving targets in the two adjacent frames is the difference between the total area of the moving targets in the two adjacent frames and their overlapping area.
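The three calculating submodules can be sketched as follows (an illustrative numpy version, not taken from the patent; it assumes each moving target is available as a binary mask, and the function name is hypothetical):

```python
import numpy as np

def overlap_area_ratio(mask_a, mask_b):
    """Overlapping area ratio of two binary moving-target masks."""
    overlap = int(np.logical_and(mask_a, mask_b).sum())  # first submodule: overlapping area
    total = int(mask_a.sum()) + int(mask_b.sum())        # second submodule: total area
    merged = total - overlap                             # merged area = total - overlap
    return overlap / merged if merged else 0.0           # third submodule: the ratio

a = np.zeros((4, 4), dtype=bool); a[:, :3] = True  # target of frame 1, area 12
b = np.zeros((4, 4), dtype=bool); b[:, 1:] = True  # target of frame 2, area 12
print(overlap_area_ratio(a, b))  # overlap 8, merged 16 -> 0.5
```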
Specifically, the selection module is configured to select, as target moving regions, moving targets whose overlapping area ratio value is less than or equal to the preset threshold, this being the preset condition.
Fig. 7 is a structural diagram of the image processing module according to an embodiment of the present invention, which includes: a first image processing submodule 701 and a second image processing submodule 702.
The first image processing submodule 701 is configured to determine, by a frame differencing method, the moving target of the multiple frames of images to be stitched and the background area other than the moving target.
The second image processing submodule 702 is configured to obtain the background model of all the images to be stitched from each frame's background area by the frame differencing method.
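A minimal sketch of how the two submodules might work together (illustrative only: the grayscale representation, the threshold value, the function names, and the per-pixel averaging used to build the background model are assumptions of this sketch, since the patent does not fix these details):

```python
import numpy as np

def moving_mask(prev_frame, cur_frame, thresh=25):
    """Frame differencing: pixels whose absolute intensity change between
    adjacent frames exceeds the threshold are treated as the moving target."""
    diff = np.abs(cur_frame.astype(np.float64) - prev_frame.astype(np.float64))
    return diff > thresh

def background_model(frames, thresh=25):
    """Average each pixel over the frames in which frame differencing
    classified it as background area."""
    frames = [f.astype(np.float64) for f in frames]
    acc = np.zeros_like(frames[0])
    cnt = np.zeros_like(frames[0])
    prev = frames[0]
    for cur in frames:
        bg = ~moving_mask(prev, cur, thresh)  # background area of this frame
        acc[bg] += cur[bg]
        cnt[bg] += 1
        prev = cur
    cnt[cnt == 0] = 1  # avoid division by zero where a pixel was always moving
    return acc / cnt
```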
Specifically, the stitching module is configured to blur the edge of the stitching seam by a feathering operation when stitching all the target moving regions into the background model, to obtain the target image.
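One possible reading of the feathering operation is sketched below (my own simplification, not the patent's implementation: the blend weight ramps linearly over a few pixels inside the mask edge, using repeated 4-neighbour erosion via np.roll in place of a proper distance transform, which also wraps at the array border):

```python
import numpy as np

def feather_splice(background, patch, mask, feather=3):
    """Paste a moving-target patch onto the background model, blurring the
    stitching seam: alpha rises from ~1/feather at the mask edge to 1 in
    the interior, so the seam fades instead of cutting sharply."""
    alpha = np.zeros(mask.shape)
    inner = mask.copy()
    for _ in range(feather):
        alpha[inner] += 1.0 / feather
        # crude 4-neighbour erosion; wraps at borders (good enough for a sketch)
        inner = (inner & np.roll(inner, 1, 0) & np.roll(inner, -1, 0)
                       & np.roll(inner, 1, 1) & np.roll(inner, -1, 1))
    alpha = np.clip(alpha, 0.0, 1.0)
    return alpha * patch + (1.0 - alpha) * background
```

A production implementation would more likely derive alpha from a distance transform of the mask, but the blending step (weighted sum of patch and background near the seam) is the same idea.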
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relation or order between those entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes it.
The embodiments in this specification are described in a correlated manner; for identical or similar parts, the embodiments may refer to one another, and each embodiment focuses on its differences from the others. In particular, since the system embodiment is substantially similar to the method embodiment, its description is relatively brief, and for the relevant parts reference may be made to the description of the method embodiment.
The foregoing describes only preferred embodiments of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. An image stitching method, characterized by comprising:
acquiring multiple frames of images to be stitched;
determining a moving target of the multiple frames of images to be stitched and a background area other than the moving target;
determining a background model of all the images to be stitched from each frame's background area;
calculating an overlapping area ratio value of moving targets in two adjacent frames, and selecting moving targets whose overlapping area ratio value satisfies a preset condition as target moving regions;
stitching all the target moving regions and the background model into a target image.
2. The method according to claim 1, characterized in that the calculating an overlapping area ratio value of moving targets in two adjacent frames comprises:
calculating an overlapping area of the moving targets in the two adjacent frames;
calculating a total area of the moving targets in the two adjacent frames;
calculating the overlapping area ratio value of the moving targets in the two adjacent frames, the overlapping area ratio value being the ratio of the overlapping area of the moving targets in the two adjacent frames to a merged area of the moving targets in the two adjacent frames, wherein the merged area of the moving targets in the two adjacent frames is the difference between the total area of the moving targets in the two adjacent frames and the overlapping area of the moving targets in the two adjacent frames.
3. The method according to claim 1 or 2, characterized in that the selecting moving targets whose overlapping area ratio value satisfies a preset condition as target moving regions comprises:
selecting, as target moving regions, moving targets whose overlapping area ratio value is less than or equal to a preset threshold, this being the preset condition.
4. The method according to claim 1, characterized in that the determining a moving target of the multiple frames of images to be stitched and a background area other than the moving target comprises:
determining, by a frame differencing method, the moving target of the multiple frames of images to be stitched and the background area other than the moving target.
5. The method according to claim 1, characterized in that the determining a background model of all the images to be stitched from each frame's background area comprises:
obtaining the background model of all the images to be stitched from each frame's background area by the frame differencing method.
6. The method according to claim 1, characterized in that the stitching all the target moving regions and the background model into a target image comprises:
blurring an edge of a stitching seam by a feathering operation when stitching all the target moving regions into the background model, to obtain the target image.
7. The method according to claim 1, characterized in that the acquiring multiple frames of images to be stitched comprises:
acquiring multiple frames of images to be stitched that have the same background area.
8. An image stitching apparatus, characterized by comprising:
an acquisition module, configured to acquire multiple frames of images to be stitched;
an image processing module, configured to determine a moving target of the multiple frames of images to be stitched and a background area other than the moving target, and to determine a background model of all the images to be stitched from each frame's background area;
a computing module, configured to calculate an overlapping area ratio value of moving targets in two adjacent frames;
a selection module, configured to select moving targets whose overlapping area ratio value satisfies a preset condition as target moving regions;
a stitching module, configured to stitch all the target moving regions and the background model into a target image.
9. The apparatus according to claim 8, characterized in that the computing module comprises:
a first calculating submodule, configured to calculate an overlapping area of the moving targets in the two adjacent frames;
a second calculating submodule, configured to calculate a total area of the moving targets in the two adjacent frames;
a third calculating submodule, configured to calculate the ratio of the overlapping area of the moving targets in the two adjacent frames to a merged area of the moving targets in the two adjacent frames, wherein the merged area of the moving targets in the two adjacent frames is the difference between the total area of the moving targets in the two adjacent frames and the overlapping area of the moving targets in the two adjacent frames.
10. The apparatus according to claim 8 or 9, characterized in that the selection module is specifically configured to select, as target moving regions, moving targets whose overlapping area ratio value is less than or equal to a preset threshold, this being the preset condition.
11. The apparatus according to claim 8, characterized in that the image processing module comprises:
a first image processing submodule, configured to determine, by a frame differencing method, the moving target of the multiple frames of images to be stitched and the background area other than the moving target;
a second image processing submodule, configured to obtain the background model of all the images to be stitched from each frame's background area by the frame differencing method.
12. The apparatus according to claim 8, characterized in that the stitching module is specifically configured to blur an edge of a stitching seam by a feathering operation when stitching all the target moving regions into the background model, to obtain the target image.
CN201611248742.1A 2016-12-29 2016-12-29 Image stitching method and apparatus Pending CN106683051A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611248742.1A CN106683051A (en) 2016-12-29 2016-12-29 Image stitching method and apparatus

Publications (1)

Publication Number Publication Date
CN106683051A true CN106683051A (en) 2017-05-17

Family

ID=58873140

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611248742.1A Pending CN106683051A (en) 2016-12-29 2016-12-29 Image stitching method and apparatus

Country Status (1)

Country Link
CN (1) CN106683051A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112017441A (en) * 2019-05-28 2020-12-01 浙江宇视科技有限公司 Vehicle traffic behavior detection method, device, equipment and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104394488A (en) * 2014-11-28 2015-03-04 苏州科达科技股份有限公司 Video abstraction generation method and system
CN104408701A (en) * 2014-12-03 2015-03-11 中国矿业大学 Large-scale scene video image stitching method
CN104484457A (en) * 2014-12-29 2015-04-01 广州中国科学院软件应用技术研究所 Method and system for extracting and searching moving object in parallel video
CN105376500A (en) * 2014-08-18 2016-03-02 三星电子株式会社 Video processing apparatus for generating panoramic video and method thereof
CN105550999A (en) * 2015-12-09 2016-05-04 西安邮电大学 Video image enhancement processing method based on background reuse

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
朱明旱 et al.: "Moving Object Detection and Tracking Based on an Inter-frame Difference Background Model", 《计算机测量与控制》 (Computer Measurement & Control) *
王帅: "Research and Application of Image Registration Algorithms Based on Video Sequences", 《中国优秀硕士学位论文全文数据库 信息科技辑》 (China Master's Theses Full-text Database, Information Science and Technology) *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170517