CN115797164B - Image stitching method, device and system in fixed view field - Google Patents
- Publication number: CN115797164B
- Application number: CN202111058979.4A
- Authority: CN (China)
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Abstract
The present disclosure relates to the field of image processing technologies, and in particular to a method, an apparatus, a system, an electronic device, a storage medium, and a program product for image stitching in a fixed field of view. The image stitching method comprises: acquiring an image sequence, wherein the image sequence at least comprises a plurality of frame images containing a moving object; determining the same position in the plurality of frame images as a stitching position; respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, and extracting contextual features of the image region at the foreground; determining the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the stitching position to obtain cropped images; and stitching the cropped images. The method can realize stitched display of a moving object in a fixed field of view.
Description
Technical Field
The present disclosure relates to the field of image processing technology, and more particularly, to a method, apparatus, system, electronic device, storage medium, and program product for image stitching in a fixed field of view.
Background
In general, image stitching combines several still images with overlapping portions (possibly acquired at different times, from different viewing angles, or by different sensors) into one seamless panoramic or high-resolution image. Existing image stitching generally involves two key techniques: image registration and image fusion. Image registration uses image processing to calculate the matching relationship between the images to be stitched and obtain their overlapping area, so that the images can be stitched. Image fusion then realizes a smooth transition between the stitched images at the stitching seam.
However, when the field of view of the camera or other photographing device is fixed and the object to be stitched is a moving object in the scene (for example, a large moving vehicle, or goods conveyed on a belt), the existing stitching approach can fail: because the images to be stitched share the same field-of-view background, feature matching may produce erroneous image registration and therefore an erroneous stitching result.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide an image stitching method, apparatus, system, electronic device, storage medium, and program product.
One aspect of the present disclosure provides a method of image stitching in a fixed field of view, comprising: acquiring an image sequence, wherein the image sequence at least comprises a plurality of frame images containing a moving object; determining the same position in the plurality of frame images as a stitching position; respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, and extracting contextual features of the image region at the foreground; determining the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the stitching position to obtain cropped images; and stitching the cropped images.
In some embodiments, before respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, the method further comprises: acquiring a background image from the frame images; performing Gaussian mixture modeling on the background image and on the t_n time frame image respectively; and judging whether the t_n time frame image contains the moving object by comparing the Gaussian mixture model of the background image with the Gaussian mixture model of the t_n time frame image.
In some embodiments, when it is determined that the t_n time frame image does not contain the moving object, the Gaussian mixture model of the t_n time frame image is updated to be the Gaussian mixture model of the background image.
In some embodiments, the determining that the same position in the plurality of frame images is a stitching position comprises: setting the row or column of the frame image as the splicing position, wherein the splicing position satisfies the following conditions: in the moving direction of the moving object, the splicing position is positioned at t m In front of the moving object in the time frame image, t m The time frame image is a frame image containing the moving target for the first time, m is a positive integer, and m is less than or equal to n.
In some embodiments, determining the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features comprises: performing feature matching on the contextual features of the t_n time frame image and the contextual features of the t_{n+1} time frame image to obtain feature matching pairs; and calculating the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the coordinate information of the feature matching pairs in the two frame images.
In some embodiments, the contextual feature extraction of the image region at the foreground comprises extracting a plurality of contextual features in the image region at the foreground, and the feature matching comprises matching the combination of the plurality of contextual features of the t_n time frame image against the combination of the plurality of contextual features of the t_{n+1} time frame image.
In some embodiments, calculating the moving distance according to the coordinate information of the feature matching pairs comprises: calculating the moving distance as the mean of the absolute differences, or the median of the absolute differences, of the coordinate information of the feature matching pairs.
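As a non-limiting sketch of this distance calculation (assuming the matched pairs are given as (x, y) coordinate arrays and that the object moves along the image's x axis — both assumptions, since the disclosure does not fix a representation), the mean or median of the absolute coordinate differences can be computed as follows:

```python
import numpy as np

def movement_distance(coords_tn, coords_tn1, use_median=False):
    """Estimate the displacement of a moving object between two frames
    from the coordinates of matched feature pairs.

    coords_tn, coords_tn1: (N, 2) sequences of (x, y) positions of the
    same features in the t_n and t_{n+1} time frame images (hypothetical
    argument layout)."""
    coords_tn = np.asarray(coords_tn, dtype=float)
    coords_tn1 = np.asarray(coords_tn1, dtype=float)
    # Per-pair absolute displacement along the assumed moving axis (x).
    abs_diffs = np.abs(coords_tn1[:, 0] - coords_tn[:, 0])
    # The median is more robust to occasional mismatched pairs; the mean
    # is the simpler choice named first in the text.
    return float(np.median(abs_diffs) if use_median else np.mean(abs_diffs))
```

Using the median variant is one way to tolerate a few wrong feature matches without a separate outlier-rejection step.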
In some embodiments, at least one of the contextual features relates to an operating mode of a device that acquired the sequence of images and/or an operating environment of the device.
In some embodiments, cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the stitching position comprises: determining a cropping region, wherein the cropping region starts at the stitching position and extends, opposite to the moving direction of the moving object, for a length equal to the moving distance.
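A minimal sketch of this cropping rule, assuming the stitching position is a single pixel column `splice_col` and the sign of `direction` encodes the moving direction along the column axis (both names and the column-wise convention are illustrative assumptions):

```python
def crop_region(splice_col, distance, direction):
    """Return (start_col, end_col) of the strip to crop from a frame.

    The crop starts at the stitching column and extends `distance`
    pixels opposite to the moving direction; direction = +1 means the
    object moves toward increasing column indices."""
    if direction > 0:
        # Object moves right -> crop the strip just left of the splice line.
        return splice_col - distance, splice_col
    # Object moves left -> crop the strip just right of the splice line.
    return splice_col, splice_col + distance
```

Concatenating the strips cropped from successive frames in order then yields the panoramic image of the moving object.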
In some embodiments, acquiring the image sequence comprises: acquiring a video to be stitched, and decoding the video to obtain the image sequence.
In some embodiments, respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image comprises: determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image by using a morphological method.
In some embodiments, after stitching the cropped images, if it is determined that the t_{n+2} time frame image does not contain the moving object, the stitching of cropped images is ended.
Another aspect of the present disclosure provides an image stitching device in a fixed field of view, comprising: an image sequence acquisition module configured to acquire an image sequence, where the image sequence includes at least a plurality of frame images containing a moving object; a stitching position setting module configured to determine the same position in the plurality of frame images as the stitching position; a foreground determining module configured to respectively determine the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image; a contextual feature extraction module configured to extract contextual features of the image region at the foreground; a moving distance calculation module configured to determine the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features; a cropping module configured to crop the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the stitching position to obtain cropped images; and a stitching module configured to stitch the cropped images.
In certain embodiments, the device further comprises: a background image acquisition module configured to acquire a background image from the frame images; a Gaussian mixture modeling module configured to perform Gaussian mixture modeling on the background image and on the t_n time frame image respectively; and a moving object judging module configured to judge whether the t_n time frame image contains the moving object by comparing the Gaussian mixture model of the background image with the Gaussian mixture model of the t_n time frame image.
In certain embodiments, the device further comprises a feature matching module configured to perform feature matching on the contextual features of the t_n time frame image and the contextual features of the t_{n+1} time frame image and obtain feature matching pairs.
In certain embodiments, the device further comprises a video processing module configured to acquire a video to be stitched and decode the video.
In certain embodiments, the device further comprises a Gaussian mixture model updating module configured to, when it is determined that the t_n time frame image does not contain the moving object, update the Gaussian mixture model of the t_n time frame image to be the Gaussian mixture model of the background image.
Another aspect of the present disclosure also provides an image stitching system in a fixed field of view, comprising: the image stitching device in a fixed field of view according to any of the above, and an image acquisition apparatus for acquiring and forming a video containing a moving object and acquiring an image background.
Another aspect of the present disclosure also provides an electronic device, including: one or more processors; a storage means for storing one or more programs, which when executed by the one or more processors cause the one or more processors to perform the method of any of the preceding claims.
Another aspect of the present disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the method of any of the above.
Another aspect of the present disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the method of any of the above.
The image stitching method comprises: acquiring an image sequence, wherein the image sequence at least comprises a plurality of frame images containing a moving object; determining the same position in the plurality of frame images as a stitching position; respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, and extracting contextual features of the image region at the foreground; determining the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the stitching position to obtain cropped images; and stitching the cropped images. In this method, the moving distance of the moving object between adjacent frame images is determined by comparing contextual features; each frame image is cropped, according to the moving distance and the set stitching position, so that each crop captures one part of the moving object; and stitching the cropped regions of all frame images finally realizes a panoramic display of the moving object.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of embodiments thereof with reference to the accompanying drawings in which:
FIG. 1 schematically illustrates an application scenario of an image stitching method in a fixed field of view according to an embodiment of the present disclosure;
FIG. 2 schematically illustrates a flowchart of a stitching method of a moving object in a fixed field of view according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flowchart of another implementation of the stitching method of a moving object in a fixed field of view according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flowchart of yet another implementation of the stitching method of a moving object in a fixed field of view according to an embodiment of the present disclosure;
FIG. 5 schematically illustrates a t_n time frame image of a moving object according to an embodiment of the present disclosure;
FIG. 6 schematically illustrates a t_{n+1} time frame image of the moving object according to an embodiment of the present disclosure;
FIG. 7 schematically illustrates a t_{n+2} time frame image of the moving object according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of a stitching device of a moving object in a fixed field of view in accordance with an embodiment of the present disclosure;
fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a stitching method of moving objects in a fixed field of view, in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is only exemplary and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the present disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. In addition, in the following description, descriptions of well-known structures and techniques are omitted so as not to unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and/or the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
Where a formulation similar to at least one of "A, B or C, etc." is used, in general such a formulation should be interpreted in accordance with the ordinary understanding of one skilled in the art (e.g. "a system with at least one of A, B or C" would include but not be limited to systems with a alone, B alone, C alone, a and B together, a and C together, B and C together, and/or A, B, C together, etc.). The terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more of the described features.
The detailed background art may describe technical problems in addition to the technical problem solved by the claimed invention.
Embodiments of the present disclosure provide a method of image stitching in a fixed field of view, comprising: acquiring an image sequence, wherein the image sequence at least comprises a plurality of frame images containing a moving object; determining the same position in the plurality of frame images as a stitching position; respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, and extracting contextual features of the image region at the foreground; determining the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the stitching position to obtain cropped images; and stitching the cropped images.
The image stitching method in the embodiments of the present disclosure is suitable for stitching panoramic images of a moving object in a fixed field of view, and is particularly suitable for scenes in which the moving object is so large that it cannot be imaged in full within the capture range of the fixed field of view.
It should be noted that, the moving object in the embodiment of the present disclosure may be a moving object that needs to be photographed in a fixed field of view.
Fig. 1 schematically illustrates an application scenario diagram of an image stitching method in a fixed field of view according to an embodiment of the present disclosure.
As shown in FIG. 1, an application scenario 100 according to this embodiment may include storage devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as a medium providing communication links between the storage devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may interact with the server 105 over the network 104 using the storage devices 101, 102, 103 to upload a sequence of images onto the server 105 for processing by the server 105.
The storage devices 101, 102, 103 may be a variety of electronic devices with display screens including, but not limited to, smartphones, tablets, laptop and desktop computers, and the like.
The server 105 may be a server that performs context feature recognition, feature matching, image cropping, image stitching fusion for a sequence of images.
It should be noted that the method for stitching moving objects in a fixed field of view provided by the embodiments of the present disclosure may be generally performed by the server 105. Accordingly, the stitching device of moving objects in a fixed field of view provided by embodiments of the present disclosure may be generally disposed in the server 105. The stitching method of moving objects in a fixed field of view provided by embodiments of the present disclosure may also be performed by a server or cluster of servers other than the server 105 and capable of communicating with the storage devices 101, 102, 103 and/or the server 105. Accordingly, the stitching system of moving objects in a fixed field of view provided by embodiments of the present disclosure may also be provided in a server or cluster of servers other than server 105 and capable of communicating with storage devices 101, 102, 103 and/or server 105.
It should be understood that the numbers of storage devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of storage devices, networks, and servers, as desired for implementation.
The following will describe in detail a method of stitching a moving object in a fixed field of view according to an embodiment of the present disclosure with reference to the scenario described in fig. 1 by fig. 2 to 7.
Fig. 2 schematically illustrates a flow chart of a method of stitching a moving object in a fixed field of view according to an embodiment of the disclosure.
As shown in fig. 2, the stitching method of the moving object in the fixed field of view of this embodiment includes operations S210 to S260.
In operation S210, an image sequence is acquired, the image sequence including at least a plurality of frame images that contain a moving object and are indexed by the motion time t.
The image sequence in the embodiments of the present disclosure may include a series of images of the moving object acquired sequentially and continuously at different times, and may also include a plurality of frame images obtained by processing a video, an animation, or the like that contains the moving object. In the application scenario of the embodiments of the present disclosure, the moving object moves within a fixed field of view; correspondingly, the image sequence at least covers the video generated during the whole motion process, from when the moving object starts to enter the fixed field of view until it completely leaves it, and a plurality of frame images indexed by the motion time t are obtained after the video is decoded.
In application scenarios of the embodiments of the present disclosure, such as security inspection of large containers at a port, the photographing device in the fixed field of view can stay in a working state during working hours, but a moving object is not present in the fixed field of view at all times, so the image sequence may also include frame images that do not contain the moving object.
The manner in which the image sequence is acquired in embodiments of the present disclosure may include: and obtaining a video containing the moving targets to be spliced, and decoding the video to obtain a plurality of frame images.
The video may include an offline video file captured by a camera in a fixed field of view, and may include an online video file uploaded to the internet by a user, the format of the video file may include, but is not limited to, a general video format such as MPEG, AVI, MOV, WMV, and the format of the decoded frame image may include, but is not limited to, a general image format such as PNG, BMP, or JPG.
The video decoding process in the embodiments of the present disclosure may be implemented using third-party software, such as the frame-extraction functions in Photoshop, Adobe Premiere, and similar software.
It can be appreciated that, during decoding of the video, the frame rate of the extracted frame images can be controlled to ensure that adjacent frames containing the moving object have overlapping parts, so that the method provided by the embodiments of the present disclosure has a basis for image stitching.
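The overlap condition above can be sketched as a simple check; the function name and the pixel-based units are illustrative assumptions, not part of the disclosure:

```python
def frames_overlap(field_width_px, speed_px_per_s, fps):
    """Check that consecutive frames of a moving object overlap.

    Between adjacent frames the object advances speed/fps pixels;
    stitching requires that advance to stay below the width of the
    fixed field of view, so that each new frame still shares content
    with the previous one."""
    advance_px = speed_px_per_s / fps
    return advance_px < field_width_px
```

For a given object speed, this gives the minimum decoding frame rate: fps must exceed speed_px_per_s / field_width_px.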
It should be noted that the frame images obtained by decoding the video are named after the motion time t of the moving object, to facilitate understanding and explanation of the technical concept of the embodiments of the present disclosure. Illustratively, the frame images are denoted t_0 time frame image through t_n time frame image; frame images having a particular meaning or effect are denoted t_m time frame images in the following, to distinguish them better in the illustrations.
In operation S220, it is determined that the same position in the plurality of frame images is a stitching position.
The frame images acquired in the embodiments of the present disclosure are images under a fixed field of view; accordingly, the background images correspondingly generated by the fixed field of view all have the same size. Referring to FIGS. 5 to 7, the background image is a pixel region comprising columns a1 to a18 and rows b1 to b14. In this embodiment, the column a9 is taken as the above-mentioned stitching position; determining the stitching position facilitates the subsequent cropping and stitching of the images.
It will be appreciated that the size of the pixel areas of the corresponding background image will be different for different fields of view.
It will be appreciated that the stitching position may also be a row of pixels in the pixel region, or a polyline stitching line comprising several rows or columns of pixels.
In order to further ensure the integrity of the stitched moving object, the stitching position, when set, also needs to satisfy the following condition:
in the moving direction of the moving object, the stitching position is located in front of the moving object in the t_m time frame image, where the t_m time frame image is the first frame image containing the moving object, m is a positive integer, and m ≤ n.
Referring to FIG. 5, the horizontal arrow in FIG. 5 indicates the moving direction of the moving object. Assume that the current t_n time frame image is the above-mentioned t_m time frame image, and that the moving object is composed of parts c1, c2, c3, and so on; for clarity of illustration and to avoid excessive overlap, only the c1 and c2 parts of the moving object are shown in the drawing in this embodiment. At time t_m, the moving object first enters the fixed field of view; at this time, it is necessary to ensure that the set stitching position is to the left of c1. The purpose of this setting is that, if the stitching position in the t_m time frame image crossed the moving object, the part of the moving object located to the left of the stitching position could not be captured during the subsequent cropping, and the panoramic image of the moving object could not be displayed when the images are stitched.
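A hedged sketch of this condition, assuming the foreground of the t_m frame is available as a boolean mask and the moving direction is encoded as a sign along the column axis (names and conventions are assumptions for illustration):

```python
import numpy as np

def splice_position_valid(splice_col, mask_tm, direction=+1):
    """Check that the stitching column lies in front of the moving
    object when it first enters the field (frame t_m), i.e. the object
    has not yet crossed the splice line.

    mask_tm: boolean 2-D foreground mask of the t_m time frame image;
    direction = +1 means motion toward increasing column indices."""
    cols = np.where(mask_tm.any(axis=0))[0]
    if cols.size == 0:          # no object in the frame yet
        return True
    if direction > 0:           # leading edge is the rightmost column
        return bool(splice_col > cols.max())
    return bool(splice_col < cols.min())  # leading edge is leftmost
```

If this check fails for the chosen column, part of the object would already lie beyond the splice line at t_m and would never be cropped into the panorama.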
In operation S230, the foreground of the moving object in the t_n time frame image and in the t_{n+1} time frame image is respectively determined, and contextual features of the image region at the foreground are extracted.
In embodiments of the present disclosure, a morphological method is used to determine the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image. A morphological opening operation is performed on the binary image formed by the background and the foreground of the moving object, i.e., erosion followed by dilation, to denoise the frame image and better distinguish the foreground from the background image.
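A minimal pure-numpy sketch of the morphological opening (erosion followed by dilation with a square structuring element); in practice a library routine such as OpenCV's morphologyEx would typically be used instead:

```python
import numpy as np

def binary_opening(mask, k=3):
    """Morphological opening of a boolean mask with a k x k square
    structuring element: erosion removes foreground specks smaller
    than the element, then dilation restores surviving regions to
    their original extent."""
    r = k // 2

    def erode(m):
        out = np.ones_like(m)
        padded = np.pad(m, r, constant_values=False)
        for dy in range(k):          # AND over the k x k neighborhood
            for dx in range(k):
                out &= padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    def dilate(m):
        out = np.zeros_like(m)
        padded = np.pad(m, r, constant_values=False)
        for dy in range(k):          # OR over the k x k neighborhood
            for dx in range(k):
                out |= padded[dy:dy + m.shape[0], dx:dx + m.shape[1]]
        return out

    return dilate(erode(mask))
```

Applied to the foreground mask, this removes isolated noise pixels while leaving compact object regions essentially unchanged.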
Contextual features in embodiments of the present disclosure may include SIFT features, HOG features, and the like. The SIFT feature extraction method in the embodiment of the present disclosure may include the following operations:
constructing a DoG (Difference of Gaussians) scale space;
extracting interest points: identifying all feature points using corner points, and performing curve fitting on the discrete points with RANSAC (random sample consensus) to obtain accurate position and scale information of the key points;
carrying out direction assignment: assigning a direction to each feature point according to the local image structure detected at the key point; a gradient-direction histogram method can be adopted, and when the histogram is computed, each sampling point added to the histogram is weighted by a circular Gaussian function.
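The direction-assignment step above can be sketched as follows — a simplified, numpy-only version of the Gaussian-weighted gradient-orientation histogram; the 36-bin layout follows common SIFT practice and is an assumption here, not a requirement of the disclosure:

```python
import numpy as np

def dominant_orientation(patch, n_bins=36):
    """Dominant gradient direction (degrees) of an image patch around a
    keypoint: each sample contributes its gradient magnitude, weighted
    by a circular Gaussian window centered on the patch, to an
    orientation histogram; the peak bin gives the direction."""
    patch = np.asarray(patch, dtype=float)
    gy, gx = np.gradient(patch)                    # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.rad2deg(np.arctan2(gy, gx)) % 360.0   # gradient direction
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    sigma = max(h, w) / 2.0
    # Circular Gaussian weighting of each sample point.
    weight = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    hist = np.zeros(n_bins)
    bins = (ang / (360.0 / n_bins)).astype(int) % n_bins
    np.add.at(hist, bins.ravel(), (mag * weight).ravel())
    return float(np.argmax(hist) * (360.0 / n_bins))
```

A full SIFT implementation would additionally interpolate the peak and keep secondary peaks above 80% of the maximum as extra orientations.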
It will be appreciated that, when extracting the contextual features, multiple contextual features may be extracted, so that subsequent operations can compare combinations of contextual features and determine the cropping region more accurately for stitching. Illustratively, SIFT features and HOG features are extracted at the same time, to further reduce the influence of external conditions such as illumination on the image stitching.
It will be appreciated that in extracting the contextual features, it is desirable to ensure that at least one of the contextual features is related to the operating mode of the device that acquired the sequence of images and/or the operating environment of the device, to reduce the impact of the operating mode of the device and the operating environment (e.g., day detection, night detection, etc.) on image stitching.
Before the morphological method is used to determine the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, the following operations are also needed.
Fig. 3 schematically illustrates a flowchart of yet another embodiment of a method of stitching moving objects in a fixed field of view in accordance with an embodiment of the present disclosure, including operations S201 through S203.
In operation S201, a background image in a frame image is acquired.
The background image is an image that does not contain a moving object. For example, the background image may be acquired by photographing the fixed field of view with the photographing apparatus while no moving object is present.
In operation S202, Gaussian mixture modeling is performed on the background image and on the t_n time frame image, respectively.
In operation S203, the Gaussian mixture model of the background image is compared with the Gaussian mixture model of the t_n time frame image to judge whether the t_n time frame image contains the moving object. In the Gaussian mixture model, K Gaussian components are used to characterize each pixel in the frame image, and the value of K is usually 3 to 5. In the judgment, each pixel in the t_n time frame image is matched against the Gaussian mixture model of the background image; if the matching succeeds, the pixel belongs to the background, and if it fails, the pixel belongs to the foreground. After it is determined that the t_n time frame image contains the moving object, the foreground in the frame image is further denoised and refined by the morphological method described above.
The collected background image is easily affected by the external environment, such as illumination, shooting angle, and distance, which may cause the Gaussian mixture model of the background image to change. It can be understood that Gaussian mixture modeling is adopted to better distinguish the moving object from the background. Therefore, as shown in fig. 4, in operation S2031, when it is determined that the t_n time frame image does not contain the moving object, the Gaussian mixture model of the t_n time frame image is used to update the Gaussian mixture model of the background image, so that the background model stays as close as possible to the background in the t_n time frame image.
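The per-pixel foreground test and the background-model update in operations S203 and S2031 can be sketched as follows. This is a simplified single-channel sketch under stated assumptions: the 2.5-sigma matching threshold is a common convention for Gaussian mixture background subtraction, not a value given in the text, and the function names are hypothetical.

```python
def is_foreground(pixel, components, k_sigma=2.5):
    """Label a pixel foreground if it matches none of the background's
    Gaussian components. Each component is (mean, stddev, weight); a pixel
    matches a component when it lies within k_sigma standard deviations of
    the mean. The text uses K = 3..5 components per pixel.
    """
    for mean, std, _weight in components:
        if abs(pixel - mean) <= k_sigma * std:
            return False  # matched a component -> background
    return True  # matched no component -> foreground

def update_background_model(bg_components, frame_components, has_moving_object):
    """Operation S2031: when a frame contains no moving object, its Gaussian
    mixture model replaces the background model, so the background tracks
    slow changes in illumination, shooting angle, and distance."""
    return bg_components if has_moving_object else frame_components
```

In practice the same test runs independently for every pixel of the t_n time frame image, producing the foreground mask that the morphological step then cleans up.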
Referring back to fig. 2, in operation S240, the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image is determined according to the contextual features.
In an embodiment of the disclosure, feature matching is performed on the contextual features of the t_n time frame image and the contextual features of the t_{n+1} time frame image to obtain feature matching pairs; the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image is then calculated from the coordinate information of the feature matching pairs in the two frame images. Taking the SIFT feature as an example, SIFT features on the moving object are extracted from the t_n time frame image and the t_{n+1} time frame image respectively, and through their feature descriptors the SIFT features corresponding to the same part of the moving object in the two frame images are identified and constructed as a feature matching pair. The coordinate positions of each pair in the pixel regions of the two frame images are then read out. For example, if one coordinate of a feature matching pair lies in column a13 of the pixel region of the t_n time frame image, and the other coordinate lies in column a9 of the pixel region of the t_{n+1} time frame image, the moving distance of the moving object between the two frame images can be calculated as the length of four columns of pixel units.
It can be understood that, when extracting the same kind of contextual feature, feature extraction can be performed at multiple positions on the moving object to obtain multiple feature matching pairs, and the moving distance can be calculated from the mean absolute difference, or the median of the absolute differences, of the coordinate information of the multiple feature matching pairs, so as to improve the accuracy of the moving distance calculation.
Similarly, when extracting multiple kinds of contextual features, different feature extractions can be performed at multiple positions on the moving object to obtain multiple different feature matching pairs, and the moving distance can be calculated from the mean absolute difference, or the median of the absolute differences, of the coordinate information of these pairs, further improving the accuracy of the moving distance calculation.
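The distance calculation described above can be sketched as follows, assuming (as in the a13/a9 example) that the object moves along the column axis; the function name and the pair layout are illustrative conventions, not from the source.

```python
def moving_distance(pairs, use_median=False):
    """Estimate the moving distance (in pixel columns) from feature
    matching pairs. Each pair is ((col_n, row_n), (col_n1, row_n1)): the
    coordinates of the same part of the moving object in the t_n and
    t_{n+1} time frame images. The estimate is the mean absolute column
    difference, or the median of the absolute differences if requested.
    """
    diffs = sorted(abs(c0 - c1) for (c0, _r0), (c1, _r1) in pairs)
    if use_median:
        mid = len(diffs) // 2
        if len(diffs) % 2:
            return diffs[mid]
        return (diffs[mid - 1] + diffs[mid]) / 2
    return sum(diffs) / len(diffs)
```

With the single pair from the text (column a13 matched to column a9), this returns a distance of four pixel columns; with many pairs, the median variant is the more outlier-robust of the two aggregations.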
In operation S250, the t_n time frame image and the t_{n+1} time frame image are respectively clipped according to the moving distance and the splicing position to obtain clipped images.
For example, in operation S250, a clipping region may be determined: a region whose clipping start point is at the splicing position and whose length, measured along the direction opposite to the moving direction of the moving object, equals the moving distance. Referring to fig. 5 to 7, the moving direction of the moving object is from right to left in the drawing plane, and column a9 in the figures represents the splicing position set in operation S220. At time t_n in fig. 5, the front end of the c1 portion of the moving object has moved to column a13 of the pixel region; at time t_{n+1} in fig. 6, it has moved to column a9. Using the above method of calculating the moving distance from the contextual features, the moving distance is found to be four columns of pixel units, which defines the clipping region indicated by the horizontal double-headed arrow in fig. 6. The clipping region exists in at least the t_n time frame image and the t_{n+1} time frame image. Clipping the t_n time frame image yields a blank region without the moving object, while clipping the t_{n+1} time frame image yields the hatched portion of the moving object shown in fig. 6. In the same way, the t_{n+2} time frame image can be clipped to obtain the hatched portion of the moving object shown in fig. 7.
In operation S260, the cropped images are stitched.
As illustrated in fig. 5 to 7, the complete c1 and c2 portions of the moving object can be obtained by stitching the clipping regions, that is, by stitching the blank region clipped in fig. 5 with the hatched portions of the moving object in fig. 6 and fig. 7. When splicing the clipping regions, the a9 position in fig. 5 is taken as the starting point, and the a13 position in fig. 5 is butted against the a9 position in fig. 6 to splice the first two clipping regions; subsequent splicing proceeds by analogy and is not repeated here.
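The clip-and-butt scheme above can be sketched as follows, under the figures' convention that the object moves right-to-left, so the clipping region extends to the right of the splicing column. Frames are represented as lists of pixel rows; the function names are illustrative, not from the source.

```python
def clip_region(frame, splice_col, distance):
    """Clip `distance` columns starting at the splicing position and
    extending opposite to the moving direction (here, to the right of
    splice_col, since the object moves right-to-left)."""
    return [row[splice_col:splice_col + distance] for row in frame]

def stitch(clips):
    """Butt the clipped regions together column-wise, in frame order,
    starting from the splicing position of the first clip."""
    return [sum(rows, []) for rows in zip(*clips)]
```

Each frame contributes one `distance`-wide strip at the splicing position, so concatenating the strips in time order reconstructs the panoramic view of the moving object.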
It will be appreciated that, according to the rules described above, the splicing of the clipping regions may be performed either after all clipping regions have been obtained, or immediately each time a clipping region is obtained.
After operation S260, when judging whether the t_{n+2} time frame image contains a moving object, if the t_{n+2} time frame image does not contain a moving object, the entire image stitching method can end and the stitching operation is complete; the method then runs in a loop until a new moving object is detected again.
Based on the above stitching method for moving objects in a fixed field of view, the embodiment of the disclosure further provides a stitching device for moving objects in a fixed field of view. The device will be described in detail below with reference to fig. 8.
Fig. 8 schematically illustrates a block diagram of a stitching device of a moving object in a fixed field of view according to an embodiment of the present disclosure.
As shown in fig. 8, the stitching apparatus 300 includes an image sequence acquisition module 301, a stitching position setting module 302, a foreground determination module 303, a context feature extraction module 304, a moving distance calculation module 305, a cropping module 306, and a stitching module 307.
An image sequence acquisition module 301 is configured to acquire an image sequence, where the image sequence includes at least a plurality of frame images containing a moving object.
And the stitching position setting module 302 is configured to determine that the same position in the plurality of frame images is a stitching position.
A foreground determining module 303, configured to respectively determine the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image.
The contextual feature extraction module 304 is configured to perform contextual feature extraction on the image area at the foreground.
A moving distance calculating module 305, configured to determine the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features.
A clipping module 306, configured to respectively clip the t_n time frame image and the t_{n+1} time frame image according to the moving distance and the splicing position to obtain clipped images.
And the stitching module 307 is configured to stitch the cropped images.
For example, the stitching device of the moving object in the fixed field of view according to the embodiment of the present disclosure further includes: a background image acquisition module for acquiring a background image in the frame image; a Gaussian mixture modeling module for performing Gaussian mixture modeling on the background image and the t_n time frame image, respectively; and a moving object judging module for comparing the Gaussian mixture model of the background image with the Gaussian mixture model of the t_n time frame image to judge whether the t_n time frame image contains the moving object.
For example, the stitching device of the moving object in the fixed field of view according to the embodiment of the present disclosure further includes a feature matching module for performing feature matching on the contextual features of the t_n time frame image and the contextual features of the t_{n+1} time frame image to obtain the feature matching pairs.
For example, a stitching device of moving objects in a fixed field of view according to an embodiment of the present disclosure further includes: the video processing module is used for acquiring videos to be spliced and decoding the videos.
For example, the stitching device of the moving object in the fixed field of view according to the embodiment of the present disclosure further includes a Gaussian mixture model updating module for, when it is determined that the t_n time frame image does not contain the moving object, updating the Gaussian mixture model of the background image with the Gaussian mixture model of the t_n time frame image.
In an embodiment of the present disclosure, an image sequence is acquired, the sequence including at least a plurality of frame images containing a moving object; the same position in the plurality of frame images is determined as the splicing position; the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image is determined respectively, and contextual features of the image area at the foreground are extracted; the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image is determined according to the contextual features; the t_n time frame image and the t_{n+1} time frame image are clipped according to the moving distance and the splicing position to obtain clipped images; and the clipped images are spliced. By comparing contextual features, the moving distance of the moving object between adjacent frame images is determined; according to the moving distance and the set splicing position, each frame image is clipped to obtain one part of the moving object per clip; finally, the clipped regions from all frame images are spliced to realize a panoramic display of the moving object.
According to an embodiment of the present disclosure, any of the image sequence acquisition module 301, the stitching position setting module 302, the foreground determination module 303, the contextual feature extraction module 304, the travel distance calculation module 305, the cropping module 306, and the stitching module 307 may be combined in one module to be implemented, or any of the modules may be split into a plurality of modules. Alternatively, at least some of the functionality of one or more of the modules may be combined with at least some of the functionality of other modules and implemented in one module. According to embodiments of the present disclosure, at least one of the image sequence acquisition module 301, the stitching location setting module 302, the foreground determination module 303, the contextual feature extraction module 304, the distance moved calculation module 305, the cropping module 306, and the stitching module 307 may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or in any one of or a suitable combination of three of software, hardware, and firmware. Alternatively, at least one of the image sequence acquisition module 301, the stitching location setting module 302, the foreground determination module 303, the contextual feature extraction module 304, the distance moved calculation module 305, the cropping module 306, and the stitching module 307 may be at least partially implemented as a computer program module which, when executed, performs the corresponding functions.
The embodiment of the disclosure also provides a stitching system for moving objects in a fixed field of view, comprising the image stitching device of the above embodiments and an image acquisition device for acquiring and forming a video containing the moving object and for acquiring the image background.
Fig. 9 schematically illustrates a block diagram of an electronic device adapted to implement a stitching method of moving objects in a fixed field of view, in accordance with an embodiment of the present disclosure.
As shown in fig. 9, an electronic device 400 according to an embodiment of the present disclosure includes a processor 401 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. The processor 401 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or an associated chipset and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), or the like. Processor 401 may also include on-board memory for caching purposes. Processor 401 may include a single processing unit or multiple processing units for performing different actions of the method flows in accordance with embodiments of the disclosure.
In the RAM 403, various programs and data necessary for the operation of the electronic device 400 are stored. The processor 401, the ROM 402, and the RAM 403 are connected to each other by a bus 404. The processor 401 performs various operations of the method flow according to the embodiment of the present disclosure by executing programs in the ROM 402 and/or the RAM 403. Note that the program may be stored in one or more memories other than the ROM 402 and the RAM 403. The processor 401 may also perform various operations of the method flow according to the embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, electronic device 400 may also include an input/output (I/O) interface 405, with input/output (I/O) interface 405 also connected to bus 404. Electronic device 400 may also include one or more of the following components connected to I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output portion 407 including a Cathode Ray Tube (CRT), a Liquid Crystal Display (LCD), and the like, and a speaker, and the like; a storage section 408 including a hard disk or the like; and a communication section 409 including a network interface card such as a LAN card, a modem, or the like. The communication section 409 performs communication processing via a network such as the internet. The drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411 such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like is installed on the drive 410 as needed, so that a computer program read therefrom is installed into the storage section 408 as needed.
The present disclosure also provides a computer-readable storage medium that may be embodied in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the apparatus/device/system. The computer-readable storage medium carries one or more programs which, when executed, implement methods in accordance with embodiments of the present disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example, but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In an embodiment of the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, the computer-readable storage medium may include ROM 402 and/or RAM 403 and/or one or more memories other than ROM 402 and RAM 403 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the methods shown in the flowcharts. The program code, when executed in a computer system, causes the computer system to implement the image stitching method provided by embodiments of the present disclosure.
The above-described functions defined in the system/apparatus of the embodiments of the present disclosure are performed when the computer program is executed by the processor 401. The systems, apparatus, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the disclosure.
In one embodiment, the computer program may be carried on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed over a network medium in the form of a signal, downloaded and installed via the communication portion 409, and/or installed from the removable medium 411. The computer program may include program code that may be transmitted using any appropriate network medium, including but not limited to wireless or wired media, or any suitable combination of the foregoing.
According to embodiments of the present disclosure, program code for carrying out the computer programs provided by embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, such computer programs may be implemented in high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. Programming languages include, but are not limited to, Java, C++, Python, "C", or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, partly on a remote computing device, or entirely on the remote computing device or server. In the case of remote computing devices, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., connected via the Internet using an Internet service provider).
The flowcharts and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that the features recited in the various embodiments of the disclosure and/or in the claims may be provided in a variety of combinations and/or combinations, even if such combinations or combinations are not explicitly recited in the embodiments of the disclosure. In particular, the features recited in the various embodiments of the present disclosure and/or the claims may be variously combined and/or combined without departing from the spirit and teachings of the present disclosure. All such combinations and/or combinations fall within the scope of the present disclosure.
The embodiments of the present disclosure are described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described above separately, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be made by those skilled in the art without departing from the scope of the disclosure, and such alternatives and modifications are intended to fall within the scope of the disclosure.
Claims (17)
1. A method of image stitching in a fixed field of view, comprising:
acquiring an image sequence, wherein the image sequence at least comprises a plurality of frame images containing a moving target;
Determining the same position in a plurality of frame images as a splicing position;
respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, and extracting a plurality of contextual features in the image area at the foreground, wherein n is a positive integer;
determining the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features, which comprises: performing feature matching on the combination of multiple contextual features of the t_n time frame image and the combination of multiple contextual features of the t_{n+1} time frame image to obtain feature matching pairs; and calculating the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the coordinate information of the feature matching pairs in the t_n time frame image and the t_{n+1} time frame image;
clipping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain clipped images; and
and splicing the clipping images.
2. The method of image stitching in a fixed field of view according to claim 1, wherein, before respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image, the method further comprises:
acquiring a background image in a frame image;
performing Gaussian mixture modeling on the background image and the t_n time frame image, respectively; and
comparing the Gaussian mixture model of the background image with the Gaussian mixture model of the t_n time frame image, and judging whether the t_n time frame image contains the moving object.
3. The method of image stitching in a fixed field of view according to claim 2, wherein, when it is determined that the t_n time frame image does not contain the moving object, the Gaussian mixture model of the t_n time frame image is used to update the Gaussian mixture model of the background image.
4. The method of image stitching in a fixed field of view according to claim 2, wherein determining the same position in a plurality of the frame images as a stitching position comprises:
setting the row or column of the frame image as the splicing position, wherein the splicing position satisfies:
in the moving direction of the moving object, the splicing position is located in front of the moving object in the t_m time frame image, wherein the t_m time frame image is the first frame image containing the moving object, m is a positive integer, and m is less than or equal to n.
5. The method of image stitching in a fixed field of view according to claim 1, wherein calculating the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the coordinate information of the feature matching pairs in the t_n time frame image and the t_{n+1} time frame image comprises:
calculating the moving distance according to the mean absolute difference, or the median of the absolute differences, of the coordinate information of the plurality of feature matching pairs.
6. The method of image stitching in a fixed field of view according to claim 5, wherein at least one of the contextual characteristics relates to an operating mode of a device acquiring the sequence of images and/or an operating environment of the device.
7. The method of image stitching in a fixed field of view according to claim 1, wherein clipping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position comprises:
determining a clipping region, wherein the clipping region is a region whose clipping start point is at the splicing position and whose length, along the direction opposite to the moving direction of the moving object, is defined by the moving distance.
8. The method of image stitching in a fixed field of view according to any one of claims 1-7, wherein the acquiring the sequence of images includes:
and acquiring videos to be spliced, and decoding the videos to obtain the image sequence.
9. The method of image stitching in a fixed field of view according to any one of claims 1-7, wherein respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image comprises:
determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image by using a morphological method.
10. The method of claim 1, wherein, after said stitching of said clipped images, if it is determined that the t_{n+2} time frame image does not contain the moving object, the stitching of the clipped images is ended.
11. An image stitching device in a fixed field of view, comprising:
an image sequence acquisition module, configured to acquire an image sequence, where the image sequence includes at least a plurality of frame images including a moving object;
the splicing position setting module is used for determining the same position in the plurality of frame images as a splicing position;
a foreground determining module for respectively determining the foreground of the moving object in the t_n time frame image and the t_{n+1} time frame image;
a contextual feature extraction module for extracting contextual features of the image area at the foreground, wherein the extraction comprises extracting multiple kinds of contextual features in the image area at the foreground;
a moving distance calculation module for determining the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the contextual features, wherein the determination comprises: performing feature matching on the combination of multiple contextual features of the t_n time frame image and the combination of multiple contextual features of the t_{n+1} time frame image to obtain feature matching pairs; and calculating the moving distance of the moving object between the t_n time frame image and the t_{n+1} time frame image according to the coordinate information of the feature matching pairs in the t_n time frame image and the t_{n+1} time frame image;
a clipping module for clipping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain clipped images; and
a splicing module for splicing the clipped images.
12. The image stitching device in a fixed field of view of claim 11 further comprising:
the background image acquisition module is used for acquiring a background image in the frame image;
a Gaussian mixture modeling module for performing Gaussian mixture modeling on the background image and the t_n time frame image, respectively;
a moving object judging module for comparing the Gaussian mixture model of the background image with the Gaussian mixture model of the t_n time frame image and judging whether the t_n time frame image contains the moving object.
13. The image stitching device in a fixed field of view of claim 11 further comprising:
the video processing module is used for acquiring videos to be spliced and decoding the videos.
14. The image stitching device in a fixed field of view of claim 11 further comprising:
a Gaussian mixture model updating module for, when it is judged that the t_n time frame image does not contain the moving object, updating the Gaussian mixture model of the t_n time frame image to serve as the Gaussian mixture model of the background image.
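The model update of claim 14 can be sketched as an exponential running update of the background statistics when no moving target is present. The learning rate `alpha` is an assumed parameter, and the single-Gaussian form is again a simplification of the claimed mixture model:

```python
import numpy as np

def update_background(mean, std, frame, alpha=0.05):
    """When frame t_n contains no moving target, blend its statistics
    into the background model. The variance update is the common
    running approximation used in online background modeling."""
    frame = np.asarray(frame, float)
    new_mean = (1 - alpha) * mean + alpha * frame
    new_std = np.sqrt((1 - alpha) * std**2 + alpha * (frame - new_mean)**2)
    return new_mean, new_std
```

With `alpha=0.05`, an all-ones frame pulls an all-zeros background mean up to 0.05.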
15. An image stitching system in a fixed field of view, comprising:
The image stitching device in a fixed field of view of any one of claims 11-14;
an image acquisition device for acquiring and forming a video including the moving object, and for acquiring an image background.
16. An electronic device, comprising:
one or more processors;
storage means for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-10.
17. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method according to any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111058979.4A CN115797164B (en) | 2021-09-09 | 2021-09-09 | Image stitching method, device and system in fixed view field |
Publications (2)
Publication Number | Publication Date |
---|---|
CN115797164A CN115797164A (en) | 2023-03-14 |
CN115797164B true CN115797164B (en) | 2023-12-12 |
Family
ID=85473563
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111058979.4A Active CN115797164B (en) | 2021-09-09 | 2021-09-09 | Image stitching method, device and system in fixed view field |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797164B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116612168B (en) * | 2023-04-20 | 2024-06-28 | 北京百度网讯科技有限公司 | Image processing method, device, electronic equipment, image processing system and medium |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104301699A (en) * | 2013-07-16 | 2015-01-21 | 浙江大华技术股份有限公司 | Image processing method and device |
CN105096338A (en) * | 2014-12-30 | 2015-11-25 | 天津航天中为数据系统科技有限公司 | Moving object extraction method and device |
US9947108B1 (en) * | 2016-05-09 | 2018-04-17 | Scott Zhihao Chen | Method and system for automatic detection and tracking of moving objects in panoramic video |
CN108230364A (en) * | 2018-01-12 | 2018-06-29 | 东南大学 | A kind of foreground object motion state analysis method based on neural network |
CN109447082A (en) * | 2018-08-31 | 2019-03-08 | 武汉尺子科技有限公司 | A kind of scene motion Target Segmentation method, system, storage medium and equipment |
CN110097063A (en) * | 2019-04-30 | 2019-08-06 | 网易有道信息技术(北京)有限公司 | Data processing method, medium, device and the calculating equipment of electronic equipment |
CN110136199A (en) * | 2018-11-13 | 2019-08-16 | 北京初速度科技有限公司 | A kind of vehicle location based on camera, the method and apparatus for building figure |
CN110675358A (en) * | 2019-09-30 | 2020-01-10 | 上海扩博智能技术有限公司 | Image stitching method, system, equipment and storage medium for long object |
CN110876036A (en) * | 2018-08-31 | 2020-03-10 | 腾讯数码(天津)有限公司 | Video generation method and related device |
CN111612696A (en) * | 2020-05-21 | 2020-09-01 | 网易有道信息技术(北京)有限公司 | Image splicing method, device, medium and electronic equipment |
CN112819694A (en) * | 2021-01-18 | 2021-05-18 | 中国工商银行股份有限公司 | Video image splicing method and device |
CN112969037A (en) * | 2021-02-26 | 2021-06-15 | 北京卓视智通科技有限责任公司 | Video image lateral fusion splicing method, electronic equipment and storage medium |
CN112991180A (en) * | 2021-03-25 | 2021-06-18 | 北京百度网讯科技有限公司 | Image splicing method, device, equipment and storage medium |
WO2021129669A1 (en) * | 2019-12-23 | 2021-07-01 | RealMe重庆移动通信有限公司 | Image processing method and system, electronic device, and computer-readable medium |
CN113286194A (en) * | 2020-02-20 | 2021-08-20 | 北京三星通信技术研究有限公司 | Video processing method and device, electronic equipment and readable storage medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108205797B (en) * | 2016-12-16 | 2021-05-11 | 杭州海康威视数字技术股份有限公司 | Panoramic video fusion method and device |
Non-Patent Citations (4)
Title |
---|
Research on Key Technologies of Panoramic Video Stitching; Lan Xiandi; China Master's Theses Full-text Database, Information Science and Technology; I138-757 *
Moving Object Detection Algorithm for Dynamic Scenes Based on Image Registration; Ding Ying, Fan Jingtao, Yang Huamin, Jiang Huilin; Journal of Changchun University of Science and Technology (Natural Science Edition) (Z1); 4-9 *
Automatic Stitching of Video Image Sequences Based on an Improved SIFT Algorithm; Lu Bin, Song Fuhua; Science of Surveying and Mapping (01); 23-25 *
Research on Image Stitching and Synthesis Algorithms in Video Surveillance; Miao Ligang; Chinese Journal of Scientific Instrument (04); 857-861 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109035304B (en) | Target tracking method, medium, computing device and apparatus | |
US10671855B2 (en) | Video object segmentation by reference-guided mask propagation | |
US10600158B2 (en) | Method of video stabilization using background subtraction | |
CN110660102B (en) | Speaker recognition method, device and system based on artificial intelligence | |
CN110062157B (en) | Method and device for rendering image, electronic equipment and computer readable storage medium | |
CN112348828A (en) | Example segmentation method and device based on neural network and storage medium | |
US20180068451A1 (en) | Systems and methods for creating a cinemagraph | |
CN112435278B (en) | Visual SLAM method and device based on dynamic target detection | |
US11875490B2 (en) | Method and apparatus for stitching images | |
CN112435223B (en) | Target detection method, device and storage medium | |
CN111494947B (en) | Method and device for determining movement track of camera, electronic equipment and storage medium | |
CN109035257A (en) | portrait dividing method, device and equipment | |
CN115797164B (en) | Image stitching method, device and system in fixed view field | |
CN108229281B (en) | Neural network generation method, face detection device and electronic equipment | |
CN112396594A (en) | Change detection model acquisition method and device, change detection method, computer device and readable storage medium | |
US20170287187A1 (en) | Method of generating a synthetic image | |
Delibasoglu et al. | Motion detection in moving camera videos using background modeling and FlowNet | |
CN108960130B (en) | Intelligent video file processing method and device | |
CN113298707B (en) | Image frame splicing method, video inspection method, device, equipment and storage medium | |
CN112270748B (en) | Three-dimensional reconstruction method and device based on image | |
US20190287209A1 (en) | Optimal data sampling for image analysis | |
CN112329729B (en) | Small target ship detection method and device and electronic equipment | |
US11373315B2 (en) | Method and system for tracking motion of subjects in three dimensional scene | |
CN115601541A (en) | Semantic tag fusion method and device, electronic equipment and storage medium | |
CN117218364A (en) | Three-dimensional object detection method, electronic device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||