CN115797164A - Image splicing method, device and system in fixed view field - Google Patents

Image splicing method, device and system in fixed view field

Info

Publication number: CN115797164A (granted as CN115797164B)
Application number: CN202111058979.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: image, time frame, frame image, moving, moving target
Legal status: Granted; Active
Inventors: 张丽, 唐虎, 孙运达, 刘永春, 李栋, 王志明, 郑大川
Assignees: Tsinghua University; Nuctech Co Ltd (also original assignees)
Application filed by Tsinghua University and Nuctech Co Ltd; priority to CN202111058979.4A

Abstract

The present disclosure relates to the field of image processing technologies, and in particular to a method, apparatus, system, electronic device, storage medium, and program product for image splicing in a fixed field of view. The image splicing method comprises: acquiring an image sequence, wherein the image sequence comprises at least a plurality of frame images containing a moving target; determining the same position in the plurality of frame images as a splicing position; respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image, and performing context feature extraction on the image region at the foreground; determining, according to the context features, the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images; and splicing the cropped images. The method enables spliced display of a moving target in a fixed field of view.

Description

Image splicing method, device and system in fixed view field
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method, an apparatus, a system, an electronic device, a storage medium, and a program product for image stitching in a fixed field of view.
Background
Generalized image stitching splices several still images with overlapping portions (which may be captured at different times, from different viewing angles, or by different sensors) into a seamless panoramic or high-resolution image. Existing image splicing technology generally comprises two key techniques: image registration and image fusion. Image registration uses image processing methods to compute the matching relationship between the images to be spliced, obtain the overlapping area, and realize the splicing of the images. Image fusion realizes a smooth transition between the spliced images at the splicing position.
However, when the field of view of the camera or other shooting device is fixed and the object to be spliced is a moving target in the scene (e.g., splicing images of a large vehicle in motion, or of an object conveyed on a conveyor belt), existing splicing methods can fail: because the images to be spliced share the same fixed-field-of-view background, feature matching may produce erroneous image registration and thus an erroneous splicing result.
Disclosure of Invention
In view of this, embodiments of the present disclosure provide an image stitching method, apparatus, system, electronic device, storage medium, and program product.
One aspect of the present disclosure provides an image splicing method in a fixed field of view, comprising: acquiring an image sequence, wherein the image sequence comprises at least a plurality of frame images containing a moving target; determining the same position in the plurality of frame images as a splicing position; respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image, and performing context feature extraction on the image region at the foreground; determining, according to the context features, the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images; and splicing the cropped images.
In some embodiments, before respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image, the method further comprises: acquiring a background image from the frame images; performing Gaussian mixture modeling on the background image and the t_n time frame image respectively; and judging, according to the comparison result between the Gaussian mixture model of the background image and the Gaussian mixture model of the t_n time frame image, whether the t_n time frame image contains the moving target.
In some embodiments, when it is determined that the t_n time frame image does not contain the moving target, the Gaussian mixture model of the background image is updated with the Gaussian mixture model of the t_n time frame image.
In some embodiments, determining the same position in the plurality of frame images as the splicing position comprises: setting a row or a column of the frame images as the splicing position, wherein the splicing position satisfies the following condition: in the moving direction of the moving target, the splicing position is located in front of the moving target in the t_m time frame image, where the t_m time frame image is the first frame image that contains the moving target, m is a positive integer, and m ≤ n.
In some embodiments, determining, according to the context features, the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image comprises: performing feature matching between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image to obtain feature matching pairs; and calculating the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to the coordinate information of the feature matching pairs in the t_n time frame image and the t_{n+1} time frame image.
In some embodiments, performing context feature extraction on the image region at the foreground comprises: extracting a plurality of context features in the image region at the foreground; and performing feature matching between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image comprises: performing feature matching between the combination of the plurality of context features of the t_n time frame image and the combination of the plurality of context features of the t_{n+1} time frame image.
In some embodiments, calculating the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to the coordinate information of the feature matching pairs comprises: calculating the moving distance according to the mean or the median of the absolute differences of the coordinate information of the plurality of feature matching pairs.
In some embodiments, at least one of the context features is related to the working mode of the device that acquires the image sequence and/or the working environment of that device.
In some embodiments, cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position comprises: determining a cropping area, wherein the cropping area is the area that takes the splicing position as the cropping starting point and extends, in the direction opposite to the moving direction of the moving target, for a length equal to the moving distance.
In certain embodiments, the acquiring the sequence of images comprises: and acquiring a video to be spliced, and decoding the video to obtain the image sequence.
In certain embodiments, respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image comprises: determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image by morphological methods.
In some embodiments, after the splicing of the cropped images, if it is determined that the t_{n+2} time frame image does not contain the moving target, the splicing of the cropped images is ended.
Another aspect of the present disclosure provides an image splicing device in a fixed field of view, comprising: an image sequence acquisition module for acquiring an image sequence comprising at least a plurality of frame images containing a moving target; a splicing position setting module for determining the same position in the plurality of frame images as a splicing position; a foreground determination module for respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image; a context feature extraction module for performing context feature extraction on the image region at the foreground; a moving distance calculation module for determining, according to the context features, the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image; a cropping module for cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images; and a splicing module for splicing the cropped images.
In certain embodiments, the device further comprises: a background image acquisition module for acquiring a background image from the frame images; a Gaussian mixture modeling module for performing Gaussian mixture modeling on the background image and the t_n time frame image respectively; and a moving target judging module for judging, according to the comparison result between the Gaussian mixture model of the background image and the Gaussian mixture model of the t_n time frame image, whether the t_n time frame image contains the moving target.
In certain embodiments, the device further comprises: a feature matching module for performing feature matching between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image and obtaining the feature matching pairs.
In certain embodiments, the device further comprises: a video processing module for acquiring the video to be spliced and decoding the video.
In certain embodiments, the device further comprises: a Gaussian mixture model updating module for, when it is judged that the t_n time frame image does not contain the moving target, updating the Gaussian mixture model of the background image with the Gaussian mixture model of the t_n time frame image.
Another aspect of the present disclosure also provides an image splicing system in a fixed field of view, comprising: the above image splicing device in a fixed field of view, and an image acquisition device for capturing video containing a moving target and acquiring the image background.
Another aspect of the present disclosure also provides an electronic device including: one or more processors; a storage device to store one or more programs, wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of the above.
Another aspect of the disclosure also provides a computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, cause the processor to perform the method of any of the above.
Another aspect of the disclosure also provides a computer program product comprising a computer program which, when executed by a processor, implements the method of any one of the above.
The image splicing method comprises: acquiring an image sequence, wherein the image sequence comprises at least a plurality of frame images containing a moving target; determining the same position in the plurality of frame images as a splicing position; respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image, and performing context feature extraction on the image region at the foreground; determining, according to the context features, the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images; and splicing the cropped images. The method of the present disclosure determines the moving distance of the moving target between adjacent frame images by comparing context features, crops each frame image according to the moving distance and the set splicing position so that each crop yields a partial image of the moving target, and finally splices the cropped areas of all the frame images to realize a panoramic display of the moving target.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent from the following description of the embodiments of the present disclosure with reference to the accompanying drawings, in which:
FIG. 1 schematically illustrates a flow chart of a stitching method for a moving target in a fixed field of view according to an embodiment of the disclosure;
FIG. 2 schematically illustrates a flow chart of another embodiment of a method for stitching moving objects in a fixed field of view according to an embodiment of the present disclosure;
FIG. 3 schematically illustrates a flow chart of another embodiment of a method for stitching moving objects in a fixed field of view according to an embodiment of the present disclosure;
FIG. 4 schematically illustrates a flow chart of another embodiment of a stitching method for moving objects in a fixed field of view according to an embodiment of the present disclosure;
FIG. 5 schematically shows a t_n time frame image of a moving target according to an embodiment of the disclosure;
FIG. 6 schematically shows a t_{n+1} time frame image of a moving target according to an embodiment of the disclosure;
FIG. 7 schematically shows a t_{n+2} time frame image of a moving target according to an embodiment of the disclosure;
FIG. 8 is a block diagram schematically illustrating a stitching arrangement for a moving target in a fixed field of view, in accordance with an embodiment of the present disclosure;
fig. 9 schematically shows a block diagram of an electronic device adapted to implement a stitching method for a moving object in a fixed field of view according to an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that these descriptions are illustrative only and are not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
Where a convention analogous to "at least one of A, B, or C, etc." is used, such a construction is generally intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B, or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B, and C together, etc.). The terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, features defined as "first" and "second" may explicitly or implicitly include one or more of the described features.
The background described above may involve technical problems other than those specifically addressed by the present disclosure.
In an embodiment of the present disclosure, an image splicing method in a fixed field of view comprises: acquiring an image sequence, wherein the image sequence comprises at least a plurality of frame images containing a moving target; determining the same position in the plurality of frame images as a splicing position; respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image, and performing context feature extraction on the image region at the foreground; determining, according to the context features, the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image; cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images; and splicing the cropped images.
The image stitching method in the embodiment of the disclosure is suitable for stitching the panoramic image of the moving object in the fixed view field under the environment of the fixed view field, and is particularly suitable for a scene in which the volume of the moving object is large and the image capturing range of the fixed view field cannot directly image the whole moving object.
It should be noted that the moving target in the embodiment of the present disclosure may be a moving object that needs to be photographed in a fixed field of view.
Fig. 1 schematically shows an application scenario of an image stitching method in a fixed field of view according to an embodiment of the present disclosure.
As shown in fig. 1, an application scenario 100 according to this embodiment may include storage devices 101, 102, 103, a network 104, and a server 105. The network 104 is the medium used to provide communication links between the storage devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may interact with the server 105 via the network 104 using the storage devices 101, 102, 103 to upload a sequence of images onto the server 105 for processing by the server 105.
The storage devices 101, 102, 103 may be various electronic devices having display screens including, but not limited to, smart phones, tablet computers, laptop portable computers, desktop computers, and the like.
The server 105 may be a server that performs context feature recognition, feature matching, image cropping, and image stitching fusion for the image sequence.
It should be noted that the stitching method for moving objects in a fixed field of view provided by the embodiments of the present disclosure may be generally executed by the server 105. Accordingly, the stitching device for moving objects in a fixed field of view provided by the embodiments of the present disclosure may be generally disposed in the server 105. The stitching method for the moving target in the fixed field of view provided by the embodiment of the present disclosure may also be executed by a server or a server cluster which is different from the server 105 and can communicate with the storage devices 101, 102, 103 and/or the server 105. Accordingly, the stitching system for moving objects in a fixed field of view provided by the embodiments of the present disclosure may also be disposed in a server or a cluster of servers that is different from the server 105 and is capable of communicating with the storage devices 101, 102, 103 and/or the server 105.
It should be understood that the number of storage devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for an implementation.
The following describes a splicing method for a moving target in a fixed field of view according to an embodiment of the present disclosure in detail with reference to fig. 2 to 7 based on the scenario described in fig. 1.
Fig. 2 schematically shows a flow chart of a stitching method of moving objects in a fixed field of view according to an embodiment of the present disclosure.
As shown in fig. 2, the stitching method for a moving object in a fixed field of view of this embodiment includes operations S210 to S260.
In operation S210, an image sequence is acquired, the image sequence comprising at least a plurality of frame images containing a moving target at motion time t.
The image sequence in the embodiments of the present disclosure may include a series of images of a moving target captured sequentially at different times, and may also include a plurality of frame images obtained by processing video, animation, or the like containing the moving target. In the application scenario of the embodiments of the present disclosure, a moving target moves through a fixed field of view; accordingly, the image sequence comprises at least the video generated during the whole moving process, from when the moving target begins to enter the fixed field of view to when it completely leaves it, and the plurality of frame images, indexed by motion time t, obtained after the video is parsed.
In combination with the application scenario of the embodiment of the present disclosure, for example, a security inspection scenario for a large container at a port, a camera in a fixed field of view may keep a working state during working time, but a moving object may not move in the fixed field of view at all times, so that the image sequence may further include a frame image that does not include the moving object.
The manner of acquiring the image sequence in the embodiments of the present disclosure may include: and acquiring a video containing a moving target to be spliced, and decoding the video to obtain a plurality of frame images.
The video may include offline video files captured by cameras in a fixed field of view, and may include online video files uploaded to the internet by users, the format of the video files may include, but is not limited to, general video formats such as MPEG, AVI, MOV, WMV, and the like, and the format of the decoded frame images may include, but is not limited to, general image formats such as PNG, BMP, or JPG.
The video decoding processing in the embodiments of the present disclosure may be implemented with the frame image processing functions of third-party software such as Photoshop or Adobe Premiere.
It can be understood that, when the video is decoded, the number of the frame images can be controlled to ensure that there is a repeated portion between adjacent frames having moving objects, so that the method provided by the embodiment of the present disclosure has a basis for image stitching.
It should be noted that the frame images obtained by the video decoding process are named by the motion time t of the moving target, to facilitate understanding of the technical idea of the embodiments of the present disclosure: the frame images are denoted t_0 time frame image through t_n time frame image. In the following, a frame image with a special meaning or role is denoted the t_m time frame image to better distinguish it in the description.
In operation S220, the same position in a plurality of the frame images is determined as a stitching position.
The frame image acquired in the embodiment of the present disclosure is an image in a fixed view field, and accordingly, the sizes of the background images correspondingly generated in the fixed view field are the same. Referring to fig. 5 to 7, the background image is a pixel region including a1 to a18 columns and b1 to b14 rows. In this embodiment, the a9 column is used as the above splicing position, and the splicing position is determined to facilitate the clipping and splicing of subsequent images.
It will be appreciated that the size of the pixel regions of the corresponding background image varies from field of view to field of view.
It will be appreciated that the stitching location may also be a row of pixels in the pixel area, or a polyline type stitching line comprising a plurality of row or column pixels.
In order to further optimize the integrity of the spliced moving target, the following conditions need to be satisfied when the splicing position is set:
in the moving direction of the moving target, the splicing position is located in front of the moving target in the t_m time frame image, where the t_m time frame image is the first frame image that contains the moving target, m is a positive integer, and m ≤ n.
Referring to FIG. 5, the horizontal arrow indicates the moving direction of the moving target. Assume that the current time t_n is the above t_m, i.e., the t_m time frame image is the frame in which the moving target first appears, and that the moving target is composed of parts c1, c2, c3, and so on; for clarity of illustration and to avoid excessive overlap, only parts c1 and c2 of the moving target are shown in the drawing. At time t_m, the moving target enters the fixed field of view for the first time, and the splicing position must be set to the left of c1 at this moment. The purpose of this arrangement is that, if the splicing position intersected the moving target in the t_m time frame image, the part of the moving target located to the left of the splicing position could not be cropped in subsequent image cropping, and a panoramic image of the moving target could not be displayed when splicing.
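To illustrate how the splicing position and the per-frame moving distance determine the cropped strips, here is a minimal numpy sketch (assuming left-to-right motion, a single column-indexed splicing position, and an already-computed moving distance per frame; `crop_strip` and `stitch` are illustrative names, not the disclosed implementation):

```python
import numpy as np

def crop_strip(frame, splice_col, move_dist):
    """Crop the region that starts at the splicing position and extends
    move_dist pixels opposite to the moving direction (here: leftward)."""
    return frame[:, splice_col - move_dist:splice_col]

def stitch(strips):
    """Concatenate cropped strips into a panorama. The earliest strip holds
    the front of a rightward-moving target, so later strips go on the left."""
    return np.hstack(strips[::-1])
```

With a target advancing a constant number of columns per frame, the strip cropped from each successive frame is exactly adjacent, on the target, to the previous one, so their concatenation reconstructs the whole target.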
In operation S230, the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image is respectively determined, and context feature extraction is performed on the image region at the foreground.
In the embodiments of the present disclosure, the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image is determined by morphological methods. A morphological opening operation is performed on the binary image formed by the background and the foreground of the moving target, i.e., erosion followed by dilation, to denoise the frame image and better distinguish the foreground from the background.
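The opening operation (erosion, then dilation) used for foreground denoising can be sketched in plain numpy; in practice an optimized routine such as OpenCV's `cv2.morphologyEx` would typically be used. This sketch assumes a boolean mask and a square 3x3 structuring element:

```python
import numpy as np

def erode(mask, k=3):
    """Binary erosion with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def dilate(mask, k=3):
    """Binary dilation with a k x k square structuring element."""
    pad = k // 2
    padded = np.pad(mask, pad, mode="constant", constant_values=False)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def opening(mask, k=3):
    """Morphological opening: erosion followed by dilation.
    Removes speckle noise from a foreground mask while keeping blobs."""
    return dilate(erode(mask, k), k)
```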
The context feature in the embodiment of the present disclosure may include a SIFT feature, a HOG feature, and the like. The SIFT feature extraction method in the embodiment of the present disclosure may include the following operations:
constructing a DoG (difference of Gaussians) scale space;
extracting interest points: after all feature points are found, they are identified using corner points, and curve fitting is performed on the discrete points using RANSAC (random sample consensus) to obtain accurate position and scale information of the key points;
assigning directions: the feature points are assigned directions according to the detected local image structure of the key points; a gradient direction histogram method may be adopted, and when the histogram is computed, each sampling point added to the histogram is weighted with a circular Gaussian function.
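The direction-assignment step above (a gradient direction histogram with circular Gaussian weighting) can be sketched as follows. This is a simplified, patch-level illustration that omits the histogram smoothing and peak interpolation of a full SIFT implementation; `orientation_histogram` is an illustrative name:

```python
import numpy as np

def orientation_histogram(patch, bins=36):
    """Gradient-orientation histogram for a square image patch.
    Each sample is weighted by its gradient magnitude and by a circular
    Gaussian window centred on the patch."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)  # angles in [0, 2*pi)
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2, (w - 1) / 2
    sigma = 0.5 * min(h, w)
    gauss = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
    idx = (ang / (2 * np.pi) * bins).astype(int) % bins
    return np.bincount(idx.ravel(), weights=(mag * gauss).ravel(),
                       minlength=bins)
```

The bin with the largest weight gives the dominant orientation of the patch.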
It can be understood that multiple kinds of context features can be extracted, so that subsequent operations can compare various combinations of context features to obtain more accurate cropping regions and facilitate splicing. Illustratively, the SIFT feature and the HOG feature described above are extracted at the same time to further weaken the influence of external factors such as illumination on image splicing.
It can be understood that, when extracting the context features, it must be ensured that at least one of the context features is related to the working mode of the device that acquires the image sequence and/or the working environment of the device, so as to reduce the influence of the working mode and the working environment (for example, daytime versus nighttime detection) on image splicing.
It should be noted that, before determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image by morphological methods, the following operations are also required.
Fig. 3 schematically shows a flowchart of a stitching method for a moving target in a fixed field of view according to still another embodiment in the embodiment of the present disclosure, including operations S201 to S203.
In operation S201, a background image in a frame image is acquired.
The background image is an image that does not include a moving object. For example, when the fixed field of view does not contain a moving object, the photographing device may photograph the fixed field of view to obtain the background image.
In operation S202, Gaussian mixture modeling is performed on the background image and the t_n time frame image respectively.
In operation S203, whether the t_n time frame image contains a moving target is judged according to the comparison result between the Gaussian mixture model of the background image and the Gaussian mixture model of the t_n time frame image. The Gaussian mixture model uses K Gaussian components to represent the characteristics of each pixel in the frame image, where K generally takes a value of 3 to 5. When making the judgment, each pixel in the t_n time frame image is matched against the Gaussian mixture model of the background image: if the match succeeds, the pixel belongs to the background; if it fails, the pixel belongs to the foreground. After it is judged that the t_n time frame image contains the moving target, the morphology-based method described above is further applied to denoise and optimize the foreground in the frame image.
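The per-pixel matching test can be sketched as follows. This is a simplified numpy sketch for a grayscale frame: a pixel is treated as background if it lies within 2.5 standard deviations of any of the K components; the component weights and online parameter updates of a full Gaussian mixture background model are omitted, and all names are illustrative:

```python
import numpy as np

def foreground_mask(frame, means, variances, thresh=2.5):
    """Per-pixel foreground test against a K-component background model.
    means, variances: (K, H, W) arrays; frame: (H, W) grayscale image.
    A pixel is background if it matches any component, i.e. lies within
    thresh standard deviations of that component's mean."""
    diff = np.abs(frame[None, :, :] - means)          # (K, H, W)
    matched = diff <= thresh * np.sqrt(variances)     # per-component match
    return ~matched.any(axis=0)                       # True = foreground
```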
The acquisition of the background image is susceptible to the external environment, such as illumination, shooting angle, and distance, all of which change the Gaussian mixture model of the background image. It can be understood that Gaussian mixture modeling is adopted to better distinguish the moving target from the background. Therefore, as shown in fig. 4, in operation S2031, when it is determined that the t_n time frame image does not contain a moving target, the Gaussian mixture model of the t_n time frame image is updated to be the Gaussian mixture model of the background image, so as to ensure that the background image stays as close as possible to the background in the t_n time frame image.
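Operations S201 to S203 and the morphology-based denoising can be illustrated with a deliberately simplified sketch: instead of the K-component (K = 3-5) mixture described above, each pixel keeps a single Gaussian (mean and variance) learned from the background image, pixels deviating by more than 2.5 standard deviations are classified as foreground, and a 3x3 morphological opening suppresses isolated noise. All array names, thresholds, and the single-Gaussian simplification are assumptions for illustration, not the patented implementation.

```python
import numpy as np

def fit_background_model(background, var_init=25.0):
    """Per-pixel Gaussian background model (simplified from the K = 3-5 mixture)."""
    mean = background.astype(np.float64)
    var = np.full_like(mean, var_init)
    return mean, var

def foreground_mask(frame, mean, var, k=2.5):
    """Pixels deviating more than k standard deviations from the background are foreground."""
    return np.abs(frame.astype(np.float64) - mean) > k * np.sqrt(var)

def binary_open(mask):
    """3x3 morphological opening (erosion then dilation) to suppress isolated noise."""
    def shift_reduce(m, op):
        pad = np.pad(m, 1, constant_values=False)
        views = [pad[1 + dy:pad.shape[0] - 1 + dy, 1 + dx:pad.shape[1] - 1 + dx]
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
        return op.reduce(views)
    return shift_reduce(shift_reduce(mask, np.logical_and), np.logical_or)

# toy example: a flat background, a frame with a bright moving blob, one noise pixel
background = np.zeros((20, 30))
mean, var = fit_background_model(background)
frame = background.copy()
frame[5:15, 10:20] = 100.0          # the "moving target"
frame[2, 2] = 100.0                 # isolated noise pixel, removed by the opening
mask = binary_open(foreground_mask(frame, mean, var))
contains_moving_target = bool(mask.any())   # the decision of operation S203
```

In practice a library routine such as OpenCV's `createBackgroundSubtractorMOG2` would implement the full adaptive mixture, including the background model update of operation S2031.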
Referring back to fig. 2, in operation S240, the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image is determined according to the context features.
In the embodiment of the present disclosure, feature matching is performed between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image to obtain feature matching pairs; the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image is then calculated according to the coordinate information of the feature matching pairs in the two frame images. Taking the SIFT feature as an example, SIFT features are extracted on the moving target in the t_n time frame image and the t_{n+1} time frame image respectively, and the SIFT features corresponding to the same part of the moving target in the two frame images are identified through their feature descriptions, so that SIFT features describing the same part of the moving target in different frame images are constructed into feature matching pairs, and the coordinate positions of the pixel regions of the feature matching pairs in the respective frame images are obtained. For example, if one member of a feature matching pair lies at column a13 of the pixel region in the t_n time frame image and the other member lies at column a9 of the pixel region in the t_{n+1} time frame image, the moving distance of the moving target between the two frame images can be calculated to be the length of four columns of pixel units.
It can be understood that, when a single type of context feature is extracted, feature extraction can be performed at multiple positions on the moving target to obtain multiple feature matching pairs, and the moving distance can be calculated from the mean absolute difference or the median of the absolute differences of the coordinate information of the multiple feature matching pairs, so as to improve the accuracy of the calculated moving distance.
Likewise, when multiple types of context features are extracted, different feature extractions can be performed at multiple positions on the moving target to obtain multiple different feature matching pairs, and the moving distance can again be calculated from the mean absolute difference or the median of the absolute differences of the coordinate information of these pairs.
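The distance computation of operation S240 reduces to simple arithmetic once feature matching pairs are available. The sketch below uses hypothetical matched coordinates (in a real pipeline they would come from a descriptor such as SIFT plus a matcher, e.g. OpenCV's `SIFT_create` and `BFMatcher`, not shown here) and takes the median of the absolute column differences, as suggested above for robustness:

```python
import numpy as np

def moving_distance(coords_tn, coords_tn1, use_median=True):
    """Moving distance (in pixel columns) from matched feature coordinates.

    coords_tn / coords_tn1: (N, 2) arrays of (row, col) positions of the
    same N matched features in the t_n and t_{n+1} frame images.
    """
    diffs = np.abs(coords_tn[:, 1] - coords_tn1[:, 1])
    return float(np.median(diffs) if use_median else diffs.mean())

# three hypothetical matching pairs on the moving target; one is a slight outlier
tn  = np.array([[10, 13], [12, 40], [14, 25]], dtype=float)
tn1 = np.array([[10,  9], [12, 36], [14, 20]], dtype=float)
distance = moving_distance(tn, tn1)   # median of |13-9|, |40-36|, |25-20| -> 4.0
```

The median keeps a single mismatched pair from skewing the result, which is exactly the motivation given above for using the median of absolute differences.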
In operation S250, the t_n time frame image and the t_{n+1} time frame image are respectively cropped according to the moving distance and the splicing position to obtain cropped images.
For example, in operation S250, a cropping region may be determined, where the cropping region takes the splicing position as the cropping start point and extends from that start point, in the direction opposite to the moving direction of the moving target, for a length equal to the moving distance. Referring to figs. 5 to 7, the moving direction of the moving target is from right to left in the figures. The splicing position set in operation S220 is indicated at column a9 in the figures. At time t_n in fig. 5, the front end of the c1 portion of the moving target has moved to column a13 of the pixel region; at time t_{n+1} in fig. 6, the front end of the c1 portion has moved to column a9. The moving distance can be calculated using the context-feature-based method described above, and the cropping region is delineated accordingly: the region indicated by the horizontal double-headed arrow in fig. 6 is the cropping region. The cropping region applies at least to the t_n time frame image and the t_{n+1} time frame image. Cropping the t_n time frame image yields a blank region without the moving target, while cropping the t_{n+1} time frame image yields the shaded portion of the moving target shown in fig. 6. In the same way, the t_{n+2} time frame image can be cropped to obtain the shaded portion of the moving target shown in fig. 7.
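A minimal sketch of the cropping rule in operation S250, assuming (as in figs. 5 to 7) a target moving right to left, so the cropping region extends rightward from the splicing column for a length equal to the moving distance; the column indices and frame contents are illustrative only:

```python
import numpy as np

def crop_strip(frame, stitch_col, distance):
    """Cut the strip starting at the splicing column and extending opposite
    to the (right-to-left) motion, i.e. to the right, for `distance` columns."""
    return frame[:, stitch_col:stitch_col + distance]

stitch_col, distance = 9, 4            # splice at column a9, target moved 4 columns
frame_tn1 = np.zeros((5, 20), dtype=int)
frame_tn1[:, 9:13] = 1                 # part of the target that crossed the splice line
strip = crop_strip(frame_tn1, stitch_col, distance)
```

Each frame thus contributes exactly the slice of the target that passed the splice line since the previous frame, which is what makes the later concatenation seamless.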
In operation S260, the cropped images are stitched.
Taking figs. 5 to 7 as an example, the entire c1 and c2 portions of the moving target can be obtained by splicing the cropping regions, i.e., splicing the blank region cropped from fig. 5 with the shaded portions of the moving target in figs. 6 and 7. When splicing the cropping regions, column a9 in fig. 5 is taken as the starting point, and column a13 in fig. 5 is butted against column a9 in fig. 6 to splice the two cropping regions; subsequent splicing proceeds by analogy and is not described again.
It can be understood that the splicing of the cropping regions may be performed according to the above rules after all the cropping regions have been obtained, or may be performed once each time a cropping region is obtained.
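Splicing in operation S260 is then a concatenation of the cropped strips along the column axis; both the batch variant and the incremental variant mentioned above can be sketched as follows (strip contents are illustrative):

```python
import numpy as np

# strips cropped from successive frames (widths may differ per frame if the
# moving speed varies); contents here are illustrative placeholders
strips = [np.full((5, 4), i, dtype=int) for i in range(3)]

# batch splicing: concatenate all cropping regions after they are all obtained
panorama = np.hstack(strips)

# incremental splicing: grow the panorama each time a new strip is cropped
panorama_inc = strips[0]
for s in strips[1:]:
    panorama_inc = np.hstack([panorama_inc, s])
```

Both variants produce the same result; the incremental form simply spreads the work across frames, which suits live display of the growing panorama.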
After operation S260, when judging whether the t_{n+2} time frame image contains a moving target, if the t_{n+2} time frame image does not contain the moving target, the splicing operation of the images is finished and the whole image splicing method ends; the method is executed again in a loop when a new moving target is detected.
Based on the splicing method of the moving target in the fixed view field, the embodiment of the disclosure also provides a splicing device of the moving target in the fixed view field. The apparatus will be described in detail below with reference to fig. 8.
Fig. 8 schematically shows a block diagram of a splicing apparatus for a moving target in a fixed field of view according to an embodiment of the present disclosure.
As shown in fig. 8, the stitching apparatus 300 includes an image sequence acquisition module 301, a stitching position setting module 302, a foreground determination module 303, a context feature extraction module 304, a movement distance calculation module 305, a cropping module 306, and a stitching module 307.
An image sequence acquisition module 301, configured to acquire an image sequence, where the image sequence at least includes a plurality of frame images containing a moving target.
A stitching position setting module 302, configured to determine that the same position in the frame images is a stitching position.
A foreground determining module 303, configured to determine the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image respectively.
A contextual feature extraction module 304, configured to perform contextual feature extraction on the image region at the foreground.
A moving distance calculation module 305, configured to determine the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to the context features.
A cropping module 306, configured to crop the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images.
And a splicing module 307, configured to splice the cropped images.
For example, the splicing apparatus for a moving target in a fixed field of view according to an embodiment of the present disclosure further includes a background image acquisition module, configured to acquire a background image in a frame image; a Gaussian mixture modeling module, configured to perform Gaussian mixture modeling on the background image and the t_n time frame image respectively; and a moving target judging module, configured to judge whether the t_n time frame image contains a moving target according to a comparison result between the Gaussian mixture model of the background image and the Gaussian mixture model of the t_n time frame image.
For example, the splicing apparatus for a moving target in a fixed field of view according to the embodiment of the present disclosure further includes a feature matching module, configured to perform feature matching between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image and obtain the feature matching pairs.
For example, the splicing device for the moving target in the fixed field of view according to the embodiment of the present disclosure further includes: and the video processing module is used for acquiring the video to be spliced and decoding the video.
For example, the splicing apparatus for a moving target in a fixed field of view according to the embodiment of the present disclosure further includes a Gaussian mixture model updating module, configured to update the Gaussian mixture model of the t_n time frame image to be the Gaussian mixture model of the background image when it is determined that the t_n time frame image does not contain the moving target.
In an embodiment of the present disclosure, an image sequence is acquired, the image sequence including at least a plurality of frame images containing a moving target; the same position in the plurality of frame images is determined as a splicing position; the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image is determined respectively, and context features are extracted from the image region at the foreground; the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image is determined according to the context features; the t_n time frame image and the t_{n+1} time frame image are cropped respectively according to the moving distance and the splicing position to obtain cropped images; and the cropped images are spliced. The moving distance of the moving target between adjacent frame images is determined by comparing context features; according to the moving distance and the set splicing position, each frame image is cropped to obtain a part of the moving target, and finally the regions cropped from all the frame images are spliced together, so that a panoramic display of the moving target can be realized.
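Putting the steps together, the following toy simulation (not the patented implementation: the field width, splicing column, speed, and the frame range in which the target crosses the splice line are all hand-picked assumptions; in practice the frame range and per-frame distance would come from the foreground detection and feature matching described above) shows how fixed-column strips reassemble a leftward-moving target into a panorama:

```python
import numpy as np

WIDTH, STITCH_COL, SPEED = 30, 9, 3   # field width, splicing column, columns per frame
pattern = np.arange(1, 13)            # the moving target, 12 columns wide

def frame_at(k):
    """Synthesize the fixed-field-of-view frame at time t_k (1-D, rows omitted)."""
    frame = np.zeros(WIDTH, dtype=int)
    front = 24 - SPEED * k            # leftmost (front) column of the target
    for i, value in enumerate(pattern):
        col = front + i
        if 0 <= col < WIDTH:
            frame[col] = value
    return frame

# frames k = 5..8 are those in which the target crosses the splicing column;
# in the real method this range follows from the foreground judgment, and SPEED
# would be recovered per frame pair by feature matching
strips = [frame_at(k)[STITCH_COL:STITCH_COL + SPEED] for k in range(5, 9)]
panorama = np.concatenate(strips)
print(panorama)                       # the full target pattern, columns 1..12
```

Because each strip captures exactly the part of the target that passed the splice line in one frame interval, their concatenation reproduces the whole target even though no single frame needs to contain it entirely.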
According to the embodiment of the present disclosure, any multiple modules of the image sequence acquisition module 301, the stitching position setting module 302, the foreground determination module 303, the context feature extraction module 304, the moving distance calculation module 305, the cropping module 306, and the stitching module 307 may be combined into one module to be implemented, or any one of the modules may be split into multiple modules. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to the embodiment of the present disclosure, at least one of the image sequence acquisition module 301, the stitching location setting module 302, the foreground determination module 303, the context feature extraction module 304, the moving distance calculation module 305, the cropping module 306, and the stitching module 307 may be implemented at least partially as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented by hardware or firmware in any other reasonable manner of integrating or packaging a circuit, or implemented by any one of three implementation manners of software, hardware, and firmware, or in any suitable combination of any of the three implementation manners. Alternatively, at least one of the image sequence acquisition module 301, the stitching position setting module 302, the foreground determination module 303, the contextual feature extraction module 304, the movement distance calculation module 305, the cropping module 306, and the stitching module 307 may be implemented at least in part as a computer program module that, when executed, may perform corresponding functions.
The embodiment of the present disclosure further provides a splicing system for a moving target in a fixed field of view, comprising the image splicing apparatus in a fixed field of view of the above embodiment and an image acquisition device, the image acquisition device being configured to acquire and form a video containing a moving target and to acquire an image background.
Fig. 9 schematically shows a block diagram of an electronic device adapted to implement a stitching method of moving objects in a fixed field of view according to an embodiment of the present disclosure.
As shown in fig. 9, an electronic device 400 according to an embodiment of the present disclosure includes a processor 401 that can perform various appropriate actions and processes according to a program stored in a Read Only Memory (ROM) 402 or a program loaded from a storage section 408 into a Random Access Memory (RAM) 403. Processor 401 may include, for example, a general purpose microprocessor (e.g., a CPU), an instruction set processor and/or associated chipset, and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), among others. The processor 401 may also include on-board memory for caching purposes. Processor 401 may include a single processing unit or multiple processing units for performing the different actions of the method flows in accordance with embodiments of the present disclosure.
In the RAM 403, various programs and data necessary for the operation of the electronic apparatus 400 are stored. The processor 401, ROM 402 and RAM 403 are connected to each other by a bus 404. The processor 401 performs various operations of the method flows according to the embodiments of the present disclosure by executing programs in the ROM 402 and/or the RAM 403. Note that the programs may also be stored in one or more memories other than the ROM 402 and RAM 403. The processor 401 may also perform various operations of method flows according to embodiments of the present disclosure by executing programs stored in the one or more memories.
According to an embodiment of the present disclosure, the electronic device 400 may also include an input/output (I/O) interface 405, which is also connected to the bus 404. The electronic device 400 may also include one or more of the following components connected to the I/O interface 405: an input section 406 including a keyboard, a mouse, and the like; an output section 407 including a display device such as a cathode ray tube (CRT) or a liquid crystal display (LCD), and a speaker; a storage section 408 including a hard disk and the like; and a communication section 409 including a network interface card such as a LAN card or a modem. The communication section 409 performs communication processing via a network such as the internet. A drive 410 is also connected to the I/O interface 405 as needed. A removable medium 411, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 410 as necessary, so that a computer program read therefrom is installed into the storage section 408 as needed.
The present disclosure also provides a computer-readable storage medium, which may be contained in the apparatus/device/system described in the above embodiments; or may exist alone without being assembled into the device/apparatus/system. The computer-readable storage medium carries one or more programs which, when executed, implement the method according to an embodiment of the disclosure.
According to embodiments of the present disclosure, the computer-readable storage medium may be a non-volatile computer-readable storage medium, which may include, for example but is not limited to: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. For example, according to embodiments of the present disclosure, a computer-readable storage medium may include one or more memories other than the ROM 402 and/or RAM 403 and/or ROM 402 and RAM 403 described above.
Embodiments of the present disclosure also include a computer program product comprising a computer program containing program code for performing the method illustrated in the flowchart. When the computer program product runs in a computer system, the program code causes the computer system to implement the image stitching method provided by the embodiments of the present disclosure.
The computer program performs the above-described functions defined in the system/apparatus of the embodiments of the present disclosure when executed by the processor 401. The systems, apparatuses, modules, units, etc. described above may be implemented by computer program modules according to embodiments of the present disclosure.
In one embodiment, the computer program may be hosted on a tangible storage medium such as an optical storage device or a magnetic storage device. In another embodiment, the computer program may also be transmitted and distributed in the form of a signal over a network medium, downloaded and installed through the communication section 409, and/or installed from the removable medium 411. The computer program containing the program code may be transmitted using any suitable network medium, including but not limited to wireless and wired media, or any suitable combination of the foregoing.
In accordance with embodiments of the present disclosure, program code for the computer programs provided by the embodiments of the present disclosure may be written in any combination of one or more programming languages; in particular, these computer programs may be implemented using high-level procedural and/or object-oriented programming languages, and/or assembly/machine languages. The programming languages include, but are not limited to, Java, C++, Python, the "C" language, and the like. The program code may execute entirely on the user computing device, partly on the user device, partly on a remote computing device, or entirely on the remote computing device or server. In the latter case, the remote computing device may be connected to the user computing device through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computing device (for example, through the internet using an internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
It will be understood by those skilled in the art that the features recited in the various embodiments and/or claims of the present disclosure may be combined in various ways, even if such combinations are not explicitly recited in the present disclosure. In particular, the features recited in the various embodiments and/or claims of the present disclosure may be combined without departing from the spirit or teaching of the present disclosure. All such combinations are within the scope of the present disclosure.
The embodiments of the present disclosure have been described above. However, these examples are for illustrative purposes only and are not intended to limit the scope of the present disclosure. Although the embodiments are described separately above, this does not mean that the measures in the embodiments cannot be used advantageously in combination. The scope of the disclosure is defined by the appended claims and equivalents thereof. Various alternatives and modifications can be devised by those skilled in the art without departing from the scope of the present disclosure, and such alternatives and modifications are intended to be within the scope of the present disclosure.

Claims (21)

1. An image stitching method in a fixed field of view, comprising:
acquiring an image sequence, wherein the image sequence at least comprises a plurality of frame images containing a moving target;
determining the same position in the frame images as a splicing position;
determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image respectively, and performing context feature extraction on an image region at the foreground, wherein n is a positive integer;
determining the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to the context features;
cropping the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images; and
splicing the cropped images.
2. The image stitching method in a fixed field of view according to claim 1, characterized in that before respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image, the method further comprises:
acquiring a background image in a frame image;
performing Gaussian mixture modeling on the background image and the t_n time frame image respectively; and
judging whether the t_n time frame image contains a moving target according to a comparison result between the Gaussian mixture model of the background image and the Gaussian mixture model of the t_n time frame image.
3. The image stitching method in a fixed field of view according to claim 2, wherein when it is determined that the t_n time frame image does not contain the moving target, the Gaussian mixture model of the t_n time frame image is updated to be the Gaussian mixture model of the background image.
4. The method of claim 2, wherein said determining the same position in the plurality of frame images as a stitching position comprises:
setting a row or a column of the frame image as the splicing position, wherein the splicing position satisfies:
in the moving direction of the moving target, the splicing position is located in front of the moving target in the t_m time frame image, wherein the t_m time frame image is the frame image that contains the moving target for the first time, m is a positive integer, and m is less than or equal to n.
5. The image stitching method in a fixed field of view according to claim 1, wherein the determining the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to the context features comprises:
performing feature matching between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image and obtaining feature matching pairs; and
calculating the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to coordinate information of the feature matching pairs in the t_n time frame image and the t_{n+1} time frame image.
6. The image stitching method in a fixed field of view according to claim 5, wherein the performing context feature extraction on the image region at the foreground comprises:
extracting a plurality of context features in the image area at the foreground;
the performing feature matching between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image comprises:
performing feature matching between a combination of the plurality of context features of the t_n time frame image and a combination of the plurality of context features of the t_{n+1} time frame image.
7. The image stitching method in a fixed field of view according to claim 6, wherein the calculating the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to the coordinate information of the feature matching pairs in the t_n time frame image and the t_{n+1} time frame image comprises:
and calculating the moving distance according to the average absolute difference or the median of the absolute differences of the coordinate information of the plurality of feature matching pairs.
8. The method for image stitching in a fixed field of view of claim 7, wherein at least one of the contextual features is related to an operating mode of a device acquiring the sequence of images and/or an operating environment of the device.
9. The image stitching method in a fixed field of view according to claim 1, wherein the respectively cropping the t_n time frame image and the t_{n+1} time frame image according to the moving distance and the splicing position comprises:
determining a cropping region, wherein the cropping region takes the splicing position as a cropping start point and extends from the cropping start point, in a direction opposite to the moving direction of the moving target, for a length equal to the moving distance.
10. The method of image stitching in a fixed field of view according to any one of claims 1-9, wherein the acquiring the sequence of images comprises:
and acquiring a video to be spliced, and decoding the video to obtain the image sequence.
11. The image stitching method in a fixed field of view according to any one of claims 1 to 9, wherein the respectively determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image comprises:
determining the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image by a morphological method.
12. The image stitching method in a fixed field of view according to claim 1, wherein after the splicing the cropped images, if it is determined that the t_{n+2} time frame image does not contain the moving target, the splicing of the cropped images is finished.
13. An image stitching device in a fixed field of view, comprising:
the image sequence acquisition module is used for acquiring an image sequence, and the image sequence at least comprises a plurality of frame images containing a moving target;
the splicing position setting module is used for determining the same position in the frame images as a splicing position;
a foreground determining module, configured to determine the foreground of the moving target in the t_n time frame image and the t_{n+1} time frame image respectively;
the contextual feature extraction module is used for extracting contextual features of the image area at the foreground;
a moving distance calculating module, configured to determine the moving distance of the moving target between the t_n time frame image and the t_{n+1} time frame image according to the context features;
a cropping module, configured to crop the t_n time frame image and the t_{n+1} time frame image respectively according to the moving distance and the splicing position to obtain cropped images; and
and the splicing module is used for splicing the cut images.
14. The image stitching device in a fixed field of view of claim 13, further comprising:
the background image acquisition module is used for acquiring a background image in the frame image;
a Gaussian mixture modeling module, configured to perform Gaussian mixture modeling on the background image and the t_n time frame image respectively; and
a moving target judging module, configured to judge whether the t_n time frame image contains a moving target according to a comparison result between the Gaussian mixture model of the background image and the Gaussian mixture model of the t_n time frame image.
15. The image stitching device in a fixed field of view of claim 13, further comprising:
a feature matching module, configured to perform feature matching between the context features of the t_n time frame image and the context features of the t_{n+1} time frame image and obtain the feature matching pairs.
16. The image stitching device in a fixed field of view of claim 13, further comprising:
and the video processing module is used for acquiring the video to be spliced and decoding the video.
17. The image stitching device in the fixed field of view of claim 13, further comprising:
a Gaussian mixture model updating module, configured to update the Gaussian mixture model of the t_n time frame image to be the Gaussian mixture model of the background image when it is determined that the t_n time frame image does not contain the moving target.
18. An image stitching system in a fixed field of view, comprising:
an image stitching device in a fixed field of view as claimed in any one of claims 13 to 17;
an image acquisition device, configured to acquire and form a video containing a moving target and to acquire an image background.
19. An electronic device, comprising:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to perform the method of any of claims 1-12.
20. A computer readable storage medium having stored thereon executable instructions which, when executed by a processor, cause the processor to perform the method of any one of claims 1 to 12.
21. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 12.
CN202111058979.4A 2021-09-09 2021-09-09 Image stitching method, device and system in fixed view field Active CN115797164B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111058979.4A CN115797164B (en) 2021-09-09 2021-09-09 Image stitching method, device and system in fixed view field

Publications (2)

Publication Number Publication Date
CN115797164A true CN115797164A (en) 2023-03-14
CN115797164B CN115797164B (en) 2023-12-12

Family

ID=85473563

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111058979.4A Active CN115797164B (en) 2021-09-09 2021-09-09 Image stitching method, device and system in fixed view field

Country Status (1)

Country Link
CN (1) CN115797164B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116612168A (en) * 2023-04-20 2023-08-18 北京百度网讯科技有限公司 Image processing method, device, electronic equipment, image processing system and medium

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104301699A (en) * 2013-07-16 2015-01-21 浙江大华技术股份有限公司 Image processing method and device
CN105096338A (en) * 2014-12-30 2015-11-25 天津航天中为数据系统科技有限公司 Moving object extracting method and device
US9947108B1 (en) * 2016-05-09 2018-04-17 Scott Zhihao Chen Method and system for automatic detection and tracking of moving objects in panoramic video
CN108230364A (en) * 2018-01-12 2018-06-29 东南大学 A kind of foreground object motion state analysis method based on neural network
CN109447082A (en) * 2018-08-31 2019-03-08 武汉尺子科技有限公司 A kind of scene motion Target Segmentation method, system, storage medium and equipment
CN110097063A (en) * 2019-04-30 2019-08-06 网易有道信息技术(北京)有限公司 Data processing method, medium, device and the calculating equipment of electronic equipment
CN110136199A (en) * 2018-11-13 2019-08-16 北京初速度科技有限公司 A kind of vehicle location based on camera, the method and apparatus for building figure
CN110675358A (en) * 2019-09-30 2020-01-10 上海扩博智能技术有限公司 Image stitching method, system, equipment and storage medium for long object
CN110876036A (en) * 2018-08-31 2020-03-10 腾讯数码(天津)有限公司 Video generation method and related device
US20200090303A1 (en) * 2016-12-16 2020-03-19 Hangzhou Hikvision Digital Technology Co., Ltd. Method and device for fusing panoramic video images
CN111612696A (en) * 2020-05-21 2020-09-01 网易有道信息技术(北京)有限公司 Image splicing method, device, medium and electronic equipment
CN112819694A (en) * 2021-01-18 2021-05-18 中国工商银行股份有限公司 Video image splicing method and device
CN112969037A (en) * 2021-02-26 2021-06-15 北京卓视智通科技有限责任公司 Video image lateral fusion splicing method, electronic equipment and storage medium
CN112991180A (en) * 2021-03-25 2021-06-18 北京百度网讯科技有限公司 Image splicing method, device, equipment and storage medium
WO2021129669A1 (en) * 2019-12-23 2021-07-01 RealMe重庆移动通信有限公司 Image processing method and system, electronic device, and computer-readable medium
CN113286194A (en) * 2020-02-20 2021-08-20 北京三星通信技术研究有限公司 Video processing method and device, electronic equipment and readable storage medium

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
丁莹; 范静涛; 杨华民; 姜会林: "Moving target detection algorithm for dynamic scenes based on image registration", Journal of Changchun University of Science and Technology (Natural Science Edition), no. 1, pages 4-9 *
卢斌; 宋夫华: "Automatic mosaicking of video image sequences based on an improved SIFT algorithm", Science of Surveying and Mapping, no. 01, pages 23-25 *
苗立刚: "Research on image stitching and synthesis algorithms for video surveillance", Chinese Journal of Scientific Instrument, no. 04, pages 857-861 *
蓝先迪: "Research on key technologies of panoramic video stitching", China Masters' Theses Full-text Database, Information Science and Technology, pages 138-757 *

Also Published As

Publication number Publication date
CN115797164B (en) 2023-12-12

Similar Documents

Publication Publication Date Title
JP7221089B2 (en) Stable simultaneous execution of location estimation and map generation by removing dynamic traffic participants
CN108256479B (en) Face tracking method and device
CN111598091A (en) Image recognition method and device, electronic equipment and computer readable storage medium
US20150103183A1 (en) Method and apparatus for device orientation tracking using a visual gyroscope
CN111444744A (en) Living body detection method, living body detection device, and storage medium
CN108337505B (en) Information acquisition method and device
CN108389172B (en) Method and apparatus for generating information
CN110062157B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN110660102B (en) Speaker recognition method, device and system based on artificial intelligence
KR102204269B1 (en) Notifications for deviations in depiction of different objects in filmed shots of video content
CN111553362A (en) Video processing method, electronic equipment and computer readable storage medium
US20240029303A1 (en) Three-dimensional target detection method and apparatus
CN112348828A (en) Example segmentation method and device based on neural network and storage medium
CN112396073A (en) Model training method and device based on binocular images and data processing equipment
CN113744256A (en) Depth map hole filling method and device, server and readable storage medium
US10198842B2 (en) Method of generating a synthetic image
CN111382695A (en) Method and apparatus for detecting boundary points of object
CN115797164B (en) Image stitching method, device and system in fixed view field
CN112435278B (en) Visual SLAM method and device based on dynamic target detection
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN111494947B (en) Method and device for determining movement track of camera, electronic equipment and storage medium
US20190304117A1 (en) Hardware disparity evaluation for stereo matching
CN115511870A (en) Object detection method and device, electronic equipment and storage medium
CN112329729B (en) Small target ship detection method and device and electronic equipment
US11373315B2 (en) Method and system for tracking motion of subjects in three dimensional scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant