CN116233339A - Mask combined filtering multi-video stitching method, system and electronic equipment - Google Patents
Mask combined filtering multi-video stitching method, system and electronic equipment
- Publication number: CN116233339A (application CN202310240883.2A)
- Authority
- CN
- China
- Prior art keywords
- pictures
- fused
- picture
- video
- mask
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H04N5/222 — Studio circuitry; Studio devices; Studio equipment
- H04N5/262 — Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
- H04N5/265 — Mixing
- H04N5/2628 — Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, perspective, translation
- Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The invention provides a multi-video stitching method, system and electronic device that combine masking and filtering. First, the picture information of the segments to be stitched in the segmented videos is processed to obtain multiple pictures to be fused; next, the pictures to be fused are masked and filtered, and the pictures at the same time point are merged into fused pictures; finally, the fused pictures are serialized and synthesized into a stitched video. The method mitigates misalignment when stitching multiple video panoramas through matrix homogenization, corrects color differences through masking and filtering, eliminates hard stitching seams and abrupt color transitions in the multi-video panorama, and thereby solves the technical problem of panoramic stitching for an arbitrary number (two or more) of videos.
Description
Technical Field
The invention relates to the technical field of image stitching, and in particular to a mask-combined-filtering multi-video stitching method, system and electronic device.
Background
At present, existing video panorama stitching techniques generally support only two video sources, and the stitched panoramic video suffers from stitching seams, color differences, misalignment and similar problems, so panoramic stitching of more than two arbitrary videos cannot be accomplished.
Therefore, it is necessary to provide a method, a system and an electronic device for multi-video stitching with mask combined filtering to solve the above technical problems.
Disclosure of Invention
To solve the above technical problems, the invention provides a mask-combined-filtering multi-video stitching method, system and electronic device, which correct color deviation during stitching via a mask-combined-filtering color-difference correction technique and homogenize the matrix transformations via a multi-matrix homogenization algorithm, so as to achieve a good fusion effect and complete the stitching of multiple videos.
The invention provides a multi-video stitching method combining mask and filtering, which comprises the following steps:
processing picture information of a spliced segment in the segmented video to be spliced to obtain a plurality of pictures to be fused;
masking and filtering the multiple pictures to be fused, and merging the pictures to be fused at the same time point to obtain a fused picture;
and serializing and synthesizing the fusion pictures to form a spliced video.
Preferably, the processing the pictures of the segments to be spliced in the segmented video to be spliced includes:
reading segmented videos to be spliced;
carrying out picture serialization processing on the segmented videos to be spliced to obtain frame pictures;
processing a frame picture to obtain a picture to be fused, wherein the picture to be fused is obtained by sequentially carrying out the following processing on the frame picture: feature matching, multi-matrix fusion and perspective transformation.
Preferably, the masking and filtering the multiple pictures to be fused, merging the pictures to be fused at the same time point to obtain a fused picture, including:
initializing the pictures to be fused to generate a plurality of Mask pictures;
performing a secondary masking operation on the Mask pictures to obtain a plurality of gradient Mask1 pictures;
performing a flip filtering operation on the gradient Mask1 pictures to obtain pictures to be fused;
and fusing the pictures to be fused at the same time point to obtain a fused picture.
Preferably, the serializing synthesizes the fused pictures to form a spliced video, including:
carrying out panoramic fusion on the fused pictures according to the sequence to obtain a panoramic picture collection of picture serialization;
and synthesizing the panoramic image collection at the global frame rate fps to obtain a panoramic video.
A mask-in-filter multi-video stitching system comprising:
the video processing module is used for processing the picture information of the spliced segment in the segmented video to be spliced to obtain a plurality of pictures to be fused;
the mask and filtering module is used for performing mask and filtering processing on the multiple pictures to be fused, and combining the pictures to be fused at the same time point to obtain a fused picture;
and the sequence synthesis module is used for synthesizing the fusion pictures in a serialization manner to form a spliced video.
Preferably, the video processing module further comprises:
the reading sub-module is used for reading the segmented video to be spliced;
the picture serialization submodule is used for carrying out picture serialization processing on the segmented video to be spliced to obtain a frame picture;
the processing sub-module is used for processing the frame pictures to obtain pictures to be fused, and the pictures to be fused are obtained by sequentially carrying out the following processing on the frame pictures: feature matching, multi-matrix fusion and perspective transformation.
Preferably, the mask and filter module further comprises:
the primary mask sub-module is used for initializing the pictures to be fused and generating a plurality of Mask pictures;
the secondary mask sub-module is used for carrying out a secondary masking operation on the Mask pictures to obtain a plurality of gradient Mask1 pictures;
the filtering sub-module is used for performing a flip filtering operation on the gradient Mask1 pictures to obtain pictures to be fused;
and the picture fusion sub-module is used for fusing the pictures to be fused at the same time point to obtain the fused pictures.
Preferably, the sequence synthesis module further comprises:
the sequence fusion sub-module is used for carrying out panoramic fusion on the fused pictures according to the sequence to obtain a panoramic picture collection of picture serialization;
and the panorama synthesis submodule is used for synthesizing the panoramic image collection at the global frame rate fps to obtain a panoramic video.
An electronic device, comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the multi-video stitching method described above.
Compared with the related art, the mask-combined-filtering multi-video stitching method, system and electronic device have the following beneficial effects:
according to the splicing method, firstly, picture information of spliced segments in segmented video to be spliced is processed to obtain a plurality of pictures to be fused; then, masking and filtering are carried out on the pictures to be fused, and the pictures to be fused at the same time point are combined to obtain a fused picture; and finally, serializing and synthesizing the fusion pictures to form a spliced video. According to the method, the problem of uneven alignment during splicing of a plurality of video panoramas is optimized through a matrix homogenization method, the color difference problem is corrected through a mask and filtering mode, the problem of hard splicing gaps and color transition of the multi-video panoramas is eliminated, and the technical problem of panoramic splicing of any plurality of videos with more than two is solved.
Drawings
FIG. 1 is a flow chart of a mask-combined filtered multi-video stitching method of the present disclosure;
FIG. 2 is another flow chart of a mask-combined filtered multi-video stitching method in accordance with the present disclosure;
FIG. 3 is another flow chart of a mask-combined filtered multi-video stitching method of the present disclosure;
FIG. 4 is another flow chart of a mask-combined filtered multi-video stitching method of the present disclosure;
FIG. 5 is another flow chart of a mask-combined filtered multi-video stitching method of the present disclosure;
FIG. 6 is a panoramic view fused by the method of the present invention;
FIG. 7 is a block diagram of a mask-combined filtered multi-video stitching system according to the present disclosure;
fig. 8 is a schematic diagram of an electronic device according to the present disclosure.
Detailed Description
For a better understanding of the embodiments of the present application, the following gives a clear and complete description of the embodiments with reference to the accompanying drawings. The described embodiments are only some, not all, of the possible embodiments; all other embodiments obtainable by a person of ordinary skill in the art without inventive effort fall within the scope of the present application. The application is further described below in conjunction with the drawings and embodiments.
Referring to fig. 1 and fig. 5, a mask-combined filtering multi-video stitching method provided in an embodiment of the present application includes the following steps:
step S100: processing picture information of a spliced segment in the segmented video to be spliced to obtain a plurality of pictures to be fused;
in the embodiment of the present application, referring to fig. 2, step S100 includes the following steps.
Step S101: reading segmented videos to be spliced;
specifically, the multi-segment video to be spliced is read from the memory, the multi-segment video needs to be sequentially read according to the sequence from left to right, and two adjacent segments of the segmented video need to have a view overlapping part, otherwise, enough characteristic points cannot be provided to support the splicing of panoramic pictures.
Step S102: carrying out picture serialization treatment on segmented video to be spliced to obtain frame pictures;
specifically, each frame of picture information of each segmented video is read respectively, the pictures of the videos are processed in a serialization mode, and the picture frame information of each segmented video at the same time point is obtained in the serialization process at the same time, so that no time deviation exists in the panoramic video synthesis step finally.
More specifically, taking three segmented videos as an example: first, the global attributes of the three videos are read. The total frame counts frame_count1, frame_count2 and frame_count3 are read from the attributes, and their minimum, new_frame_count, obtained with the min function, is used as the frame count of the final synthesized panoramic video. The FPS of any one video, read via CAP_PROP_FPS, is used as the FPS of the synthesized panoramic video. The frame pictures of the three segmented videos are then read in a loop according to new_frame_count, yielding frame pictures frame_1, frame_2 and frame_3.
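The frame-count and fps bookkeeping described above can be sketched with OpenCV as follows. This is a minimal sketch, not the patent's code: the function names (min_frame_count, open_segments) are assumptions, while cv2.VideoCapture, CAP_PROP_FRAME_COUNT and CAP_PROP_FPS are standard OpenCV APIs.

```python
def min_frame_count(counts):
    """The shortest segment bounds the length of the synthesized panorama."""
    return min(counts)

def open_segments(paths):
    """Open the segmented videos (ordered left to right) and read their global
    attributes: per-video total frame counts and a shared fps (hypothetical helper)."""
    import cv2  # imported here so the pure helper above is usable without OpenCV
    caps = [cv2.VideoCapture(p) for p in paths]
    counts = [int(c.get(cv2.CAP_PROP_FRAME_COUNT)) for c in caps]
    fps = caps[0].get(cv2.CAP_PROP_FPS)        # any one video's fps becomes the global fps
    new_frame_count = min_frame_count(counts)  # frames available in every segment
    return caps, new_frame_count, fps
```

A caller would then loop new_frame_count times, calling cap.read() on each capture to obtain frame_1, frame_2 and frame_3 for the same time point.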
Step S103: processing a frame picture to obtain a picture to be fused, wherein the picture to be fused is obtained by sequentially carrying out the following processing on the frame picture: feature matching, multi-matrix fusion and perspective transformation.
Specifically, the serialized pictures to be fused generated in step S100 are first grouped, using the grouping formula:
count=int(temp_count/2+0.5)
where count is the final number of groups and temp_count is the total number of pictures in the sequence to be synthesized into the panorama.
The serialized pictures to be fused are then grouped in forward and reverse order to obtain the pictures to be fused. Taking three pictures to be fused as an example, two groups are obtained after processing: [image_1, image_2] and [image_2, image_3].
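The grouping rule and the three-picture example above can be sketched in Python; group_count is a hypothetical helper name, and the overlapping pairing is written out only for the three-picture case given in the text (the patent does not spell out the general pairing rule).

```python
def group_count(temp_count):
    """count = int(temp_count / 2 + 0.5): number of groups, rounding half up."""
    return int(temp_count / 2 + 0.5)

# Three-picture example from the text: forward/reverse grouping yields two
# overlapping adjacent pairs that share the middle picture.
images = ["image_1", "image_2", "image_3"]
groups = [[images[0], images[1]], [images[1], images[2]]]
assert len(groups) == group_count(len(images))  # 2 groups for 3 pictures
```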
Feature matching is then performed on the pictures to be fused in order: feature points are first extracted from the pictures, and the features are then matched using a bidirectional KNN algorithm.
After feature extraction, a single KNN search (K = 2) first finds the two nearest feature points p1 and p2 of the target image in the target feature set features_train, and then the two nearest feature points p11 and p22 of the reference picture in the reference feature set features_query. Let d1 and d2 be the Euclidean distances from a target-picture feature point to p1 and p2, and let d11 and d22 be the Euclidean distances from that feature point to p11 and p22.
If d1/d2 is less than or equal to r (r is a preset ratio threshold between 0 and 1), the match satisfies the condition and is added to set A.
If d11/d22 is less than or equal to r, the match is added to set B.
The matches common to sets A and B are extracted as initial matching pairs and added to set F; the pairs in set F form the initial matching result.
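The bidirectional KNN ratio test described above (sets A, B and their common matches F) can be sketched in NumPy. This is an illustrative implementation, not the patent's code; the threshold r = 0.7 and the descriptor arrays are assumptions, and a real pipeline would typically use an OpenCV matcher instead.

```python
import numpy as np

def knn2(query, train):
    """Indices and Euclidean distances of the 2 nearest train descriptors per query."""
    d = np.linalg.norm(query[:, None, :] - train[None, :, :], axis=2)
    idx = np.argsort(d, axis=1)[:, :2]
    return idx, np.take_along_axis(d, idx, axis=1)

def bidirectional_ratio_match(features_query, features_train, r=0.7):
    idx_qt, d_qt = knn2(features_query, features_train)
    idx_tq, d_tq = knn2(features_train, features_query)
    # Set A: matches passing d1/d2 <= r in the query -> train direction.
    A = {(q, idx_qt[q, 0]) for q in range(len(features_query))
         if d_qt[q, 0] <= r * d_qt[q, 1]}
    # Set B: matches passing d11/d22 <= r in the train -> query direction.
    B = {(idx_tq[t, 0], t) for t in range(len(features_train))
         if d_tq[t, 0] <= r * d_tq[t, 1]}
    return A & B  # set F: the common matches, i.e. the initial matching result
```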
After the initial matching is completed, multiple candidate transformation matrices computed from the feature point set F are compared and fused to obtain an optimal transformation matrix, which is then used to perspective-transform the target image onto the canvas, finally yielding the picture to be fused.
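The patent does not give the homogenization formula, but one plausible sketch of fusing several candidate transformation matrices is to normalize each 3×3 homography so its bottom-right entry is 1 and average element-wise; homogenize is a hypothetical name for this assumed reading.

```python
import numpy as np

def homogenize(matrices):
    """Fuse candidate 3x3 transformation matrices into one (assumed scheme):
    scale each so H[2, 2] == 1, then average element-wise and renormalize."""
    normed = [H / H[2, 2] for H in matrices]
    H = np.mean(normed, axis=0)
    return H / H[2, 2]
```

The fused matrix would then be handed to a perspective warp (e.g. OpenCV's cv2.warpPerspective(target, H, (canvas_w, canvas_h))) to place the target image on the canvas.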
Step S200: masking and filtering the multiple pictures to be fused, and merging the pictures to be fused at the same time point to obtain a fused picture;
in the embodiment of the present application, referring to fig. 3, step S200 includes the following steps.
Step S201: initializing the pictures to be fused to generate a plurality of Mask pictures;
specifically, masking the picture to be fused obtained in the step S100, where all values B, G, R (B, G, R value sub-tables represent blue, green, red) of the pixel points in the canvas are set to be pure white instead of pure black, and then a Mask picture is generated.
Step S202: performing a secondary masking operation on the Mask pictures to obtain a plurality of gradient Mask1 pictures;
specifically, masking is further performed based on the Mask picture, and a gradual change Mask is generated according to the picture after the wide combination perspective transformation of the Mask picture, wherein the formula is as follows:
Mask1(c,r)=s+(c-offset)*((e-c)/(w-o))
wherein s represents the start point of Mask; c represents the broad colum of Mask; e represents the end of Mask; w represents the width of the new empty table mask; o denotes offset, which is obtained by traversing a Mask picture.
And finally, the opposite starting point s and the opposite end point e are transmitted to obtain a final Mask1 picture.
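A minimal NumPy sketch of a column-wise gradient mask consistent with the formula above (symbols w, o, s, e as in the text; every row r gets the same value; the function name and the clipping of columns before the offset are assumptions):

```python
import numpy as np

def gradient_mask(w, h, o, s=0.0, e=1.0):
    """Mask1(c, r) = s + (c - o) * (e - s) / (w - o): a ramp from s at column o
    to e at column w, identical for every row; values before o are clipped."""
    c = np.arange(w, dtype=np.float64)
    row = s + (c - o) * (e - s) / (w - o)
    row = np.clip(row, min(s, e), max(s, e))
    return np.tile(row, (h, 1))
```

Passing the opposite start and end values (s = 1.0, e = 0.0) produces the mirrored ramp mentioned in the text.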
Step S203: performing a flip filtering operation on the gradient Mask1 pictures to obtain pictures to be fused;
specifically, flip operation is performed through a flip filter, so that the purpose of noise reduction is achieved.
Step S204: and fusing the pictures to be fused at the same time point to obtain a fused picture.
Specifically, the pictures to be fused extracted from the segmented videos are fused to obtain multiple groups of fused pictures.
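The per-time-point fusion can be sketched as an alpha blend weighted by the gradient mask. This is a generic sketch under the assumption that the mask acts as a per-pixel blend weight; the patent does not spell out the blend formula.

```python
import numpy as np

def blend(base, warped, mask):
    """Merge two aligned pictures from the same time point: where mask == 1 the
    warped picture wins, where mask == 0 the base picture is kept, and the
    gradient in between produces a seamless color transition."""
    m = mask[..., None] if warped.ndim == 3 else mask  # broadcast over B, G, R
    return (base * (1.0 - m) + warped * m).astype(base.dtype)
```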
Step S300: and serializing and synthesizing the fusion pictures to form a spliced video.
In the embodiment of the present application, referring to fig. 4, step S300 includes the following steps.
Step S301: fusing the fused pictures into panoramas in sequence to obtain a serialized collection of panoramic pictures;
Step S302: synthesizing the panoramic picture collection at the global frame rate fps (acquired in step S102) to obtain the panoramic video.
The invention discloses a mask-combined-filtering multi-video stitching method: first, the picture information of the segments to be stitched in the segmented videos is processed to obtain multiple pictures to be fused; next, the pictures to be fused are masked and filtered, and the pictures at the same time point are merged into fused pictures; finally, the fused pictures are serialized and synthesized into a stitched video. The method mitigates misalignment when stitching multiple video panoramas through matrix homogenization, corrects color differences through masking and filtering, eliminates hard stitching seams and abrupt color transitions in the multi-video panorama, and thereby solves the technical problem of panoramic stitching for an arbitrary number (two or more) of videos.
The invention also discloses a mask combined filtering multi-video stitching system, in this embodiment, as shown in fig. 7, including:
the video processing module is used for processing the picture information of the spliced segments in the segmented video to be spliced to obtain a plurality of pictures to be fused.
Specifically, the video processing module further comprises: a reading sub-module for reading the segmented videos to be stitched; a picture serialization sub-module for serializing the pictures of the segmented videos to obtain frame pictures; and a processing sub-module for processing the frame pictures, sequentially through feature matching, multi-matrix fusion and perspective transformation, to obtain the pictures to be fused.
And the mask and filtering module is used for performing mask and filtering processing on the multiple pictures to be fused, and merging the pictures to be fused at the same time point to obtain a fused picture.
Specifically, the mask and filter module comprises: a primary mask sub-module for initializing the pictures to be fused and generating a plurality of Mask pictures; a secondary mask sub-module for performing a secondary masking operation on the Mask pictures to obtain a plurality of gradient Mask1 pictures; a filtering sub-module for performing a flip filtering operation on the gradient Mask1 pictures to obtain pictures to be fused; and a picture fusion sub-module for fusing the pictures at the same time point to obtain fused pictures.
And the sequence synthesis module is used for synthesizing the fusion pictures in a serialization manner to form a spliced video.
Specifically, the sequence synthesis module comprises a sequence fusion sub-module for fusing the fused pictures into panoramas in sequence to obtain a serialized collection of panoramic pictures, and a panorama synthesis sub-module for synthesizing the panoramic picture collection at the global frame rate fps to obtain the panoramic video.
The mask-combined filtering multi-video stitching system disclosed in this embodiment is implemented based on the mask-combined filtering multi-video stitching method disclosed in the foregoing embodiment, and will not be described herein again.
According to the mask-combined-filtering multi-video stitching system disclosed in this embodiment, multiple pictures to be fused are obtained by processing the picture information of the segments to be stitched in the segmented videos; next, the pictures to be fused are masked and filtered, and the pictures at the same time point are merged into fused pictures; finally, the fused pictures are serialized and synthesized into a stitched video. The system mitigates misalignment when stitching multiple video panoramas through matrix homogenization, corrects color differences through masking and filtering, eliminates hard stitching seams and abrupt color transitions in the multi-video panorama, and thereby solves the technical problem of panoramic stitching for an arbitrary number (two or more) of videos.
The invention also discloses an electronic device, in this embodiment, as shown in fig. 8, including:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the multi-video stitching method described above.
The processor is used for processing the picture information of the spliced segment in the segmented video to be spliced by utilizing the video processing module to obtain a plurality of pictures to be fused; masking and filtering the multiple pictures to be fused by using a masking and filtering module, and merging the pictures to be fused at the same time point to obtain a fused picture; and finally, utilizing a sequence synthesis module to synthesize the fusion pictures in a serialization manner to form a spliced video.
The processor is also used for reading the segmented video to be spliced by utilizing the reading submodule; carrying out picture serialization processing on the segmented video to be spliced by using a picture serialization submodule to obtain a frame picture; processing the frame pictures by utilizing a processing sub-module to obtain pictures to be fused, wherein the pictures to be fused are obtained by sequentially carrying out the following processing on the frame pictures: feature matching, multi-matrix fusion and perspective transformation.
The processor is further configured to initialize the pictures to be fused using the primary mask sub-module to generate a plurality of Mask pictures; to perform a secondary masking operation on the Mask pictures using the secondary mask sub-module to obtain a plurality of gradient Mask1 pictures; to perform a flip filtering operation on the gradient Mask1 pictures using the filtering sub-module to obtain pictures to be fused; and to fuse the pictures to be fused at the same time point using the picture fusion sub-module to obtain fused pictures.
The processor is further configured to fuse the fused pictures into panoramas in sequence using the sequence fusion sub-module to obtain a serialized collection of panoramic pictures, and to synthesize the panoramic picture collection at the global frame rate fps using the panorama synthesis sub-module to obtain the panoramic video.
The electronic device disclosed in this embodiment is implemented based on the above-mentioned method for multi-video stitching by combining mask and filtering, and will not be described herein.
According to the electronic device disclosed in this embodiment, multiple pictures to be fused are obtained by processing the picture information of the segments to be stitched in the segmented videos; next, the pictures to be fused are masked and filtered, and the pictures at the same time point are merged into fused pictures; finally, the fused pictures are serialized and synthesized into a stitched video. The device mitigates misalignment when stitching multiple video panoramas through matrix homogenization, corrects color differences through masking and filtering, eliminates hard stitching seams and abrupt color transitions in the multi-video panorama, and thereby solves the technical problem of panoramic stitching for an arbitrary number (two or more) of videos.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may reside in Random Access Memory (RAM), flash memory, Read-Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present application. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the application. Thus, the present application is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
Claims (9)
1. A method for mask-combined filtering multi-video stitching, comprising:
processing picture information of a spliced segment in the segmented video to be spliced to obtain a plurality of pictures to be fused;
masking and filtering the multiple pictures to be fused, and merging the pictures to be fused at the same time point to obtain a fused picture;
and serializing and synthesizing the fusion pictures to form a spliced video.
2. The method for multi-video stitching with mask-combined filtering according to claim 1, wherein the processing the pictures of the segments to be stitched in the segmented video to be stitched comprises:
reading segmented videos to be spliced;
carrying out picture serialization processing on the segmented videos to be spliced to obtain frame pictures;
processing a frame picture to obtain a picture to be fused, wherein the picture to be fused is obtained by sequentially carrying out the following processing on the frame picture: feature matching, multi-matrix fusion and perspective transformation.
3. The method for multi-video stitching with mask combined filtering according to claim 2, wherein the masking and filtering the multiple pictures to be fused, merging the pictures to be fused at the same time point, and obtaining the fused picture, includes:
initializing the pictures to be fused to generate a plurality of Mask pictures;
performing a secondary masking operation on the Mask pictures to obtain a plurality of gradient Mask1 pictures;
performing a flip filtering operation on the gradient Mask1 pictures to obtain pictures to be fused;
and fusing the pictures to be fused at the same time point to obtain a fused picture.
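The claim gives no formulas for the Mask/Mask1 steps. A minimal sketch of one common realization — a binary mask softened into a linear gradient across the overlap, then used as per-pixel blending weights so the seam transitions smoothly — might look like the following; the array shapes, ramp width, and function names are assumptions for illustration:

```python
import numpy as np

def gradient_mask(width, height, ramp):
    """Start from an all-ones (binary) mask and soften its left edge into a
    linear 0-to-1 ramp, giving a gradient (Mask1-style) blending mask."""
    mask = np.ones((height, width), dtype=np.float32)
    mask[:, :ramp] = np.linspace(0.0, 1.0, ramp, dtype=np.float32)
    return mask

def fuse(left, right, mask):
    """Weighted per-pixel merge of two aligned frames captured at the same
    time point: where mask=1 keep `right`, where mask=0 keep `left`."""
    m = mask[..., None]  # broadcast the 2-D mask over the colour channels
    return m * right + (1.0 - m) * left

h, w, ramp = 4, 8, 3
left = np.zeros((h, w, 3), dtype=np.float32)          # stand-in for one view
right = np.full((h, w, 3), 255.0, dtype=np.float32)   # stand-in for the other
fused = fuse(left, right, gradient_mask(w, h, ramp))
```

Inside the ramp the fused picture moves linearly from the left view to the right view, which is what removes the hard stitching seam.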
4. The method of claim 1, wherein serializing and synthesizing the fused pictures to form a stitched video comprises:
performing panoramic fusion on the fused pictures in sequence to obtain a serialized collection of panoramic pictures;
and synthesizing the collection of panoramic pictures according to the global frame rate fps to obtain a panoramic video.
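Synthesizing the panorama collection at a global frame rate is ordinarily delegated to a video writer (e.g., OpenCV's `VideoWriter`, which takes fps and frame size). The only arithmetic the claim itself fixes is the pacing, sketched here; the function name is an illustrative assumption:

```python
def frame_timestamps(n_frames, fps):
    """Presentation timestamp (seconds) of each panorama frame when the
    serialized collection is synthesized at a global frame rate `fps`."""
    return [i / fps for i in range(n_frames)]

# 75 panorama frames written at a global frame rate of 25 fps yield a
# 3-second stitched video; the last frame is presented at t = 74/25 s.
ts = frame_timestamps(75, 25)
print(len(ts), ts[-1])
```

Using the same global fps for every source segment is what keeps the fused pictures from different cameras aligned to the same time points.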
5. A mask-combined filtering multi-video stitching system, comprising:
a video processing module, used for processing the picture information of the segments to be stitched in the segmented videos to be stitched to obtain a plurality of pictures to be fused;
a mask and filtering module, used for performing masking and filtering processing on the plurality of pictures to be fused, and merging the pictures to be fused at the same time point to obtain fused pictures;
and a sequence synthesis module, used for serializing and synthesizing the fused pictures to form a stitched video.
6. The mask-combined filtering multi-video stitching system according to claim 5, wherein the video processing module further comprises:
a reading sub-module, used for reading the segmented videos to be stitched;
a picture serialization sub-module, used for performing picture serialization processing on the segmented videos to be stitched to obtain frame pictures;
and a processing sub-module, used for processing the frame pictures to obtain the pictures to be fused, wherein the pictures to be fused are obtained by sequentially performing the following processing on the frame pictures: feature matching, multi-matrix fusion and perspective transformation.
7. The mask-combined filtering multi-video stitching system according to claim 5, wherein the mask and filtering module further comprises:
a primary mask sub-module, used for initializing the pictures to be fused to generate a plurality of mask (Mask) pictures;
a secondary mask sub-module, used for performing a secondary masking operation on the Mask pictures to obtain a plurality of gradient mask (Mask1) pictures;
a filtering sub-module, used for performing a flip filtering operation on the gradient mask Mask1 pictures to obtain the pictures to be fused;
and a picture fusion sub-module, used for fusing the pictures to be fused at the same time point to obtain the fused pictures.
8. The mask-combined filtering multi-video stitching system according to claim 5, wherein the sequence synthesis module further comprises:
a sequence fusion sub-module, used for performing panoramic fusion on the fused pictures in sequence to obtain a serialized collection of panoramic pictures;
and a panorama synthesis sub-module, used for synthesizing the collection of panoramic pictures according to the global frame rate fps to obtain a panoramic video.
9. An electronic device, comprising:
a memory storing executable program code;
a processor coupled to the memory;
the processor invokes the executable program code stored in the memory to perform the mask-combined filtering multi-video stitching method according to any one of claims 1 to 4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310240883.2A CN116233339A (en) | 2023-03-13 | 2023-03-13 | Mask combined filtering multi-video stitching method, system and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116233339A true CN116233339A (en) | 2023-06-06 |
Family
ID=86569382
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310240883.2A Pending CN116233339A (en) | 2023-03-13 | 2023-03-13 | Mask combined filtering multi-video stitching method, system and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116233339A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117575902A (en) * | 2024-01-16 | 2024-02-20 | 四川新视创伟超高清科技有限公司 | Large scene monitoring image splicing method and splicing system |
CN117575902B (en) * | 2024-01-16 | 2024-03-29 | 四川新视创伟超高清科技有限公司 | Large scene monitoring image splicing method and splicing system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Country or region after: China |
Address after: No. 401, 4th Floor, Building 2, No. 88 Shengtong Street, Chengdu High-tech Zone, China (Sichuan) Pilot Free Trade Zone, Chengdu City, Sichuan Province 610095 |
Applicant after: Sichuan Guochuang Innovation Vision Ultra HD Video Technology Co.,Ltd. |
Address before: No. 2, Xinyuan South 2nd Road, Chengdu, Sichuan 610000 |
Applicant before: Sichuan Xinshi Chuangwei ultra high definition Technology Co.,Ltd. |
Country or region before: China |