CN110097527A - Video stitching and fusion method, device, terminal and storage medium - Google Patents
Video stitching and fusion method, device, terminal and storage medium
- Publication number
- CN110097527A (application number CN201910205830.0A)
- Authority
- CN
- China
- Prior art keywords
- video
- color value
- pictures
- dimensional coordinate
- picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Abstract
The invention discloses a video stitching and fusion method, device, terminal and storage medium. The method includes: obtaining the video pictures captured by at least two cameras at the same moment; constructing a three-dimensional model of the entire video area; mapping the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image; processing and then fusing the color values of every two adjacent video pictures in the preliminary three-dimensional video image until all video pictures are fused, obtaining a final three-dimensional video image; and outputting and displaying the final three-dimensional video image. By stitching the video pictures captured by different cameras onto a three-dimensional model, the present invention allows staff to quickly grasp the entire video area from a single three-dimensional video image, avoiding the problem that segmented video pictures make it difficult for staff to match the monitored pictures to the actual scene.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a video stitching and fusion method, device, terminal and storage medium.
Background technique
Video surveillance is an important means of safety management in public areas and has been adopted and popularized on a large scale. From the analog cameras of early hardware to today's various high-definition network cameras, and from software-based intelligent analysis to linked systems, video surveillance technology has always received close attention and developed rapidly. Densely populated, complex large-scale public places such as squares, parks, airports and harbours, as well as other special areas, require panoramic, dynamic management with full coverage of large scenes.
At present, video surveillance of large-scale places is realized by multiple cameras each capturing its own video picture, so the monitored scene is segmented. As a result, during picture monitoring, staff cannot immediately match the monitored pictures to the actual scene, and the monitoring effect is poor.
Summary of the invention
The present invention provides a video stitching and fusion method, device, terminal and storage medium to solve the problem that existing video monitored pictures are segmented, resulting in a poor monitoring effect.
To solve the above problems, the present invention provides a video stitching and fusion method, which comprises:
obtaining the video pictures captured by at least two cameras at the same moment;
constructing a three-dimensional model of the entire video area;
mapping the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image;
processing and then fusing the color values of every two adjacent video pictures in the preliminary three-dimensional video image until all video pictures are fused, obtaining a final three-dimensional video image; and
outputting and displaying the final three-dimensional video image.
As a further improvement of the present invention, the step of processing and then fusing the color values of every two adjacent video pictures in the preliminary three-dimensional video image until all video pictures are fused, obtaining a final three-dimensional video image, includes:
selecting a target video picture from all video pictures of the preliminary three-dimensional video image;
obtaining a first color value C_A of the target video picture, and setting a fusion factor a and a brightness value b according to the color value C_A, the value range of the fusion factor a being [0, 1];
obtaining a second color value C_B of any current video picture adjacent to the target video picture;
performing calculation processing according to the first color value C_A, the second color value C_B, the fusion factor a and the brightness value b to obtain a third color value C_AB, where C_AB = (C_A * a + C_B * (1 - a)) * b; and
processing the target video picture and the current video picture according to the third color value C_AB and then fusing them to obtain a new target video picture, and fusing the new target video picture with any adjacent video picture, until all video pictures are fused, obtaining the final three-dimensional video image.
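The color calculation above can be sketched as follows. This is a minimal illustration of the formula C_AB = (C_A * a + C_B * (1 - a)) * b only; applying it element-wise to RGB values, and the particular sample values, are assumptions for the example, not the patent's actual implementation.

```python
import numpy as np

def blend_color(c_a, c_b, a, b):
    """Third color value per the formula C_AB = (C_A*a + C_B*(1-a)) * b.

    c_a, c_b -- color values (scalars or numpy arrays of equal shape)
    a        -- fusion factor, must lie in [0, 1]
    b        -- brightness value (scale factor; 1.0 leaves brightness unchanged)
    """
    if not 0.0 <= a <= 1.0:
        raise ValueError("fusion factor a must lie in [0, 1]")
    return (np.asarray(c_a, dtype=float) * a
            + np.asarray(c_b, dtype=float) * (1.0 - a)) * b

# Equal-weight blend of two hypothetical RGB color values, brightness unchanged:
p = blend_color([200.0, 100.0, 50.0], [100.0, 100.0, 150.0], a=0.5, b=1.0)
print(p)  # [150. 100. 100.]
```

With a = 1 the result is the target picture's color scaled by b; with a = 0 it is the adjacent picture's color scaled by b, so the factor a controls how strongly the target picture dominates the fused color.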
As a further improvement of the present invention, the step of mapping the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image includes:
constructing a two-dimensional coordinate system of the entire video area and a three-dimensional coordinate system of the three-dimensional model respectively;
obtaining the two-dimensional coordinates of each video picture in the two-dimensional coordinate system;
calculating the three-dimensional coordinates corresponding to the two-dimensional coordinates in the three-dimensional coordinate system, the mapping relationship between the two-dimensional coordinate system and the three-dimensional coordinate system being preset; and
mapping each video picture onto the three-dimensional model according to the three-dimensional coordinates to obtain the preliminary three-dimensional video image.
To solve the above problems, the present invention also provides a video stitching and fusion device, which comprises:
an obtaining module, configured to obtain the video pictures captured by at least two cameras at the same moment;
a construction module, configured to construct a three-dimensional model of the entire video area;
a mapping module, configured to map the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image;
a fusion module, configured to process and then fuse the color values of every two adjacent video pictures in the preliminary three-dimensional video image until all video pictures are fused, obtaining a final three-dimensional video image; and
an output module, configured to output and display the final three-dimensional video image.
As a further improvement of the present invention, the fusion module includes:
a selection unit, configured to select a target video picture from all video pictures of the preliminary three-dimensional video image;
a first color value obtaining unit, configured to obtain a first color value C_A of the target video picture, and set a fusion factor a and a brightness value b according to the color value C_A, the value range of the fusion factor a being [0, 1];
a second color value obtaining unit, configured to obtain a second color value C_B of any current video picture adjacent to the target video picture;
a third color value obtaining unit, configured to perform calculation processing according to the first color value C_A, the second color value C_B, the fusion factor a and the brightness value b to obtain a third color value C_AB, where C_AB = (C_A * a + C_B * (1 - a)) * b; and
a fusion unit, configured to process the target video picture and the current video picture according to the third color value C_AB and then fuse them to obtain a new target video picture, and fuse the new target video picture with any adjacent video picture, until all video pictures are fused, obtaining the final three-dimensional video image.
As a further improvement of the present invention, the mapping module includes:
a construction unit, configured to construct a two-dimensional coordinate system of the entire video area and a three-dimensional coordinate system of the three-dimensional model respectively;
a two-dimensional coordinate obtaining unit, configured to obtain the two-dimensional coordinates of each video picture in the two-dimensional coordinate system;
a three-dimensional coordinate calculation unit, configured to calculate the three-dimensional coordinates corresponding to the two-dimensional coordinates in the three-dimensional coordinate system, the mapping relationship between the two-dimensional coordinate system and the three-dimensional coordinate system being preset; and
a mapping unit, configured to map each video picture onto the three-dimensional model according to the three-dimensional coordinates to obtain the preliminary three-dimensional video image.
To solve the above problems, the present invention also provides a terminal, which includes at least two cameras, a memory and a processor, the processor being coupled to the at least two cameras and the memory, and the memory storing a computer program executable on the processor;
when the processor executes the computer program, the steps in any of the above video stitching and fusion methods are implemented.
To solve the above problems, the present invention also provides a storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the steps in any of the above video stitching and fusion methods are implemented.
Compared with the prior art, the present invention maps the video monitoring pictures captured simultaneously by at least two cameras onto the three-dimensional model of the entire video area to obtain a preliminary three-dimensional video image of the entire video area, and then processes the preliminary three-dimensional video image to obtain a fused final three-dimensional video image. The video pictures captured by the individual cameras are thus stitched and fused into a single final three-dimensional video image of the entire video area, which avoids the problem of segmented monitored pictures and helps staff quickly grasp the situation of the entire video area.
Brief description of the drawings
Fig. 1 is a flowchart of a first embodiment of the video stitching and fusion method of the present invention;
Fig. 2 is a flowchart of a second embodiment of the video stitching and fusion method of the present invention;
Fig. 3 is a flowchart of a third embodiment of the video stitching and fusion method of the present invention;
Fig. 4 is a functional block diagram of a first embodiment of the video stitching and fusion device of the present invention;
Fig. 5 is a functional block diagram of a second embodiment of the video stitching and fusion device of the present invention;
Fig. 6 is a functional block diagram of a third embodiment of the video stitching and fusion device of the present invention;
Fig. 7 is a structural block diagram of an embodiment of the terminal of the present invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further elaborated below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to illustrate the present invention and are not intended to limit it.
Fig. 1 illustrates a first embodiment of the video stitching and fusion method of the present invention. In this embodiment, as shown in Fig. 1, the video stitching and fusion method includes the following steps:
Step S1: obtain the video pictures captured by at least two cameras at the same moment.
It should be noted that after the range of the entire video area is confirmed, the number of cameras is set according to that range, ensuring both that the video pictures captured by all cameras together cover the entire video area and that adjacent video pictures have overlapping regions. In addition, after the cameras capture video, the pictures captured by each camera at the same moment need to be extracted.
Step S2: construct a three-dimensional model of the entire video area.
Specifically, the three-dimensional model of the entire video area can be constructed according to parameters of the entire video area obtained in advance.
Step S3: map the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image.
Specifically, after the video pictures captured by all cameras at the same moment are obtained, features in the video pictures are compared with the three-dimensional model to confirm the region on the three-dimensional image corresponding to each video picture, and the video pictures are mapped onto the three-dimensional model to obtain the preliminary three-dimensional video image. At this point, some video pictures cover the same region as their neighbours, so the preliminary three-dimensional video image contains partially overlapping image regions.
In some embodiments, as shown in Fig. 2, step S3 includes the following sub-steps:
Step S31: construct a two-dimensional coordinate system of the entire video area and a three-dimensional coordinate system of the three-dimensional model respectively.
Step S32: obtain the two-dimensional coordinates of each video picture in the two-dimensional coordinate system.
Step S33: calculate the three-dimensional coordinates corresponding to the two-dimensional coordinates in the three-dimensional coordinate system.
It should be noted that the mapping relationship between the two-dimensional coordinate system and the three-dimensional coordinate system is preset. According to this mapping relationship, the three-dimensional coordinates corresponding to the two-dimensional coordinates can be obtained.
Step S34: map each video picture onto the three-dimensional model according to the three-dimensional coordinates to obtain the preliminary three-dimensional video image.
Specifically, the three-dimensional coordinates of each video picture in the three-dimensional coordinate system give the specific location of that video picture on the three-dimensional model; each video picture is then mapped onto the three-dimensional model according to its three-dimensional coordinates to obtain the preliminary three-dimensional video image.
Step S4: process and then fuse the color values of every two adjacent video pictures in the preliminary three-dimensional video image until all video pictures are fused, obtaining a final three-dimensional video image.
Specifically, the color values of every two adjacent video pictures in the preliminary three-dimensional video image are processed to adjust the color difference between the two video pictures, and the two processed video pictures are then fused into one new video picture, completing the fusion of that pair. This process is repeated until all video pictures have been fused, obtaining the final three-dimensional video image.
In some embodiments, as shown in Fig. 3, step S4 includes the following sub-steps:
Step S41: select a target video picture from all video pictures of the preliminary three-dimensional video image.
Specifically, one video picture is selected from all video pictures as the target video picture.
Step S42: obtain a first color value C_A of the target video picture, and set a fusion factor a and a brightness value b according to the color value C_A.
It should be noted that the value range of the fusion factor a is [0, 1].
Specifically, after the target video picture is confirmed, the first color value C_A of the target video picture is obtained.
Step S43: obtain a second color value C_B of any current video picture adjacent to the target video picture.
Specifically, the target video picture may be adjacent to multiple video pictures. One of the video pictures adjacent to the target video picture is selected as the current video picture, and the second color value C_B of that current video picture is obtained.
Step S44: perform calculation processing according to the first color value C_A, the second color value C_B, the fusion factor a and the brightness value b to obtain a third color value C_AB.
It should be noted that C_AB = (C_A * a + C_B * (1 - a)) * b.
Step S45: process the target video picture and the current video picture according to the third color value C_AB and then fuse them to obtain a new target video picture, and fuse the new target video picture with any adjacent video picture, until all video pictures are fused, obtaining the final three-dimensional video image.
Specifically, after the third color value C_AB is obtained, the color values of the target video picture and the current video picture are processed according to C_AB so that both take the third color value C_AB, completing the fusion of the target video picture and the current video picture. Further, after this fusion is completed, the resulting video picture serves as the new target video picture, a video picture adjacent to the new target video picture is selected again as the new current video picture, and the two are fused in the same way. The above process is repeated until all video pictures are fused into one video picture, obtaining the final three-dimensional video image.
By processing the color values of the video pictures and fusing all video pictures into one, the problem of harsh video picture edges caused by color differences between adjacent video pictures is solved, improving the display effect of the final three-dimensional video image.
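The iterative loop of sub-steps S41 to S45 can be sketched as follows. Representing each picture by a single representative color value, treating list order as adjacency, and the fixed a = 0.5, b = 1.0 defaults are illustrative simplifications; the patent leaves picture adjacency and how a and b are derived from C_A unspecified.

```python
import numpy as np

def fuse_pair(target, current, a=0.5, b=1.0):
    """Fuse one adjacent pair (steps S42-S45): both pictures' color values are
    set to C_AB = (C_A*a + C_B*(1-a)) * b, and the pair becomes one new
    target picture, here represented by that single fused color value."""
    return (target * a + current * (1.0 - a)) * b

def fuse_all(pictures, a=0.5, b=1.0):
    """Repeat pairwise fusion until all pictures are fused into one (step S45)."""
    target = pictures[0]          # step S41: choose an initial target picture
    for current in pictures[1:]:  # each remaining picture becomes the current one
        target = fuse_pair(target, current, a, b)
    return target

final = fuse_all([np.array([100.0]), np.array([200.0]), np.array([120.0])])
print(final)  # [135.]
```

Note that each fused result feeds the next iteration as the new target, matching the "new target video picture" wording of step S45, so the loop terminates after n - 1 fusions for n pictures.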
Step S5: output and display the final three-dimensional video image.
In this embodiment, the video monitoring pictures captured simultaneously by at least two cameras are mapped onto the three-dimensional model of the entire video area to obtain a preliminary three-dimensional video image of the entire video area, and the preliminary three-dimensional video image is then processed to obtain a fused final three-dimensional video image. The video pictures captured by the individual cameras are stitched and fused into a single final three-dimensional video image of the entire video area, which avoids the problem of segmented monitored pictures and helps staff quickly grasp the situation of the entire video area.
Fig. 4 illustrates a first embodiment of the video stitching and fusion device of the present invention. In this embodiment, as shown in Fig. 4, the video stitching and fusion device includes an obtaining module 10, a construction module 11, a mapping module 12, a fusion module 13 and an output module 14.
The obtaining module 10 is configured to obtain the video pictures captured by at least two cameras at the same moment; the construction module 11 is configured to construct a three-dimensional model of the entire video area; the mapping module 12 is configured to map the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image; the fusion module 13 is configured to process and then fuse the color values of every two adjacent video pictures in the preliminary three-dimensional video image until all video pictures are fused, obtaining a final three-dimensional video image; and the output module 14 is configured to output and display the final three-dimensional video image.
On the basis of the above embodiment, in other embodiments, as shown in Fig. 5, the fusion module 13 includes a selection unit 131, a first color value obtaining unit 132, a second color value obtaining unit 133, a third color value obtaining unit 134 and a fusion unit 135.
The selection unit 131 is configured to select a target video picture from all video pictures of the preliminary three-dimensional video image; the first color value obtaining unit 132 is configured to obtain a first color value C_A of the target video picture, and set a fusion factor a and a brightness value b according to the color value C_A, the value range of the fusion factor a being [0, 1]; the second color value obtaining unit 133 is configured to obtain a second color value C_B of any current video picture adjacent to the target video picture; the third color value obtaining unit 134 is configured to perform calculation processing according to the first color value C_A, the second color value C_B, the fusion factor a and the brightness value b to obtain a third color value C_AB, where C_AB = (C_A * a + C_B * (1 - a)) * b; and the fusion unit 135 is configured to process the target video picture and the current video picture according to the third color value C_AB and then fuse them to obtain a new target video picture, and fuse the new target video picture with any adjacent video picture, until all video pictures are fused, obtaining the final three-dimensional video image.
On the basis of the above embodiment, in other embodiments, as shown in Fig. 6, the mapping module 12 includes a construction unit 121, a two-dimensional coordinate obtaining unit 122, a three-dimensional coordinate calculation unit 123 and a mapping unit 124.
The construction unit 121 is configured to construct a two-dimensional coordinate system of the entire video area and a three-dimensional coordinate system of the three-dimensional model respectively; the two-dimensional coordinate obtaining unit 122 is configured to obtain the two-dimensional coordinates of each video picture in the two-dimensional coordinate system; the three-dimensional coordinate calculation unit 123 is configured to calculate the three-dimensional coordinates corresponding to the two-dimensional coordinates in the three-dimensional coordinate system, the mapping relationship between the two-dimensional coordinate system and the three-dimensional coordinate system being preset; and the mapping unit 124 is configured to map each video picture onto the three-dimensional model according to the three-dimensional coordinates to obtain the preliminary three-dimensional video image.
Fig. 7 illustrates a schematic block diagram of an embodiment of the terminal of the present invention. Referring to Fig. 7, the terminal in this embodiment includes: one or at least two processors 80, a memory 81, at least two cameras 82, and a computer program 810 stored in the memory 81 and executable on the processor 80. When the processor 80 executes the computer program 810, the steps in the video stitching and fusion method described in the above embodiments are implemented, for example steps S1 to S5 shown in Fig. 1. Alternatively, when the processor 80 executes the computer program 810, the functions of the modules/units in the above video stitching and fusion device embodiments are implemented, for example the functions of modules 10 to 14 shown in Fig. 4.
The computer program 810 can be divided into one or more modules/units, which are stored in the memory 81 and executed by the processor 80 to complete the application. The one or more modules/units can be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 810 in the terminal.
The terminal includes, but is not limited to, the processor 80, the memory 81 and the at least two cameras 82. Those skilled in the art can understand that Fig. 7 is only an example of the terminal and does not constitute a limitation on its structure; the terminal may include more or fewer components than illustrated, combine certain components, or use different components. For example, the terminal may also include input devices, output devices, network access devices, buses, and the like.
The processor 80 can be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor can be a microprocessor, or the processor can be any conventional processor.
The memory 81 can be a read-only memory, a static storage device capable of storing static information and instructions, a random access memory or a dynamic storage device capable of storing information and instructions, or an electrically erasable programmable read-only memory, a read-only compact disc or other optical disc storage, a magnetic disk storage medium or another magnetic storage device. The memory 81 can be connected to the processor 80 by a communication bus, or can be integrated with the processor 80.
In the above embodiments, the description of each embodiment has its own emphasis; for parts not described in detail in one embodiment, reference may be made to the relevant descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware, or by a combination of computer software and electronic hardware. Whether these functions are implemented in hardware or software depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each specific application, but such implementations should not be considered beyond the scope of the application.
In the embodiments provided in this application, it should be understood that the disclosed device and method can be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of modules or units is only a logical functional division, and there may be other division manners in actual implementation; for instance, multiple units or components can be combined or integrated into another device, or some features can be ignored or not executed. Moreover, the mutual coupling, direct coupling or communication connection shown or discussed can be indirect coupling or communication connection through some interfaces, devices or units, and can be electrical, mechanical or in other forms.
In addition, the functional units in the embodiments of the application can be integrated into one processing unit, or each unit can exist physically alone, or two or more units can be integrated into one unit. The above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
An embodiment of the application also provides a storage medium for storing a computer program, which includes the program data designed for executing the above video stitching and fusion method embodiments of this application. The video stitching and fusion method provided by this application can be implemented by executing the computer program stored in the storage medium.
If the integrated module/unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the methods of the above embodiments of the application can also be completed by instructing the relevant hardware through a computer program 810. The computer program 810 can be stored in a computer-readable storage medium, and when executed by the processor 80, the steps of the above method embodiments can be implemented. The computer program 810 includes computer program code, which can be in the form of source code, object code, an executable file or some intermediate form. The computer-readable medium can include: any entity or device capable of carrying computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electric carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content included in the computer-readable medium can be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electric carrier signals and telecommunication signals.
The specific embodiments of the invention are described in detail above, but they are only examples, and the present invention is not limited to the specific embodiments described above. For those skilled in the art, any equivalent modification or substitution of the invention also falls within the scope of the invention; therefore, equal transformations, modifications and improvements made without departing from the spirit and principles of the present invention should all be contained within the scope of the invention.
Claims (8)
1. A video-splicing fusion method, characterized in that it comprises:
obtaining video pictures shot by at least two cameras at the same moment;
constructing a three-dimensional model of the entire video area;
mapping the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image;
processing the color values of every two adjacent video pictures in the preliminary three-dimensional video image and then fusing them, until all the video pictures are fused, to obtain a final three-dimensional video image;
outputting and displaying the final three-dimensional video image.
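The steps of claim 1 can be sketched as a high-level pipeline. This is a minimal illustration only; the helper callables `build_model`, `map_to_model`, `fuse_adjacent`, and `display` are hypothetical placeholders standing in for the model construction, mapping, fusion, and output steps, and are not part of the claim:

```python
def splice_and_fuse(frames, build_model, map_to_model, fuse_adjacent, display):
    """High-level pipeline per claim 1: pictures from >= 2 cameras at one moment."""
    assert len(frames) >= 2, "at least two synchronized camera pictures required"
    model = build_model()                       # 3D model of the entire video area
    preliminary = [map_to_model(f, model) for f in frames]
    fused = preliminary[0]
    for nxt in preliminary[1:]:                 # fuse adjacent pictures until all merged
        fused = fuse_adjacent(fused, nxt)
    display(fused)                              # output the final 3D video image
    return fused
```

The loop mirrors the claim's "until all video pictures are fused" condition: each fusion step folds one more adjacent picture into the running result.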
2. The video-splicing fusion method according to claim 1, characterized in that the step of processing the color values of every two adjacent video pictures in the preliminary three-dimensional video image and then fusing them, until all the video pictures are fused, to obtain the final three-dimensional video image comprises:
selecting a target video picture from all the video pictures of the preliminary three-dimensional video image;
obtaining a first color value C_A of the target video picture, and setting a fusion factor a and a brightness value b according to the color value C_A, the value range of the fusion factor a being [0, 1];
obtaining a second color value C_B of any one current video picture adjacent to the target video picture;
performing calculation processing according to the first color value C_A, the second color value C_B, the fusion factor a, and the brightness value b to obtain a third color value C_AB, where C_AB = (C_A * a + C_B * (1 - a)) * b;
processing the target video picture and the current video picture according to the third color value C_AB and then fusing them to obtain a new target video picture, and fusing the new target video picture with any adjacent video picture, until all the video pictures are fused, to obtain the final three-dimensional video image.
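A minimal numeric sketch of the blend in claim 2, applied per color channel. The clamping to [0, 255] and the 8-bit RGB assumption are illustrative additions, not stated in the claim:

```python
def fuse_color(c_a, c_b, a, b):
    """C_AB = (C_A * a + C_B * (1 - a)) * b, per the formula in claim 2."""
    if not 0.0 <= a <= 1.0:
        raise ValueError("fusion factor a must lie in [0, 1]")
    return (c_a * a + c_b * (1.0 - a)) * b

def fuse_pixel(pixel_a, pixel_b, a, b):
    """Blend two RGB pixels channel-by-channel, clamped to the 8-bit range."""
    return tuple(
        min(255, max(0, round(fuse_color(ca, cb, a, b))))
        for ca, cb in zip(pixel_a, pixel_b)
    )
```

With a = 0.5 and b = 1.0 the formula reduces to a plain average of the two overlapping pictures; b greater than 1 brightens the seam and b less than 1 darkens it.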
3. The video-splicing fusion method according to claim 1, characterized in that the step of mapping the video pictures onto the three-dimensional model to obtain the preliminary three-dimensional video image comprises:
constructing a two-dimensional coordinate system of the entire video area and a three-dimensional coordinate system of the three-dimensional model, respectively;
obtaining the two-dimensional coordinates of each video picture in the two-dimensional coordinate system;
calculating the three-dimensional coordinates corresponding to the two-dimensional coordinates in the three-dimensional coordinate system, the mapping relation between the two-dimensional coordinate system and the three-dimensional coordinate system being preset;
mapping each video picture onto the three-dimensional model according to the three-dimensional coordinates to obtain the preliminary three-dimensional video image.
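The preset 2D-to-3D relation of claim 3 can be illustrated with a simple affine lift onto a fixed surface. The dictionary layout and field names (`sx`, `tx`, `sy`, `ty`, `z`) are hypothetical: the claim only requires that the relation between the two coordinate systems be fixed in advance, not this particular form:

```python
def to_three_d(u, v, mapping):
    """Lift a 2D picture coordinate (u, v) into the model's 3D frame
    using a preset affine relation (illustrative, not the patent's exact form)."""
    x = mapping["sx"] * u + mapping["tx"]
    y = mapping["sy"] * v + mapping["ty"]
    z = mapping["z"]          # e.g. the fixed height of the projection surface
    return (x, y, z)
```

In practice the preset relation would typically come from camera calibration, so that each picture's 2D coordinates land on the correct part of the model.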
4. A video-splicing fusion device, characterized in that it comprises:
an obtaining module, configured to obtain video pictures shot by at least two cameras at the same moment;
a construction module, configured to construct a three-dimensional model of the entire video area;
a mapping module, configured to map the video pictures onto the three-dimensional model to obtain a preliminary three-dimensional video image;
a fusion module, configured to process the color values of every two adjacent video pictures in the preliminary three-dimensional video image and then fuse them, until all the video pictures are fused, to obtain a final three-dimensional video image;
an output module, configured to output and display the final three-dimensional video image.
5. The video-splicing fusion device according to claim 4, characterized in that the fusion module comprises:
a selection unit, configured to select a target video picture from all the video pictures of the preliminary three-dimensional video image;
a first color value obtaining unit, configured to obtain a first color value C_A of the target video picture, and to set a fusion factor a and a brightness value b according to the color value C_A, the value range of the fusion factor a being [0, 1];
a second color value obtaining unit, configured to obtain a second color value C_B of any one current video picture adjacent to the target video picture;
a third color value obtaining unit, configured to perform calculation processing according to the first color value C_A, the second color value C_B, the fusion factor a, and the brightness value b to obtain a third color value C_AB, where C_AB = (C_A * a + C_B * (1 - a)) * b;
a fusion unit, configured to process the target video picture and the current video picture according to the third color value C_AB and then fuse them to obtain a new target video picture, and to fuse the new target video picture with any adjacent video picture, until all the video pictures are fused, to obtain the final three-dimensional video image.
6. The video-splicing fusion device according to claim 4, characterized in that the mapping module comprises:
a construction unit, configured to construct a two-dimensional coordinate system of the entire video area and a three-dimensional coordinate system of the three-dimensional model, respectively;
a two-dimensional coordinate obtaining unit, configured to obtain the two-dimensional coordinates of each video picture in the two-dimensional coordinate system;
a three-dimensional coordinate calculation unit, configured to calculate the three-dimensional coordinates corresponding to the two-dimensional coordinates in the three-dimensional coordinate system, the mapping relation between the two-dimensional coordinate system and the three-dimensional coordinate system being preset;
a mapping unit, configured to map each video picture onto the three-dimensional model according to the three-dimensional coordinates to obtain the preliminary three-dimensional video image.
7. A terminal, characterized in that it comprises at least two cameras, a memory, and a processor, the processor being coupled to the at least two cameras, and the memory storing a computer program that can run on the processor;
when the processor executes the computer program, the steps in the video-splicing fusion method according to any one of claims 1-3 are realized.
8. A storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor, the steps in the video-splicing fusion method according to any one of claims 1-3 are realized.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910205830.0A CN110097527A (en) | 2019-03-19 | 2019-03-19 | Video-splicing fusion method, device, terminal and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910205830.0A CN110097527A (en) | 2019-03-19 | 2019-03-19 | Video-splicing fusion method, device, terminal and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110097527A true CN110097527A (en) | 2019-08-06 |
Family
ID=67443396
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910205830.0A Pending CN110097527A (en) | 2019-03-19 | 2019-03-19 | Video-splicing fusion method, device, terminal and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110097527A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110910338A (en) * | 2019-12-03 | 2020-03-24 | 煤炭科学技术研究院有限公司 | Three-dimensional live-action video acquisition method, device, equipment and storage medium |
CN114143528A (en) * | 2020-09-04 | 2022-03-04 | 北京大视景科技有限公司 | Multi-video stream fusion method, electronic device and storage medium |
CN115861070A (en) * | 2022-12-14 | 2023-03-28 | 湖南凝服信息科技有限公司 | Three-dimensional video fusion splicing method |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104581196A (en) * | 2014-12-30 | 2015-04-29 | 北京像素软件科技股份有限公司 | Video image processing method and device |
CN105578145A (en) * | 2015-12-30 | 2016-05-11 | 天津德勤和创科技发展有限公司 | Method for real-time intelligent fusion of three-dimensional virtual scene and video monitoring |
US20160133006A1 (en) * | 2014-03-03 | 2016-05-12 | Tencent Technology (Shenzhen) Company Limited | Video processing method and apparatus |
CN106791621A (en) * | 2016-12-06 | 2017-05-31 | 深圳市元征科技股份有限公司 | The monitored picture forming method and system of supervising device |
- 2019-03-19 CN CN201910205830.0A patent/CN110097527A/en active Pending
Non-Patent Citations (1)
Title |
---|
Li Yangyang et al.: "Research on visual normalization technology for multi-camera image stitching", Computer Engineering and Applications (《计算机工程与应用》) * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US11410320B2 (en) | Image processing method, apparatus, and storage medium | |
KR100894874B1 (en) | Apparatus and Method for Generating a Stereoscopic Image from a Two-Dimensional Image using the Mesh Map | |
CN109598673A (en) | Image split-joint method, device, terminal and computer readable storage medium | |
CN108769462B (en) | Free visual angle scene roaming method and device | |
CN105894451A (en) | Method and device for splicing panoramic image | |
CN111766951B (en) | Image display method and apparatus, computer system, and computer-readable storage medium | |
CN109741388A (en) | Method and apparatus for generating binocular depth estimation model | |
CN104735435B (en) | Image processing method and electronic device | |
US20210243426A1 (en) | Method for generating multi-view images from a single image | |
US9536347B2 (en) | Apparatus and method for forming light field image | |
CN113256781B (en) | Virtual scene rendering device, storage medium and electronic equipment | |
CN105023260A (en) | Panorama image fusion method and fusion apparatus | |
CN110866936A (en) | Video labeling method, tracking method, device, computer equipment and storage medium | |
WO2013120308A1 (en) | Three dimensions display method and system | |
CN110097527A (en) | Video-splicing fusion method, device, terminal and storage medium | |
CN106227628B (en) | A kind of test method and device of mosaic screen | |
CN107133981B (en) | Image processing method and device | |
CN109493376A (en) | Image processing method and device, storage medium and electronic device | |
CN116523738A (en) | Task triggering method and device, storage medium and electronic equipment | |
CN108734791B (en) | Panoramic video processing method and device | |
CN113438541B (en) | Viewpoint animation generation method, device, equipment and storage medium | |
CN113298868B (en) | Model building method, device, electronic equipment, medium and program product | |
CN107678329A (en) | A kind of panorama interconnected control systems and control method | |
CN116664794A (en) | Image processing method, device, storage medium and electronic equipment | |
CN117651125A (en) | Video generation method, device, nonvolatile storage medium and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190806 ||