CN114449130A - Multi-camera video fusion method and system - Google Patents

Multi-camera video fusion method and system

Info

Publication number
CN114449130A
CN114449130A (application CN202210215554.8A; granted as CN114449130B)
Authority
CN
China
Prior art keywords
camera
light field
frame synchronization
video
cameras
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210215554.8A
Other languages
Chinese (zh)
Other versions
CN114449130B (en)
Inventor
温建伟
Other inventors have requested that their names not be disclosed
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhuohe Technology Co Ltd
Original Assignee
Beijing Zhuohe Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhuohe Technology Co Ltd filed Critical Beijing Zhuohe Technology Co Ltd
Priority to CN202210215554.8A
Publication of CN114449130A
Application granted
Publication of CN114449130B
Active legal status
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/69 Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/04 Synchronising
    • H04N5/06 Generation of synchronising signals

Abstract

The invention provides a multi-camera video fusion method and system, belonging to the technical field of video fusion. The method comprises the following steps: S1: acquiring a plurality of video images shot by a plurality of cameras; S2: obtaining a plurality of time-synchronized key frames from the plurality of video images; S3: calculating a frame synchronization parameter Fs of the plurality of time-synchronized key frames; S4: judging whether the frame synchronization parameter Fs meets a preset condition; if yes, performing video fusion on the plurality of video images; if not, adjusting the shooting angle of at least one camera among the plurality of cameras and returning to step S1. The system comprises a frame synchronization parameter calculation module, a camera angle adjustment module, a video fusion module, and a judgment module. According to the invention, camera angle adjustment, camera calibration, and/or video fusion are executed based on the judgment of the frame synchronization parameters, avoiding the picture disorder caused by fusing multiple different videos that are not synchronized.

Description

Multi-camera video fusion method and system
Technical Field
The invention belongs to the technical field of video fusion, and particularly relates to a multi-camera video fusion method and system, computer equipment for implementing the method and a computer-readable storage medium.
Background
Video fusion techniques generally refer to fusing a plurality of image sequences, captured by different video capture devices for the same scene or model, to generate a new video scene or model of that scene. In practical implementations, video fusion usually requires configuring a plurality of cameras to simultaneously shoot an area containing a predetermined tracking target or a standard reference target, and performing video fusion after obtaining the videos shot by the cameras from different angles, so as to enhance the display effect. The resolutions of the different cameras may also differ. The videos to be fused therefore cannot be directly subjected to inter-frame fusion enhancement.
In the prior art, after a plurality of different video data are obtained, temporally synchronized key frames are usually found for image fusion by means of key frame registration. This approach is simple to implement, but when the video streams are generated quickly and the amount of stream data is large, there is significant delay. Moreover, this approach assumes by default that the multiple cameras shoot in time synchronization; in practical applications, however, different cameras perform differently, and complete synchronization cannot be maintained.
In addition, with the development of light field imaging technology, capture devices are gradually being upgraded to light field camera arrays. A light field camera array comprises a plurality of light field cameras with different adjustable angles, and light field image data differ from the data shot by a traditional camera, so the original key frame synchronization method cannot be applied. Moreover, the consistency of the shooting parameters (shooting time, shooting rate, and shooting angle) of the different light field cameras in the same light field camera array has an even more pronounced influence on later-stage video fusion.
How to solve the desynchronization that arises when fusing multiple different videos in existing video fusion setups, especially videos shot by a camera array containing light field cameras, is a problem on which the prior art needs further improvement.
Disclosure of Invention
In order to solve the technical problem, the invention provides a multi-camera video fusion method and system, computer equipment for implementing the method and a computer readable storage medium.
In a first aspect of the present invention, there is provided a multi-camera video fusion method, comprising the steps of:
s1: acquiring a plurality of video images shot by a plurality of cameras;
s2: obtaining a plurality of time-synchronized key frames from the plurality of video images;
s3: calculating a frame synchronization parameter Fs of the plurality of time-synchronized key frames;
s4: judging whether the frame synchronization parameter Fs meets a preset condition;
if yes, performing video fusion on the plurality of video images;
if not, adjusting the shooting angle of at least one camera in the plurality of cameras, and returning to the step S1.
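For illustration, the loop formed by steps S1 to S4 can be sketched as follows. This is a minimal, non-authoritative Python sketch: every parameter (capture, sync_keyframes, compute_fs, fs_ok, adjust_angle, fuse) is a hypothetical placeholder for the corresponding operation described above, not an API defined by this disclosure.

    from typing import Any, Callable, List

    Frame = Any  # stand-in for whatever image type the capture pipeline yields

    def fusion_loop(
        capture: Callable[[], List[Frame]],                    # S1: one video image per camera
        sync_keyframes: Callable[[List[Frame]], List[Frame]],  # S2: time-synchronized key frames
        compute_fs: Callable[[List[Frame]], float],            # S3: frame synchronization parameter Fs
        fs_ok: Callable[[float], bool],                        # S4: preset condition on Fs
        adjust_angle: Callable[[], None],                      # rotate at least one camera
        fuse: Callable[[List[Frame]], Frame],                  # video fusion of all images
        max_rounds: int = 10,
    ) -> Frame:
        for _ in range(max_rounds):
            images = capture()                  # S1
            keyframes = sync_keyframes(images)  # S2
            fs = compute_fs(keyframes)          # S3
            if fs_ok(fs):                       # S4: condition met, fuse the videos
                return fuse(images)
            adjust_angle()                      # S4: condition not met, return to S1
        raise RuntimeError("Fs never met the preset condition")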
As a further applicable scenario, the camera is a light field camera;
prior to the step S1, the method further includes:
s01: obtaining, by a plurality of the light field cameras, a plurality of light field video reference images at a plurality of angles;
s02: performing calibration synchronization for the light field cameras based on the plurality of light field video reference images for the plurality of angles.
Specifically, the step S01 includes:
placing a plurality of standard reference targets within a shooting range of the light field camera;
shooting the standard reference target by adopting the light field camera to obtain a plurality of light field video reference images at a plurality of angles;
the step S02 specifically includes:
calculating frame synchronization parameters Fst of a plurality of light field video reference images of the plurality of angles;
and if the frame synchronization parameter Fst is smaller than a preset threshold Threshold, performing time synchronization on the plurality of light field cameras included in the light field camera array.
In a second aspect of the present invention, a multi-camera video fusion system is provided, where the system includes a frame synchronization parameter calculation module, a camera calibration module, a camera angle adjustment module, a video fusion module, and a judgment module;
wherein at least two of the multiple cameras are light field cameras;
the frame synchronization parameter calculation module is used for calculating real-time frame synchronization parameters among a plurality of real-time image frames;
the camera calibration module performs calibration synchronization on the light field camera;
the camera adjusting module is used for adjusting the shooting angle of the camera within a preset range;
the video fusion module is used for carrying out video fusion on videos of a plurality of different angles shot by a plurality of cameras;
the judging module is used for judging whether the real-time frame synchronization parameter output by the frame synchronization parameter meets the preset condition,
when the real-time frame synchronization parameter output by the frame synchronization parameter meets a preset condition, starting the video fusion module;
when the real-time frame synchronization parameter output by the frame synchronization parameter meets a preset condition, starting the camera adjusting module;
the video images shot by the cameras comprise preset tracking targets.
The frame synchronization parameter calculation module is further used for calculating a reference frame synchronization parameter Fst among a plurality of reference light field image frames;
and if the reference frame synchronization parameter Fst is smaller than a preset threshold Threshold, time synchronization is performed on the light field cameras among the multiple cameras.
The reference light field image frame is obtained by:
placing a plurality of standard reference targets within a shooting range of the light field camera;
shooting the standard reference target by adopting the light field camera to obtain a plurality of light field video reference images at a plurality of angles;
obtaining a plurality of reference light field image frames from the plurality of light field video reference images for the plurality of angles.
The invention may be implemented as a computer-readable medium having stored thereon computer program instructions which, when executed, perform the method of the first aspect.
The invention may also be embodied as a computer program product carried on a computer-readable storage medium, the program being executed by a processor to perform the method of the first aspect.
According to the invention, camera angle adjustment, camera calibration, and/or video fusion are executed based on the judgment of the frame synchronization parameters, which avoids the picture disorder caused by fusing multiple different videos that are not synchronized.
Further advantages of the invention will be apparent in the detailed description section in conjunction with the drawings attached hereto.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. It is apparent that the drawings in the following description show only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from them without creative effort.
Fig. 1 is a schematic flow chart of a multi-camera video fusion method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of a multi-camera video fusion method according to another preferred embodiment of the present invention;
FIG. 3 is a schematic view of the minimum pixel area in the present invention;
FIG. 4 is a schematic diagram of the angle between the center line of the light field camera and the geometric center of the minimum pixel area in the present invention;
FIG. 5 is a schematic diagram of a partial structure module of a multi-camera video fusion system according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of a partial structure module of a multi-camera video fusion system according to still another preferred embodiment of the present invention;
fig. 7 is a schematic structural diagram of a computer-readable storage medium for implementing the method of fig. 1 or 2.
Detailed Description
The invention is further described with reference to the following drawings and detailed description.
Referring to fig. 1, fig. 1 is a schematic flowchart of a multi-camera video fusion method according to an embodiment of the present invention.
In FIG. 1, the method includes steps S1-S4, each implemented as follows:
s1: acquiring a plurality of video images shot by a plurality of cameras;
s2: obtaining a plurality of time-synchronized key frames from the plurality of video images;
s3: calculating a frame synchronization parameter Fs of the plurality of time-synchronized key frames;
s4: judging whether the frame synchronization parameter Fs meets a preset condition;
if yes, performing video fusion on the plurality of video images;
if not, adjusting the shooting angle of at least one camera in the plurality of cameras, and returning to the step S1.
The shooting angles of the cameras are different; the shooting angle of each camera is adjustable within a predetermined range.
Preferably, in a specific implementation, the plurality of video images captured by the plurality of cameras in step S1 are a plurality of light field video reference images at a plurality of angles, captured by a plurality of light field cameras at different shooting angles, with each light field camera capturing one angle at a time; the shooting angle of each light field camera is adjustable within a preset range.
Therefore, on the basis of fig. 1, referring to fig. 2, fig. 2 is a schematic flow chart of a multi-camera video fusion method according to still another preferred embodiment of the present invention.
In fig. 2, the camera is a light field camera;
prior to the step S1, the method further includes:
s01: obtaining, by a plurality of the light field cameras, a plurality of light field video reference images at a plurality of angles;
s02: performing calibration synchronization for the light field cameras based on the plurality of light field video reference images for the plurality of angles.
The step S01 specifically includes:
placing a plurality of standard reference targets within a shooting range of the light field camera;
shooting the standard reference target by adopting the light field camera to obtain a plurality of light field video reference images at a plurality of angles;
the step S02 specifically includes:
calculating frame synchronization parameters Fst of a plurality of light field video reference images of the plurality of angles;
and if the frame synchronization parameter Fst is smaller than a preset Threshold, performing time synchronization on a plurality of light field cameras included in the light field camera array.
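A minimal sketch of this calibration flow follows; cam.shoot_reference, compute_fst, and time_synchronize are hypothetical placeholders (the patent does not disclose the concrete synchronization mechanism), and the numeric threshold is an assumed value, since the text only names a preset Threshold.

    def calibrate_light_field_array(cameras, compute_fst, time_synchronize, threshold=0.1):
        # S01: each light field camera shoots the placed standard reference targets
        reference_images = [cam.shoot_reference() for cam in cameras]
        # S02: compute Fst over the multi-angle light field video reference images
        fst = compute_fst(reference_images)
        # per the text above, time synchronization is performed when Fst < Threshold
        if fst < threshold:
            time_synchronize(cameras)
        return fst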
The calculation of the key frame synchronization parameters Fs and Fst used by various embodiments of the present invention will now be described with reference to Figs. 3-4.
In a specific implementation, the frame synchronization parameter Fs is determined as follows:
[The formulas for Fs_ij and Fs were published as equation images in the original document and are not reproducible here.]
Fs_ij can be understood as the inter-frame synchronization parameter between the video images shot by the ith and jth cameras;
area(i) and area(j) are respectively the minimum pixel areas containing the preset tracking target in the time-synchronized key frames obtained from the video images shot by the ith and jth cameras;
angle(i) and angle(j) are respectively the included angles between the central lines of the ith and jth cameras and the geometric center of the minimum pixel region containing the preset tracking target;
pix(i) and pix(j) are the resolutions of the ith and jth cameras, respectively;
n is the number of cameras.
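The published formulas for Fs appear only as images in the source and cannot be reproduced here. The sketch below therefore uses an assumed pairwise form, combining a resolution-normalized area difference with an angle difference and averaging over all camera pairs; it illustrates the role of each variable defined above, not the patented expression.

    from itertools import combinations

    def fs_pair(cam_i: dict, cam_j: dict) -> float:
        # assumed form: resolution-normalized area mismatch plus angle mismatch (radians)
        area_term = abs(cam_i["area"] / cam_i["pix"] - cam_j["area"] / cam_j["pix"])
        angle_term = abs(cam_i["angle"] - cam_j["angle"])
        return area_term + angle_term

    def fs_overall(cams: list) -> float:
        # assumed aggregation of Fs_ij over all n*(n-1)/2 camera pairs, n > 1
        pairs = list(combinations(cams, 2))
        return sum(fs_pair(a, b) for a, b in pairs) / len(pairs)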
The frame synchronization parameter Fst is determined as follows:
[The formulas for Fst_ij and Fst were published as equation images in the original document and are not reproducible here.]
λ1 + λ2 = 1;
Fst_ij can be understood as the inter-frame synchronization parameter between the light field video reference images shot by the ith and jth light field cameras;
areas(i) and areas(j) are respectively the areas of the minimum pixel regions containing the standard reference target in the light field video reference images shot by the ith and jth light field cameras;
angles(i) and angles(j) are respectively the included angles between the central lines of the ith and jth light field cameras and the geometric center of the minimum pixel region containing the standard reference target;
pixs(i) and pixs(j) are the resolutions of the ith and jth light field cameras, respectively;
n is the number of light field cameras comprised by the light field camera array, n > 1; λ1 and λ2 are adjustable weight parameters.
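Fst can be sketched analogously. Its true formula is likewise an image in the source, so the weighted combination below is only an assumed form that respects the stated constraint λ1 + λ2 = 1.

    def fst_pair(cam_i: dict, cam_j: dict, lam1: float = 0.5, lam2: float = 0.5) -> float:
        # lam1 and lam2 stand for the adjustable weights λ1 and λ2
        assert abs(lam1 + lam2 - 1.0) < 1e-9
        area_term = abs(cam_i["areas"] / cam_i["pixs"] - cam_j["areas"] / cam_j["pixs"])
        angle_term = abs(cam_i["angles"] - cam_j["angles"])
        return lam1 * area_term + lam2 * angle_term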
Fig. 3 is a schematic view of the minimum pixel regions.
Fig. 3 schematically shows a light field video reference image, containing a standard reference target, taken by a certain light field camera.
As an illustrative example, the light field video reference image shown in Fig. 3 includes 5 × 6 = 30 pixel regions, denoted as pixel regions 1-30, and the area of each pixel region is taken to be 1.
A minimum pixel region may be the minimum block unit of the image, determined according to the current screen resolution.
Based on this, in Fig. 3, the standard reference target covers a total of 8 pixel regions in the light field video reference image: regions 9-10, 14, 15-16 (occluded in the figure), 20, and 21-22 (occluded in the figure).
Therefore, the minimum pixel area containing the standard reference target in the light field video reference image shot by this light field camera is 8.
That is, the minimum pixel area containing the standard reference target in a light field video reference image is the count of all minimum pixel regions touched by the standard reference target.
As another preference, additional neighborhood minimum pixel regions may be taken into account: on the basis of the above 8 pixel regions, pixel region No. 8 is added to the minimum pixel regions involved by the standard reference target, so that together they constitute a 3 × 3 block of minimum pixel regions.
In this embodiment, the minimum pixel region containing the standard reference target in a light field video reference image is thus an a × b block formed by all the minimum pixel regions touched by the standard reference target together with some neighborhood pixel regions, where a and b are positive integers greater than 1.
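As a concrete illustration of this counting rule, the sketch below divides an image into a grid of unit minimum pixel regions (5 × 6 = 30, as in Fig. 3) and counts the regions touched by a boolean target mask; target_mask is a hypothetical input, not data from the patent. The a × b neighborhood variant would then take the bounding box of the touched regions (3 × 3 in the example above).

    import numpy as np

    def min_pixel_region_count(target_mask: np.ndarray, rows: int = 5, cols: int = 6) -> int:
        h, w = target_mask.shape
        rh, cw = h // rows, w // cols  # pixel size of one minimum pixel region
        touched = 0
        for r in range(rows):
            for c in range(cols):
                block = target_mask[r * rh:(r + 1) * rh, c * cw:(c + 1) * cw]
                if block.any():        # any overlap with the target counts the region
                    touched += 1
        return touched                 # e.g. 8 for the target of Fig. 3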
Reference is next made to fig. 4.
In the above calculation formulas, angle(i) and angle(j) (likewise angles(i) and angles(j)) are respectively the included angles between the central lines of the ith and jth cameras and the geometric center of the minimum pixel region containing the target;
in fig. 4, the center line of each light field camera, the geometric center of the minimum pixel area containing the standard reference target, and the adjustable photographing range (preset range) of each light field camera are shown.
In a specific implementation, the included angle is measured in radians.
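For concreteness, such an included angle can be computed as the angle between the camera's center-line direction and the ray from the camera to the geometric center of the minimum pixel region. The 3D-vector formulation below is an assumption for illustration; the patent does not state its coordinate convention.

    import math

    def included_angle(center_line_dir, camera_pos, region_center) -> float:
        # ray from the camera position to the geometric center of the region
        to_center = [rc - cp for rc, cp in zip(region_center, camera_pos)]
        dot = sum(a * b for a, b in zip(center_line_dir, to_center))
        norm = math.hypot(*center_line_dir) * math.hypot(*to_center)
        return math.acos(max(-1.0, min(1.0, dot / norm)))  # radians, clamped for safety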
To implement the method shown in fig. 1 or fig. 2, referring to fig. 5, fig. 5 is a schematic diagram of a partial structure module of a multi-camera video fusion system according to an embodiment of the present invention.
In fig. 5, a multi-camera video fusion system is shown, which includes a frame synchronization parameter calculation module, a camera calibration module, a camera angle adjustment module, a video fusion module, and a determination module.
As a further preference, at least two of the multiple cameras are light field cameras; referring to Fig. 6, the multi-camera array comprises a light field camera array.
At this time, the frame synchronization parameter calculation module is configured to calculate a real-time frame synchronization parameter Fs between a plurality of real-time image frames, and is further configured to calculate a reference frame synchronization parameter Fst between a plurality of reference light field image frames;
the camera calibration module performs calibration synchronization on the light field camera;
the camera adjusting module is used for adjusting the shooting angle of the camera within a preset range;
the video fusion module is used for carrying out video fusion on videos of a plurality of different angles shot by a plurality of cameras;
the judging module is used for judging whether the real-time frame synchronization parameter output by the frame synchronization parameter meets the preset condition,
when the real-time frame synchronization parameter output by the frame synchronization parameter meets a preset condition, starting the video fusion module;
when the real-time frame synchronization parameter output by the frame synchronization parameter meets a preset condition, starting the camera adjusting module;
the video images shot by the cameras comprise preset tracking targets;
the frame synchronization parameter is determined as follows:
[The formulas for Fs_ij and Fs were published as equation images in the original document and are not reproducible here.]
wherein Fs is the frame synchronization parameter;
area (i) and area (j) are respectively the minimum pixel area containing the preset tracking target in the time synchronization key frame obtained from the video images shot by the ith camera and the jth camera;
angle(i) and angle(j) are respectively the included angles between the central lines of the ith and jth cameras and the geometric center of the minimum pixel region containing the preset tracking target;
pix (i) and pix (j) are resolutions of the ith camera and the jth camera respectively;
and n is the number of the cameras.
If the reference frame synchronization parameter Fst is smaller than a preset threshold Threshold, time synchronization is performed on the light field cameras among the multiple cameras;
the reference frame synchronization parameter Fst is determined as follows:
[The formulas for Fst_ij and Fst were published as equation images in the original document and are not reproducible here.]
λ1 + λ2 = 1;
wherein areas(i) and areas(j) are respectively the areas of the minimum pixel regions containing the standard reference target in the light field video reference images shot by the ith and jth light field cameras;
angles(i) and angles(j) are respectively the included angles between the central lines of the ith and jth light field cameras and the geometric center of the minimum pixel region containing the standard reference target;
pixs(i) and pixs(j) are the resolutions of the ith and jth light field cameras, respectively;
n is the number of light field cameras, n > 1; λ1 and λ2 are adjustable weight parameters.
Wherein the reference light field image frame is obtained by:
placing a plurality of standard reference targets within a shooting range of the light field camera;
shooting the standard reference target by adopting the light field camera to obtain a plurality of light field video reference images at a plurality of angles;
obtaining a plurality of reference light field image frames from the plurality of light field video reference images for the plurality of angles.
The technical solution of the invention can be realized automatically by computer equipment based on computer program instructions. Similarly, the present invention can also be embodied as a computer program product, loaded on a computer storage medium and executed by a processor to implement the above technical solution.
Further embodiments therefore include a computer device comprising a memory storing a computer executable program and a processor configured to perform the steps of the above method.
Referring to fig. 7, fig. 7 is a schematic diagram of a computer-readable storage medium for implementing the method of fig. 1 or 2. The computer readable medium has stored thereon computer program instructions for execution by a processor for implementing the method steps of:
s01: obtaining, by a plurality of light field cameras, a plurality of light field video reference images at a plurality of angles;
s02: performing calibration synchronization for the light field cameras based on the plurality of light field video reference images for the plurality of angles;
s1: acquiring a plurality of video images shot by a plurality of cameras;
s2: obtaining a plurality of time-synchronized key frames from the plurality of video images;
s3: calculating a frame synchronization parameter Fs of the plurality of time-synchronized key frames;
s4: judging whether the frame synchronization parameter Fs meets a preset condition;
if yes, performing video fusion on the plurality of video images;
if not, adjusting the shooting angle of at least one camera in the plurality of cameras, and returning to the step S1.
In the present invention, module structures and technical terms that are not specifically defined, such as key frame, synchronization, and fusion, are to be understood as described in the prior art. For example, a key frame may be a certain frame among consecutive frames containing the reference target, or may adopt another definition; "synchronization" herein means time synchronization (identical timing).
Tests on a large number of sample data show that executing camera angle adjustment, camera calibration, and/or video fusion based on the judgment of the frame synchronization parameters solves the problem of picture disorder caused by fusing multiple different videos that are not synchronized.
It should be noted that the present invention can solve a plurality of technical problems or achieve corresponding technical effects, but does not require that each embodiment of the present invention solves all the technical problems or achieves all the technical effects, and an embodiment that separately solves one or several technical problems or achieves one or more improved effects also constitutes a separate technical solution.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. A multi-camera video fusion method is characterized by comprising the following steps:
s1: acquiring a plurality of video images shot by a plurality of cameras;
s2: obtaining a plurality of time-synchronized key frames from the plurality of video images;
s3: calculating a frame synchronization parameter Fs of the plurality of time-synchronized key frames;
s4: judging whether the frame synchronization parameter Fs meets a preset condition;
if yes, performing video fusion on the plurality of video images;
if not, adjusting the shooting angle of at least one camera in the plurality of cameras, and returning to the step S1.
2. The multi-camera video fusion method of claim 1, wherein:
shooting angles of the cameras are different;
the shooting angle of each camera is adjustable within a predetermined range.
3. The multi-camera video fusion method of claim 1, wherein:
the frame synchronization parameter Fs is determined as follows:
[The formulas for Fs_ij and Fs were published as equation images in the original document and are not reproducible here.]
the area (i) and the area (j) are respectively the minimum pixel area containing a preset tracking target in time synchronization key frames obtained from video images shot by the ith camera and the jth camera;
angle(i) and angle(j) are respectively the included angles between the central lines of the ith and jth cameras and the geometric center of the minimum pixel region containing the preset tracking target;
pix (i) and pix (j) are resolutions of the ith camera and the jth camera respectively;
and n is the number of the cameras.
4. The multi-camera video fusion method of claim 1, wherein:
the camera is a light field camera;
prior to the step S1, the method further includes:
s01: obtaining, by a plurality of the light field cameras, a plurality of light field video reference images at a plurality of angles;
s02: performing calibration synchronization for the light field cameras based on the plurality of light field video reference images for the plurality of angles.
5. The multi-camera video fusion method of claim 4, wherein:
the step S01 specifically includes:
placing a plurality of standard reference targets within a shooting range of the light field camera;
shooting the standard reference target by adopting the light field camera to obtain a plurality of light field video reference images at a plurality of angles;
the step S02 specifically includes:
calculating frame synchronization parameters Fst of a plurality of light field video reference images of the plurality of angles;
if the frame synchronization parameter Fst is smaller than a preset Threshold, performing time synchronization on a plurality of light field cameras included in the light field camera array;
the frame synchronization parameter Fst is determined as follows:
[The formulas for Fst_ij and Fst were published as equation images in the original document and are not reproducible here.]
λ1 + λ2 = 1;
wherein areas(i) and areas(j) are respectively the areas of the minimum pixel regions containing the standard reference target in the light field video reference images shot by the ith and jth light field cameras;
angles(i) and angles(j) are respectively the included angles between the central lines of the ith and jth light field cameras and the geometric center of the minimum pixel region containing the standard reference target;
pixs(i) and pixs(j) are the resolutions of the ith and jth light field cameras, respectively;
n is the number of light field cameras comprised by the light field camera array, n > 1; λ1 and λ2 are adjustable weight parameters.
6. A multi-camera video fusion system comprises a frame synchronization parameter calculation module, a camera angle adjustment module, a video fusion module and a judgment module;
the method is characterized in that:
the frame synchronization parameter calculation module is used for calculating a frame synchronization parameter Fs among a plurality of image frames;
the camera adjusting module is used for adjusting the shooting angle of the camera within a preset range;
the video fusion module is used for carrying out video fusion on videos of a plurality of different angles shot by a plurality of cameras;
the judging module is used for judging whether the frame synchronization parameter output by the frame synchronization parameter meets a preset condition,
when the frame synchronization parameter output by the frame synchronization parameter meets a preset condition, starting the video fusion module;
when the frame synchronization parameter output by the frame synchronization parameter does not meet a preset condition, starting the camera adjusting module;
the video images shot by the cameras comprise preset tracking targets;
the frame synchronization parameter Fs is determined as follows:
[The formulas for Fs_ij and Fs were published as equation images in the original document and are not reproducible here.]
wherein Fs is the frame synchronization parameter; area (i) and area (j) are respectively the minimum pixel area containing the preset tracking target in the time synchronization key frame obtained from the video images shot by the ith camera and the jth camera;
angle(i) and angle(j) are respectively the included angles between the central lines of the ith and jth cameras and the geometric center of the minimum pixel region containing the preset tracking target;
pix (i) and pix (j) are resolutions of the ith camera and the jth camera respectively;
and n is the number of the cameras.
7. A multi-camera video fusion system comprises a frame synchronization parameter calculation module, a camera calibration module, a camera angle adjustment module, a video fusion module and a judgment module;
the system is characterized in that at least two of the multiple cameras are light field cameras;
the frame synchronization parameter calculation module is used for calculating real-time frame synchronization parameters among a plurality of real-time image frames;
the camera calibration module performs calibration synchronization on the light field camera;
the camera adjusting module is used for adjusting the shooting angle of the camera within a preset range;
the video fusion module is used for carrying out video fusion on videos of a plurality of different angles shot by a plurality of cameras;
the judging module is used for judging whether the real-time frame synchronization parameter output by the frame synchronization parameter meets the preset condition,
when the real-time frame synchronization parameter output by the frame synchronization parameter meets a preset condition, starting the video fusion module;
when the real-time frame synchronization parameter output by the frame synchronization parameter meets a preset condition, starting the camera adjusting module;
the video images shot by the cameras comprise preset tracking targets;
the frame synchronization parameter is determined as follows:
[The formulas for Fs_ij and Fs were published as equation images in the original document and are not reproducible here.]
wherein Fs is the frame synchronization parameter;
area (i) and area (j) are respectively the minimum pixel area containing the preset tracking target in the time synchronization key frame obtained from the video images shot by the ith camera and the jth camera;
angle(i) and angle(j) are respectively the included angles between the central lines of the ith and jth cameras and the geometric center of the minimum pixel region containing the preset tracking target;
pix (i) and pix (j) are resolutions of the ith camera and the jth camera respectively;
and n is the number of the cameras.
8. The multi-camera video fusion system of claim 7, wherein:
the frame synchronization parameter calculation module is further used for calculating a reference frame synchronization parameter Fst among a plurality of reference light field image frames;
if the reference frame synchronization parameter Fst is smaller than a preset Threshold, performing time synchronization on the light field cameras in the multiple cameras;
the reference frame synchronization parameter Fst is determined as follows:
[The formulas for Fst_ij and Fst were published as equation images in the original document and are not reproducible here.]
λ1 + λ2 = 1;
the areas of the minimum pixel regions containing the standard reference target in the light field video reference images shot by the ith light field camera and the jth light field camera are respectively (area (i) and area (j));
angless (i) and angless (j) are respectively included angles between the central line of the ith light field camera and the geometric center of the minimum pixel area containing the standard reference target;
pixs (i), pixs (j) are the resolutions of the ith and jth light field cameras, respectively;
n is the number of the light field cameras, n>1,λ1、λ2Is an adjustable weight parameter.
9. The multi-camera video fusion system of claim 8, wherein:
the reference light field image frame is obtained by:
placing a plurality of standard reference targets within a shooting range of the light field camera;
shooting the standard reference target by adopting the light field camera to obtain a plurality of light field video reference images at a plurality of angles;
obtaining a plurality of reference light field image frames from the plurality of light field video reference images for the plurality of angles.
10. A computer readable medium having computer program instructions stored thereon for execution by a processor for performing the method steps of:
s01: obtaining a plurality of light field video reference images at a plurality of angles by a plurality of light field cameras;
s02: performing calibration synchronization for the light field cameras based on the plurality of light field video reference images for the plurality of angles;
s1: acquiring a plurality of video images shot by a plurality of cameras;
s2: obtaining a plurality of time-synchronized key frames from the plurality of video images;
s3: calculating a frame synchronization parameter Fs of the plurality of time-synchronized key frames;
s4: judging whether the frame synchronization parameter Fs meets a preset condition;
if yes, performing video fusion on the plurality of video images;
if not, adjusting the shooting angle of at least one camera in the plurality of cameras, and returning to the step S1.
CN202210215554.8A 2022-03-07 2022-03-07 Multi-camera video fusion method and system Active CN114449130B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210215554.8A CN114449130B (en) 2022-03-07 2022-03-07 Multi-camera video fusion method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210215554.8A CN114449130B (en) 2022-03-07 2022-03-07 Multi-camera video fusion method and system

Publications (2)

Publication Number Publication Date
CN114449130A true CN114449130A (en) 2022-05-06
CN114449130B CN114449130B (en) 2022-09-09

Family

ID=81359734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210215554.8A Active CN114449130B (en) 2022-03-07 2022-03-07 Multi-camera video fusion method and system

Country Status (1)

Country Link
CN (1) CN114449130B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117593472A (en) * 2024-01-18 2024-02-23 成都市灵奇空间软件有限公司 Method and system for modeling and reconstructing local three-dimensional scene in real time by video stream


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102521816A (en) * 2011-11-25 2012-06-27 浪潮电子信息产业股份有限公司 Real-time wide-scene monitoring synthesis method for cloud data center room
CN110120012A (en) * 2019-05-13 2019-08-13 广西师范大学 The video-splicing method that sync key frame based on binocular camera extracts
CN111355928A (en) * 2020-02-28 2020-06-30 济南浪潮高新科技投资发展有限公司 Video stitching method and system based on multi-camera content analysis
CN112017216A (en) * 2020-08-06 2020-12-01 影石创新科技股份有限公司 Image processing method, image processing device, computer-readable storage medium and computer equipment
CN113160053A (en) * 2021-04-01 2021-07-23 华南理工大学 Pose information-based underwater video image restoration and splicing method
CN113873328A (en) * 2021-09-27 2021-12-31 四川效率源信息安全技术股份有限公司 Method for splitting multi-camera fusion video file into multiple single-camera video files


Also Published As

Publication number Publication date
CN114449130B (en) 2022-09-09

Similar Documents

Publication Publication Date Title
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
WO2021073331A1 (en) Zoom blurred image acquiring method and device based on terminal device
KR101699919B1 (en) High dynamic range image creation apparatus of removaling ghost blur by using multi exposure fusion and method of the same
CN106713755B (en) Panoramic image processing method and device
CN109922275B (en) Self-adaptive adjustment method and device of exposure parameters and shooting equipment
US8274572B2 (en) Electronic camera capturing a group of a plurality of specific objects
CN110290323B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
US9467625B2 (en) Imaging device capable of combining images
JP6656035B2 (en) Image processing apparatus, imaging apparatus, and control method for image processing apparatus
WO2020029679A1 (en) Control method and apparatus, imaging device, electronic device and readable storage medium
CN113875219B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN114449130B (en) Multi-camera video fusion method and system
US8731327B2 (en) Image processing system and image processing method
JP6231816B2 (en) IMAGING DEVICE, ITS CONTROL METHOD, PROGRAM, AND STORAGE MEDIUM
JP7051365B2 (en) Image processing equipment, image processing methods, and programs
CN108476286B (en) Image output method and electronic equipment
CN110072050B (en) Self-adaptive adjustment method and device of exposure parameters and shooting equipment
JP6730423B2 (en) Image processing apparatus, image processing method, and image processing program
CN115035013A (en) Image processing method, image processing apparatus, terminal, and readable storage medium
CN110930340B (en) Image processing method and device
CN113259594A (en) Image processing method and device, computer readable storage medium and terminal
CN114612360B (en) Video fusion method and system based on motion model
CN115546042B (en) Video processing method and related equipment thereof
CN112738425A (en) Real-time video splicing system with multiple cameras for acquisition
WO2022109897A1 (en) Time-lapse photography method and device, and time-lapse video generation method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant