CN108234904B - Multi-video fusion method, device and system - Google Patents

Multi-video fusion method, device and system

Info

Publication number: CN108234904B
Application number: CN201810112281.8A
Authority: CN (China)
Prior art keywords: video information, fusion, video, panoramic, picture
Legal status: Expired - Fee Related
Other languages: Chinese (zh)
Other versions: CN108234904A
Inventors: 刘捷, 高明
Current Assignee: Individual
Original Assignee: Individual
Application filed by: Individual
Priority date (filing date): 2018-02-05
Publication of CN108234904A: 2018-06-29
Publication of CN108234904B (grant): 2020-10-27

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265: Mixing
    • H04N7/00: Television systems
    • H04N7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention discloses a multi-video fusion method, device and system for fusing multiple videos. The multi-video fusion method comprises the following steps: acquiring panoramic video information and multi-picture video information; and fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode. The technical scheme of the invention achieves mixed video fusion: panoramic video and multi-angle multi-picture video are fused seamlessly, any number of multi-picture videos of any size can be fused at any position in the panorama during fusion, and the fused result can be viewed in a panoramic player without any distortion.

Description

Multi-video fusion method, device and system
Technical Field
The invention relates to the field of video processing, in particular to a multi-video fusion method, a multi-video fusion device and a multi-video fusion system.
Background
In existing video fusion schemes, multiple cameras first capture images, features are extracted from the captured images, and panoramic stitching is finally performed according to the positions of the feature points. Video capture generally uses several cameras, and the cameras are usually laid out so that the optical centers of the lenses coincide as closely as possible; only then can the seams be kept inconspicuous in the later stitching stage. Features must be extracted from the captured images with computer vision algorithms; this step has high computational complexity and provides the basis for the subsequent video stitching. Finally, video fusion is performed according to the relative positions of the feature points in the source image and the target image, and the parallax caused by the fact that the optical centers of two cameras can never coincide exactly is removed during fusion.
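For context, the classical pipeline just described can be sketched in a few lines of Python with OpenCV; the detector choice, match count, and RANSAC threshold are illustrative assumptions, not details taken from this patent.

    import cv2
    import numpy as np

    def stitch_pair(img_left, img_right):
        """Classical stitching: extract features, match them, estimate a
        homography from the matched positions, and warp one image onto the other."""
        orb = cv2.ORB_create(nfeatures=2000)           # feature extraction
        k1, d1 = orb.detectAndCompute(img_left, None)
        k2, d2 = orb.detectAndCompute(img_right, None)

        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:200]

        src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

        # Warp the right image into the left image's frame; the seam between
        # the two is where the parallax artifacts described above appear.
        h, w = img_left.shape[:2]
        canvas = cv2.warpPerspective(img_right, H, (w * 2, h))
        canvas[:h, :w] = img_left
        return canvas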
The existing video fusion algorithms have the following problems:
First, existing panoramic video fusion algorithms are merely panoramic stitching algorithms: multiple video streams are stitched into one panoramic video through the series of steps above. This approach preserves only the large field of view, and local details are often hard to capture. Meanwhile, because the optical centers of adjacent lenses can hardly be made to coincide physically, the seams of the stitched video are distorted, and sometimes a person standing at a seam even goes missing from the picture. These two points mean that panoramic video fusion cannot adequately replace traditional camera equipment in many application scenarios; in the surveillance field in particular, blurred pictures and missing people can have very serious consequences.
Second, conventional surveillance video fusion simply tiles together the video pictures captured from different angles by multiple cameras. Many cameras must therefore be deployed to monitor the whole space without blind spots, and displaying pictures from many angles on one flat plane makes the viewer lose the sense of spatial position, so the method is hard to apply in scenarios that require remote command.
The present invention has been made in view of the above problems.
Disclosure of Invention
The main object of the invention is to disclose a multi-video fusion method, device and system that solve the distortion, missing-object, and blind-spot problems of video fusion in the prior art.
To achieve the above object, according to one aspect of the present invention, a multi-video fusion method is disclosed with the following technical solution:
A multi-video fusion method comprises the following steps: acquiring panoramic video information and multi-picture video information; and fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode.
Further, acquiring the panoramic video information and the multi-picture video information includes: acquiring first video information through a multi-channel video acquisition device, and acquiring second video information through a panoramic video acquisition device;
performing image point feature extraction on the second video information to obtain an extraction result; performing video stitching on the second video information based on the extraction result to obtain a stitching result; performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information; performing positioning setting within the panorama on the first video information to obtain a positioning result; and performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
Further, the preset fusion algorithm is a radiation fusion algorithm, which specifically includes the following steps. First, the pixel difference value at the seam position p_i is calculated as

    D(p_i) = I_L(p_i) - I_R(p_i)

where I_L(p_i) and I_R(p_i) denote the pixel values of the left image and of the right image at p_i, respectively. Then, n/2 points are radiated above and n/2 points below the position p_i, and the difference values of these n points are multiplied by weight coefficients and summed to obtain the pixel difference value at position q:

    D(q) = Σ_{i=1}^{n} w_i(q) · D(p_i)

where w_i(q) is a weight coefficient calculated as

    w_i(q) = (1 / ||p_i - q||) / Σ_{j=1}^{n} (1 / ||p_j - q||)

and ||p_i - q|| denotes the Euclidean distance between p_i and q. Finally, the panoramic video information and the multi-picture video information are fused according to the size of the fusion region and the pixel difference value at q:

    I'_L(q) = I_L(q) - ((x_b - x) / (2 · x_b)) · D(q)
    I'_R(q) = I_R(q) + ((x_b - x) / (2 · x_b)) · D(q)

where I'_L(q) denotes the pixel value of the left image after fusion, x denotes the distance between point q and point p_i, x_b denotes the radius of the fusion region, and I'_R(q) denotes the pixel value of the right image after fusion.
Further, the preset fusion mode comprises at least one of the following: fusing the multi-picture video information into the panoramic video information; fusing the panoramic video information into the multi-picture video information; and fusing panoramic video information into the panoramic video information.
According to another aspect of the present invention, a multi-video fusion apparatus is provided, and the following technical solutions are adopted:
a multi-video fusion apparatus comprising: the acquisition module is used for acquiring panoramic video information and multi-picture video information; and the fusion module is used for fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode.
Further, the acquisition module includes: a collection module for acquiring first video information through the multi-channel video acquisition device and second video information through the panoramic video acquisition device; an extraction module for performing image point feature extraction on the second video information to obtain an extraction result; a stitching module for performing video stitching on the second video information based on the extraction result to obtain a stitching result; a first mapping module for performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information; a positioning module for performing positioning setting within the panorama on the first video information to obtain a positioning result; and a second mapping module for performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
Further, the fusion module comprises a calculation module configured to: first, calculate the pixel difference value at the seam position p_i as

    D(p_i) = I_L(p_i) - I_R(p_i)

where I_L(p_i) and I_R(p_i) denote the pixel values of the left image and of the right image at p_i, respectively; then, radiate n/2 points above and n/2 points below the position p_i, and multiply the difference values of these n points by weight coefficients and sum them to obtain the pixel difference value at position q:

    D(q) = Σ_{i=1}^{n} w_i(q) · D(p_i)

where w_i(q) is a weight coefficient calculated as

    w_i(q) = (1 / ||p_i - q||) / Σ_{j=1}^{n} (1 / ||p_j - q||)

and ||p_i - q|| denotes the Euclidean distance between p_i and q; and finally, fuse the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel difference value at position q:

    I'_L(q) = I_L(q) - ((x_b - x) / (2 · x_b)) · D(q)
    I'_R(q) = I_R(q) + ((x_b - x) / (2 · x_b)) · D(q)

where I'_L(q) denotes the pixel value of the left image after fusion, x denotes the distance between point q and point p_i, x_b denotes the radius of the fusion region, and I'_R(q) denotes the pixel value of the right image after fusion.
Further, the preset fusion mode comprises at least one of the following: fusing the multi-picture video information into the panoramic video information; fusing the panoramic video information into the multi-picture video information; and fusing panoramic video information into the panoramic video information.
According to another aspect of the present invention, a multi-video fusion system is provided, and the following technical solutions are adopted:
a multi-video fusion system comprises the multi-video fusion device.
Through the preset fusion algorithm, the invention not only achieves panoramic stitching but also fuses the multi-view video pictures seamlessly into the panoramic video, so that a user can watch the video in 360 degrees without blind spots, gaining the feeling of being on the scene while still being able to examine the details of the multi-angle video pictures. At the same time, with this technical scheme the user can independently set the relative position of a multi-angle video picture within the panoramic video, and video fusion is performed automatically at the position the user sets, making operation simple and fast. Traditional multi-angle video merely places the multi-picture videos at simple fixed positions and pieces them together, leaves the user no freedom of choice, and does not handle the relative positional relationship between the multi-angle video and the panoramic video. Moreover, the present application lets the user choose the fusion mode independently: 1) fusing multi-picture video into the panoramic video; 2) fusing panoramic video into the panoramic video; 3) fusing panoramic video into the multi-picture video. The user can also choose to display the multi-view fused video with different geometric mappings. The invention can therefore provide more personalized and more diverse video rendering effects.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them.
Fig. 1 is a flowchart of a multi-video fusion method according to an embodiment of the present invention;
FIG. 2 is a flowchart of the processing of captured video information according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of a multi-view video merged into a panoramic video according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a panoramic video fused in a panoramic video according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a panoramic video merged into a multi-picture video according to an embodiment of the present invention;
fig. 6 is a block diagram of a multi-video fusion apparatus according to an embodiment of the present invention.
Detailed Description
The embodiments of the invention will be described in detail below with reference to the drawings, but the invention can be implemented in many different ways as defined and covered by the claims.
Fig. 1 is a flowchart of a multi-video fusion method according to an embodiment of the present invention.
Referring to fig. 1, a multi-video fusion method includes:
S101: acquiring panoramic video information and multi-picture video information;
S103: fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode.
Specifically, in step S101, panoramic video information and multi-picture video information are acquired. Video capture comes first and is divided into multi-path panoramic video capture and multi-picture video capture. Multi-path panoramic video capture means that several cameras are laid out with the goal of making the optical centers of the lenses coincide as closely as possible, so that the final seam effect is not obvious. Multi-picture video capture means the cameras can be placed freely; their placement depends only on the angles from which the user wants to shoot, which keeps deployment simple and operation convenient. After the captured video information has been processed, step S103 is executed: the panoramic video information and the multi-picture video information are fused according to a preset fusion algorithm and a preset fusion mode. Specifically, the preset fusion algorithm adopted by the invention is an independently developed radiation fusion algorithm, which effectively removes the color difference between the fused images so that the two images can be fused seamlessly. The preset fusion mode is not limited to a single mode; several fusion modes can be combined as desired.
Specifically, processing the captured video information may proceed as shown in fig. 2, which includes the following steps:
step 20: collecting multi-path panoramic video;
step 20a: collecting multi-picture video;
step 21: feature extraction;
step 21a: setting the position within the panorama;
step 22: video stitching;
step 22a: geometric mapping;
step 23: geometric mapping;
step 24: multi-view fusion algorithm.
Specifically, a series of processing is performed on the second video information captured by the multi-path panoramic capture of step 20 and on the first video information captured by the multi-picture capture of step 20a. In step 21, image point feature extraction is performed on the second video information to obtain an extraction result; in step 22, video stitching is performed on the second video information based on the extraction result to obtain a stitching result; in step 23, geometric mapping is performed on the second video information based on the stitching result to obtain the panoramic video information. In step 21a, positioning within the panorama is set for the first video information to obtain a positioning result; in step 22a, geometric mapping is performed on the first video information based on the positioning result to obtain the multi-picture video information. Finally, in step 24, video fusion is performed through the preset fusion algorithm.
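Put together, the flow of fig. 2 might look like the following Python sketch. The use of OpenCV's Stitcher for steps 21 and 22, and the helpers to_equirectangular (a spherical mapping, sketched after the next paragraph) and radiation_fuse_seam (sketched earlier), are illustrative assumptions rather than the patent's actual implementation.

    import cv2

    def build_fused_frame(panorama_frames, picture_frames, positions):
        """One time step of fig. 2: steps 21-23 on the panoramic branch,
        steps 21a-22a on the multi-picture branch, then fusion (step 24)."""
        # Steps 21-22: feature extraction and video stitching of the
        # multi-path panoramic capture (OpenCV's Stitcher bundles both).
        stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
        status, stitched = stitcher.stitch(panorama_frames)
        if status != cv2.Stitcher_OK:
            raise RuntimeError(f"stitching failed with status {status}")

        # Step 23: geometric mapping of the stitched result into spherical
        # coordinates (assumed helper, sketched below).
        panorama = to_equirectangular(stitched)

        for frame, (u, v) in zip(picture_frames, positions):
            # Steps 21a-22a: map each multi-picture frame and place it at the
            # user-chosen position (u, v), assumed to lie inside the panorama.
            patch = to_equirectangular(frame)
            h, w = patch.shape[:2]
            # Step 24: compensate the patch toward the background along its
            # left border with the radiation fusion sketched earlier, then paste.
            _, patch = radiation_fuse_seam(panorama[v:v + h, u:u + w], patch, seam_x=0)
            panorama[v:v + h, u:u + w] = patch
        return panorama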
Further, the geometric mapping here adopts spherical mapping, which ensures that the multi-picture video exhibits no image distortion when displayed in the panoramic player.
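For instance, a minimal spherical (equirectangular) mapping can be written as follows; the pinhole-camera model, field-of-view parameter, and nearest-neighbour sampling are assumptions made for illustration, since the patent does not spell out the mapping.

    import numpy as np

    def to_equirectangular(image, h_fov_deg=90.0, out_w=512, out_h=512):
        """Project a flat (pinhole) image onto longitude/latitude coordinates
        so it can sit in a panoramic player without visible stretching."""
        in_h, in_w = image.shape[:2]
        half = np.radians(h_fov_deg) / 2.0
        f = (in_w / 2.0) / np.tan(half)  # focal length in pixels

        # Longitude/latitude grid covering the patch's angular extent.
        lon, lat = np.meshgrid(np.linspace(-half, half, out_w),
                               np.linspace(-half, half, out_h))

        # Inverse mapping: for each (lon, lat), find the source pixel.
        x = f * np.tan(lon) + in_w / 2.0
        y = f * np.tan(lat) / np.cos(lon) + in_h / 2.0

        xi = np.clip(np.round(x).astype(int), 0, in_w - 1)
        yi = np.clip(np.round(y).astype(int), 0, in_h - 1)
        return image[yi, xi]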
More specifically, the preset fusion algorithm is a radiation fusion algorithm, which specifically includes the following steps. First, the pixel difference value at the seam position p_i is calculated as

    D(p_i) = I_L(p_i) - I_R(p_i)

where I_L(p_i) and I_R(p_i) denote the pixel values of the left image and of the right image at p_i, respectively. Then, n/2 points are radiated above and n/2 points below the position p_i, and the difference values of these n points are multiplied by weight coefficients and summed to obtain the pixel difference value at position q:

    D(q) = Σ_{i=1}^{n} w_i(q) · D(p_i)

where w_i(q) is a weight coefficient calculated as

    w_i(q) = (1 / ||p_i - q||) / Σ_{j=1}^{n} (1 / ||p_j - q||)

and ||p_i - q|| denotes the Euclidean distance between p_i and q. Finally, the panoramic video information and the multi-picture video information are fused according to the size of the fusion region and the pixel difference value at q:

    I'_L(q) = I_L(q) - ((x_b - x) / (2 · x_b)) · D(q)
    I'_R(q) = I_R(q) + ((x_b - x) / (2 · x_b)) · D(q)

where I'_L(q) denotes the pixel value of the left image after fusion, x denotes the distance between point q and point p_i, x_b denotes the radius of the fusion region, and I'_R(q) denotes the pixel value of the right image after fusion.
Preferably, the preset fusion mode comprises at least one of the following: fusing the multi-picture video information into the panoramic video information, as shown in fig. 3; fusing the panoramic video information into the multi-picture video information, as shown in fig. 5; and fusing panoramic video information into the panoramic video information, as shown in fig. 4.
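A minimal sketch of how a player might dispatch on these three preset fusion modes is given below; the enum names and the blend_into helper are hypothetical, introduced only to make the mode selection concrete.

    from enum import Enum, auto

    class FusionMode(Enum):
        PICTURE_IN_PANORAMA = auto()   # fig. 3: multi-picture video into the panorama
        PANORAMA_IN_PICTURE = auto()   # fig. 5: panoramic video into a multi-picture view
        PANORAMA_IN_PANORAMA = auto()  # fig. 4: one panoramic video into another

    def fuse(panorama, pictures, inner_panorama, mode):
        """Pick the fusion direction from the user-selected mode; each branch
        ends in the same radiation fusion, only source and target are swapped."""
        if mode is FusionMode.PICTURE_IN_PANORAMA:
            return blend_into(target=panorama, sources=pictures)
        if mode is FusionMode.PANORAMA_IN_PICTURE:
            return blend_into(target=pictures[0], sources=[inner_panorama])
        return blend_into(target=panorama, sources=[inner_panorama])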
Fig. 6 is a block diagram of a multi-video fusion apparatus according to an embodiment of the present invention.
Referring to fig. 6, a multi-video fusion apparatus includes: an acquisition module 60 for acquiring panoramic video information and multi-picture video information; and a fusion module 62 for fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode.
Preferably, the acquisition module 60 includes: a collection module for acquiring first video information through the multi-channel video acquisition device and second video information through the panoramic video acquisition device; an extraction module for performing image point feature extraction on the second video information to obtain an extraction result; a stitching module for performing video stitching on the second video information based on the extraction result to obtain a stitching result; a first mapping module for performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information; a positioning module for performing positioning setting within the panorama on the first video information to obtain a positioning result; and a second mapping module for performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
Preferably, the fusion module comprises a calculation module configured to: first, calculate the pixel difference value at the seam position p_i as

    D(p_i) = I_L(p_i) - I_R(p_i)

where I_L(p_i) and I_R(p_i) denote the pixel values of the left image and of the right image at p_i, respectively; then, radiate n/2 points above and n/2 points below the position p_i, and multiply the difference values of these n points by weight coefficients and sum them to obtain the pixel difference value at position q:

    D(q) = Σ_{i=1}^{n} w_i(q) · D(p_i)

where w_i(q) is a weight coefficient calculated as

    w_i(q) = (1 / ||p_i - q||) / Σ_{j=1}^{n} (1 / ||p_j - q||)

and ||p_i - q|| denotes the Euclidean distance between p_i and q; and finally, fuse the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel difference value at position q:

    I'_L(q) = I_L(q) - ((x_b - x) / (2 · x_b)) · D(q)
    I'_R(q) = I_R(q) + ((x_b - x) / (2 · x_b)) · D(q)

where I'_L(q) denotes the pixel value of the left image after fusion, x denotes the distance between point q and point p_i, x_b denotes the radius of the fusion region, and I'_R(q) denotes the pixel value of the right image after fusion.
Preferably, the preset fusion mode comprises at least one of the following: fusing the multi-picture video information into the panoramic video information; fusing the panoramic video information into the multi-picture video information; and fusing panoramic video information into the panoramic video information.
The multi-video fusion system provided by the invention comprises the multi-video fusion device.
Through the preset fusion algorithm, the invention not only achieves panoramic stitching but also fuses the multi-view video pictures seamlessly into the panoramic video, so that a user can watch the video in 360 degrees without blind spots, gaining the feeling of being on the scene while still being able to examine the details of the multi-angle video pictures. At the same time, with this technical scheme the user can independently set the relative position of a multi-angle video picture within the panoramic video, and video fusion is performed automatically at the position the user sets, making operation simple and fast. Traditional multi-angle video merely places the multi-picture videos at simple fixed positions and pieces them together, leaves the user no freedom of choice, and does not handle the relative positional relationship between the multi-angle video and the panoramic video. Moreover, the present application lets the user choose the fusion mode independently: 1) fusing multi-picture video into the panoramic video; 2) fusing panoramic video into the panoramic video; 3) fusing panoramic video into the multi-picture video. The user can also choose to display the multi-view fused video with different geometric mappings. The invention can therefore provide more personalized and more diverse video rendering effects.
While certain exemplary embodiments of the present invention have been described above by way of illustration only, it will be apparent to those of ordinary skill in the art that the described embodiments may be modified in various different ways without departing from the spirit and scope of the invention. Accordingly, the drawings and description are illustrative in nature and should not be construed as limiting the scope of the invention.

Claims (5)

1. A multi-video fusion method, comprising:
acquiring panoramic video information and multi-picture video information;
fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode;
the acquiring panoramic video information and the multi-picture video information includes:
acquiring first video information through a multi-channel video acquisition device, and acquiring second video information through a panoramic video acquisition device;
performing image point feature extraction on the second video information to obtain an extraction result;
performing video stitching on the second video information based on the extraction result to obtain a stitching result;
performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information;
performing positioning setting within the panorama on the first video information to obtain a positioning result; and
performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information;
wherein the preset fusion algorithm is a radiation fusion algorithm, which specifically comprises:
first, calculating the pixel difference value at the seam position p_i as

    D(p_i) = I_L(p_i) - I_R(p_i)

wherein I_L(p_i) and I_R(p_i) denote the pixel values of the left image and of the right image at p_i, respectively;
then, radiating n/2 points above and n/2 points below the position p_i, and multiplying the difference values of these n points by weight coefficients and summing them to obtain the pixel difference value at position q:

    D(q) = Σ_{i=1}^{n} w_i(q) · D(p_i)

wherein w_i(q) is a weight coefficient calculated as

    w_i(q) = (1 / ||p_i - q||) / Σ_{j=1}^{n} (1 / ||p_j - q||)

and ||p_i - q|| denotes the Euclidean distance between p_i and q;
and fusing the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel difference value at position q:

    I'_L(q) = I_L(q) - ((x_b - x) / (2 · x_b)) · D(q)
    I'_R(q) = I_R(q) + ((x_b - x) / (2 · x_b)) · D(q)

wherein I'_L(q) denotes the pixel value of the left image after fusion, x denotes the distance between point q and point p_i, x_b denotes the radius of the fusion region, and I'_R(q) denotes the pixel value of the right image after fusion.
2. The multi-video fusion method of claim 1, wherein the preset fusion mode comprises at least one of:
fusing the multi-picture video information into the panoramic video information;
fusing the panoramic video information into the multi-picture video information; and
fusing panoramic video information into the panoramic video information.
3. A multi-video fusion apparatus, comprising:
an acquisition module for acquiring panoramic video information and multi-picture video information; and
a fusion module for fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode;
wherein the acquisition module includes:
a collection module for acquiring first video information through the multi-channel video acquisition device and second video information through the panoramic video acquisition device;
an extraction module for performing image point feature extraction on the second video information to obtain an extraction result;
a stitching module for performing video stitching on the second video information based on the extraction result to obtain a stitching result;
a first mapping module for performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information;
a positioning module for performing positioning setting within the panorama on the first video information to obtain a positioning result; and
a second mapping module for performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information;
wherein the fusion module comprises a calculation module configured to:
first, calculate the pixel difference value at the seam position p_i as

    D(p_i) = I_L(p_i) - I_R(p_i)

wherein I_L(p_i) and I_R(p_i) denote the pixel values of the left image and of the right image at p_i, respectively;
then, radiate n/2 points above and n/2 points below the position p_i, and multiply the difference values of these n points by weight coefficients and sum them to obtain the pixel difference value at position q:

    D(q) = Σ_{i=1}^{n} w_i(q) · D(p_i)

wherein w_i(q) is a weight coefficient calculated as

    w_i(q) = (1 / ||p_i - q||) / Σ_{j=1}^{n} (1 / ||p_j - q||)

and ||p_i - q|| denotes the Euclidean distance between p_i and q;
and fuse the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel difference value at position q:

    I'_L(q) = I_L(q) - ((x_b - x) / (2 · x_b)) · D(q)
    I'_R(q) = I_R(q) + ((x_b - x) / (2 · x_b)) · D(q)

wherein I'_L(q) denotes the pixel value of the left image after fusion, x denotes the distance between point q and point p_i, x_b denotes the radius of the fusion region, and I'_R(q) denotes the pixel value of the right image after fusion.
4. The multi-video fusion apparatus of claim 3, wherein the preset fusion mode comprises at least one of:
fusing the multi-picture video information into the panoramic video information;
fusing the panoramic video information into the multi-picture video information; and
fusing panoramic video information into the panoramic video information.
5. A multi-video fusion system comprising the multi-video fusion apparatus according to any one of claims 3 to 4.
CN201810112281.8A (priority and filing date 2018-02-05) Multi-video fusion method, device and system. Expired - Fee Related. Granted as CN108234904B (en).

Priority Applications (1)

Application Number: CN201810112281.8A (granted as CN108234904B)
Priority Date: 2018-02-05
Filing Date: 2018-02-05
Title: Multi-video fusion method, device and system

Publications (2)

Publication Number Publication Date
CN108234904A CN108234904A (en) 2018-06-29
CN108234904B (en) 2020-10-27

Family

ID=62670687

Family Applications (1)

Application Number: CN201810112281.8A (Expired - Fee Related; granted as CN108234904B)
Title: Multi-video fusion method, device and system
Priority Date / Filing Date: 2018-02-05

Country Status (1)

Country Link
CN (1) CN108234904B (en)



Also Published As

Publication number Publication date
CN108234904A (en) 2018-06-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201027