CN108234904A - Multi-video fusion method, apparatus and system - Google Patents


Info

Publication number
CN108234904A
CN108234904A CN201810112281.8A
Authority
CN
China
Prior art keywords
video information
fusion
video
picture
pixel value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810112281.8A
Other languages
Chinese (zh)
Other versions
CN108234904B (en)
Inventor
刘捷
高明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Priority to CN201810112281.8A
Publication of CN108234904A
Application granted
Publication of CN108234904B
Expired - Fee Related
Anticipated expiration


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Abstract

The invention discloses a multi-video fusion method, apparatus and system for fusing multiple videos. The method includes: obtaining panoramic video information and multi-picture video information; and fusing the panoramic video information and the multi-picture video information in a preset fusion mode according to a preset fusion algorithm. The technical solution of the present invention achieves a hybrid video fusion effect and accomplishes the seamless fusion of a panoramic video with multi-angle, multi-picture videos. During fusion, multi-picture videos of arbitrary size and arbitrary number can be seamlessly fused with the panoramic video at arbitrary positions, and the fused result can be viewed in a panorama player without any distortion.

Description

Multi-video fusion method, apparatus and system
Technical field
The present invention relates to the field of video processing, and more particularly to a multi-video fusion method, apparatus and system.
Background technology
In existing video fusion schemes, images are first acquired by multiple cameras, feature extraction is then performed on the acquired images, and panoramic stitching is finally carried out according to the positions of the feature points. Video acquisition generally uses multiple cameras, and the camera layout usually requires the optical centers of the lenses to coincide as closely as possible, so that the seams are not obvious in the later stitching stage. The acquired images require feature extraction by computer vision algorithms, a step of relatively high computational complexity, which provides the basis for the subsequent video stitching. Video fusion is finally performed according to the relative positions of the feature points in the source and target images, so as to remove, during fusion, the camera parallax caused by the fact that the optical centers of any two cameras can never coincide exactly.
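The conventional pipeline just described (multi-camera acquisition, feature extraction, alignment, compositing) can be illustrated with a toy Python sketch. The translation-only alignment model and the simulated feature matches below are simplifications for illustration; real systems detect features and estimate a full homography.

```python
import numpy as np

# Toy sketch of the prior-art stitching pipeline: matched feature
# points in two camera views are used to estimate the alignment
# (here a pure translation, the simplest possible model), and the
# right image is then pasted onto a shared canvas at that offset.

def estimate_translation(pts_left, pts_right):
    """Least-squares translation mapping right-image points onto left-image points."""
    return np.mean(np.asarray(pts_left, float) - np.asarray(pts_right, float), axis=0)

def stitch(left, right, offset):
    """Composite two grayscale images; assumes a nonnegative integer offset (dy, dx) for brevity."""
    dy, dx = int(round(offset[0])), int(round(offset[1]))
    h = max(left.shape[0], right.shape[0] + dy)
    w = max(left.shape[1], right.shape[1] + dx)
    canvas = np.zeros((h, w), dtype=left.dtype)
    canvas[:left.shape[0], :left.shape[1]] = left
    canvas[dy:dy + right.shape[0], dx:dx + right.shape[1]] = right
    return canvas

# Simulated matches: the right camera sees the same points shifted 60 px left.
pts_l = [(10, 80), (40, 95), (25, 70)]
pts_r = [(10, 20), (40, 35), (25, 10)]
offset = estimate_translation(pts_l, pts_r)
pano = stitch(np.ones((50, 100)), np.ones((50, 100)), offset)
print(offset, pano.shape)
```

Running the sketch recovers the 60-pixel horizontal offset between the two simulated views and produces a 50 × 160 canvas.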
Existing video fusion algorithms have the following problems:
First, existing panoramic video fusion algorithms are in fact only panoramic video stitching algorithms: multiple video channels are stitched into one panoramic video according to the series of steps described above. This approach only preserves a large-scale field of view, while local details are often difficult to capture. At the same time, since it is difficult for the optical centers of adjacent lenses to coincide physically, the seams of the stitched video are distorted, and human limbs sometimes even go missing at the seams. These two points mean that panoramic video fusion technology cannot adequately replace traditional camera equipment in many application scenarios, especially in the surveillance field, where blurred pictures and missing persons can lead to very serious consequences.
Second, traditional surveillance video fusion merely pieces together the video pictures of different angles acquired by multiple cameras. Many cameras therefore have to be deployed to guarantee that the entire space is monitored without blind spots, and displaying the pictures of multiple angles in one plane causes the viewer to lose the sense of spatial position, which makes such systems difficult to apply in scenarios such as remote command.
The present invention is proposed in view of the above problems.
Summary of the invention
It is a primary object of the present invention to disclose a multi-video fusion method, apparatus and system, to solve the problems of distortion, missing content and blind spots existing in video fusion in the prior art.
In order to achieve the above object, according to one aspect of the present invention, a multi-video fusion method is disclosed, adopting the following technical scheme:
A multi-video fusion method includes: obtaining panoramic video information and multi-picture video information; and fusing the panoramic video information and the multi-picture video information in a preset fusion mode according to a preset fusion algorithm.
Further, obtaining the panoramic video information and the multi-picture video information includes: obtaining first video information by a multi-channel video acquisition device and obtaining second video information by a panoramic video acquisition device;
performing picture feature point extraction on the second video information to obtain an extraction result; performing video stitching on the second video information based on the extraction result to obtain a stitching result; performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information; performing in-panorama position setting on the first video information to obtain a positioning result; and performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
Further, the preset fusion algorithm is a radiation fusion algorithm, which specifically includes: first, calculating the pixel value difference at seam position p_i:
D(p_i) = I_L(p_i) - I_R(p_i)
where I_L(p_i) and I_R(p_i) respectively represent the pixel value of the left image and the pixel value of the right image at position p_i. Then, n/2 points are radiated in each of the upper and lower regions around position p_i, n points in total, and the differences of these points are multiplied by weight coefficients to obtain the pixel value difference at position q:
D(q) = Σ_i w_i(q) · D(p_i)
where w_i(q) is a weight coefficient computed from ||p_i - q||, the Euclidean distance between p_i and q. Finally, the panoramic video information and the multi-picture video information are subjected to fusion processing according to the size of the fusion region and the pixel value difference at position q, where I'_L(q) represents the pixel value of the left image after fusion, x represents the distance from point q to point p_i, x_b represents the radius of the fusion region, and I'_R(q) represents the pixel value of the right image after fusion.
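The seam-blending procedure described above can be sketched in Python. The weight formula and the final fusion formula did not survive in this text, so the sketch ASSUMES normalized inverse-distance weights w_i(q) and a linear falloff of the applied correction over the fusion radius x_b; only the seam difference D(p_i) = I_L(p_i) - I_R(p_i) and the weighted sum for D(q) follow the description directly.

```python
import numpy as np

# Illustrative sketch of the "radiation" seam-blending idea. Assumed
# choices (not from the source): normalized inverse-distance weights,
# and a linear fade of the correction to zero at distance x_b.

def seam_differences(left_col, right_col):
    """D(p_i) along a vertical seam: per-row difference of the two seam columns."""
    return left_col - right_col

def q_difference(q, seam_ys, d_seam, n=6):
    """D(q): weighted sum over the n seam points nearest to q (assumed inverse-distance weights)."""
    ys = np.asarray(seam_ys, dtype=float)
    dist = np.abs(ys - q[0]) + 1e-9        # Euclidean distance reduces to |dy| on a vertical seam
    idx = np.argsort(dist)[:n]             # typically n/2 points above and n/2 below q
    w = 1.0 / dist[idx]
    w /= w.sum()                           # normalized weights w_i(q)
    return float(np.dot(w, d_seam[idx]))

def blended_value(i_left_q, d_q, x, x_b):
    """I'_L(q): apply half the seam difference, faded linearly to zero at x = x_b (assumed falloff)."""
    fade = max(0.0, 1.0 - x / x_b)
    return i_left_q - 0.5 * d_q * fade

seam_ys = np.arange(8)
d = seam_differences(np.full(8, 10.0), np.full(8, 6.0))  # constant seam difference of 4
dq = q_difference((3.5, 0), seam_ys, d)                  # weighted sum of a constant is that constant
out = blended_value(10.0, dq, x=0.0, x_b=8.0)
print(dq, out)
```

For a constant seam difference of 4, D(q) evaluates to 4 regardless of the weighting, and a left pixel of value 10 on the seam itself (x = 0) is corrected to 8, halfway toward the right image.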
Further, the preset fusion mode includes at least one of the following: fusing the multi-picture video information into the panoramic video information; fusing the panoramic video information into the multi-picture video information; and fusing the panoramic video information into the panoramic video information.
According to another aspect of the present invention, a multi-video fusion device is provided, adopting the following technical scheme:
A multi-video fusion device includes: an acquisition module for obtaining panoramic video information and multi-picture video information; and a fusion module for fusing the panoramic video information and the multi-picture video information in a preset fusion mode according to a preset fusion algorithm.
Further, the acquisition module includes: a capture module for obtaining the first video information by a multi-channel video acquisition device and obtaining the second video information by a panoramic video acquisition device; an extraction module for performing picture feature point extraction on the second video information to obtain an extraction result; a stitching module for performing video stitching on the second video information based on the extraction result to obtain the stitching result; a first mapping module for performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information; a positioning module for performing in-panorama position setting on the first video information to obtain a positioning result; and a second mapping module for performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
Further, the fusion module includes a computing module, and the computing module is configured to: first, calculate the pixel value difference at seam position p_i:
D(p_i) = I_L(p_i) - I_R(p_i)
where I_L(p_i) and I_R(p_i) respectively represent the pixel value of the left image and the pixel value of the right image at position p_i; then, radiate n/2 points in each of the upper and lower regions around position p_i, n points in total, and multiply the differences of these points by weight coefficients to obtain the pixel value difference at position q:
D(q) = Σ_i w_i(q) · D(p_i)
where w_i(q) is a weight coefficient computed from ||p_i - q||, the Euclidean distance between p_i and q; and finally, perform fusion processing on the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel value difference at position q, where I'_L(q) represents the pixel value of the left image after fusion, x represents the distance from point q to point p_i, x_b represents the radius of the fusion region, and I'_R(q) represents the pixel value of the right image after fusion.
Further, the preset fusion mode includes at least one of the following: fusing the multi-picture video information into the panoramic video information; fusing the panoramic video information into the multi-picture video information; and fusing the panoramic video information into the panoramic video information.
According to a further aspect of the present invention, a multi-video fusion system is provided, adopting the following technical scheme:
A multi-video fusion system includes the above multi-video fusion device.
Through the preset fusion algorithm, the present invention not only realizes panoramic stitching but also the seamless fusion of multi-angle video pictures with the panoramic video, so that users can watch in 360 degrees without blind spots, with an immersive feeling, while also being able to examine the details of the multi-angle video pictures. Meanwhile, the technical scheme of the present invention allows users to independently set the relative positions of the multi-angle video pictures within the panoramic video, with automatic video fusion performed according to the positions set by the user, making operation convenient and efficient. Traditional multi-angle video, by contrast, merely pieces multi-picture videos together at simple fixed positions, neither leaving the user any independent choice nor handling the relative positional relationship between the multi-angle videos and the panoramic video. Moreover, the present application both allows the user to independently select the fusion mode: 1) multi-picture videos fused into the panoramic video; 2) the panoramic video fused into the panoramic video; 3) the panoramic video fused into multi-picture videos; and allows the user to choose to display the fused video with different geometric mappings. In this way, more personalized and more diverse video rendering effects can be provided to the user.
Description of the drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application or in the prior art, the accompanying drawings needed in the embodiments are briefly described below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from them.
Fig. 1 is a flowchart of a multi-video fusion method according to an embodiment of the present invention;
Fig. 2 is a flowchart of the processing of acquired video information according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of multi-picture videos fused into a panoramic video according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a panoramic video fused into a panoramic video according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a panoramic video fused into multi-picture videos according to an embodiment of the present invention;
Fig. 6 is a structural diagram of a multi-video fusion device according to an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are described in detail below with reference to the accompanying drawings, but the present invention can be implemented in many different ways as defined and covered by the claims.
Fig. 1 is a flowchart of a multi-video fusion method according to an embodiment of the present invention.
Referring to Fig. 1, a multi-video fusion method includes:
S101: obtaining panoramic video information and multi-picture video information;
S103: fusing the panoramic video information and the multi-picture video information in a preset fusion mode according to a preset fusion algorithm.
Specifically, in step S101, panoramic video information and multi-picture video information are obtained. Video acquisition is carried out first, and the acquisition part is divided into multi-channel panoramic video acquisition and multi-picture video acquisition. Multi-channel panoramic video acquisition means that multiple cameras are laid out with the optical centers of their lenses coinciding as closely as possible, so that the seams are ultimately not obvious. Multi-picture video acquisition means that the cameras can be laid out arbitrarily, with the installation positions depending on the angles from which the user wants to shoot, so that deployment and operation are simple and convenient. After the acquired video information has been processed, step S103 is performed, that is, the panoramic video information and the multi-picture video information are fused in a preset fusion mode according to a preset fusion algorithm. Specifically, the preset fusion algorithm of the present invention is an independently developed radiation fusion algorithm, which can effectively handle the color difference problem of the fused images and enables two images to be fused completely seamlessly. The preset fusion mode is not limited to a single fusion mode, and several fusion modes can be combined arbitrarily.
Specifically, the processing of the acquired video information, referring to Fig. 2, comprises the following steps:
Step 20: Multi-channel panoramic video acquisition;
Step 20a: Multi-picture video acquisition;
Step 21: Feature extraction;
Step 21a: Position setting in the panorama;
Step 22: Video stitching;
Step 22a: Geometric mapping;
Step 23: Geometric mapping;
Step 24: Multi-video fusion algorithm.
Specifically, a series of processing is performed on the first video information obtained by the multi-channel video acquisition device in step 20 and the second video information obtained by the panoramic video acquisition device in step 20a. In step 21, picture feature point extraction is performed on the second video information to obtain an extraction result. In step 22, video stitching is performed on the second video information based on the extraction result to obtain a stitching result, and in step 23, geometric mapping is performed on the second video information based on the stitching result to obtain the panoramic video information. In step 21a, in-panorama position setting is performed on the first video information to obtain a positioning result, and in step 22a, geometric mapping is performed on the first video information based on the positioning result to obtain the multi-picture video information. In step 24, video fusion is performed by the preset fusion algorithm.
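The two branches of Fig. 2 can be sketched structurally in Python. The function bodies below are placeholders invented for illustration; only the data flow (steps 21 → 22 → 23 on the panoramic branch, 21a → 22a on the multi-picture branch, both meeting in step 24) follows the description.

```python
import numpy as np

# Structural sketch of Fig. 2. Real feature extraction, stitching and
# geometric mapping are not specified here; the stubs only show how
# the two branches hand their results to the fusion step.

def extract_features(frames):              # step 21 (placeholder)
    return [{"frame": i} for i, _ in enumerate(frames)]

def stitch(frames, features):              # step 22 (placeholder: side-by-side)
    return np.hstack(frames)

def geometric_map(image):                  # steps 22a / 23 (e.g. spherical mapping; identity here)
    return image

def set_positions(frames):                 # step 21a: user-chosen placement in the panorama
    return [(0, i * 10) for i, _ in enumerate(frames)]

def fuse(panorama, pictures, positions):   # step 24: preset fusion algorithm (plain paste here)
    out = panorama.copy()
    for (r, c), pic in zip(positions, pictures):
        out[r:r + pic.shape[0], c:c + pic.shape[1]] = pic
    return out

pano_frames = [np.zeros((4, 8)), np.zeros((4, 8))]   # second video information
multi_frames = [np.ones((2, 4))]                     # first video information
panorama = geometric_map(stitch(pano_frames, extract_features(pano_frames)))
fused = fuse(panorama, [geometric_map(f) for f in multi_frames], set_positions(multi_frames))
print(fused.shape, fused.sum())
```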
Furthermore, the geometric mapping is a spherical mapping, which ensures that no image distortion occurs when the multi-picture videos are displayed in the panorama player.
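A spherical mapping of the kind mentioned above is commonly realized as an equirectangular projection. The patent does not spell out its mapping, so the standard longitude/latitude convention is assumed in this sketch.

```python
import math

# Sketch of an equirectangular (spherical) mapping: a pixel (u, v) of
# a W x H panorama corresponds to longitude/latitude angles, i.e. a
# direction on the unit sphere. Convention assumed, not from the patent.

def pixel_to_sphere(u, v, w, h):
    """Map pixel (u, v) to a unit direction (x, y, z) via longitude/latitude."""
    lon = (u / w) * 2.0 * math.pi - math.pi        # longitude in [-pi, pi)
    lat = math.pi / 2.0 - (v / h) * math.pi        # latitude in [pi/2, -pi/2]
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

def sphere_to_pixel(x, y, z, w, h):
    """Inverse mapping: unit direction back to equirectangular pixel coordinates."""
    lon = math.atan2(x, z)
    lat = math.asin(max(-1.0, min(1.0, y)))
    return ((lon + math.pi) / (2.0 * math.pi) * w,
            (math.pi / 2.0 - lat) / math.pi * h)

W, H = 4096, 2048
d = pixel_to_sphere(2048, 1024, W, H)   # image centre -> forward direction (0, 0, 1)
p = sphere_to_pixel(*d, W, H)           # round-trips to (2048, 1024)
print(d, p)
```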
More specifically, the preset fusion algorithm is a radiation fusion algorithm, which specifically includes: first, calculating the pixel value difference at seam position p_i:
D(p_i) = I_L(p_i) - I_R(p_i)
where I_L(p_i) and I_R(p_i) respectively represent the pixel value of the left image and the pixel value of the right image at position p_i. Then, n/2 points are radiated in each of the upper and lower regions around position p_i, n points in total, and the differences of these points are multiplied by weight coefficients to obtain the pixel value difference at position q:
D(q) = Σ_i w_i(q) · D(p_i)
where w_i(q) is a weight coefficient computed from ||p_i - q||, the Euclidean distance between p_i and q. The panoramic video information and the multi-picture video information are then subjected to fusion processing according to the size of the fusion region and the pixel value difference at position q, where I'_L(q) represents the pixel value of the left image after fusion, x represents the distance from point q to point p_i, x_b represents the radius of the fusion region, and I'_R(q) represents the pixel value of the right image after fusion.
Preferably, the preset fusion mode includes at least one of the following: fusing the multi-picture video information into the panoramic video information, as shown in Fig. 3; fusing the panoramic video information into the multi-picture video information, as shown in Fig. 5; and fusing the panoramic video information into the panoramic video information, as shown in Fig. 4.
Fig. 6 is a structural diagram of a multi-video fusion device according to an embodiment of the present invention.
Referring to Fig. 6, a multi-video fusion device includes: an acquisition module 60 for obtaining panoramic video information and multi-picture video information; and a fusion module 62 for fusing the panoramic video information and the multi-picture video information in a preset fusion mode according to a preset fusion algorithm.
Preferably, the acquisition module 60 includes: a capture module for obtaining the first video information by a multi-channel video acquisition device and obtaining the second video information by a panoramic video acquisition device; an extraction module for performing picture feature point extraction on the second video information to obtain an extraction result; a stitching module for performing video stitching on the second video information based on the extraction result to obtain the stitching result; a first mapping module for performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information; a positioning module for performing in-panorama position setting on the first video information to obtain a positioning result; and a second mapping module for performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
Preferably, the fusion module includes a computing module, and the computing module is configured to: first, calculate the pixel value difference at seam position p_i:
D(p_i) = I_L(p_i) - I_R(p_i)
where I_L(p_i) and I_R(p_i) respectively represent the pixel value of the left image and the pixel value of the right image at position p_i; then, radiate n/2 points in each of the upper and lower regions around position p_i, n points in total, and multiply the differences of these points by weight coefficients to obtain the pixel value difference at position q:
D(q) = Σ_i w_i(q) · D(p_i)
where w_i(q) is a weight coefficient computed from ||p_i - q||, the Euclidean distance between p_i and q; and finally, perform fusion processing on the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel value difference at position q, where I'_L(q) represents the pixel value of the left image after fusion, x represents the distance from point q to point p_i, x_b represents the radius of the fusion region, and I'_R(q) represents the pixel value of the right image after fusion.
Preferably, the preset fusion mode includes at least one of the following: fusing the multi-picture video information into the panoramic video information; fusing the panoramic video information into the multi-picture video information; and fusing the panoramic video information into the panoramic video information.
A multi-video fusion system provided by the present invention includes the above multi-video fusion device.
Through the preset fusion algorithm, the present invention not only realizes panoramic stitching but also the seamless fusion of multi-angle video pictures with the panoramic video, so that users can watch in 360 degrees without blind spots, with an immersive feeling, while also being able to examine the details of the multi-angle video pictures. Meanwhile, the technical scheme of the present invention allows users to independently set the relative positions of the multi-angle video pictures within the panoramic video, with automatic video fusion performed according to the positions set by the user, making operation convenient and efficient. Traditional multi-angle video, by contrast, merely pieces multi-picture videos together at simple fixed positions, neither leaving the user any independent choice nor handling the relative positional relationship between the multi-angle videos and the panoramic video. Moreover, the present application both allows the user to independently select the fusion mode: 1) multi-picture videos fused into the panoramic video; 2) the panoramic video fused into the panoramic video; 3) the panoramic video fused into multi-picture videos; and allows the user to choose to display the fused video with different geometric mappings. In this way, more personalized and more diverse video rendering effects can be provided to the user.
The above describes only certain exemplary embodiments of the present invention by way of illustration. Undoubtedly, those of ordinary skill in the art can modify the described embodiments in many different ways without departing from the spirit and scope of the present invention. Therefore, the above drawings and description are to be regarded as illustrative in nature and should not be construed as limiting the claims of the present invention.

Claims (9)

  1. A multi-video fusion method, characterized by comprising:
    obtaining panoramic video information and multi-picture video information; and
    fusing the panoramic video information and the multi-picture video information according to a preset fusion algorithm and a preset fusion mode.
  2. The multi-video fusion method according to claim 1, wherein obtaining the panoramic video information and the multi-picture video information comprises:
    obtaining first video information by a multi-channel video acquisition device and obtaining second video information by a panoramic video acquisition device;
    performing picture feature point extraction on the second video information to obtain an extraction result;
    performing video stitching on the second video information based on the extraction result to obtain a stitching result;
    performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information;
    performing in-panorama position setting on the first video information to obtain a positioning result; and
    performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
  3. The multi-video fusion method according to claim 1, wherein the preset fusion algorithm is a radiation fusion algorithm, which specifically comprises:
    first, calculating the pixel value difference at seam position p_i:
    D(p_i) = I_L(p_i) - I_R(p_i)
    where I_L(p_i) and I_R(p_i) respectively represent the pixel value of the left image and the pixel value of the right image at position p_i;
    then, radiating n/2 points in each of the upper and lower regions around position p_i, n points in total, and multiplying the differences of these points by weight coefficients to obtain the pixel value difference at position q:
    D(q) = Σ_i w_i(q) · D(p_i)
    where w_i(q) is a weight coefficient computed from ||p_i - q||, the Euclidean distance between p_i and q; and
    performing fusion processing on the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel value difference at position q, where I'_L(q) represents the pixel value of the left image after fusion, x represents the distance from point q to point p_i, x_b represents the radius of the fusion region, and I'_R(q) represents the pixel value of the right image after fusion.
  4. The multi-video fusion method according to claim 1, wherein the preset fusion mode comprises at least one of the following:
    fusing the multi-picture video information into the panoramic video information;
    fusing the panoramic video information into the multi-picture video information; and
    fusing the panoramic video information into the panoramic video information.
  5. A multi-video fusion device, characterized by comprising:
    an acquisition module for obtaining panoramic video information and multi-picture video information; and
    a fusion module for fusing the panoramic video information and the multi-picture video information in a preset fusion mode according to a preset fusion algorithm.
  6. The multi-video fusion device according to claim 5, wherein the acquisition module comprises:
    a capture module for obtaining the first video information by a multi-channel video acquisition device and obtaining the second video information by a panoramic video acquisition device;
    an extraction module for performing picture feature point extraction on the second video information to obtain an extraction result;
    a stitching module for performing video stitching on the second video information based on the extraction result to obtain the stitching result;
    a first mapping module for performing geometric mapping on the second video information based on the stitching result to obtain the panoramic video information;
    a positioning module for performing in-panorama position setting on the first video information to obtain a positioning result; and
    a second mapping module for performing geometric mapping on the first video information based on the positioning result to obtain the multi-picture video information.
  7. The multi-video fusion device according to claim 5, wherein the fusion module comprises a computing module, and the computing module is configured to:
    first, calculate the pixel value difference at seam position p_i:
    D(p_i) = I_L(p_i) - I_R(p_i)
    where I_L(p_i) and I_R(p_i) respectively represent the pixel value of the left image and the pixel value of the right image at position p_i;
    then, radiate n/2 points in each of the upper and lower regions around position p_i, n points in total, and multiply the differences of these points by weight coefficients to obtain the pixel value difference at position q:
    D(q) = Σ_i w_i(q) · D(p_i)
    where w_i(q) is a weight coefficient computed from ||p_i - q||, the Euclidean distance between p_i and q; and
    finally, perform fusion processing on the panoramic video information and the multi-picture video information according to the size of the fusion region and the pixel value difference at position q, where I'_L(q) represents the pixel value of the left image after fusion, x represents the distance from point q to point p_i, x_b represents the radius of the fusion region, and I'_R(q) represents the pixel value of the right image after fusion.
  8. The multi-video fusion device according to claim 5, wherein the preset fusion mode comprises at least one of the following:
    fusing the multi-picture video information into the panoramic video information;
    fusing the panoramic video information into the multi-picture video information; and
    fusing the panoramic video information into the panoramic video information.
  9. A multi-video fusion system, characterized by comprising the multi-video fusion device according to any one of claims 5 to 8.
CN201810112281.8A 2018-02-05 2018-02-05 Multi-video fusion method, device and system Expired - Fee Related CN108234904B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810112281.8A CN108234904B (en) 2018-02-05 2018-02-05 Multi-video fusion method, device and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810112281.8A CN108234904B (en) 2018-02-05 2018-02-05 Multi-video fusion method, device and system

Publications (2)

Publication Number Publication Date
CN108234904A true CN108234904A (en) 2018-06-29
CN108234904B CN108234904B (en) 2020-10-27

Family

ID=62670687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810112281.8A Expired - Fee Related CN108234904B (en) 2018-02-05 2018-02-05 Multi-video fusion method, device and system

Country Status (1)

Country Link
CN (1) CN108234904B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109671045A (en) * 2018-12-28 2019-04-23 广东美电贝尔科技集团股份有限公司 A kind of more image interfusion methods
CN110866889A (en) * 2019-11-18 2020-03-06 成都威爱新经济技术研究院有限公司 Multi-camera data fusion method in monitoring system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101236599A (en) * 2007-12-29 2008-08-06 浙江工业大学 Human face recognition detection device based on multi- video camera information integration
CN102256111A (en) * 2011-07-17 2011-11-23 西安电子科技大学 Multi-channel panoramic video real-time monitoring system and method
CN103618881A (en) * 2013-12-10 2014-03-05 深圳英飞拓科技股份有限公司 Multi-lens panoramic stitching control method and multi-lens panoramic stitching control device
CN104539896A (en) * 2014-12-25 2015-04-22 桂林远望智能通信科技有限公司 Intelligent panoramic monitoring and hotspot close-up monitoring system and method
US20150130894A1 (en) * 2013-11-12 2015-05-14 Fyusion, Inc. Analysis and manipulation of panoramic surround views
CN104835178A (en) * 2015-02-02 2015-08-12 郑州轻工业学院 Low SNR(Signal to Noise Ratio) motion small target tracking and identification method

Similar Documents

Publication Publication Date Title
US20200236280A1 (en) Image Quality Assessment
CN102148965B (en) Video monitoring system for multi-target tracking close-up shooting
CN101964918B (en) Image reproducing apparatus and image reproducing method
CN109348119B (en) Panoramic monitoring system
TWI532460B (en) Reconstruction of images from an in vivo multi-camera capsule
CN108769578B (en) Real-time panoramic imaging system and method based on multiple cameras
WO2021012856A1 (en) Method for photographing panoramic image
CN103501409B (en) Ultrahigh resolution panorama speed dome AIO (All-In-One) system
CN106713755A (en) Method and apparatus for processing panoramic image
CN106657910A (en) Panoramic video monitoring method for power substation
CN112085659B (en) Panorama splicing and fusing method and system based on dome camera and storage medium
WO2014023231A1 (en) Wide-view-field ultrahigh-resolution optical imaging system and method
CN108200360A (en) A kind of real-time video joining method of more fish eye lens panoramic cameras
CN107318009A (en) A kind of panoramic picture harvester and acquisition method
CN109166076B (en) Multi-camera splicing brightness adjusting method and device and portable terminal
CN107578450A (en) A kind of method and system for the demarcation of panorama camera rigging error
CN111866523B (en) Panoramic video synthesis method and device, electronic equipment and computer storage medium
CN110278366B (en) Panoramic image blurring method, terminal and computer readable storage medium
CN107995408A (en) A kind of 360 ° of panoramic shooting systems and method
CN109428987A (en) A kind of 360 degree of stereo photographic devices of wear-type panorama and image pickup processing method
CN108234904A (en) A kind of more video fusion method, apparatus and system
CN103793901A (en) Infrared thermal image system supporting real-time panoramic stitching of total-radiation infrared thermal image video streaming
JP3232408B2 (en) Image generation device, image presentation device, and image generation method
CN107659786A (en) A kind of panoramic video monitoring device and processing method
CN105719235A (en) Circular scanning based video image splicing and split-screen display method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20201027