EP2984815A1 - Fusion de plusieurs flux video - Google Patents
Fusion de plusieurs flux video
Info
- Publication number
- EP2984815A1 (application EP14717732.3A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- video streams
- merging
- images
- video
- panoramic image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/698—Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/90—Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
Definitions
- the invention relates to a method for generating a video-type data file by merging several video data files. It also relates to software for implementing this method and to a human-machine interface used in its implementation. It further relates to a device and a system for merging video files.
- Existing cameras can generate a video data file.
- Such a file corresponds to a film comprising a view of the space limited by the field of view of the camera, which can be for example around 170 degrees.
- a first object of the invention is a solution for merging several video data files that makes it possible to obtain a resulting film of good quality.
- a second object of the invention is a user-friendly and fast solution for merging a plurality of video data files.
- the invention is based on a method for merging a plurality of video streams, characterized in that it comprises the following steps:
- the method for merging a plurality of video streams may comprise the following steps:
- the method for merging a plurality of video streams may comprise a step of entering a reference instant via a human-machine interface, and a step of presenting, in a viewing zone of the device's human-machine interface, the panoramic image bringing together the images of the video streams at this reference instant.
- the method for merging a plurality of video streams may comprise the repetition of the following steps:
- the method for merging multiple video streams may include a video encoding step of the wide-field video stream at the end of each repetition of the steps listed above, or after a small number of such repetitions.
- the method for merging a plurality of video streams may comprise the following steps:
- the step of measuring the time shift may comprise using the soundtracks associated with the different video streams to identify an identical sound on the different soundtracks, followed by a step of deducing the lag time between the different video streams; the synchronization step can then associate, for each instant, the closest images of the different video streams, taking their time lag into account.
- the method for merging a plurality of video streams may include a step of entering a choice of a start and end time by a human machine interface of the merge device.
- the step of merging the video streams may include associating at least one of the audio tracks, associated with at least one of the video streams, with the wide-field video stream resulting from the merging.
- the invention also relates to a device for merging a plurality of video streams comprising at least one computer and a memory, characterized in that it implements the steps of the method of merging a plurality of video streams as described above.
- the invention also relates to a human-machine interface of a device for merging a plurality of video streams as described above, characterized in that it comprises an interface for entering a reference time for the calculation of the parameters of panoramic construction for merging images of video streams.
- the human-machine interface of a device for merging multiple video streams may comprise all or some of the following interfaces:
- a window for viewing a panoramic image resulting from the merging of the images of the different video streams at the reference time
- a preview window of a wide-field video stream representing the merging of the video streams.
- the invention also relates to a video stream merging system, characterized in that it comprises a video stream merging device as described above and a multi-camera support comprising at least two housings for fixing cameras, in particular such that two adjacent cameras are oriented substantially perpendicular to each other.
- the video stream merger system may include a player of a wide field video stream resulting from the merging of multiple video streams.
- the invention also relates to a method for merging a plurality of video streams, characterized in that it comprises a preliminary step of positioning at least one multi-camera support at the level of a show stage, within a sports enclosure, on an athlete during a sporting event, on a transport vehicle, or on a drone or a helicopter; a step of filming several source video streams from cameras positioned on this at least one multi-camera support; a phase of merging said plurality of source video streams according to the merging method described above; and a step of displaying the resulting wide-field video stream on at least one display space of at least one screen.
- FIG. 1 schematically represents the structure of the video stream merging device according to one embodiment of the invention
- FIG. 2 diagrammatically represents the steps of the method of fusing video streams according to one embodiment of the invention
- FIG. 3 represents a menu of a man-machine interface according to the embodiment of the invention.
- the chosen solution makes it possible to merge several video-type files, more simply called "video streams" or "source video streams", as well as possible, through an at least partially automatic optimization of the merging operations; these automatic operations make it possible to guarantee a result of satisfactory quality.
- the method allows manual intervention by an operator through a user-friendly human-machine interface according to one embodiment of the invention. The result thus represents a compromise between a few manual interventions and automatic operations, finally reaching optimal quality in a way that is fast and user-friendly for the operator.
- the term "video stream", used in a simplified manner, also refers to an audio-video stream.
- the soundtrack (audio) is also, preferably but optionally, recovered, in parallel with the processing that will be detailed below mainly for the video part.
- the video stream merging device 1 comprises an input 2, which may be in the form of a connector, for example of USB type, through which it receives, via a communication means 3 and into a memory (not shown) for processing by several blocks 4-10, the different source video streams from several cameras 20.
- these different video streams have been generated by several cameras 20 fixed on the same multi-camera support, commonly known by its English name "rig", which has the advantage of making it possible to film several views from a single point of view and to guarantee a chosen, constant orientation between the different cameras.
- Such a multi-camera support may for example allow the attachment of six cameras, such that the axes of the fields of view of two adjacent cameras are oriented in substantially perpendicular directions, which makes it possible to obtain a view of all the space around the point of view.
- the method is suitable for processing any number of video streams from at least two cameras.
- the merging device 1 then comprises a first block 4 which implements a first step E1 for detecting time offsets between the different video streams received.
- the fusion device 1 is adapted to operate with a multi-camera support on which are mounted cameras that are independent and operate with independent clocks.
- the received source video streams are shifted in time.
- This offset is caused, for example, by different start times of the cameras, distinct time references of the cameras, and/or a sliding offset due to differences between the cameras' internal clocks.
- This first step E1 uses the soundtracks of the different video streams and implements the recognition of an identical sound on the different video streams, to deduce their offset in time.
- this search for a particular sound to deduce the offset of the video streams is limited to the vicinity of a reference instant indicated by an operator, for example via a human-machine interface, in a manner similar to the step E30 of selecting a reference instant, which will be described later.
- alternatively, this search can be fully automatic. It can also be performed over the entire duration of the source video streams.
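As an illustration of step E1 only, the offset between two soundtracks can be estimated by cross-correlation. The following Python sketch assumes two mono tracks already extracted as NumPy arrays at the same sample rate; neither this particular algorithm nor these names come from the patent.

```python
# A minimal sketch of step E1, assuming 'track_a' and 'track_b' are mono
# NumPy arrays sampled at the same rate. The cross-correlation peak gives
# the lag (in samples) that best aligns track_b onto track_a.
import numpy as np
from scipy.signal import correlate

def estimate_offset_seconds(track_a, track_b, sample_rate):
    corr = correlate(track_a, track_b, mode="full")
    lag_samples = int(np.argmax(np.abs(corr))) - (len(track_b) - 1)
    return lag_samples / sample_rate
```

To limit the search around a reference instant, as described above, one would correlate only short windows of the two tracks centered on that instant.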
- the device can then implement an optional intermediate step E15 of automatic diagnosis of the measured offset, by automatically measuring the quality of the alignment obtained.
- this step can in particular detect possible inconsistencies between the offsets calculated for all combinations of two video streams among the source video streams considered.
- the method can then either transmit the result of this diagnosis to an operator via a human-machine interface, or automatically determine whether this result is satisfactory, for example by comparison with a predefined threshold, and possibly implement a new offset calculation if the result is insufficient.
- the method implements a second video signal synchronization step E2 in a second block 5 of the video stream merging device.
- This second step consists of an inverse shift operation of the video streams to synchronize them as best as possible.
- a first stream is chosen as the reference video stream, preferably the video stream having started last in time, then each other stream is synchronized with this reference video stream.
- the offset time obtained in the first step is used to deduce for each video stream the number of offset images with respect to the reference video stream.
- the frames of each video stream closest to a given instant are then known.
- Each stream can then be shifted back by its number of offset frames to achieve synchronization with the reference stream. Note that, despite this mechanism, the images of the different streams can remain slightly offset from one another, and therefore not perfectly synchronized, but this residual offset is minimized by these synchronization steps.
- the soundtracks of each video stream are likewise offset by the same offset time as the video portion associated with them in the audio-video stream, and are also synchronized.
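A sketch of the conversion from measured lag to frame indices in step E2, under an assumption of constant frame rate; the helper names and the `fps` parameter are illustrative, not from the patent.

```python
# A minimal sketch of step E2: the measured lag is converted into a whole
# number of frames, and each stream is aligned on the reference stream
# (the one that started last). A constant frame rate is assumed.
def frame_offset(lag_seconds: float, fps: float) -> int:
    # Number of frames by which this stream is shifted relative to the
    # reference stream.
    return round(lag_seconds * fps)

def aligned_frame_index(common_index: int, lag_seconds: float, fps: float) -> int:
    # Index, in this stream, of the frame closest to frame 'common_index'
    # of the reference stream. The residual sub-frame offset mentioned
    # above cannot be removed, only minimized.
    return common_index + frame_offset(lag_seconds, fps)
```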
- the previous steps E1, E15, E2 are optional. Indeed, the merging device 1 can also receive as input video streams already synchronized by another means external to the merging device, such as a sophisticated multi-camera support integrating a common clock managing the different cameras. In such a case, synchronization is no longer necessary. In all other cases, it is strongly recommended, or even mandatory, in order to obtain a quality output video stream.
- the merging device 1 then comprises two complementary blocks 6, 7, which make it possible to define the panoramic construction parameters that are subsequently used during the video stream merging phase, which will be detailed further below. These two blocks 6, 7 implement steps E30, E3, E4 of the video stream merging method.
- a reference instant t_ref is chosen during a step E30, then a step E3 of decoding the images of the respective streams corresponding to this instant is performed.
- this decoding makes it possible to transform the data of the video streams, initially in a standard video format, for example MPEG, MP4, etc., into a different format in which the subsequent processing operations by a computer, described below, are possible.
- a step E4 of constructing a panoramic image from these decoded images is performed.
- This construction is then diagnosed, in an optional diagnostic step, either automatically or manually via a visual presentation to an operator, the latter then having the opportunity to modify some construction parameters of the panoramic image if the result does not suit him, or even to modify the reference instant t_ref for a new implementation of steps E3 and E4 at a different instant, which may be more favorable to the panoramic construction algorithms.
- the panoramic construction parameters are memorized for their subsequent application to the merging of the video streams, which will now be described.
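By way of illustration, steps E30, E3 and E4 can be approximated with OpenCV's stitching module, used here only as a stand-in; the patent does not name a library, and the file names and frame rate below are assumptions.

```python
# A minimal sketch of steps E30/E3/E4 using OpenCV. estimateTransform()
# computes the panoramic construction parameters once, at t_ref; they are
# kept inside the stitcher object for reuse (see the loop sketch further
# below). File names and fps are illustrative.
import cv2

def decode_frames_at(paths, t_ref, fps):
    # Step E3: decode, for each stream, the frame closest to t_ref.
    frames = []
    for path in paths:
        cap = cv2.VideoCapture(path)
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(round(t_ref * fps)))
        ok, frame = cap.read()
        cap.release()
        if ok:
            frames.append(frame)
    return frames

frames = decode_frames_at(["cam1.mp4", "cam2.mp4"], t_ref=12.0, fps=30.0)
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status = stitcher.estimateTransform(frames)  # step E4: construction parameters
assert status == cv2.Stitcher_OK, "try a more favorable reference instant"
```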
- this construction uses a method known from the state of the art, elements of which are mentioned, for example, in document US6711293.
- the different images are grouped together to form only one image.
- the method must notably manage the overlap areas between the different images, since several cameras may have filmed common areas of space, as well as non-overlapping areas filmed by a single camera. It must also handle the border areas between images from different cameras to ensure a continuous and visually undetectable boundary.
- by "merging" we mean a method of combining information from multiple cameras in the overlap areas to achieve a continuous, high-quality result in these areas. Specifically, a pixel in an overlap area is constructed from information from several cameras, not by choosing a single camera. A simple juxtaposition of films therefore does not represent a merging within the meaning of the invention.
- such a merging implements complex calculations: a transformation using merging parameters, including in particular geometric parameters and radiometric parameters, to take account, for example, of differences in color and/or exposure between the images from the different cameras.
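As a simplified illustration of building an overlap pixel from several cameras, the sketch below feathers two already-warped images together; the zero-padding convention is an assumption, and the radiometric correction mentioned above is deliberately omitted.

```python
# A minimal sketch of combining two images, already warped into the same
# panoramic frame, in their overlap area: each output pixel is a weighted
# mix of both cameras rather than a choice of one. Pixels outside an
# image's footprint are assumed black (zero).
import cv2
import numpy as np

def feather_blend(img_a, img_b):
    mask_a = (img_a.sum(axis=2) > 0).astype(np.uint8)
    mask_b = (img_b.sum(axis=2) > 0).astype(np.uint8)
    # Weight each image by the distance to its own border, so its
    # contribution fades smoothly toward the seam.
    w_a = cv2.distanceTransform(mask_a, cv2.DIST_L2, 3)
    w_b = cv2.distanceTransform(mask_b, cv2.DIST_L2, 3)
    total = w_a + w_b
    total[total == 0] = 1.0  # avoid dividing by zero outside both footprints
    out = img_a * (w_a / total)[..., None] + img_b * (w_b / total)[..., None]
    return out.astype(np.uint8)
```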
- the method then engages the video stream merging phase to output a single video stream, which accumulates video data from each video stream.
- this resulting video stream will be referred to as a wide-field video stream hereafter, although it may have any field-of-view value, since this depends on the input video streams considered.
- the term "panoramic image" will be used to designate an image obtained by the grouping/merging of several images; the result may form a very wide angle of view, but in a non-limiting manner.
- the method advantageously implements a repetition of the following steps, over the entire chosen duration of the merging of the video streams.
- the method implements a step E5 of decoding an image or several images for each video stream at a given instant or around this instant, in a block 8 of the fusion device.
- These decoded images are stored in a memory of the device for processing in the next step.
- decoding only a small number of images here, preferably a very limited one, for example fewer than ten images, or even three or fewer per video stream, is advantageous because it does not require the use of a large memory.
- each video stream has a reasonable size in its standard coded format, which incorporates a data compression method, but occupies a much larger size in a decoded format.
- the method implements a step E6 of constructing a panoramic image, in a block 9 of the fusion device, bringing together for each video stream, the corresponding image substantially at the given instant.
- a panoramic image is then constructed from the image of each video stream corresponding to the given instant considered. This construction is carried out using the panoramic construction parameters which were previously calculated by the steps E30, E3 and E4 described previously, which allows a rapid construction.
- in a last step E7 of constructing the wide-field video stream, implemented by a block 10 of the merging device, the resulting panoramic image is added to the previously built wide-field video stream and the whole is encoded in a video format.
- This encoding makes it possible to form the wide-field video output stream in a selected standard video format, such as MPEG, MP4, H264, etc., for example.
- the iteration mechanism of steps E5 to E7 over the chosen duration allows the progressive construction of the wide-field video stream: it avoids having to decode the video streams in their entirety for their subsequent merging, which, as mentioned above, would require a very large memory space in the device; in addition, it also avoids storing the whole resulting wide-field video stream in a decoded format of the same size, since only a small portion of the output wide-field video stream is likewise held in the device's memory in decoded form.
- with the advantageous solution adopted, only a few images are decoded and processed at each moment, which requires only a small memory space as well as reasonable computing power.
- the different video streams and the wide field video stream as a whole are stored in the standard encoded video format, for example MPEG, which occupies a standardized, compressed memory space, intended to optimize the memory space of a computing device.
- This approach is compatible with the use of a simple personal computer (PC) for the implementation of the method and the formation of the fusion device 1.
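The iteration of steps E5 to E7 can be sketched as follows, reusing the `stitcher` whose construction parameters were estimated at t_ref in the earlier sketch; the codec choice and fixed output size are simplifications of ours, not requirements of the patent.

```python
# A minimal sketch of the E5-E7 loop: one frame per stream is decoded at a
# time (step E5), composed into a panorama with the stored parameters
# (step E6), then encoded and appended to the output stream (step E7).
import cv2

caps = [cv2.VideoCapture(p) for p in ("cam1.mp4", "cam2.mp4")]
writer = None
while True:
    frames = []
    for cap in caps:                                   # step E5
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    if len(frames) != len(caps):                       # a stream has ended
        break
    status, pano = stitcher.composePanorama(frames)    # step E6
    if status != cv2.Stitcher_OK:
        continue
    if writer is None:                                 # step E7
        h, w = pano.shape[:2]
        writer = cv2.VideoWriter("wide_field.mp4",
                                 cv2.VideoWriter_fourcc(*"mp4v"),
                                 30.0, (w, h))
    writer.write(pano)   # in practice, crop/resize pano to a fixed (w, h)
for cap in caps:
    cap.release()
if writer is not None:
    writer.release()
```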
- one or more audio tracks associated with one or more source video streams can also be encoded with the wide-field video stream, to actually form a wide-field audio-video stream.
- the video stream merging device 1 comprises a memory, not shown, which keeps the generated wide-field video stream; this stream can then be transmitted through an output 11 of the merging device, for example to an external player.
- the video stream merging device 1 also comprises an integrated player that makes it possible to display the wide-field video stream, for example on a screen 12 of the device.
- FIG. 3 thus shows a menu with the main functionalities of the human-machine interface according to one embodiment, whose specific steps in the method described above will now be detailed.
- the human-machine interface proposes a window 35 in which the operator can position the different source video streams to be merged, in an initial step E0 of the method. More specifically, at least one image 36 from each video stream is displayed in this space, as well as the name associated with each video stream. Each video stream can be viewed completely and independently within this window 35, which therefore provides a multi-video player function.
- the operator has the possibility of adding or removing video streams from this window 35. To do so, he can either use a manual search in the memory space of the merging device to select the video streams to be added, or select them in another window and move them into the window 35. Conversely, he can remove them from this space, either with a delete key or by manually moving them out of the space.
- the human machine interface allows an operator to choose the time limits of the fusion of the source video streams, that is to say the start and end times of the merger.
- the human machine interface presents the operator with a time line 30, on which he can position two cursors 31, 32 fixing the start and end times of the wide field video to be generated, for example in another previous step E05 of the process.
- another interface can alternatively allow him to enter these moments.
- the operator adds an additional cursor 33 on the timeline 30 to define the reference instant t_ref, in an intermediate step E30 preliminary to steps E3 and E4.
- the method then constructs a panoramic image from the images 36 of the different video streams at the chosen reference instant.
- the result obtained is a panoramic image 39, which is displayed in a viewing area 38 of the man-machine interface.
- This merging then uses, over the entire duration of the video stream merge, the same panorama construction parameters as those validated at the reference instant t_ref set by the cursor 33.
- the resulting wide field video stream is displayed in another wide field video preview window 37, which allows its simple viewing as a standard video.
- the manual steps described above can also be automated, in alternative embodiments of the device.
- a few predefined instants distributed over the time line can be tested, an automatic diagnosis of the panoramic result making it possible to retain the best choice.
- several reference times are automatically selected, for example obtained automatically according to a predefined period on all or part of the duration selected for the wide field video stream.
- a step of combining the different results obtained for the panorama construction parameters calculated at all the chosen instants is implemented. This combination consists, for example, in an average of these different parameters, this average being understood in the broad sense: it can be arithmetic, geometric, or be replaced by any mathematical function deducing a final value for each panorama construction parameter from the different values obtained.
- an operator or an automatic step of the method determines a reference instant, preferably one considered favorable, or even chosen randomly; the method then automatically implements a step of calculating the panorama construction parameters at several instants selected over a time range distributed around this reference instant.
- This time range can be determined by parameters (duration, proportion before and/or after the reference instant) predefined beforehand, or entered by the operator via a human-machine interface.
- the panorama construction parameters are finally determined by combining the different parameters obtained for each of the instants chosen over said time range, similarly to the principle explained in the preceding variant embodiment.
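As an illustration of this combination, the sketch below assumes the construction parameters are represented by 3x3 homography matrices, a representation the patent does not specify; the element-wise mean is a deliberately crude instance of the "broad sense" average described above.

```python
# A minimal sketch of combining panorama construction parameters computed
# at several reference instants by an element-wise average.
import numpy as np

def combine_homographies(homographies):
    mean_h = np.mean(np.stack(homographies), axis=0)
    return mean_h / mean_h[2, 2]  # renormalize the projective scale factor
```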
- one or more reference times may not be chosen but fixed in a random manner or according to a predefined rule without taking into account a quality criterion.
- the parameters for constructing a panoramic image are defined at one or more selected instants, or over a reference time range, specifically to obtain an optimal merging quality.
- at least one reference instant will not necessarily correspond to the initial instant of the resulting wide-field video stream, since the aim is to search, over all or part of the merge duration, for a favorable instant or range.
- at least one reference instant, or a reference time range, is selected on the basis of an automatic or manual diagnosis, via a visualization on a screen through a human-machine interface, in order to obtain an optimal merging quality.
- the video stream merging method described above can be implemented separately and successively over several portions of the total chosen duration, before the final assembly of the different wide-field video streams obtained, to construct the final stream over the desired duration.
- This approach may have the advantage of obtaining different panoramic construction parameters on the different portions of the wide field video stream, which can achieve a better quality result in some configurations.
- the video stream merging device described above can be presented as a simple computer, or any other device comprising at least a computer, a memory, and means of communication with external devices for receiving the input video streams and/or transmitting the resulting wide-field video stream at the output.
- This device advantageously comprises a screen for the presentation of a human machine interface to an operator, as described above.
- the invention also relates to a system that comprises a multi-camera support (Rig) on which are mounted several cameras, at least two and preferably at least six, and a fusion device as described above.
- the different video streams advantageously come from cameras positioned in the same place but oriented differently, to obtain a wide-field video stream resulting from the observation of the space from a single point.
- the wide-field video stream generated by the merging method as described above has the advantage of offering a video stream comprising a quantity of information greater than that of a simple state-of-the-art video obtained by a single camera, and, with the help of a suitable player, makes it possible to offer a richer visualization of a filmed scene than what can easily be obtained with existing solutions.
- the system mentioned above is particularly suitable for filming an event gathering a large crowd, such as a concert, a sports event in a stadium, a family celebration such as a wedding, etc.
- a multi-camera support as mentioned above can be positioned on the stage, and can film the show as well as the audience simultaneously, which then makes it possible, during playback, to easily visualize the show and/or the audience at any moment of the film.
- one or more multi-camera supports can be arranged within a stadium enclosure, to make it possible to simultaneously film, from one point of view, the entire enclosure, the sports field as well as the public.
- the system with a multi-camera support is also interesting for an "embedded" application, that is to say accompanying a person or a device that moves.
- this support can be attached to the helmet of an athlete during a competition, a paragliding flight, a parachute jump, climbing, downhill skiing, etc. It can be arranged on a vehicle, such as a bicycle, a motorcycle, or a car.
- the multi-camera support can be associated with a drone or a helicopter, to obtain a complete aerial video, allowing a wide field recording of a landscape, a tourist site, a site to be watched, a sporting event seen from the sky, etc.
- such a system can also be used for remote monitoring.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
FR1353346A FR3004565B1 (fr) | 2013-04-12 | 2013-04-12 | Fusion de plusieurs flux video |
PCT/EP2014/057352 WO2014167085A1 (fr) | 2013-04-12 | 2014-04-11 | Fusion de plusieurs flux video |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2984815A1 true EP2984815A1 (fr) | 2016-02-17 |
Family
ID=48795715
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP14717732.3A Withdrawn EP2984815A1 (fr) | 2013-04-12 | 2014-04-11 | Fusion de plusieurs flux video |
Country Status (4)
Country | Link |
---|---|
US (1) | US20160037068A1 (fr) |
EP (1) | EP2984815A1 (fr) |
FR (1) | FR3004565B1 (fr) |
WO (1) | WO2014167085A1 (fr) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20160115466A (ko) * | 2015-03-27 | 2016-10-06 | 한국전자통신연구원 | 파노라믹 비디오를 스티칭하는 장치 및 이를 위한 스티칭 방법 |
US10269257B1 (en) | 2015-08-11 | 2019-04-23 | Gopro, Inc. | Systems and methods for vehicle guidance |
US9896205B1 (en) | 2015-11-23 | 2018-02-20 | Gopro, Inc. | Unmanned aerial vehicle with parallax disparity detection offset from horizontal |
US9720413B1 (en) * | 2015-12-21 | 2017-08-01 | Gopro, Inc. | Systems and methods for providing flight control for an unmanned aerial vehicle based on opposing fields of view with overlap |
US9663227B1 (en) | 2015-12-22 | 2017-05-30 | Gopro, Inc. | Systems and methods for controlling an unmanned aerial vehicle |
KR102517104B1 (ko) | 2016-02-17 | 2023-04-04 | 삼성전자주식회사 | 가상 현실 시스템에서 이미지 처리 방법 및 장치 |
CN108322763A (zh) * | 2016-08-23 | 2018-07-24 | 深圳市掌网科技股份有限公司 | 一种编解码全景视频的方法和系统 |
US10650590B1 (en) * | 2016-09-07 | 2020-05-12 | Fastvdo Llc | Method and system for fully immersive virtual reality |
US11671551B2 (en) * | 2021-05-24 | 2023-06-06 | Sony Group Corporation | Synchronization of multi-device image data using multimodal sensor data |
CN113706391B (zh) * | 2021-11-01 | 2022-01-18 | 成都数联云算科技有限公司 | 无人机航拍图像实时拼接方法、系统、设备及存储介质 |
CN114222162B (zh) * | 2021-12-07 | 2024-04-12 | 浙江大华技术股份有限公司 | 视频处理方法、装置、计算机设备及存储介质 |
CN114638771B (zh) * | 2022-03-11 | 2022-11-29 | 北京拙河科技有限公司 | 基于混合模型的视频融合方法及系统 |
CN117132925B (zh) * | 2023-10-26 | 2024-02-06 | 成都索贝数码科技股份有限公司 | 一种体育赛事的智能场记方法及装置 |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6711293B1 (en) | 1999-03-08 | 2004-03-23 | The University Of British Columbia | Method and apparatus for identifying scale invariant features in an image and use of same for locating an object in an image |
US6788333B1 (en) * | 2000-07-07 | 2004-09-07 | Microsoft Corporation | Panoramic video |
US7483061B2 (en) * | 2005-09-26 | 2009-01-27 | Eastman Kodak Company | Image and audio capture with mode selection |
US7777783B1 (en) * | 2007-03-23 | 2010-08-17 | Proximex Corporation | Multi-video navigation |
US8270767B2 (en) * | 2008-04-16 | 2012-09-18 | Johnson Controls Technology Company | Systems and methods for providing immersive displays of video camera information from a plurality of cameras |
CN101668160B (zh) * | 2009-09-10 | 2012-08-29 | 华为终端有限公司 | 视频图像数据处理方法、装置及视频会议系统及终端 |
FR2973343B1 (fr) * | 2011-04-01 | 2013-11-29 | Latecoere | Aeronef pourvu d'un systeme d'observation d'un environnement de cet aeronef |
US20120277914A1 (en) * | 2011-04-29 | 2012-11-01 | Microsoft Corporation | Autonomous and Semi-Autonomous Modes for Robotic Capture of Images and Videos |
US8970665B2 (en) * | 2011-05-25 | 2015-03-03 | Microsoft Corporation | Orientation-based generation of panoramic fields |
JP5870636B2 (ja) * | 2011-11-09 | 2016-03-01 | ソニー株式会社 | 画像処理装置および方法、並びにプログラム |
US9792955B2 (en) * | 2011-11-14 | 2017-10-17 | Apple Inc. | Automatic generation of multi-camera media clips |
US20130278728A1 (en) * | 2011-12-16 | 2013-10-24 | Michelle X. Gong | Collaborative cross-platform video capture |
EP2962063B1 (fr) * | 2013-02-28 | 2017-03-29 | Fugro N.V. | Système et méthode de mesure d'attitude |
2013
- 2013-04-12 FR FR1353346A patent/FR3004565B1/fr active Active
2014
- 2014-04-11 EP EP14717732.3A patent/EP2984815A1/fr not_active Withdrawn
- 2014-04-11 WO PCT/EP2014/057352 patent/WO2014167085A1/fr active Application Filing
2015
- 2015-10-12 US US14/880,879 patent/US20160037068A1/en not_active Abandoned
Non-Patent Citations (1)
Title |
---|
JOERGEN GEERDS: "PTgui 360 video batch stitching - Freedom360", 9 April 2013 (2013-04-09), XP055512649, Retrieved from the Internet <URL:https://freedom360.us/360-video-stitching-ptgui/> [retrieved on 20181005] * |
Also Published As
Publication number | Publication date |
---|---|
FR3004565A1 (fr) | 2014-10-17 |
FR3004565B1 (fr) | 2016-11-11 |
US20160037068A1 (en) | 2016-02-04 |
WO2014167085A1 (fr) | 2014-10-16 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase | Free format text: ORIGINAL CODE: 0009012 |
| 17P | Request for examination filed | Effective date: 20151102 |
| AK | Designated contracting states | Kind code of ref document: A1. Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
| AX | Request for extension of the european patent | Extension state: BA ME |
| DAX | Request for extension of the european patent (deleted) | |
| RAP1 | Party data changed (applicant data changed or rights of an application transferred) | Owner name: GOPRO, INC. |
| 17Q | First examination report despatched | Effective date: 20181015 |
| STAA | Information on the status of an ep patent application or granted ep patent | Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
| 18D | Application deemed to be withdrawn | Effective date: 20190426 |
| P01 | Opt-out of the competence of the unified patent court (upc) registered | Effective date: 20230601 |