CN107835397B - Multi-lens video synchronization method - Google Patents
- Publication number
- CN107835397B CN201711405193.9A
- Authority
- CN
- China
- Prior art keywords
- code
- lens
- clock
- shot
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/18—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
- H04N7/181—Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
- H04N21/4307—Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
- H04N21/8547—Content authoring involving timestamps for synchronizing content
Abstract
The invention relates to a multi-lens video synchronization method. A set of self-timing clock codes is designed; during panoramic shooting, a worker or a mobile device carries the displayed clock code in a circle past all lenses, so that every lens records the current clock code. The clock code in each lens's footage is then decoded to obtain that lens's time code, the offset of each lens is calculated from the time codes, and the videos shot by the multiple lenses are adjusted into synchronization.
Description
Technical Field
The invention relates to the technical field of panoramic video shooting, in particular to a multi-lens video synchronization method.
Background
A panoramic video is shot in all directions through 360 degrees with a 3D camera, and the viewer can freely pan the view up, down, left, and right during playback. When several cameras are used for shooting, they are difficult to start at exactly the same moment, so the start times of the individual videos differ. The videos therefore need to be synchronized and aligned to actual time, which facilitates switching, editing, and splicing.
Current multi-lens video synchronization techniques include: (1) Time-code synchronization by the cameras themselves. This requires hardware clock synchronization and custom hardware; the cost is high and the constraints on equipment are severe. (2) Clapperboard marking with manual operation. This is difficult to apply to multi-lens video because the clapperboard cannot appear in all shots simultaneously. (3) Manual editing and manual synchronization. The precision is poor and the workload is large. (4) Audio synchronization. For example, in a live shoot every camera records the sound, and the videos are synchronized by aligning the sound tracks. The drawback is that live noise interferes easily, so a computer cannot align the tracks automatically, while aligning by ear or by inspecting waveforms is inefficient. Moreover, if the cameras are far apart, the sound reaches them at noticeably different times, so true synchronization is not achieved.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a multi-lens video synchronization method that requires no custom hardware, is low in cost, synchronizes automatically, and is simple and convenient to operate.
The object of the invention is achieved by the following technical solution:
A multi-lens video synchronization method, comprising the steps of:
s01: designing a graphic clock code that a camera can capture clearly;
s02: embedding the graphic clock code in an app on a mobile device and displaying the code on the device's screen;
s03: when multi-lens shooting starts, starting the clock so that the time code displayed on the mobile device's screen appears in front of each camera;
s04: decoding the time code with a recognition technique, or reading the time code manually, and recording the time-code information;
s05: calculating the offset of each lens and synchronizing the videos according to the offsets.
Preferably, the display units of the graphic clock code comprise minutes, seconds, and milliseconds, the milliseconds comprising two digits: hundreds of milliseconds and tens of milliseconds.
Preferably, the display units of the graphic clock code further include a dedicated display code for the ten-millisecond digit.
Preferably, the ten-millisecond display code is a numeric code consisting of the ten-character string 0-9. From the start of timing, the glyphs 0-9 are enlarged in turn to indicate tens of milliseconds; the largest glyph displayed at any moment gives the ten-millisecond digit of the current time.
Preferably, the ten-millisecond display code is a graphic code comprising two dot positions, each of which can take four colors; the ten-millisecond digits 0-9 are represented by combinations of the two dot positions and their colors.
Preferably, the four colors are black, red, green, and blue, and a display combination is defined for each of the digits 0-9.
Preferably, the recognition technique in step S04 converts the captured image to grayscale and then extracts the graphic clock code by binarization.
Preferably, the display end of the mobile device is held steady while positioned in front of a camera, to avoid shaking.
Preferably, the mobile device may be a mobile phone or a tablet computer.
Preferably, the specific method of step S05 is: from the obtained time codes of the respective shots, select the largest time code as the reference time code; from the reference time code and the frame number at which each shot's code was decoded, compute each shot's frame-number offset; and move each shot by its computed frame-number offset to the corresponding position, achieving multi-lens video synchronization.
The invention has the following beneficial effects: a set of self-timing clock codes is designed; during panoramic shooting, a worker or a mobile device carries the displayed clock code in a circle past all lenses, so that every lens records the current clock code. The clock code in each lens's footage is then decoded to obtain that lens's time code, the offset of each lens is calculated, and the videos shot by the multiple lenses are adjusted into synchronization.
Drawings
FIG. 1 is a display diagram of the graphic clock code of the present invention;
FIG. 2 shows the ten-millisecond graphic code pattern defined by the present invention.
Detailed Description
The technical solutions of the present invention are described in further detail below with reference to FIGS. 1-2 and specific embodiments, but the scope of the invention is not limited to the following description.
A multi-lens video synchronization method, comprising the steps of:
s01: designing a graphic clock code that a camera can capture clearly;
s02: embedding the graphic clock code in an app on a mobile device and displaying the code on the device's screen;
s03: when multi-lens shooting starts, starting the clock so that the time code displayed on the mobile device's screen appears in front of each camera;
s04: decoding the time code with a recognition technique, or reading the time code manually, and recording the time-code information;
s05: calculating the offset of each lens and synchronizing the videos according to the offsets.
Preferably, the display units of the graphic clock code comprise minutes, seconds, and milliseconds, the milliseconds comprising two digits: hundreds of milliseconds and tens of milliseconds.
Preferably, the display units of the graphic clock code further include a code that displays the ten-millisecond digit. As shown in FIG. 1, this ten-millisecond display code is a numeric code consisting of the ten-character string 0-9; from the start of timing, the glyphs 0-9 are enlarged in turn, and the largest glyph displayed at any moment gives the ten-millisecond digit of the current time. The reason for this design is motion blur: because of the lens's exposure time, two consecutive readings within a short interval are captured as a ghosted double image. As shown in FIG. 1, if the millisecond units read 37 at one instant and read 36 an instant earlier within the same exposure, the 6 and the 7 overlap in the captured frame, introducing errors into the time statistics. To eliminate this error, a numeric code that displays the ten-millisecond digit separately is designed. It shows the entire digit string 0-9, and the passage of time is expressed by a change of glyph size: the digits 0-9 grow in turn from 0 to 9, and then the cycle repeats. Each completed cycle represents 100 milliseconds, with 0-9 representing 0 ms, 10 ms, 20 ms, ..., 90 ms respectively. The current instant is thus shown unambiguously by the glyph sizes, which simplifies time recognition in the subsequent image processing and the calculation of the offsets.
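As a sketch of how the enlarged-glyph code might be read back in software (the patent leaves the recognition step generic), one can compare the measured glyph heights of the characters 0-9 and take the largest. The function name and the fixed height values below are illustrative assumptions:

```python
# Sketch (not from the patent text): decoding the ten-millisecond digit
# from the "0123456789" string in which the current digit is enlarged.
# Glyph heights are assumed to come from an upstream bounding-box step.

def decode_ten_ms_digit(glyph_heights):
    """Return the tens-of-milliseconds value (0, 10, ..., 90).

    glyph_heights: list of 10 measured heights, one per character '0'-'9';
    the enlarged glyph marks the current ten-millisecond digit.
    """
    if len(glyph_heights) != 10:
        raise ValueError("expected one height per digit 0-9")
    digit = max(range(10), key=lambda i: glyph_heights[i])
    return digit * 10

# Example: '3' is rendered larger than the rest -> 30 ms
heights = [12, 12, 12, 18, 12, 12, 12, 12, 12, 12]
print(decode_ten_ms_digit(heights))  # -> 30
```

Because only the relative sizes matter, the decoder is insensitive to the absolute scale of the clock in the frame.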
Preferably, the ten-millisecond display code may instead be designed as a graphic code comprising two dot positions, each of which can take four colors; the ten-millisecond digits 0-9 are represented by combinations of the two dot positions and their colors.
Preferably, the four colors are black, red, green, and blue, and a display combination is defined for each of the digits 0-9. In this embodiment they are defined as follows:
0 = black (both dots), 1 = black+red, 2 = black+green, 3 = black+blue, 4 = red (both dots), 5 = red+green, 6 = red+blue, 7 = green (both dots), 8 = green+blue, 9 = blue (both dots). The ten-millisecond values are represented by the color blocks defined above, as shown in FIG. 2.
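The two-dot color table above can be transcribed directly into a lookup for decoding; the dictionary and function names below are illustrative, and the single-color entries (0, 4, 7, 9) are read as both dots sharing that color:

```python
# Sketch of the two-dot color code from the embodiment: each ten-ms
# digit 0-9 maps to an unordered pair drawn from {black, red, green,
# blue}. Four colors taken two at a time with repetition give exactly
# ten combinations, one per digit.

PAIR_FOR_DIGIT = {
    0: ("black", "black"), 1: ("black", "red"),   2: ("black", "green"),
    3: ("black", "blue"),  4: ("red", "red"),     5: ("red", "green"),
    6: ("red", "blue"),    7: ("green", "green"), 8: ("green", "blue"),
    9: ("blue", "blue"),
}

# Inverse lookup used when decoding a captured frame; frozenset makes
# the pair order-independent and collapses duplicate colors.
DIGIT_FOR_PAIR = {frozenset(v): k for k, v in PAIR_FOR_DIGIT.items()}

def decode_pair(color1, color2):
    return DIGIT_FOR_PAIR[frozenset([color1, color2])]

print(decode_pair("red", "green"))   # -> 5
print(decode_pair("green", "red"))   # order-independent -> 5
```

Using unordered pairs is what makes ten digits fit into only two dots and four colors.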
Preferably, the recognition technique in step S04 converts the captured image to grayscale and then extracts the graphic clock code by binarization.
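A minimal sketch of this grayscale-plus-binarization pre-processing, assuming a fixed threshold (the patent names the two operations but gives no parameter values):

```python
import numpy as np

# Sketch of the recognition pre-processing step: convert a captured RGB
# frame to grayscale, then binarize it with a fixed threshold so the
# clock glyphs stand out. The threshold value 128 is an assumption.

def binarize_frame(rgb, threshold=128):
    """rgb: HxWx3 uint8 array -> HxW uint8 array of 0/255."""
    # ITU-R BT.601 luma weights for the grayscale conversion
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = [255, 255, 255]   # one white pixel on a black frame
print(binarize_frame(frame)[0, 0], binarize_frame(frame)[1, 1])  # 255 0
```

In practice an adaptive threshold may be preferable, since the clock screen's brightness varies with the exposure of each camera.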
The designed clock code is applied in the clock app. When shooting starts, the timer is started, and each camera is then made to photograph the clock.
Example operations are as follows:
The videos shot by the various cameras are imported into a video editing tool, the clock images in the videos are identified, and the following operations are performed in sequence.
Region segmentation:
Image data of each shot is acquired in turn. For each frame, an image background model is built with a general-purpose algorithm and the foreground is detected, yielding grayscale image data containing foreground and background information. The grayscale data of the shot is then clustered, and the upper and lower boundary rows of each cluster are recorded. For each cluster, the currently processed upper boundary is used to search for the lower boundary of the previous cluster, and the upper and lower boundaries are traversed, matched, and merged step by step. The merged clusters are sorted by their number of interior points, from largest to smallest, giving a cluster ranking table. This table is filtered against a set minimum threshold, keeping only clusters larger than the threshold, and a rectangular region is generated on the image from each remaining cluster's coordinate values, from the top-left corner to the bottom-right corner. The generated rectangles are then merged according to an effectiveness threshold to obtain the pruned effective rectangular regions. Finally, the pruned regions are selected and filtered against an effective-interval threshold, yielding grayscale images of the effective rectangular regions within the effective interval.
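The row-clustering and threshold-filtering idea above can be sketched as follows; the merge gap and minimum-size threshold are assumptions, since the patent specifies no concrete values:

```python
import numpy as np

# Rough sketch of the region-segmentation idea: given a foreground
# mask, group consecutive non-empty rows into clusters, merge clusters
# whose boundaries touch, and keep only clusters above a minimum size,
# sorted from largest to smallest.

def row_clusters(mask, min_rows=2, gap=1):
    """mask: HxW array, nonzero = foreground.

    Returns (top_row, bottom_row) pairs, largest cluster first."""
    nonempty = np.flatnonzero(mask.any(axis=1))
    clusters = []
    for r in nonempty:
        if clusters and r - clusters[-1][1] <= gap:
            clusters[-1][1] = r          # extend the previous cluster
        else:
            clusters.append([r, r])      # start a new cluster
    # filter by minimum threshold, then sort by size descending
    keep = [tuple(c) for c in clusters if c[1] - c[0] + 1 >= min_rows]
    return sorted(keep, key=lambda c: c[1] - c[0], reverse=True)

m = np.zeros((10, 4), dtype=np.uint8)
m[1:4, :] = 1          # one 3-row region
m[7, :] = 1            # single-row region, filtered out
print(row_clusters(m))  # -> [(1, 3)]
```

A full implementation would cluster in both axes to obtain the top-left/bottom-right rectangles the description mentions; the sketch shows only the row direction.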
Processing the effective regions:
The grayscale image of each effective region obtained above is blurred, edge detection is run on the blurred result, and straight lines are detected in the edge map with the probabilistic Hough transform, extracting the edge lines of several regions. The detected lines are then processed: the angle of each line to the horizontal is computed, producing several angle classes of different angles, which are clustered; each cluster is clustered again according to whether the translated line segments share a common part, giving a set of angle clusters. From the rectangles formed by the angle-cluster set, the common part of pairs of rectangles is computed, line segments near the common part are extracted to form candidate target rectangles, and the real rectangles are finally formed. The clock image is then extracted from the grayscale image according to these rectangles.
Number identification:
The obtained clock grayscale image is binarized, and the resulting binary model is used as a template. The binary points are clustered, and the resulting point-cluster sets are formed into rectangular regions according to their point coordinates. The rectangles are sorted by area from largest to smallest to obtain an ordered rectangle list. This list is then filtered to eliminate the overlarge rectangles at the head and the undersized rectangles at the tail, leaving a set of effective rectangular regions. The effective set is split along a horizontal line into an upper and a lower rectangle set; both sets are filtered again in the same way, eliminating overlarge and undersized rectangles, to obtain the final effective upper and lower rectangle sets. The clock grayscale image is processed with these two sets: the pixel values of the grayscale image in the effective regions are filled to 255, and the digit images in the effective regions are extracted. Finally, the extracted digit images are recognized with a general-purpose algorithm to obtain the actual numbers, from which the time code required to synchronize the videos is formed.
Preferably, the display end of the mobile device is held steady while positioned in front of a camera to avoid shaking; the mobile device may be a mobile phone or a tablet computer.
Preferably, the specific method of step S05 is: from the obtained time codes of the respective shots, select the largest time code as the reference time code; from the reference time code and the frame number at which each shot's code was decoded, compute each shot's frame-number offset; and move each shot by its computed frame-number offset to the corresponding position, achieving multi-lens video synchronization.
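Under simplifying assumptions (each shot's decoded time code is expressed in milliseconds at a common reference frame, and the frame rate is known), step S05 reduces to a few lines; the function name and values below are illustrative:

```python
# Sketch of step S05: the largest decoded time code is taken as the
# reference, and each shot is shifted by the frame-count equivalent of
# its lag behind that reference.

def frame_offsets(time_codes_ms, fps=30):
    """time_codes_ms: decoded clock reading per shot at a common frame.

    Returns the number of frames to shift each shot so all align."""
    reference = max(time_codes_ms)
    return [round((reference - t) * fps / 1000) for t in time_codes_ms]

# Shot B's clock reads 100 ms behind shot A's, shot C's 500 ms behind.
print(frame_offsets([500, 400, 0], fps=30))  # -> [0, 3, 15]
```

Rounding to whole frames bounds the residual error at half a frame period, which at 30 fps is about 17 ms — consistent with the ten-millisecond resolution of the clock code.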
The foregoing describes preferred embodiments of the invention. It should be understood that the invention is not limited to the precise forms disclosed herein, and that various other combinations, modifications, and environments falling within the scope of the inventive concept, whether described above or apparent to those skilled in the relevant art, may be resorted to. Modifications and variations may be effected by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.
Claims (9)
1. A multi-lens video synchronization method, characterized by comprising the steps of:
s01: designing a graphic clock code that a camera can capture clearly;
s02: embedding the graphic clock code in an app on a mobile device and displaying the code on the device's screen;
s03: when multi-lens shooting starts, starting the clock so that the time code displayed on the mobile device's screen appears in front of each camera, each camera photographing the clock to obtain a captured image;
s04: recognizing the captured image with a recognition technique and decoding the time code, or reading the time code from the captured image manually, and recording the time-code information;
s05: calculating the offset of each lens and synchronizing the videos according to the offsets;
wherein, from the obtained time codes of the respective shots, the largest time code is selected as the reference time code; from the reference time code and the frame number at which each shot's code was decoded, each shot's frame-number offset is computed; and each shot is moved by its computed frame-number offset to the corresponding position, achieving multi-lens video synchronization.
2. The method of claim 1, wherein the display units of the graphic clock code comprise minutes, seconds, and milliseconds, the milliseconds comprising two digits: hundreds of milliseconds and tens of milliseconds.
3. The method of claim 2, wherein the display units of the graphic clock code further comprise a display code for the ten-millisecond digit.
4. The method of claim 3, wherein the ten-millisecond display code is a numeric code consisting of the ten-character string 0-9; from the start of timing, the glyphs 0-9 are enlarged in turn to indicate tens of milliseconds, and the largest glyph displayed at any moment gives the ten-millisecond digit of the current time.
5. The method of claim 3, wherein the ten-millisecond display code is a graphic code comprising two dot positions, each of which can take four colors, the ten-millisecond digits 0-9 being represented by combinations of the two dot positions and their colors.
6. The method of claim 5, wherein the four colors are black, red, green, and blue, and a display combination is defined for each of the digits 0-9.
7. The method of claim 1, wherein the recognition technique in step S04 converts the captured image to grayscale and then extracts the graphic clock code by binarization.
8. The method of claim 1, wherein the display end of the mobile device is held steady while positioned in front of a camera, to avoid shaking.
9. The method of claim 8, wherein the mobile device is a mobile phone or a tablet computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711405193.9A CN107835397B (en) | 2017-12-22 | 2017-12-22 | Multi-lens video synchronization method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107835397A CN107835397A (en) | 2018-03-23 |
CN107835397B true CN107835397B (en) | 2019-12-24 |
Family
ID=61645330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711405193.9A Active CN107835397B (en) | 2017-12-22 | 2017-12-22 | Multi-lens video synchronization method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107835397B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110290287B (en) * | 2019-06-27 | 2022-04-12 | 上海玄彩美科网络科技有限公司 | Multi-camera frame synchronization method |
CN111457917A (en) * | 2020-04-13 | 2020-07-28 | 广东星舆科技有限公司 | Multi-sensor time synchronization measuring method and system |
CN111464807A (en) * | 2020-04-13 | 2020-07-28 | 广东星舆科技有限公司 | Binocular synchronization measuring method and system |
CN114461165B (en) * | 2022-02-09 | 2023-06-20 | 浙江博采传媒有限公司 | Virtual-real camera picture synchronization method, device and storage medium |
CN117041691B (en) * | 2023-10-08 | 2023-12-08 | 湖南云上栏山数据服务有限公司 | Analysis method and system for ultra-high definition video material based on TC (train control) code |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0838960A2 (en) * | 1996-10-28 | 1998-04-29 | Elop Electro-Optics Industries Ltd. | System and method for audio-visual content verification |
US6072832A (en) * | 1996-10-25 | 2000-06-06 | Nec Corporation | Audio/video/computer graphics synchronous reproducing/synthesizing system and method |
CN101646050A (en) * | 2009-09-09 | 2010-02-10 | 中国电信股份有限公司 | Text annotation method and system, playing method and system of video files |
CN102075668A (en) * | 2009-11-13 | 2011-05-25 | 株式会社Ntt都科摩 | Method and apparatus for synchronizing video data |
CN102708903A (en) * | 2012-06-14 | 2012-10-03 | 大连三通科技发展有限公司 | Automatic video playback system |
JP2013187642A (en) * | 2012-03-06 | 2013-09-19 | Sharp Corp | Image pickup device, image pickup device control system, and computer program |
CN103839562A (en) * | 2014-03-17 | 2014-06-04 | 杨雅 | Video creation system |
CN103957344A (en) * | 2014-04-28 | 2014-07-30 | 广州杰赛科技股份有限公司 | Video synchronization method and system for multiple camera devices |
CN104270567A (en) * | 2014-09-11 | 2015-01-07 | 深圳市南航电子工业有限公司 | High-precision synchronous multi-channel image acquisition system and time synchronization method thereof |
CN104980627A (en) * | 2015-07-10 | 2015-10-14 | 小米科技有限责任公司 | Clapper board, photographing control method and apparatus |
CN205430457U (en) * | 2016-03-29 | 2016-08-03 | 博瑞恒创(天津)科技有限公司 | Multi -media synchronized play system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040179608A1 (en) * | 2003-02-27 | 2004-09-16 | Intel Corporation | Multiple-description coding methods and apparatus |
Non-Patent Citations (1)
Title |
---|
A Guide to Micro-film Sound Recording; Yan Long; Digital Video Times (《数码影像时代》); 2013-10-15; pp. 32-37 *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||