CN113395409A - Video synchronization method applied to multi-view camera - Google Patents

Video synchronization method applied to multi-view camera

Info

Publication number
CN113395409A
CN113395409A (application CN202110656083.XA)
Authority
CN
China
Prior art keywords
video data
video
data
frame
image frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110656083.XA
Other languages
Chinese (zh)
Other versions
CN113395409B (en)
Inventor
胡进
黄仕贵
陈立刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Sunny Optical Intelligent Technology Co Ltd
Original Assignee
Zhejiang Sunny Optical Intelligent Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Sunny Optical Intelligent Technology Co Ltd
Priority to CN202110656083.XA
Publication of CN113395409A
Application granted
Publication of CN113395409B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/04 Synchronising
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85 Assembly of content; Generation of multimedia applications
    • H04N21/854 Content authoring
    • H04N21/8547 Content authoring involving timestamps for synchronizing content
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Abstract

The invention discloses a video synchronization method applied to a multi-view camera, wherein the multi-view camera comprises a first data acquisition device and a second data acquisition device. The method comprises the following steps: acquiring first video data and second video data; time-stamp synchronizing the first video data and the second video data; and frame-masking the time-stamp-synchronized first video data and second video data to determine whether a fusion criterion is satisfied between each image frame of the first video data and the second video data. In this way, the frame mask processing judges whether the time-stamp-synchronized first video data and second video data meet the fusion criterion, thereby ensuring the image fusion effect.

Description

Video synchronization method applied to multi-view camera
This application is a divisional application of Chinese patent application No. 201711346530.1, entitled "Video synchronization method applied to a multi-view camera".
Technical Field
The invention relates to the field of video technology, and in particular to a video synchronization method for a multi-view camera.
Background
With the continuous development of imaging technology, acquiring images or video data with a single camera is no longer sufficient to meet market demand; a single camera also cannot satisfy the shooting requirements of certain special scenes. More and more application scenarios require two, three, or even more cameras to work simultaneously, acquire data at the same time, and have that data processed jointly. To meet these requirements, a multi-view camera is used. A multi-view camera refers to a device that contains multiple cameras or camera modules at the same time; in a broad sense, devices such as binocular cameras, TOF cameras, and panoramic cameras can all be regarded as multi-view cameras.
The multi-view camera has been widely used in many technical fields. Taking machine vision as an example, machine vision plays an increasingly important role in human life: it is required in fields such as AR (Augmented Reality), VR (Virtual Reality), ADAS (Advanced Driver Assistance Systems), and SLAM (Simultaneous Localization and Mapping), and an essential capability in machine vision is depth perception. Typical depth-perception technologies, such as binocular stereo, structured light, and TOF, generally require multiple cameras. Likewise, panoramic shooting often requires multiple cameras to simultaneously acquire data of the same scene from different angles, which is then stitched to obtain a panoramic image that meets visual requirements. In both of these scenarios, a multi-view camera is indispensable.
In practical applications of a multi-view camera, its multiple data acquisition devices acquire data of the same scene at the same time; different devices may acquire the same or different data, which the multi-view camera or another data processing device then fuses to obtain image or video data that meets the actual requirements. An important step in this processing is to synchronize the acquired data, i.e., to align the data acquired by the multiple data acquisition devices so that the fused data corresponds to the same moments in time.
However, because the multiple data acquisition devices in a multi-view camera may suffer from time delays and inconsistent clock settings, the data fusion quality of current multi-view cameras is unsatisfactory, which limits their adoption. The problem is especially prominent when a multi-view camera shoots video: the devices acquire video data of the same scene over the same time period, and fusion requires, for each time point, the frame data acquired by each device; once some frames are misaligned or lost, the final video quality suffers greatly. For example, owing to defects in the video synchronization method, the first frame acquired by one device may be paired with the second frame acquired by another, so that the fused data deviates from the actual scene.
Some prior art chooses to sacrifice unsynchronized video data to guarantee a synchronized fused video: only the portion of the video data that is synchronized is kept, and the rest is discarded, which greatly wastes video data. Other prior art obtains a fused video by reducing the quality of the video data, which greatly degrades the viewing quality and the subsequent usefulness of the fused video.
In summary, because different data acquisition devices in a multi-view camera differ in working performance and camera parameters, how to ensure that the data they acquire is synchronized remains an urgent problem in the video synchronization of multi-view cameras.
Disclosure of Invention
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method synchronizes video data respectively acquired by at least two data acquisition devices so as to improve the video quality of a fused video.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method synchronizes video data acquired by at least two data acquisition devices at the same time, so that the video quality of a fused video is improved.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method solves the problem of video quality loss caused by time delay existing between different data acquisition devices, in other words, the video synchronization method makes up for the deviation existing in the video data acquisition of the different data acquisition devices.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method combines a hardware synchronization mode and a software synchronization mode to further improve or optimize the data synchronization precision, so that the quality of the synchronized data is improved.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method simultaneously synchronizes waveforms and time frames of different video data so as to obtain synchronous data with high precision and high synchronization rate.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method synchronizes time frames of different video data in a software synchronization mode, in other words, the video synchronization method can complete video synchronization at an embedded end, so that the practical application of the multi-view camera at the embedded end is optimized.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method can be applied to multi-type multi-view cameras, in other words, the application range of the video synchronization method is wide.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method can enable the multi-view camera to finally obtain a high-precision high-synchronization-rate video, so that the use experience of a user is improved, and the subsequent utilization rate of video data is improved.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method improves the real-time performance of videos, that is, it can solve problems such as delay and stuttering when the videos are played.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization precision of the video synchronization method can be adjusted according to actual needs, namely the video data synchronization precision is controllable.
The invention aims to provide a video synchronization method applied to a multi-view camera, wherein the video synchronization method does not affect the quality of video data, in other words, the video synchronization method can ensure the integrity of the video data.
In order to achieve any of the above objects, the present invention provides a video synchronization method applied to a multi-view camera, wherein the multi-view camera includes at least one first data acquisition device and at least one second data acquisition device, the method comprising the following steps:
S1: acquiring at least one first video data and at least one second video data, wherein the first video data is acquired by the first data acquisition device and the second video data is acquired by the second data acquisition device;
S2: at least one hardware synchronization device waveform-synchronizes the first video data and the second video data; and
S3: at least one software synchronization device time-stamp-synchronizes the first video data and the second video data to obtain at least one synchronization data.
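For illustration only, the following Python sketch shows one way steps S1 to S3 could be composed in software. The simulated acquisition, the jitter figures, and the 5 ms threshold are assumptions made for this sketch, not the claimed implementation; step S2 is assumed to have been performed in hardware before the time stamps are read.

```python
import random

def acquire(frame_rate_hz: float, n_frames: int, jitter_ms: float) -> list:
    """S1 (simulated): per-frame capture times, in ms, of one acquisition device."""
    period_ms = 1000.0 / frame_rate_hz
    return [i * period_ms + random.uniform(-jitter_ms, jitter_ms)
            for i in range(n_frames)]

def timestamp_sync(ts1: list, ts2: list, threshold_ms: float) -> list:
    """S3: pair frames whose time-stamp difference satisfies the threshold Y."""
    pairs, j = [], 0
    for i, t in enumerate(ts1):
        # discard frames of the second stream that are too early to ever match
        while j < len(ts2) and ts2[j] < t - threshold_ms:
            j += 1
        if j < len(ts2) and abs(ts2[j] - t) <= threshold_ms:
            pairs.append((i, j))  # frame i of T1 synchronized with frame j of T2
            j += 1
    return pairs

# S2 (hardware waveform synchronization) is assumed to have aligned both devices
# to the same 25 frames/second pulse train; the jitter models residual delay.
ts1 = acquire(25.0, 30, jitter_ms=2.0)
ts2 = acquire(25.0, 30, jitter_ms=2.0)
print(timestamp_sync(ts1, ts2, threshold_ms=5.0))
```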
In some embodiments, the video synchronization method further comprises the following step:
S4: the software synchronization device frame-masks the first video data and the second video data.
In some embodiments, the video synchronization method further comprises the following step:
S5: at least one fusion unit fuses the synchronization data into at least one fused data.
In some embodiments, the step S2 further comprises the following steps:
S21: the hardware synchronization device identifying the first video data and the second video data, wherein the first video data is displayed as a first video waveform and the second video data is displayed as a second video waveform; and
S22: a waveform synchronization module waveform-synchronizes the first video waveform and the second video waveform.
In some embodiments, the step S3 further comprises the following steps:
S31: at least one timestamp marking module marks the first video data with a first time stamp and marks the second video data with a second time stamp; and
S32: according to the first time stamp and the second time stamp, at least one timestamp synchronization module time-stamp-synchronizes the first video data and the second video data.
In some embodiments, the step S32 further comprises the following steps:
S321: the timestamp synchronization module acquires at least one threshold value;
S322: the timestamp synchronization module acquires a timestamp difference corresponding to a specific image frame of the first video data and a specific image frame of the second video data; and
S323: comparing the threshold value with the timestamp difference; when the timestamp difference satisfies the threshold value, the specific image frames are synchronized; when the timestamp difference does not satisfy the threshold value, waiting and computing the next image frame until the specific image frame of the first video data and the specific image frame of the second video data satisfy the threshold value.
In some embodiments, the step S4 further comprises the following steps:
S41: at least one mask marking sub-module marks the first video data with a first frame mask and marks the second video data with a second frame mask; and
S42: at least one mask comparison sub-module compares the first frame mask with the second frame mask, waiting until the first frame mask and the second frame mask satisfy a fusion criterion.
According to another aspect of the present invention, there is provided a multi-view camera, wherein its video data is synchronized using any one of the video synchronization methods described above.

According to another aspect of the present invention, the present invention further provides a video synchronization method, wherein the synchronization method comprises the steps of:
(a) synchronizing a waveform of a first video data and a waveform of a second video data;
(b) synchronizing a time stamp of the first video data and a time stamp of the second video data to obtain synchronization data; and
(c) fusing the synchronization data to obtain fused data, thereby synchronizing the first video data and the second video data.
According to an embodiment of the present invention, before the step (c), further comprising the steps of:
(d) frame masking the first video data and the second video data.
According to an embodiment of the present invention, the step (a) further comprises the steps of:
(a.1) identifying a waveform of the first video data and a waveform of the second video data; and
(a.2) synchronizing the identified waveform of the first video data and the waveform of the second video data.
According to an embodiment of the present invention, the step (b) further comprises the steps of:
(b.1) marking a first time stamp of the first video data and a second time stamp of the second video data; and
(b.2) synchronizing a time stamp of the first video data and a time stamp of the second video data based on the first time stamp of the first video data and the second time stamp of the second video data to obtain the synchronized data.
Drawings
Fig. 1 is a diagram of a practical application of a multi-view camera applied to acquire video or image data of a target object according to an embodiment of the present invention.
Fig. 2 is a schematic diagram illustrating an effect of a video synchronization method applied to the multi-view camera according to an embodiment of the present invention.
Fig. 3 is a schematic diagram illustrating the operation of the multi-view camera to which the video synchronization method according to the above embodiment of the present invention is applied.
Fig. 4 is an operation diagram of a hardware synchronization process of the video synchronization method according to the above embodiment of the present invention.
Fig. 5 is a diagram of the operational effect of the hardware synchronization process according to the above embodiment of the present invention based on fig. 4.
Fig. 6 is an operation diagram of a software synchronization process of the video synchronization method according to the above embodiment of the present invention.
Fig. 7 is an operation diagram of a timestamp marking process of the video synchronization method according to the above-described embodiment of the present invention.
Fig. 8 is a working effect diagram of the timestamp marking process according to the above-described embodiment of the present invention based on fig. 7.
Fig. 9 is a schematic working diagram of a timestamp comparison process of the video synchronization method according to the above embodiment of the present invention.
Fig. 10 is a schematic diagram of the working effect of the timestamp comparison process according to the above embodiment of the present invention based on fig. 9.
Fig. 11 is a block diagram illustrating operation of a video synchronization method according to an equivalent embodiment of the above-described embodiment of the present invention.
Fig. 12 is a schematic diagram of the operation of the frame masking process of the video synchronization method according to the above equivalent embodiment of the present invention based on fig. 11.
Fig. 13 is a data flow diagram of the video synchronization method according to the above-described embodiment of the present invention.
Fig. 14 is a schematic data flow diagram of the video synchronization method according to the above equivalent embodiment of the above embodiment of the present invention.
Fig. 15 to 20 are schematic flow charts of the video synchronization method according to the above-described embodiment of the present invention.
Detailed Description
The following description is presented to disclose the invention so as to enable any person skilled in the art to practice the invention. The preferred embodiments in the following description are given by way of example only, and other obvious variations will occur to those skilled in the art. The basic principles of the invention, as defined in the following description, may be applied to other embodiments, variations, modifications, equivalents, and other technical solutions without departing from the spirit and scope of the invention.
It will be understood by those skilled in the art that in the present disclosure, the terms "longitudinal," "lateral," "upper," "lower," "front," "rear," "left," "right," "vertical," "horizontal," "top," "bottom," "inner," "outer," and the like are used in an orientation or positional relationship indicated in the drawings for ease of description and simplicity of description, and do not indicate or imply that the referenced devices or components must be constructed and operated in a particular orientation and thus are not to be considered limiting.
It should be understood that the term "a" or "an" is to be interpreted as "at least one" or "one or more"; that is, in one embodiment the number of an element may be one, while in another embodiment the number of that element may be plural. The terms "a" and "an" are not to be interpreted as limiting the number.
As shown in fig. 1, a multi-view camera 10 is applied to acquire video or image data of at least one target object O. It is particularly worth mentioning that the multi-view camera 10 refers to a device that includes a plurality of cameras or camera modules at the same time. In particular, when a single camera includes a plurality of camera modules, such as the panoramic cameras commonly available on the market, that multi-module camera may be defined as the multi-view camera 10. Alternatively, when a plurality of separate cameras are used simultaneously to acquire data of the same scene at the same time, such as a TOF camera acquiring depth information, the combined device of the plurality of cameras may also be defined as the multi-view camera 10. It should be understood by those skilled in the art that the type and variety of the multi-view camera 10 to which the present invention is applied are not limited.
The multi-view camera 10 includes at least two data acquisition devices. In the embodiment of the present invention, the multi-view camera 10 is described by way of example as including only two data acquisition devices, but it should be understood by those skilled in the art that this example discloses and illustrates the present invention and should not be construed as limiting its content and scope. In other words, the number of data acquisition devices of the multi-view camera 10 is not limited by the present invention; in other embodiments it may be more than two, such as three, four, or five.
Referring to fig. 1, the multi-view camera 10 includes a first data acquisition device 11 and a second data acquisition device 12, which are capable of respectively capturing image or video data of the target object O. Preferably, the first data acquisition device 11 and the second data acquisition device 12 capture the image or video data of the target object O at the same time. As is well known, a video is obtained from a series of individual image frames played at a certain rate; therefore, in the embodiment of the present invention, the multi-view camera 10 is used to obtain video data of the target object O as an example, but it should be understood by those skilled in the art that the multi-view camera 10 can also obtain image data by using the video synchronization method of the present invention, and the present invention is not limited in this respect.
The first data acquisition device 11 and the second data acquisition device 12 may be implemented as cameras, as video cameras, or as a combination of the two. When the multi-view camera 10 is adapted to acquire the video data of the target object O, the first data acquisition device 11 and the second data acquisition device 12 acquire the video data of the target object O within the same time period, and these individual video data are synchronized or fused, so as to obtain video data of the target object O that meets the actual needs.
It is particularly worth mentioning that the first data acquisition device 11 and the second data acquisition device 12 may acquire the same video data of the target object O, or different video data of the target object O. Specifically, when the multi-view camera 10 is implemented as a panoramic camera, the first data acquisition device 11 and the second data acquisition device 12 acquire different video data of the target object O, and a data processing device or the multi-view camera 10 synchronizes, merges, and splices the individual video data to obtain a panoramic video of the target object O. When the multi-view camera 10 is implemented as a depth camera, the first data acquisition device 11 and the second data acquisition device 12 acquire the same video data of the target object O. That is, the first data acquisition device 11 and the second data acquisition device 12 need only both acquire video data of the target object O within the same time period; the type and kind of the video data are not limited.
After the first data acquisition device 11 and the second data acquisition device 12 acquire the video data of the target object O, the multi-view camera 10 or the data processing device processes these data to obtain final data that meets the actual requirements. It is worth mentioning that whether the types of the video data acquired by the first data acquisition device 11 and the second data acquisition device 12 are the same or different, the video data need to be synchronized to obtain at least one synchronization data S.
In order to optimize the data synchronization process of the multi-view camera 10, the present invention provides a video synchronization method applied to the multi-view camera 10, wherein the video synchronization method can synchronize video data respectively acquired by at least two data acquisition devices, thereby improving the data quality of the synchronization data S. Preferably, the video synchronization method may synchronize video data respectively acquired by at least two data acquisition devices at the same time, so as to further improve the data quality of the synchronized data S. It is worth mentioning that the video synchronization method synchronizes the waveforms and time frames of different video data at the same time, thereby obtaining the synchronization data S with high precision and high synchronization rate.
As shown in fig. 2, when the multi-view camera 10 is implemented as a dual-view camera or a panoramic camera, the first data acquiring device 11 acquires video data of a specific angle of the target object O, the second data acquiring device 12 acquires video data of another specific angle of the target object O, and the two video data are merged to obtain a panoramic video of the target object O. Taking a specific time as an example, the first data acquisition device 11 acquires a specific image frame of the target object O, the second data acquisition device 12 acquires another specific image frame of the target object O, and the video synchronization method synchronizes the specific image frames acquired by the two data acquisition devices at the specific time, so as to obtain a high-precision high-synchronization-rate synchronized image frame, and a plurality of frames of this type of synchronized images are combined to form a synchronized video of the target object O.
Specifically, as shown in fig. 3, when the multi-view camera 10 is applied to acquire video data of the target object O within a specific time period, the first data acquisition device 11 acquires first video data T1 of the target object O within the specific time period, and the second data acquisition device 12 acquires second video data T2 of the target object O within the specific time period, wherein the first video data T1 and the second video data T2 contain the relevant data of the target object O. It is particularly worth mentioning that the first video data T1 and the second video data T2 are composed of a series of independent image frame data; generally speaking, when the frame rate of a video is greater than 24 frames per second, the human eye does not perceive image stutter, that is, the human eye sees a smooth video rather than a sequence of single images.
The first video data T1 and the second video data T2 pass through a hardware synchronization device 30, and the hardware synchronization device 30 synchronizes the waveforms of the first video data T1 and the second video data T2. In other words, the hardware synchronization device 30 waveform-synchronizes the first video data T1 and the second video data T2.
The waveform-synchronized first video data T1 and second video data T2 then pass through a software synchronization device 40, and the software synchronization device 40 synchronizes the timestamps of the first video data T1 and the second video data T2. In other words, the software synchronization device 40 time-stamp-synchronizes the first video data T1 and the second video data T2.
In summary, the video synchronization method synchronizes the waveforms and time stamps of the first video data T1 and the second video data T2 at the same time, so as to obtain the synchronized data S. It is worth mentioning that the synchronization data S, which is waveform-synchronized and time-stamp-synchronized, is also composed of a series of image frames, wherein each image frame in the synchronization data S is implemented as a synchronized combination of a first image frame and a second image frame synchronized at the time point, wherein the first image frame refers to the image frame acquired by the first data acquisition device 11 at the time point, and the second image frame refers to the image frame acquired by the second data acquisition device 12 at the time point. By analogy, the synchronized image frames form the synchronization data S, and a user can obtain the synchronization data S with high precision and high synchronization rate.
Specifically, as shown in fig. 4, the first video data T1 and the second video data T2 are waveform-synchronized, or the video synchronization method waveform-synchronizes the first video data T1 and the second video data T2 after acquiring the first video data T1 and the second video data T2.
Taking the first video data T1 as an example, the first video data T1 includes the video data of the target object O in the time period; however, since a video is presented by a series of individual image frames in fast play, the first video data T1 includes a series of independent image frame data. In hardware, the first video data T1 is presented in the form of a first video waveform B1, and each image frame datum in the first video waveform B1 is presented as a separate peak. For example, when the frame rate of the first data acquisition device 11 is set to 25 frames/second, the first video data T1 displays 25 image frames of the target object O in one second, and 25 peaks are displayed in the first video waveform B1 in that second, where each peak corresponds to one image frame.
Similarly to the first video data T1, the second video data T2 is presented in hardware in the form of a second video waveform B2, each image frame data in the second video waveform B2 being presented as a separate peak. It is noted that the frame rates of the first data acquisition device 11 and the second data acquisition device 12 are preferably set to the same parameter, i.e. when the video frame rate of the first data acquisition device 11 is set to 25 frames/second, the video frame rate of the second data acquisition device 12 is also set to 25 frames/second.
The hardware synchronization device 30 further includes a waveform synchronization module 31, wherein the waveform synchronization module 31 waveform-synchronizes the first video waveform B1 and the second video waveform B2, so that, as shown in fig. 5, each peak of the first video waveform B1 corresponds to a peak of the second video waveform B2, thereby achieving the waveform synchronization of the first video data T1 and the second video data T2.
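As a rough software model of what the waveform synchronization module 31 achieves (the actual synchronization is performed in hardware), the sketch below treats each video waveform as a train of frame pulses and removes the constant phase offset between them; the 25 frames/second rate and the 7 ms offset are assumed purely for illustration.

```python
def pulse_train(frame_rate_hz: float, n_frames: int, phase_ms: float) -> list:
    """One pulse (waveform peak) per image frame, in milliseconds."""
    period_ms = 1000.0 / frame_rate_hz
    return [phase_ms + i * period_ms for i in range(n_frames)]

b1 = pulse_train(25.0, 5, phase_ms=0.0)  # first video waveform B1
b2 = pulse_train(25.0, 5, phase_ms=7.0)  # second video waveform B2, phase-shifted

# Waveform synchronization: remove the constant phase offset so that each peak
# of B1 corresponds to a peak of B2, as in fig. 5.
offset_ms = b2[0] - b1[0]
b2_synced = [t - offset_ms for t in b2]
assert all(abs(p1 - p2) < 1e-9 for p1, p2 in zip(b1, b2_synced))
```

Note that this alignment says nothing about which image frame each pulse belongs to, which is exactly why the time-frame correspondence still has to be established in software.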
It should be noted that the hardware synchronization device 30 does not synchronize the time frames of the first video data T1 and the second video data T2. Specifically, even when the first video waveform B1 is waveform-synchronized with the second video waveform B2, the time points corresponding to the peaks of the first video waveform B1 may not correspond one-to-one to the time points corresponding to the peaks of the second video waveform B2. For example, in the waveform-synchronized video data, the first frame of the first video data T1 may correspond to the third frame of the second video data T2.
To solve this problem, the first video data T1 and the second video data T2 further pass through the software synchronization device 40, and the software synchronization device 40 time-stamps-synchronizes the first video data T1 and the second video data T2. In other words, the first video data T1 and the second video data T2 are time stamp synchronized in the video synchronization method.
Specifically, as shown in fig. 6, the software synchronization device 40 further includes a timestamp marking module 41, and a timestamp synchronization module 42, wherein the timestamp marking module 41 marks timestamps of the first video data T1 and the second video data T2, and the timestamp synchronization module 42 synchronizes timestamps of each corresponding image frame of the first video data T1 and the second video data T2, so as to obtain the synchronization data S meeting actual requirements.
In other words, the software synchronization process in the video synchronization method further includes a timestamp marking process and a timestamp synchronization process; the first video data T1 and the second video data T2 undergo the software synchronization to obtain the synchronization data S that is simultaneously timestamp-synchronized and waveform-synchronized.
As shown in fig. 7, the first video data T1 and the second video data T2 are time-stamped. Specifically, the first video data T1 is time-stamped by the timestamp marking module 41 to obtain a first time stamp M1, and the second video data T2 is time-stamped by the timestamp marking module 41 to obtain a second time stamp M2. In other words, the timestamp marking module 41 respectively obtains the first video data T1 and the second video data T2 and marks each of them.
Taking the time-stamping of the first video data T1 as an example: as noted above, the first video data T1 is presented in the form of the first video waveform B1, and each peak in the first video waveform B1 refers to a separate image frame. Moreover, each image frame corresponds to a particular point in time; in other words, each peak in the first video waveform B1 corresponds to a particular point in time. The timestamp marking module 41 marks the time point corresponding to each peak in the first video waveform B1 to determine the time point corresponding to each individual image frame.
For example, assume that the first data acquisition device 11 acquires 30 image frames in a specific time period. It should be mentioned that the timestamp marking module 41 time-stamps the first video data T1 with reference to Beijing time or another specific time standard of the system; assume the first image frame corresponds to time 0 ms, the second image frame to 32 ms, the third image frame to 42 ms, and the 30th image frame to 90 ms. By analogy, each image frame will correspond to a particular point in time. It is worth mentioning that the first data acquisition device 11 does not necessarily acquire the image frames at uniform intervals, i.e., the time intervals between adjacent image frames are not completely identical.
Similarly, the second video waveform B2 of the second video data T2 is also marked by the timestamp marking module 41. It is worth mentioning that the timestamp marking module 41 stamps the second video data T2 with the same time standard, so that each peak in the second video waveform B2 also corresponds to a specific time point; the timestamp marking module 41 marks the time point corresponding to each peak in the second video waveform B2, so that each image frame corresponds to a specific time point.
For example, assume that the second data acquisition device 12 also acquires 30 image frames in the specific time period. However, since the second data acquisition device 12 may suffer from shooting delay and similar problems, the times of the image frames acquired by the second data acquisition device 12 and the first data acquisition device 11 are not completely consistent; for example, the first image frame corresponds to 0 ms, the second image frame to 20 ms, and the third image frame to 30 ms, and so on, so that each image frame corresponds to a specific time point. As above, the second data acquisition device 12 does not necessarily acquire the image frames at uniform intervals, i.e., the time intervals between adjacent image frames do not coincide exactly. Of course, the above examples are merely illustrative and not intended to be limiting.
It is to be noted that the first data acquisition device 11 and the second data acquisition device 12 are set to start acquiring the video data of the target object O at the same point in time; in other words, the video synchronization method synchronizes the first video data T1 and the second video data T2 from the same point in time. That is, the first image frame of the first video data T1 should correspond to the first image frame of the second video data T2.
In summary, the timestamp marking module 41 marks the first video data T1 and the second video data T2 respectively; specifically, it marks a time point corresponding to each peak of the first video waveform B1 and a time point corresponding to each peak of the second video waveform B2. It is worth mentioning that the time standard with which the timestamp marking module 41 marks the first video data T1 and the second video data T2 is the same, i.e., the timestamp marking module 41 marks both with the same reference time standard.
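A minimal sketch of what timestamp marking module 41 might look like in software is given below; the class name, the monotonic clock, and the shared-origin mechanism are assumptions. The only property the description actually requires is that both streams are stamped against the same reference time standard.

```python
import itertools
import time

class TimestampMarker:
    """Hypothetical marker: stamps each incoming frame against one shared
    reference time standard (module 41 marks both T1 and T2 this way)."""

    def __init__(self, origin_s: float, clock=time.monotonic):
        self._clock = clock
        self._origin = origin_s          # shared by both streams
        self._index = itertools.count(1)

    def mark(self, frame):
        # (frame number, time stamp in ms relative to the shared origin, frame)
        return next(self._index), (self._clock() - self._origin) * 1000.0, frame

# Handing the same origin to both markers makes the first time stamp M1 and the
# second time stamp M2 refer to the same time standard, as required.
origin = time.monotonic()
mark_t1 = TimestampMarker(origin)
mark_t2 = TimestampMarker(origin)
print(mark_t1.mark("frame 1 of T1"), mark_t2.mark("frame 1 of T2"))
```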
As shown in fig. 9 and 10, the waveform-synchronized first video data T1 and second video data T2 are time-stamp-synchronized by the timestamp synchronization module 42. At this point, the first video data T1 has been marked with the first time stamp M1 by the timestamp marking module 41, and similarly the second video data T2 has been marked with the second time stamp M2; the timestamp synchronization module 42 synchronizes the first video data T1 and the second video data T2 according to these timestamps. It is particularly worth mentioning that the timestamp synchronization module 42 refers to the same time standard as the timestamp marking module 41.
It is particularly worth mentioning that the timestamp synchronization module 42 is provided with a threshold Y; the threshold Y determines the synchronization accuracy of the synchronization data S and can be set manually according to actual conditions. Specifically, the image frames of the waveform-synchronized first video data T1 correspond to the image frames of the second video data T2, and the timestamp synchronization module 42 determines, according to the threshold Y, whether the timestamps of corresponding image frames satisfy the time requirement.
Specifically, there may be a certain degree of deviation between the time points corresponding to the image frames of the first video data T1 and the second video data T2. Taking the second frame of the first video data T1 and the second frame of the second video data T2 as an example, a timestamp difference C exists between these second frames. The timestamp synchronization module 42 compares the timestamp difference C with the threshold Y. When the timestamp difference C does not satisfy the threshold Y, that frame data is discarded, and the timestamp corresponding to the second frame of the first video data T1 is compared with that of the third frame of the second video data T2; when the timestamp difference C satisfies the threshold Y, the frame data satisfies the synchronization requirement, so the second frame of the first video data T1 and the third frame of the second video data T2 are synchronized as the second frame of the synchronization data S. By analogy, the timestamp synchronization module 42 synchronizes each frame of data to obtain the synchronization data S.
For example, when the threshold Y is set to 5 ms, the timestamp difference C between two corresponding frames must be less than 5 ms. Suppose the timestamp synchronization module 42 detects that the time point corresponding to the second frame of the first video data T1 is 32 ms, while the time point corresponding to the second frame of the second video data T2 is 20 ms; the timestamp difference C between the two frames does not satisfy the threshold requirement, so the second frame of the second video data T2 is discarded. The timestamp synchronization module 42 then detects that the time point corresponding to the third frame of the second video data T2 is 30 ms; now the timestamp difference C between the two frames satisfies the threshold requirement, so the second image frame of the first video data T1 and the third image frame of the second video data T2 are synchronized to become the second image frame of the synchronization data S.
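The worked example can be checked numerically; the sketch below reads the example's threshold as 5 ms and uses the time points quoted above.

```python
THRESHOLD_MS = 5.0     # threshold Y from the example

t1_frame2 = 32.0       # second frame of the first video data T1, in ms
t2_frame2 = 20.0       # second frame of the second video data T2, in ms
t2_frame3 = 30.0       # third frame of the second video data T2, in ms

assert abs(t1_frame2 - t2_frame2) > THRESHOLD_MS    # 12 ms: frame is discarded
assert abs(t1_frame2 - t2_frame3) <= THRESHOLD_MS   # 2 ms: frames synchronized
```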
It should be noted that the timestamp synchronization module 42 may synchronize the second video data T2 using the first video data T1 as the base, or synchronize the first video data T1 using the second video data T2 as the base; the invention is not limited in this respect.
In summary, the first video data T1 and the second video data T2 are waveform-synchronized by the hardware synchronization and time-stamp-synchronized by the software synchronization to obtain the synchronization data S. It is worth mentioning that the software synchronization can be implemented on at least one embedded terminal, which may be a mobile phone, a computer, or another electronic processing device, so that the multi-view camera can be well applied at the embedded end. That is, the video synchronization method of the multi-view camera 10 can be completed at the embedded end.
In addition, it is worth mentioning that the video data acquired by the data acquisition devices of the multi-view camera 10 often need to be merged for use, for example, when the multi-view camera 10 is implemented as a panoramic camera, the panoramic camera needs to merge the video data acquired at different angles at the same time to obtain the panoramic data about the target object O. When the multi-view camera 10 is implemented as a depth camera, the depth camera fuses video data acquired at the same time and at the same angle to obtain depth data about the target object O.
In an equivalent embodiment of the above embodiment of the present invention, the software synchronization device 40 further includes a frame mask module 44, and the frame mask module 44 is adapted to determine whether the first video data T1 and the second video data T2 satisfy a fusion criterion.
Specifically, the frame mask module 44 further includes a mask marking sub-module 441 and a mask comparison sub-module 442, wherein the mask comparison sub-module 442 is communicatively connected to the mask marking sub-module 441, the mask marking sub-module 441 respectively marks the first video data T1 and the second video data T2, and the mask comparison sub-module 442 compares whether the masks of the first video data T1 and the second video data T2 satisfy the fusion criterion.
The mask marking sub-module 441 marks the first video data T1 as a series of first frame masks Y1 and the second video data T2 as a series of second frame masks Y2; the first frame mask Y1 may be implemented as the bit number of each image frame, and likewise the second frame mask Y2 may be implemented as the bit number of each image frame.
The marked first frame mask Y1 and second frame mask Y2 are compared and judged by the mask comparison sub-module 442. When the first frame mask Y1 and the second frame mask Y2 satisfy the fusion criterion, the first video data T1 and the second video data T2 are transmitted to the timestamp synchronization module 42 to be time-stamp-synchronized. When the first frame mask Y1 and the second frame mask Y2 do not satisfy the fusion criterion, the mask comparison sub-module 442 continues to wait until they do.
For example, suppose the first frame mask Y1 of the first video data T1 is bit1 and the second frame mask Y2 of the second video data T2 is bit2; according to the computer's bitwise OR computation, the frame mask of the synchronization data S should be (bit1 | bit2). If the current first frame mask Y1 and second frame mask Y2 do not satisfy this condition, the module continues to wait.
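A small sketch of this check, assuming the frame masks are single bit flags and the fusion criterion is their bitwise OR; the concrete bit values and the waiting loop are illustrative only.

```python
BIT1 = 0b01                # first frame mask Y1 of the first video data T1
BIT2 = 0b10                # second frame mask Y2 of the second video data T2
EXPECTED = BIT1 | BIT2     # fusion criterion: bitwise OR of the two masks

def meets_fusion_criterion(mask1: int, mask2: int) -> bool:
    return (mask1 | mask2) == EXPECTED

def wait_for_fusible_pair(mask_pairs):
    """Sketch of mask comparison sub-module 442: keep waiting until a pair
    of frame masks satisfies the fusion criterion."""
    for m1, m2 in mask_pairs:
        if meets_fusion_criterion(m1, m2):
            return m1, m2
    return None

print(wait_for_fusible_pair([(0b01, 0b01), (0b01, 0b10)]))  # -> (1, 2)
```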
It should be noted that, in an embodiment of the present invention, the frame mask module 44 is communicatively connected to the timestamp synchronization module 42, and the frame mask of the video data is determined before timestamp synchronization. In another embodiment of the present invention, the video data may be timestamp-synchronized before the frame mask determination is performed. It will be appreciated by those skilled in the art that the invention is not limited in this respect.
In addition, as shown in fig. 11, a fusion unit 50 is communicatively connected to the timestamp synchronization module 42 to fuse the synchronization data S into fused data R. When the multi-view camera 10 is implemented as a panoramic camera, the fused data R may represent the fused panoramic video. When the multi-view camera is implemented as a depth camera, the fused data R may represent the fused depth video. Of course, the multi-view camera 10 may be implemented as other types of cameras, and the invention is not limited in this respect. The video synchronization method can be applied to various types of multi-view cameras, and in particular to a multiband multi-view camera.
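The description does not fix a particular fusion algorithm, so the following toy stand-in for fusion unit 50 simply places each synchronized frame pair side by side; real panoramic or depth fusion would of course be far more involved.

```python
import numpy as np

def fuse(sync_pairs):
    """Toy fusion unit 50: fuse the synchronization data S into fused data R,
    one synchronized frame pair at a time (side-by-side concatenation)."""
    return [np.concatenate([f1, f2], axis=1) for f1, f2 in sync_pairs]

# Two synchronized 4x4 frames fuse into one 4x8 frame.
pair = (np.zeros((4, 4), dtype=np.uint8), np.ones((4, 4), dtype=np.uint8))
print(fuse([pair])[0].shape)  # (4, 8)
```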
In summary, the present invention provides a video synchronization method that is particularly suitable for the multi-view camera 10. Taking a multi-view camera 10 that includes a first data acquisition device 11 and a second data acquisition device 12 as an example, the multi-view camera 10 acquires video data of at least one target object O within a specific time period, and the method comprises the following steps:
S1: acquiring first video data T1 and second video data T2, wherein the first video data T1 is acquired by the first data acquisition device 11 and the second video data T2 is acquired by the second data acquisition device 12;
S2: at least one hardware synchronization device 30 waveform-synchronizes the first video data T1 and the second video data T2; and
S3: at least one software synchronization device 40 time-stamp-synchronizes the first video data T1 and the second video data T2 to obtain at least one synchronization data S.
It is worth mentioning that the video synchronization method further comprises the following steps:
S4: the software synchronization device 40 frame-masks the first video data T1 and the second video data T2.
The frame masking process of the video data may be completed before the timestamp synchronization process or after it; the invention is not limited in this respect. Specifically, the video data acquired by the multiple data acquisition devices of the multi-view camera 10 often need to be fused for use; for example, when the multi-view camera 10 is implemented as a panoramic camera, the panoramic camera needs to fuse the video data acquired from different angles at the same time to obtain the panoramic data about the target object O. When the multi-view camera 10 is implemented as a depth camera, the depth camera fuses video data acquired at the same time and at the same angle to obtain depth data about the target object O.
In an equivalent embodiment of the above embodiment of the present invention, the software synchronization device 40 further includes a frame mask module 44, and the frame mask module 44 is adapted to determine whether the first video data T1 and the second video data T2 satisfy a fusion criterion. Specifically, the frame mask module 44 further includes a mask marking sub-module 441 and a mask comparison sub-module 442, wherein the mask comparison sub-module 442 is communicatively connected to the mask marking sub-module 441, the mask marking sub-module 441 respectively marks the first video data T1 as a series of first frame masks Y1 and the second video data T2 as a series of second frame masks Y2, and the mask comparison sub-module 442 compares whether the first frame masks Y1 and the second frame masks Y2 satisfy the fusion criterion. The first frame mask Y1 may be implemented as the bit number of each image frame, and the second frame mask Y2 may be implemented as the bit number of each image frame.
In addition, a fusion unit 50 is communicatively connected to the software synchronization device 40 to fuse the synchronization data S into fused data R. When the multi-view camera 10 is implemented as a panoramic camera, the fused data R may represent the fused panoramic video. When the multi-view camera is implemented as a depth camera, the fused data R may represent the fused depth video. Of course, the multi-view camera 10 may be implemented as other types of cameras, and the invention is not limited in this respect. The video synchronization method can be applied to various types of multi-view cameras, and in particular to a multiband multi-view camera.
In other words, the video synchronization method further comprises the steps of:
S5: at least one fusion unit 50 fuses the synchronization data S into fused data R.
Specifically, the step S4, namely, the frame masking process, further includes the steps of:
S41: at least one mask marking sub-module 441 respectively marks the first video data T1 with a first frame mask Y1 and the second video data T2 with a second frame mask Y2; and
S42: at least one mask comparison sub-module 442 compares the first frame mask Y1 and the second frame mask Y2, waiting until the first frame mask Y1 and the second frame mask Y2 satisfy the fusion criterion.
In addition, the waveform synchronization process further includes the steps of:
S21: the hardware synchronization device 30 identifies the first video data T1 and the second video data T2, wherein the first video data T1 is shown as a first video waveform B1 and the second video data T2 is shown as a second video waveform B2; and
S22: a waveform synchronization module 31 waveform-synchronizes the first video waveform B1 and the second video waveform B2.
Specifically, taking the first video data T1 as an example, the first video data T1 includes video data of the target object O in the time period, and the first video data T1 includes a series of independent image frame data. In the hardware synchronization device 30, the first video data T1 is presented in the form of a first video waveform B1, and each image frame data in the first video waveform B1 is presented as a separate peak. Similarly to the first video data T1, the second video data T2 is presented in hardware in the form of a second video waveform B2, each image frame data in the second video waveform B2 being presented as a separate peak.
It is noted that the frame rates of the first data acquisition device 11 and the second data acquisition device 12 are preferably set to the same parameter. Each peak of the first video data T1 and the second video data T2 after waveform synchronization corresponds to each other.
In addition, the timestamp synchronization process further comprises the steps of:
S31: at least one timestamp marking module 41 respectively marks the first video data T1 with a first time stamp M1 and the second video data T2 with a second time stamp M2; and
S32: at least one timestamp synchronization module 42 time-stamp-synchronizes the first video data T1 and the second video data T2 in accordance with the first time stamp M1 and the second time stamp M2.
Wherein the timestamp marking module 41 marks timestamps of the first video data T1 and the second video data T2, and the timestamp synchronization module 42 synchronizes timestamps of each corresponding image frame of the first video data T1 and the second video data T2, thereby obtaining the synchronization data S satisfying actual needs.
In step S31, the first video data T1 and the second video data T2 are time-stamped. In particular, the first video data T1 is time-stamped by the timestamp marking module 41 to obtain the first time stamp M1, and the second video data T2 is time-stamped by the timestamp marking module 41 to obtain the second time stamp M2.
Taking the time-stamping of the first video data T1 as an example: as noted above, the first video data T1 is presented in the form of the first video waveform B1, and each peak in the first video waveform B1 refers to a separate image frame. Moreover, each image frame corresponds to a particular point in time; in other words, each peak in the first video waveform B1 corresponds to a particular point in time. The timestamp marking module 41 marks the time point corresponding to each peak in the first video waveform B1 to determine the time point corresponding to each individual image frame.
According to an aspect of the present invention, the step S32 further includes the steps of:
S321: the timestamp synchronization module 42 acquires at least one threshold Y;
S322: the timestamp synchronization module 42 acquires a timestamp difference C corresponding to a specific image frame of the first video data T1 and a specific image frame of the second video data T2; and
S323: comparing the threshold Y with the timestamp difference C; when the timestamp difference C satisfies the threshold Y, the specific image frames are synchronized; when the timestamp difference C does not satisfy the threshold Y, waiting and computing the next image frame until the specific image frame of the first video data T1 and the specific image frame of the second video data T2 satisfy the threshold Y.
Specifically, the first video data T1 and the second video data T2 that are waveform-synchronized are time-stamp-synchronized by the time stamp synchronizing module 42, and it is particularly worth mentioning that the time stamp synchronizing module 42 and the time stamp marking module 41 refer to the same time standard.
In addition, the threshold Y determines the synchronization accuracy of the synchronization data S and may be set manually according to actual conditions. Specifically, the image frames of the waveform-synchronized first video data T1 correspond to the image frames of the second video data T2, but the timestamp marks corresponding to each frame are not exactly the same; the timestamp synchronization module 42 determines, according to the threshold Y, whether the timestamp of an image frame meets the time requirement.
It should be noted that the timestamp synchronization module 42 may synchronize the second video data T2 using the first video data T1 as the base, or synchronize the first video data T1 using the second video data T2 as the base; the invention is not limited in this respect.
It should also be noted that when the timestamp synchronization module 42 uses the first video data T1 as the reference, in step S323 the timestamp synchronization module 42 compares, for a specific frame of the first video data T1, whether a specific frame of the second video data T2 meets the requirement of the threshold Y. When that frame of the second video data T2 does not satisfy the threshold Y, the next image frame of the second video data T2 is evaluated, until the timestamp difference C between the frame of the first video data T1 and a frame of the second video data T2 satisfies the threshold Y.
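As a minimal sketch of steps S321 to S323 (again purely illustrative, assuming the first video data T1 serves as the reference and reusing the hypothetical Frame objects above), the threshold comparison may be expressed as:

def synchronize_by_timestamp(frames_t1, frames_t2, threshold_y):
    """Steps S321-S323 (sketch): pair each frame of the reference stream
    T1 with the first frame of T2 whose timestamp difference C satisfies
    the threshold Y; returns the synchronization data S as (f1, f2) pairs."""
    pairs = []
    j = 0
    for f1 in frames_t1:
        while j < len(frames_t2):
            diff_c = abs(f1.capture_time - frames_t2[j].capture_time)
            if diff_c <= threshold_y:     # C satisfies Y: frames are synchronized
                pairs.append((f1, frames_t2[j]))
                j += 1                    # this frame of T2 is consumed
                break
            j += 1                        # C exceeds Y: evaluate the next frame of T2
    return pairs

synchronization_S = synchronize_by_timestamp(frames_T1, frames_T2, threshold_y=0.01)

A larger threshold Y admits more frame pairs at the cost of looser alignment, which is why the threshold is left to be set according to actual conditions.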
In summary, the video data of the multi-view camera 10 are synchronized by the video synchronization method, wherein the multi-view camera 10 can be implemented with cameras of a single type or with a combination of different types of cameras. That is, the video synchronization method can be applied to multiple types of multi-view cameras 10, and the present invention is not limited in this respect.
According to another aspect of the present invention, the present invention further provides a video synchronization method, wherein the synchronization method comprises the steps of:
(a) synchronizing a waveform of a first video data T1 and a waveform of a second video data T2;
(b) synchronizing a time stamp of the first video data T1 and a time stamp of the second video data T2 to obtain synchronization data S; and
(c) fusing the synchronization data S to obtain fused data R, thereby synchronizing the first video data T1 and the second video data T2.
Preferably, before the step (c), the method further comprises the step of:
(d) frame-masking the first video data T1 and the second video data T2.
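A sketch of step (d) under one reading of the fusion criterion described in claim 1 below (a bitwise OR of the two frame masks equalling a preset value; the concrete mask and preset values are hypothetical):

PRESET_VALUE = 0b11  # hypothetical preset: bit 0 = T1 frame complete, bit 1 = T2 frame complete

def fusion_criterion_met(first_frame_mask, second_frame_mask, preset=PRESET_VALUE):
    """Step (d) (sketch): compare a pair of frame masks; the fusion
    criterion holds when their bitwise OR equals the preset value."""
    return (first_frame_mask | second_frame_mask) == preset

assert fusion_criterion_met(0b01, 0b10)      # both frames complete: fusible
assert not fusion_criterion_met(0b01, 0b01)  # second frame incomplete: keep waiting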
Further, the step (a) further comprises the steps of:
(a.1) identifying a waveform of the first video data T1 and a waveform of the second video data T2; and
(a.2) synchronizing the identified waveform of the first video data T1 and the waveform of the second video data T2.
Further, the step (b) further comprises the steps of:
(b.1) marking a first timestamp M1 of the first video data T1 and a second timestamp M2 of the second video data T2; and
(b.2) synchronizing the timestamps of the first video data T1 and the timestamps of the second video data T2 based on the first timestamp M1 and the second timestamp M2 to obtain the synchronization data S.
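Putting the steps together (a sketch only: waveform synchronization of step (a) is assumed to have been carried out by the hardware synchronization device, and fuse stands for a caller-supplied image fusion routine not specified here):

def video_synchronization(frames_t1, frames_t2, threshold_y, fuse):
    """Steps (b)-(c) (sketch): timestamp-synchronize the two streams and
    fuse each synchronized pair to obtain the fused data R."""
    synchronized = synchronize_by_timestamp(frames_t1, frames_t2, threshold_y)  # step (b)
    return [fuse(f1, f2) for f1, f2 in synchronized]                            # step (c)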
Furthermore, those skilled in the art will appreciate that the embodiments of the present invention described above and illustrated in the accompanying drawings are by way of example only and are not limiting of the invention. The objects of the invention have been fully and effectively accomplished. The functional and structural principles of the present invention have been shown and described in the examples, and any variations or modifications of the embodiments of the present invention may be made without departing from the principles.

Claims (10)

1. A video synchronization method applied to a multi-view camera including a first data acquisition device and a second data acquisition device, comprising:
acquiring first video data acquired by the first data acquisition device within a specific time period and second video data acquired by the second data acquisition device within the specific time period, the first video data and the second video data comprising a series of image frames;
time stamping the first video data and the second video data; and
masking, by a frame mask, the first video data and the second video data after the time stamp synchronization, to determine whether a fusion criterion is satisfied between each image frame in the first video data and the second video data, including:
marking each image frame of the first video data with a first frame mask and each image frame of the second video data with a second frame mask, respectively, wherein the first frame mask represents the bit status of each image frame of the first video data, and the second frame mask represents the bit status of each image frame of the second video data; and
comparing the first frame mask and the second frame mask one by one, and waiting until the first frame mask and the second frame mask meet the fusion criterion, wherein the fusion criterion is that a bitwise OR of the first frame mask and the second frame mask equals a preset value.
2. The video synchronization method applied to the multi-view camera as claimed in claim 1, further comprising: fusing the frame-masked first video data and second video data to obtain fused data.
3. The video synchronization method applied to the multi-view camera according to claim 2, wherein time stamping the first video data and the second video data comprises:
marking each image frame of the first video data with a first time stamp and each image frame of the second video data with a second time stamp, respectively; and
synchronizing a series of image frames in the first video data and the second video data based on the first time stamp and the second time stamp.
4. The video synchronization method applied to the multi-view camera according to claim 3, wherein synchronizing a series of image frames in the first video data and the second video data based on the first time stamp and the second time stamp comprises:
acquiring at least one threshold value Y;
acquiring a time stamp difference between a specific image frame of the first video data and a corresponding image frame of the second video data; and
comparing the threshold value and the time stamp difference, such that when the time stamp difference is less than or equal to the threshold value, the image frame of the first video data and the corresponding image frame of the second video data are synchronized; and when the time stamp difference is greater than the threshold value, waiting for a next image frame of the first video data or the second video data until the time stamp difference between the image frame of the first video data and the image frame of the second video data is less than or equal to the threshold value.
5. The video synchronization method applied to the multi-view camera according to claim 4, wherein the threshold is set manually based on actual conditions.
6. The video synchronization method applied to a multi-view camera according to claim 1, wherein a frame rate of the first data acquisition device is the same as a frame rate of the second data acquisition device.
7. The video synchronization method applied to a multi-view camera according to claim 1, wherein a first image frame of the first video data in time series corresponds to a first image frame of the second video data in time series.
8. The video synchronization method applied to the multi-view camera as claimed in claim 1, wherein, before synchronizing the first video data and the second video data by time stamp, the method further comprises:
waveform-synchronizing the first video data and the second video data.
9. The video synchronization method applied to the multi-view camera according to claim 8, wherein waveform-synchronizing the first video data and the second video data comprises:
respectively performing waveform parsing on each image frame in the first video data and the second video data through a hardware synchronization device, so as to represent the first video data as a first video waveform in which each image frame of the first video data is represented by an independent peak, and to represent the second video data as a second video waveform in which each image frame of the second video data is represented by an independent peak; and
waveform-synchronizing the first video waveform and the second video waveform such that individual peaks in the first video waveform correspond to individual peaks in the second video waveform.
10. The video synchronization method applied to the multi-view camera according to claim 1, wherein the multi-view camera is a panoramic camera or a depth camera.
CN202110656083.XA 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera Active CN113395409B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110656083.XA CN113395409B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201711346530.1A CN109936677B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera
CN202110656083.XA CN113395409B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201711346530.1A Division CN109936677B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera

Publications (2)

Publication Number Publication Date
CN113395409A (en) 2021-09-14
CN113395409B (en) 2022-10-11

Family

ID=66979487

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201711346530.1A Active CN109936677B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera
CN202110656097.1A Active CN113395410B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera
CN202110656083.XA Active CN113395409B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera

Family Applications Before (2)

Application Number Title Priority Date Filing Date
CN201711346530.1A Active CN109936677B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera
CN202110656097.1A Active CN113395410B (en) 2017-12-15 2017-12-15 Video synchronization method applied to multi-view camera

Country Status (1)

Country Link
CN (3) CN109936677B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112217985A (en) * 2020-08-28 2021-01-12 新奥特(北京)视频技术有限公司 Information acquisition method, device and system
CN112218099A (en) * 2020-08-28 2021-01-12 新奥特(北京)视频技术有限公司 Panoramic video generation method, panoramic video playing method, panoramic video generation device, and panoramic video generation system
CN114201645A (en) * 2021-12-01 2022-03-18 北京百度网讯科技有限公司 Object labeling method and device, electronic equipment and storage medium

Family Cites Families (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040061780A1 (en) * 2002-09-13 2004-04-01 Huffman David A. Solid-state video surveillance system
JP4589631B2 (en) * 2004-01-13 2010-12-01 ソニー株式会社 Imaging apparatus, phase control method, and synchronization establishment method
US9124877B1 (en) * 2004-10-21 2015-09-01 Try Tech Llc Methods for acquiring stereoscopic images of a location
CN101146231A (en) * 2007-07-03 2008-03-19 浙江大学 Method for generating panoramic video according to multi-visual angle video stream
JP2009218934A (en) * 2008-03-11 2009-09-24 Toshiba Corp Video reproducing device and video reproducing method
FR2930396A1 (en) * 2008-04-21 2009-10-23 Thomson Licensing Sas TEMPORAL MARKING ASSOCIATED WITH SYNCHRONIZATION OF EQUIPMENT CONNECTED TO A NETWORK
US9122443B1 (en) * 2008-05-01 2015-09-01 Rockwell Collins, Inc. System and method for synchronizing multiple video streams
CN101466048B (en) * 2009-01-09 2010-12-29 华亚微电子(上海)有限公司 Digital demodulation method for non-synchronous composite video signal and S video signal and demodulator
DE102010014733B4 (en) * 2010-04-13 2013-05-08 Bauhaus-Universität Weimar Chromakey method and chromakey device for taking and editing camera images
FR2967324B1 (en) * 2010-11-05 2016-11-04 Transvideo METHOD AND DEVICE FOR CONTROLLING THE PHASING BETWEEN STEREOSCOPIC CAMERAS
JP5854832B2 (en) * 2011-12-28 2016-02-09 キヤノン株式会社 Imaging apparatus and imaging method
CN103516995A (en) * 2012-06-19 2014-01-15 中南大学 A real time panorama video splicing method based on ORB characteristics and an apparatus
US9241103B2 (en) * 2013-03-15 2016-01-19 Voke Inc. Apparatus and method for playback of multiple panoramic videos with control codes
US9955142B2 (en) * 2013-07-05 2018-04-24 Mediatek Inc. On-line stereo camera calibration device and method for generating stereo camera parameters
US8988509B1 (en) * 2014-03-20 2015-03-24 Gopro, Inc. Auto-alignment of image sensors in a multi-camera system
CN104954630B (en) * 2014-03-28 2018-01-12 深圳市海派尔科技开发有限公司 Video time stamp acquisition methods, video processing equipment and video system
CN105516542B (en) * 2014-09-26 2019-03-05 北京同步科技有限公司 Multi-channel video synchronization system and its synchronous method based on hardware coder
EP3213519B1 (en) * 2014-10-31 2018-07-11 Telefonaktiebolaget LM Ericsson (publ) Video stream synchronization
CN104618673B (en) * 2015-01-20 2018-05-01 武汉烽火众智数字技术有限责任公司 A kind of multichannel video recording synchronized playback control method and device based on NVR
CN107211078B (en) * 2015-01-23 2020-07-31 瑞典爱立信有限公司 V L C-based video frame synchronization
US9819875B2 (en) * 2015-03-02 2017-11-14 Intel Corporation Multi-camera sync pulse synchronization
WO2016201683A1 (en) * 2015-06-18 2016-12-22 Wizr Cloud platform with multi camera synchronization
US20160366398A1 (en) * 2015-09-11 2016-12-15 Mediatek Inc. Image Frame Synchronization For Dynamic Image Frame Rate In Dual-Camera Applications
CN107205158A (en) * 2016-03-18 2017-09-26 中国科学院宁波材料技术与工程研究所 A kind of multichannel audio-video frequency stream synchronous decoding method based on timestamp
CN105959620A (en) * 2016-04-25 2016-09-21 北京大国慧谷科技股份有限公司 Panorama video synchronization display method and panorama video synchronization display device
CN107404362A (en) * 2017-09-15 2017-11-28 青岛海信移动通信技术股份有限公司 A kind of synchronous method and device of dual camera data frame

Patent Citations (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5900919A (en) * 1996-08-08 1999-05-04 Industrial Technology Research Institute Efficient shot change detection on compressed video data
US6762794B1 (en) * 1997-12-03 2004-07-13 Canon Kabushiki Kaisha Image pick-up apparatus for stereoscope
US20070097224A1 (en) * 2005-11-02 2007-05-03 Olympus Corporation Camera system
CN101424680A (en) * 2008-12-11 2009-05-06 东华大学 Computer automatic recognition apparatus and method for profile fiber
CN101841694A (en) * 2009-03-19 2010-09-22 新奥特硅谷视频技术有限责任公司 Court hearing panoramic video image relaying method
US20120147001A1 (en) * 2010-12-08 2012-06-14 Byoungchul Cho Image processing unit, stereoscopic image display using the same, and image processing method
US20120293608A1 (en) * 2011-05-17 2012-11-22 Apple Inc. Positional Sensor-Assisted Perspective Correction for Panoramic Photography
US20130242058A1 (en) * 2012-03-19 2013-09-19 Samsung Electronics Co., Ltd. Depth camera, multi-depth camera system and method of synchronizing the same
US9781356B1 (en) * 2013-12-16 2017-10-03 Amazon Technologies, Inc. Panoramic video viewer
US20170289646A1 (en) * 2016-04-01 2017-10-05 Intel Corporation Multi-camera dataset assembly and management with high precision timestamp requirements
US20170359534A1 (en) * 2016-06-10 2017-12-14 Apple Inc. Mismatched Foreign Light Detection And Mitigation In The Image Fusion Of A Two-Camera System
CN106534677A (en) * 2016-10-27 2017-03-22 成都西纬科技有限公司 Image overexposure optimization method and device
CN106851242A (en) * 2016-12-30 2017-06-13 成都西纬科技有限公司 A kind of method and system for realizing moving camera 3D net casts

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
DENG JIANGUO: "Research on Real-Time Video Foreground/Background Separation and Composition Technology", China Master's Theses Full-text Database (Electronic Journal), Information Science and Technology Series *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115396644A (en) * 2022-07-21 2022-11-25 贝壳找房(北京)科技有限公司 Video fusion method and device based on multi-segment external parameter data
CN115396644B (en) * 2022-07-21 2023-09-15 贝壳找房(北京)科技有限公司 Video fusion method and device based on multi-section external reference data

Also Published As

Publication number Publication date
CN113395409B (en) 2022-10-11
CN113395410B (en) 2023-04-18
CN109936677B (en) 2021-07-27
CN113395410A (en) 2021-09-14
CN109936677A (en) 2019-06-25

Similar Documents

Publication Publication Date Title
CN109936677B (en) Video synchronization method applied to multi-view camera
EP3560195B1 (en) Stereoscopic omnidirectional imaging
CN108886611B (en) Splicing method and device of panoramic stereo video system
US8588514B2 (en) Method, apparatus and system for processing depth-related information
WO2020073709A1 (en) Multi-camera multi-face video continuous acquisition device and method
CN104813659B (en) video frame processing method
US9516297B2 (en) Method and device for monitoring phase shifting between stereoscopic cameras
EP2852161A1 (en) Method and device for implementing stereo imaging
CN102655585B (en) Video conference system and time delay testing method, device and system thereof
CN105631422A (en) Video identification method and video identification system
CN106020758B (en) A kind of screen splice displaying system and method
CN110505466B (en) Image processing method, device, electronic equipment, storage medium and system
CN103475887A (en) Image synchronization method and device in camera visual system
CN108377355A (en) A kind of video data handling procedure, device and equipment
CN104391960A (en) Video annotation method and system
CN108510541B (en) Information adjusting method, electronic equipment and computer readable storage medium
US20140300814A1 (en) Method for real-time processing of a video sequence on mobile terminals
CN107959787B (en) Image processing method and device
CN103796007B (en) Automatic adjustment method and system for naked-eye stereoscopic display device
CN115604404A (en) Switching method, switching device and storage medium for multi-channel video stream
WO2022045779A1 (en) Restoration of the fov of images for stereoscopic rendering
CN112929694B (en) Video stitching method, device, storage medium and computer equipment
CN115294207A (en) Fusion scheduling system and method for smart campus monitoring video and three-dimensional GIS model
CN112309311B (en) Display control method, device, display control card and computer readable medium
KR101481797B1 Apparatus and method for correcting synchronous error between left and right frame in 3d imaging system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant