CN111277804A - Image processing method and device and multi-camera synchronization system - Google Patents


Info

Publication number
CN111277804A
CN111277804A (application CN202010161305.6A)
Authority
CN
China
Prior art keywords
time
camera
calibration
video image
time information
Prior art date
Legal status
Pending
Application number
CN202010161305.6A
Other languages
Chinese (zh)
Inventor
郝青
崔建平
朱震钧
Current Assignee
Beijing Aibee Technology Co Ltd
Original Assignee
Beijing Aibee Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Aibee Technology Co Ltd filed Critical Beijing Aibee Technology Co Ltd
Priority to CN202010161305.6A priority Critical patent/CN111277804A/en
Publication of CN111277804A publication Critical patent/CN111277804A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides an image processing method, an image processing apparatus and a multi-camera synchronization system, where the multi-camera synchronization system includes at least two cameras. A camera acquires a video stream through its image sensor; each time the image sensor captures a frame of video image, the camera acquires the current time information, accurate to the millisecond, and generates a data packet containing the data of the video image and the time information. The scheme of the present application facilitates accurately determining the acquisition time of the video images captured by each camera in the multi-camera synchronization system.

Description

Image processing method and device and multi-camera synchronization system
Technical Field
The present application relates to the field of information processing technologies, and in particular, to an image processing method and apparatus, and a multi-camera synchronization system.
Background
With the continuous development of image technology, many fields involving video image analysis or video monitoring require a camera to capture images and to process the captured images. For example, the images a camera collects can provide a basis for police case handling, user behavior analysis, and the like.
However, the content a single camera can capture is limited. Therefore, in many fields, video images from a single camera cannot support sufficiently accurate and reliable video analysis, which limits the application of camera-based video analysis and processing.
Disclosure of Invention
In view of this, the present application provides an image processing method, an image processing apparatus, and a multi-camera synchronization system, so as to facilitate accurately determining the acquisition time of the video images captured by each camera in the multi-camera synchronization system.
In order to achieve the above purpose, the present application provides the following technical solutions:
in one aspect, the present application provides an image processing method applied to cameras in a multi-camera synchronization system, where the multi-camera synchronization system includes at least two cameras, and the method includes:
collecting a video stream by an image sensor;
each time the image sensor captures a frame of video image, acquiring current time information, wherein the time information is accurate to the millisecond;
and generating a data packet containing the data of the video image and the time information.
Preferably, the time information is time obtained after the camera has been calibrated with a time calibration device.
Preferably, the method further comprises the following steps:
when the time calibration moment is reached, sending a time calibration request to the time calibration equipment;
and obtaining the calibration time returned by the time calibration equipment in response to the time calibration request, and calibrating the clock time in the camera based on the calibration time.
Preferably, the time calibration device is a network time protocol server connected to the camera through a switch.
Preferably, the generating a data packet including the data of the video image and the time information includes:
transmitting the video image and the temporal information to an encoder;
and using the time information as a timestamp associated with the video image through the encoder, and encoding the timestamp and the video image to obtain an encoded data packet, wherein the data packet at least comprises the data of the video image and the timestamp.
In another aspect, the present application further provides an image processing apparatus applied to a camera in a multi-camera synchronization system, where the multi-camera synchronization system includes at least two cameras, the apparatus includes:
the video acquisition unit is used for acquiring a video stream through the image sensor;
the time acquisition unit is used for acquiring current time information when the image sensor captures a frame of video image, wherein the time information is accurate to the millisecond;
and a data generating unit for generating a data packet containing the data of the video image and the time information.
Preferably, the time information is time obtained after the camera has been calibrated with a time calibration device.
Preferably, the apparatus further comprises:
a calibration request unit, configured to send a time calibration request to the time calibration device when a time calibration time is reached;
and the time calibration unit is used for obtaining the calibration time returned by the time calibration equipment in response to the time calibration request and calibrating the clock time in the camera based on the calibration time.
In another aspect, the present application further provides a multi-camera synchronization system, including:
at least two cameras;
the camera is used for acquiring a video stream through an image sensor; each time the image sensor captures a frame of video image, acquiring current time information, wherein the time information is accurate to the millisecond; and generating a data packet containing the data of the video image and the time information.
Preferably, the system further comprises: a switch and a network time protocol server;
the camera is connected with the network time protocol server through the switch;
the camera is further used for sending a time calibration request to the time calibration equipment when the time calibration moment is reached;
the network time protocol server is used for responding to the time calibration request and returning calibration time to the camera;
the camera is further used for calibrating the clock time in the camera based on the calibration time.
According to the above technical solutions, in the embodiments of the present application the multi-camera synchronization system includes a plurality of cameras; each time a camera captures a frame of video image, it acquires the current time information, accurate to the millisecond, and generates a data packet containing that frame of video image and the time information.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart illustrating an embodiment of an image processing method according to the present application;
FIG. 2 shows a functional block diagram of a camera of the present application generating data packets containing video images;
FIG. 3 illustrates a schematic diagram of one component architecture of the multi-camera synchronization system of the present application;
FIG. 4 is a schematic flow chart diagram illustrating a further embodiment of an image processing method of the present application;
fig. 5 is a schematic diagram illustrating a configuration of an image processing apparatus according to the present application.
Detailed Description
The inventors of the present application found that the range of application of video monitoring or analysis based on a single camera is currently limited. For example, because the shooting range of a single camera is limited, person tracking and people-flow analysis cannot be performed with a single camera in scenarios such as people-flow monitoring and people-flow tracking.
Meanwhile, the inventors found through research that, to overcome the limited applicability of video monitoring analysis based on images from a single camera, cooperative analysis using images captured by multiple cameras has become a requirement. However, in a video monitoring analysis scenario where multiple cameras cooperate, the data of the multiple cameras must be aligned; that is, the video images captured at the same moment must be extracted from the video streams captured by the respective cameras. Therefore, if the exact time at which each camera captured its video cannot be determined, the images captured by the multiple cameras at the same moment cannot be identified, and the video monitoring analysis cannot be performed accurately, or cannot be performed at all.
For example, in a scenario where multiple cameras track a user's position, if the acquisition times of the video images captured by two adjacent cameras cannot be determined, then when the user walks from the field of view of one camera into that of the other, two frames captured at the same moment by the two cameras may both show no person, and the track is lost.
Based on these research findings, accurately determining the acquisition time of each frame of video image captured by a camera is a precondition for realizing video analysis through multi-camera cooperation. On this basis, the inventors of the present application propose the scheme of the present application, which is applicable to any scenario requiring comprehensive analysis based on images captured by multiple cameras; that is, any scenario that needs to analyze the video streams captured by the multiple cameras of a multi-camera synchronization system.
In a multi-camera synchronization system composed of a plurality of cameras, each camera can execute the image acquisition processing method of the application.
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
As shown in fig. 1, which shows a schematic flow chart of an image processing method according to the present application, the method of the present embodiment is applied to a camera in a multi-camera synchronization system, where the multi-camera synchronization system includes at least two cameras. The method of the embodiment may include the steps of:
and S101, acquiring a video stream through an image sensor.
It is understood that the camera includes an image sensor, which can realize image capture, so that the camera can continuously obtain a video stream composed of video images captured by the image sensor.
S102, each time the image sensor captures a frame of video image, current time information is acquired, the time information being accurate to the millisecond.
In the embodiment of the present application, each time the image sensor of the camera captures a frame of video image, the camera acquires the current time information; the time information is therefore the acquisition time of that frame of video image.
In particular, in the embodiment of the present application, the time information is not the conventional time accurate to the second but time information accurate to the millisecond, so that the acquisition time of the frame can be determined directly from it. Time information accurate to the millisecond includes the hour, minute and second, and additionally the current millisecond; for example, it may be: 9:20:10:15 (9 hours, 20 minutes, 10 seconds, 15 milliseconds). Of course, the time information may also include date information such as the year, month and day.
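As a rough illustration, millisecond-accurate time information of this shape could be obtained as follows; the helper name and the exact output format are assumptions for illustration, not something the application specifies:

```python
import time

def current_time_ms():
    """Return the current local time formatted to millisecond precision,
    e.g. "9:20:10:015" (hour:minute:second:millisecond).
    Illustrative helper; the patent does not prescribe this format."""
    now = time.time()                    # seconds since the epoch, float
    ms = int((now % 1) * 1000)           # millisecond component, 0-999
    t = time.localtime(now)
    return f"{t.tm_hour}:{t.tm_min:02d}:{t.tm_sec:02d}:{ms:03d}"
```

In practice the camera would read such a value from its calibrated internal clock at the instant each frame arrives from the sensor.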
The inventors of the present application found through research that, to determine the acquisition time of the video images captured by a camera, time information could instead be acquired once per second, yielding time accurate to the second. However, a camera generally captures multiple frames of video image every second, and the time information corresponding to all frames captured within the same second would then be the same time (i.e., a time accurate to the second).
In that case, after the time information corresponding to a video image is obtained, the frame rate must additionally be used to work out the interval between frames and hence the millisecond at which each frame was actually captured, so determining the acquisition time of a video image becomes complex.
For example, at a frame rate of 25, a camera produces 25 frames of video image per second, with an interval of 40 ms between frames. Although the 25 frames captured during the second of 1 minute 20 seconds span that whole second, the time information for all 25 frames is simply 1 minute 20 seconds, so the specific acquisition time and acquisition order of each frame cannot be determined from it. One would have to locate, from the second-accurate timestamps of the captured frames, a frame at which the second value jumps, then assign the next frame an actual acquisition time 40 ms later (a time accurate to the millisecond), and so on. This approach cannot determine the acquisition time of a video image directly from its associated time information, so in a multi-camera synchronization system it is difficult to determine which video images the cameras captured at the same moment.
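The second-jump anchoring workaround described above can be sketched as follows; the function and its interface are invented for illustration and are not part of the application:

```python
def reconstruct_ms_times(second_stamps, frame_interval_ms=40):
    """Given per-frame timestamps accurate only to the second (one int per
    frame), assign a millisecond time to each frame by anchoring on the
    frames where the second value jumps. Frames seen before the first
    observed jump cannot be placed exactly and are given None.
    Illustrative sketch of the workaround, not the patent's method."""
    times = [None] * len(second_stamps)
    anchor = None  # (frame index, second value) of the last observed jump
    for i in range(1, len(second_stamps)):
        if second_stamps[i] != second_stamps[i - 1]:
            anchor = (i, second_stamps[i])
        if anchor is not None:
            idx, sec = anchor
            times[i] = sec * 1000 + (i - idx) * frame_interval_ms
    return times
```

Note that the first frames remain ambiguous and any dropped frame corrupts every time after it, which is exactly the complexity the millisecond-accurate scheme of the present application avoids.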
To make the specific capture moment of each video image directly determinable, the present application differs from the once-per-second approach: each time the camera captures a frame of video image, it acquires the current time information accurate to the millisecond. On this basis, different frames captured by the camera carry different time information, which accurately reflects the millisecond-level moment at which each video image was captured.
S103, a data packet including the video image data and the time information is generated.
In the embodiment of the present application, after the video image and its corresponding time information are obtained, the data of the video image and the time information are directly combined into one data packet, so that both can be extracted directly from the packet, improving the convenience and efficiency of obtaining the time information corresponding to a video image.
For example, the time information may be used as information associated with the video image to generate an attribute information packet containing the time information, and the attribute information and the data packet of the video image may be combined into one data packet.
Optionally, the video image and the time information may be encoded to obtain an encoded data packet containing data of the video image and the time information. Wherein, the time information can be coded as the attribute information associated with the video image in the coding process. For example, the camera may transmit the video image and the time information to an encoder in the camera, use the encoder to use the time information as a timestamp associated with the video image, and encode the timestamp and the video image to obtain an encoded data packet, where the data packet includes at least data of the video image and the timestamp.
It can be understood that, in the process of encoding a video image and time information corresponding to the video image by a camera, the time information is actually packaged separately as information associated with the video image, and finally the packaged time information is encoded together with data of the video image, so that a data packet containing the time information and the video image is encoded. In this case, the video image and time information are directly obtained by parsing the packet.
The specific encoding algorithm is not limited in this application; encoding may be performed using an algorithm such as H.264 or H.265.
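To make the idea of a packet carrying both the frame data and its associated timestamp concrete, here is a sketch using a simple length-prefixed layout; this layout is an assumption for illustration only, since the application leaves the concrete encoding to the encoder:

```python
import json
import struct

def pack_frame(frame_bytes, time_ms):
    """Bundle one encoded video frame with its millisecond timestamp.
    Layout (illustrative assumption): 4-byte big-endian attribute length,
    JSON attribute block, then the raw frame payload."""
    attrs = json.dumps({"timestamp_ms": time_ms}).encode()
    return struct.pack(">I", len(attrs)) + attrs + frame_bytes

def unpack_frame(packet):
    """Recover (frame_bytes, time_ms) without decoding the video payload,
    mirroring how the timestamp can be read directly from the packet."""
    (n,) = struct.unpack(">I", packet[:4])
    attrs = json.loads(packet[4:4 + n])
    return packet[4 + n:], attrs["timestamp_ms"]
```

The point of the design is visible in `unpack_frame`: the acquisition time is available by parsing the packet header alone, with no image decoding or OCR step.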
For ease of understanding, reference is made to FIG. 2:
fig. 2 shows a schematic diagram of a principle framework of image processing performed by a camera in the present application.
In fig. 2, the camera may include a sensor, a controller, and an encoder.
As can be seen from fig. 2, when the controller detects that the sensor has captured a frame of video image, the controller obtains the current time information and transmits it to the encoder, while controlling the sensor to transmit that frame of video image to the encoder. The encoder then encodes the time information together with the video image into an encoded data packet. By repeating these steps, each frame of the video stream captured by the camera is encoded in turn, finally yielding the encoded data of the video stream.
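The controller loop of fig. 2 can be sketched roughly as below; the sensor, clock and encoder interfaces are stand-ins invented for illustration, not APIs from the application:

```python
class FakeSensor:
    """Stand-in for the image sensor: yields pre-captured frames."""
    def __init__(self, frames):
        self._frames = frames
    def frames(self):
        return iter(self._frames)

class FakeClock:
    """Stand-in for the calibrated camera clock; advances 40 ms per read,
    mimicking a 25 fps capture cadence."""
    def __init__(self, start_ms=0, step_ms=40):
        self.t, self.step = start_ms, step_ms
    def now_ms(self):
        t, self.t = self.t, self.t + self.step
        return t

class FakeEncoder:
    """Stand-in encoder: pairs each frame with its timestamp."""
    def encode(self, frame, timestamp):
        return (timestamp, frame)

def run_camera(sensor, encoder, clock, stream_out):
    """Controller loop: for each frame the sensor produces, read the
    (calibrated) clock and hand both frame and timestamp to the encoder."""
    for frame in sensor.frames():
        time_ms = clock.now_ms()
        stream_out.append(encoder.encode(frame, timestamp=time_ms))
```

Running the loop over two fake frames yields one timestamped packet per frame, which is the per-frame pairing the controller in fig. 2 performs.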
The inventors of the present application also found through research that a timestamp can be added to an image via an on-screen display (OSD). However, this method actually superimposes the time information, as an image layer, onto the video image. In that case, if the time information of a video image is needed, the superimposed image must be recognized by optical character recognition (OCR) to read the time displayed in it, making acquisition of the time information complex. Moreover, OSD only supports adding time information accurate to the second, i.e. the time information is refreshed only once per second, so the timestamps of all frames captured within the same second are identical and the acquisition time of each video image cannot be accurately determined.
On the basis of this research, the present application acquires current time information accurate to the millisecond each time a frame of video image is captured, and directly generates a data packet containing the video image and the corresponding time information, so that the millisecond-accurate time information corresponding to the video image can be obtained directly from the data packet, and the acquisition time of each frame of video image captured by the camera can be obtained accurately.
It can be understood that, for different application scenarios, after the data packet is generated by the camera, the data packet of each frame of video image may also be stored; the data packets corresponding to the respective frames of video images may be transmitted to a designated video storage device or analysis device.
Therefore, in the present application, each time a camera in the multi-camera synchronization system captures a frame of video image, it acquires the current time information and generates a data packet containing the frame and the time information. Because the data packet containing the video image carries time information accurate to the millisecond, the acquisition time of the video image can be extracted directly from that packet. The acquisition times of the video images captured by each camera in the multi-camera synchronization system can thus be determined accurately and conveniently; when the video images captured by the multiple cameras of the system need to be comprehensively analyzed, the images captured at the same moment by different cameras can be accurately identified from the data packets the cameras generate.
Meanwhile, since the acquisition time of the video images captured by each camera in the multi-camera synchronization system can be accurately determined, it becomes possible to comprehensively analyze the video images captured by the cameras at the same moment, which helps ensure the reliability and accuracy of synchronized multi-camera video analysis.
It can be understood that, in a multi-camera synchronization system, if the clock times of the cameras are not synchronized, the video images captured by the cameras at the same moment cannot be accurately determined. Moreover, the error of a camera's crystal oscillator accumulates over time, so the camera's internal time drifts. To keep the clock times of the cameras consistent, so that the time information a camera associates with its captured video images is synchronized with the time in the other cameras, each camera in the present application may also perform time calibration with a time calibration device.
Correspondingly, in the embodiment of the present application, the time information obtained when the camera detects that a frame of video image has been captured is time that has been calibrated against the time calibration device. Because every camera in the multi-camera synchronization system can perform time calibration with the time calibration device, the synchronization of the cameras' clock times can be ensured.
The time calibration device can take several forms. Optionally, it may be a Network Time Protocol (NTP) server, i.e. a server based on the NTP protocol. NTP is used to synchronize computer time: a machine can synchronize against a server or its clock source, high-precision time correction can be provided, and encrypted confirmation can guard against malicious protocol attacks.
For ease of understanding, reference may be made to fig. 3, which shows a schematic diagram of a component architecture of a multi-camera synchronization system according to the present application.
As can be seen from fig. 3, the multi-camera synchronization system comprises at least a plurality of cameras 301. Each camera 301 may execute the image processing method mentioned in any embodiment of the present application.
Optionally, in order to ensure time synchronization of each camera, the multi-camera synchronization system further includes: NTP servers 302 connected to the plurality of cameras, respectively.
In one possible implementation manner, the multi-camera synchronization system further includes a switch 303, wherein each camera 301 is connected to the NTP server 302 through the switch 303.
In the embodiment of the present application, the camera may perform time calibration with the NTP server in various ways. For example, the camera may periodically request time calibration from the NTP server, or may send a time calibration request to the NTP server whenever a preset calibration moment is reached, so as to calibrate the clock time in the camera based on the calibration time returned by the NTP server.
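For illustration, the wire-level core of such a calibration exchange could look like the minimal SNTP sketch below (packet layout per RFC 4330); a production camera would use a full NTP client and also compensate for network round-trip delay, so treat this strictly as a sketch:

```python
import struct

# Seconds between the NTP epoch (1900-01-01) and the Unix epoch (1970-01-01).
NTP_UNIX_DELTA = 2208988800

def build_sntp_request():
    """48-byte SNTP client request: LI=0, VN=3, Mode=3 (client)."""
    return b"\x1b" + 47 * b"\x00"

def parse_sntp_transmit_time(response):
    """Extract the server's transmit timestamp (bytes 40-47 of the
    response) and convert it to Unix time with sub-second precision."""
    secs, frac = struct.unpack("!II", response[40:48])
    return (secs - NTP_UNIX_DELTA) + frac / 2**32
```

The camera would send the request over UDP port 123 and, on receiving the response, step or slew its internal clock toward the parsed time.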
Correspondingly, the NTP server is configured to respond to a time calibration request sent by the camera and return calibration time to the camera.
Fig. 4 is a schematic flow chart showing a further embodiment of an image processing method according to the present application, and the present embodiment is applied to any one camera in the multi-camera synchronization system. The embodiment comprises the following steps:
s401, collecting video stream through an image sensor.
S402, when the time calibration moment is reached, a time calibration request is sent to the NTP server.
For example, the camera may determine whether the current time reaches the time calibration time according to the time calibration period.
For another example, the camera may use a plurality of preset specific times as calibration moments and determine that calibration is due whenever any one of them is reached.
Wherein, the time calibration request is used for requesting the NTP server to calibrate the time of the camera.
It should be understood that this embodiment takes an NTP server as the example time calibration device; the same applies to any other device capable of providing time calibration for the camera.
And S403, acquiring the calibration time returned by the NTP server in response to the time calibration request, and calibrating the clock time in the camera based on the calibration time.
The calibration time returned by the NTP server is time information provided to the camera to calibrate the clock of the camera.
Each camera can adjust its clock time based on the calibration time so that its clock is consistent with the clock of the NTP server. Since all cameras of the multi-camera synchronization system synchronize with the NTP server, the clocks of all cameras in the system are synchronized with one another.
It should be noted that the order of steps S402 and S403 relative to steps S401 and S404 is not limited to that shown in fig. 4; in practical applications, steps S402 and S403 may be executed whenever the time calibration condition is satisfied while the camera is capturing images.
S404, when the image sensor captures a frame of video image, current time information is acquired, the time information being accurate to the millisecond.
S405, a data packet including the video image data and the time information is generated.
The above steps S404 and S405 can refer to the related description of the previous embodiment, and are not described herein again.
In the embodiment of the present application, because the cameras in the multi-camera synchronization system continually perform time calibration with the NTP server, the time of every camera is synchronized. Each time a camera captures a frame of video image, the current time information it determines is calibrated time information, so the time information corresponding to the video image accurately reflects the acquisition time of that image. The video images captured by each camera at the same moment can therefore be accurately determined directly from the time information carried in the data packets the cameras generate.
It can be understood that the camera may also detect whether its time calibration has failed; for example, calibration fails because the time calibration request was not sent when the calibration moment was reached, or because the request was sent but no response was received from the NTP server. If calibration fails, the camera may also record the failure.
It can be understood that, depending on the actual application scenario, the multi-camera synchronization system may further include: video monitoring and analyzing equipment or video storage equipment and the like. Correspondingly, the camera can send the generated data packet of each video image to the video monitoring and analyzing device or the video storage device and the like. Of course, the camera may also send information of time calibration failure to the video monitoring and analyzing device or the video storage device.
The application also provides an image processing device corresponding to the image processing method.
As shown in fig. 5, which shows a schematic diagram of a composition structure of an embodiment of an image processing apparatus according to the present application, the apparatus of the present embodiment may be applied to cameras in a multi-camera synchronization system, where the multi-camera synchronization system includes at least two cameras, and the apparatus includes:
a video collecting unit 501, configured to collect a video stream through an image sensor;
a time obtaining unit 502, configured to obtain current time information when the image sensor collects a frame of video image, where the time information is time information accurate to milliseconds;
a data generating unit 503, configured to generate a data packet including the data of the video image and the time information.
Optionally, the time information is time after the camera and the time calibration device are calibrated.
In one possible implementation, the apparatus may further include:
a calibration request unit, configured to send a time calibration request to the time calibration device when a time calibration time is reached;
and the time calibration unit is used for obtaining the calibration time returned by the time calibration equipment in response to the time calibration request and calibrating the clock time in the camera based on the calibration time.
In another possible implementation manner, the data generating unit includes:
a data transmission unit, configured to transmit the video image and the time information to an encoder;
and a data encoding unit, configured to use, through the encoder, the time information as a timestamp associated with the video image, and to encode the timestamp together with the video image to obtain an encoded data packet, where the data packet includes at least the data of the video image and the timestamp.
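The encoder-side association of timestamp and image can be illustrated with a minimal serialization scheme. The wire format below (an 8-byte big-endian millisecond timestamp header followed by the image bytes) is our own assumption for illustration only; a real camera would carry the timestamp in the codec's or container's own metadata fields (for example an SEI message or a container timestamp).

```python
import struct


def encode_packet(frame_data: bytes, timestamp_ms: int) -> bytes:
    # Hypothetical layout: 8-byte big-endian millisecond timestamp,
    # then the encoded image data. The receiver can recover both fields.
    header = struct.pack(">Q", timestamp_ms)
    return header + frame_data


def decode_packet(packet: bytes):
    # Split the packet back into (timestamp_ms, frame_data).
    (timestamp_ms,) = struct.unpack(">Q", packet[:8])
    return timestamp_ms, packet[8:]
```

Whatever the concrete container, the point is the same: the timestamp travels inside the same data packet as the frame it describes, so downstream devices never need to re-associate the two.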
Since the apparatus embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant details. The apparatus embodiments described above are merely illustrative: units described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network nodes. Some or all of the modules may be selected according to actual needs to achieve the purpose of this embodiment. Those of ordinary skill in the art can understand and implement the embodiments without inventive effort.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other specific forms without departing from the spirit or scope of the present application. The described embodiments are merely exemplary and should not be taken as limiting. For example, the division into units or sub-units is only one kind of logical function division; in actual implementation there may be other division manners, for example, multiple units or sub-units may be combined. In addition, various elements or components may be combined or integrated into another system, and some features may be omitted or not implemented.
Likewise, the described systems and methods, and the illustrated embodiments, may be combined or integrated with other systems, modules, techniques, or methods without departing from the scope of the application. The mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The foregoing describes embodiments of the present invention; those skilled in the art may make various modifications and improvements without departing from the spirit of the invention.

Claims (10)

1. An image processing method, applied to a camera in a multi-camera synchronization system, wherein the multi-camera synchronization system comprises at least two cameras, the method comprising:
collecting a video stream through an image sensor;
when the image sensor captures a frame of video image, obtaining current time information, wherein the time information is accurate to the millisecond; and
generating a data packet containing the data of the video image and the time information.
2. The method of claim 1, wherein the time information is the time after the clock of the camera has been calibrated with a time calibration device.
3. The method of claim 2, further comprising:
when a scheduled time calibration moment is reached, sending a time calibration request to the time calibration device; and
obtaining the calibration time returned by the time calibration device in response to the time calibration request, and calibrating the clock of the camera based on the calibration time.
4. The method according to claim 2 or 3, wherein the time calibration device is a network time protocol server connected to the camera via a switch.
5. The method of claim 1, wherein generating the data packet containing the data of the video image and the time information comprises:
transmitting the video image and the time information to an encoder; and
using, through the encoder, the time information as a timestamp associated with the video image, and encoding the timestamp together with the video image to obtain an encoded data packet, wherein the data packet comprises at least the data of the video image and the timestamp.
6. An image processing apparatus, applied to a camera in a multi-camera synchronization system, wherein the multi-camera synchronization system comprises at least two cameras, the apparatus comprising:
a video collecting unit, configured to collect a video stream through an image sensor;
a time obtaining unit, configured to obtain current time information when the image sensor captures a frame of video image, wherein the time information is accurate to the millisecond; and
a data generating unit, configured to generate a data packet containing the data of the video image and the time information.
7. The apparatus of claim 6, wherein the time information is the time after the clock of the camera has been calibrated with a time calibration device.
8. The apparatus of claim 7, further comprising:
a calibration request unit, configured to send a time calibration request to the time calibration device when a scheduled calibration time is reached; and
a time calibration unit, configured to obtain the calibration time returned by the time calibration device in response to the time calibration request, and to calibrate the clock of the camera based on the calibration time.
9. A multi-camera synchronization system, comprising:
at least two cameras;
wherein the camera is configured to collect a video stream through an image sensor; obtain current time information when the image sensor captures a frame of video image, wherein the time information is accurate to the millisecond; and generate a data packet containing the data of the video image and the time information.
10. The system of claim 9, further comprising: a switch and a network time protocol server;
the camera is connected with the network time protocol server through the switch;
the camera is further configured to send a time calibration request to the network time protocol server when a scheduled calibration time is reached;
the network time protocol server is configured to return a calibration time to the camera in response to the time calibration request; and
the camera is further configured to calibrate its internal clock based on the calibration time.
CN202010161305.6A 2020-03-10 2020-03-10 Image processing method and device and multi-camera synchronization system Pending CN111277804A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010161305.6A CN111277804A (en) 2020-03-10 2020-03-10 Image processing method and device and multi-camera synchronization system


Publications (1)

Publication Number Publication Date
CN111277804A true CN111277804A (en) 2020-06-12

Family

ID=71002513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010161305.6A Pending CN111277804A (en) 2020-03-10 2020-03-10 Image processing method and device and multi-camera synchronization system

Country Status (1)

Country Link
CN (1) CN111277804A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113890959A (en) * 2021-09-10 2022-01-04 鹏城实验室 Multi-mode image synchronous acquisition system and method
CN114401378A (en) * 2022-03-25 2022-04-26 北京壹体科技有限公司 Multi-segment video automatic splicing method, system and medium for track and field projects
CN116545572A (en) * 2023-06-30 2023-08-04 苏州齐思智行汽车系统有限公司 Original image data acquisition system based on FPGA

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101014136A (en) * 2007-02-05 2007-08-08 北京大学 Time synchronizing method and system for multi-view video collection
CN103957344A (en) * 2014-04-28 2014-07-30 广州杰赛科技股份有限公司 Video synchronization method and system for multiple camera devices
CN105681632A (en) * 2015-12-31 2016-06-15 深圳市华途数字技术有限公司 Multi-head camera and frame synchronization method thereof
CN105991992A (en) * 2016-06-21 2016-10-05 浩云科技股份有限公司 Whole-space synchronous monitoring camera system
CN106131437A (en) * 2016-08-25 2016-11-16 武汉烽火众智数字技术有限责任公司 A kind of Multi net voting video camera method for synchronizing time and system
CN107613159A (en) * 2017-10-12 2018-01-19 北京工业职业技术学院 Image temporal calibration method and system
CN108449552A (en) * 2018-03-07 2018-08-24 北京理工大学 Tag image acquires the method and system at moment
CN109495760A (en) * 2018-12-25 2019-03-19 虎扑(上海)文化传播股份有限公司 A kind of method of multiple groups camera live broadcasting
CN110719496A (en) * 2018-07-11 2020-01-21 杭州海康威视数字技术股份有限公司 Multi-path code stream packaging and playing method, device and system
US20200068176A1 (en) * 2016-04-21 2020-02-27 Probable Cause Solutions LLC System and method for synchronizing camera footage from a plurality of cameras in canvassing a scene


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200612