CN117336418A - Video time calibration method, device, equipment and storage medium - Google Patents

Video time calibration method, device, equipment and storage medium

Info

Publication number
CN117336418A
CN117336418A
Authority
CN
China
Prior art keywords
time
calibrated
video
video segment
current
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311051851.4A
Other languages
Chinese (zh)
Inventor
豆红雷 (Dou Honglei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Huacheng Software Technology Co Ltd
Original Assignee
Hangzhou Huacheng Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Huacheng Software Technology Co Ltd filed Critical Hangzhou Huacheng Software Technology Co Ltd
Priority to CN202311051851.4A priority Critical patent/CN117336418A/en
Publication of CN117336418A publication Critical patent/CN117336418A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/04Synchronising
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording

Abstract

The application discloses a video time calibration method, apparatus, device, and storage medium. The video time calibration method includes: determining the recording start time of a video segment to be calibrated based on the acquired power-down time of the previous video segment and the post-power-up start time of the video segment to be calibrated; if a timing instruction is received during recording of the video segment to be calibrated, parsing the timing instruction to obtain the current calibration time; if no timing instruction is received during recording, performing time analysis on the video content of the video segment to be calibrated to obtain the current analysis time; and calibrating the recording start time based on the current calibration time or the current analysis time to obtain the calibrated video time. With this scheme, the video time can be calibrated.

Description

Video time calibration method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a video time calibration method, apparatus, device, and storage medium.
Background
Most current image acquisition devices are equipped with an RTC (Real-Time Clock) module, which keeps time and supports time calibration while the device is powered off, or they perform time calibration through an NTP (Network Time Protocol) timing server on the network.
However, when both power and network are lost, an image acquisition device without an RTC module usually resets to a certain default time after restarting; if the device does not reconnect to the network in time, the video time becomes disordered and recordings carry the wrong time.
Therefore, a video time calibration method is needed to calibrate the time of recorded video.
Disclosure of Invention
The present application provides at least a video time calibration method, apparatus, device, and computer-readable storage medium.
A first aspect of the present application provides a video time calibration method, including: determining the recording start time of a video segment to be calibrated based on the acquired power-down time of the previous video segment and the post-power-up start time of the video segment to be calibrated; if a timing instruction is received during recording of the video segment to be calibrated, parsing the timing instruction to obtain the current calibration time; if no timing instruction is received during recording, performing time analysis on the video content of the video segment to be calibrated to obtain the current analysis time; and calibrating the recording start time based on the current calibration time or the current analysis time to obtain the calibrated video time.
In an embodiment, the step of performing time analysis on the video content of the video segment to be calibrated to obtain the current analysis time includes: performing content analysis on the video content to obtain a video event corresponding to the video content; matching the video event against preset event templates to obtain the event time of the template that matches the video event; and taking that event time as the current analysis time of the video segment to be calibrated.
In an embodiment, the step of calibrating the recording start time based on the current calibration time or the current analysis time to obtain the calibrated video time includes: subtracting the acquired running time of the video segment to be calibrated from the current calibration time to obtain the current start time; and rewriting the recording start time with the current start time to obtain the calibrated video time, where the calibrated video time includes the calibrated recording start time.
In an embodiment, after the step of obtaining the calibrated video time, the method further includes: if a play instruction for the video segment to be calibrated is received, overlaying, during playback, the calibrated video time (from the calibrated recording start time up to the current calibration time) onto the video segment to be calibrated for display; and writing the time after the current calibration time into the video segments following the video segment to be calibrated for display.
In an embodiment, the step of calibrating the recording start time based on the current calibration time or the current analysis time to obtain the calibrated video time includes: subtracting the acquired running time of the video segment to be calibrated from the current analysis time to obtain an estimated start time; and rewriting the recording start time with the estimated start time to obtain the calibrated video time, where the calibrated video time includes the calibrated recording start time.
In an embodiment, after the step of obtaining the calibrated recording start time, the method further includes: if a play instruction for the video segment to be calibrated is received, overlaying, during playback, the calibrated video time (from the calibrated recording start time up to the current analysis time) onto the video segment to be calibrated for display.
In an embodiment, after the step of receiving the play instruction for the video segment to be calibrated, the method further includes: if a time-setting instruction for the video segment to be calibrated is received, displaying, during playback, both the time corresponding to the time-setting instruction and the calibrated video time (from the calibrated recording start time up to the current analysis time) overlaid on the video content in different display modes, where the different display modes include at least one of different display fonts or different display colors.
A second aspect of the present application provides a video time calibration apparatus, including: a time determination module, configured to determine the recording start time of a video segment to be calibrated based on the acquired power-down time of the previous video segment and the post-power-up start time of the video segment to be calibrated; an instruction parsing module, configured to parse a timing instruction to obtain the current calibration time if the timing instruction is received during recording of the video segment to be calibrated; a video analysis module, configured to perform time analysis on the video content of the video segment to be calibrated to obtain the current analysis time if no timing instruction is received during recording; and a time calibration module, configured to calibrate the recording start time based on the current calibration time or the current analysis time to obtain the calibrated video time.
A third aspect of the present application provides an electronic device, including a memory and a processor, where the processor is configured to execute program instructions stored in the memory to implement the video time calibration method described above.
A fourth aspect of the present application provides a computer-readable storage medium having program instructions stored thereon which, when executed by a processor, implement the video time calibration method described above.
With the above scheme, the recording start time of the video segment to be calibrated is determined from the acquired power-down time of the previous video segment and the post-power-up start time of the video segment to be calibrated, which fixes the time relationship between the two segments. If a timing instruction is received during recording, the segment is time-calibrated using the current calibration time parsed from the instruction; if no timing instruction is received, the current analysis time is obtained by analyzing the video content, and the segment is time-calibrated based on it. Video time calibration is therefore achieved both when a timing instruction is received and when it is not.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and, together with the description, serve to explain the technical aspects of the application.
FIG. 1 is a flow chart of an exemplary embodiment of a video time calibration method of the present application;
FIG. 2 is a schematic view of an application scenario in the video time calibration method of the present application;
FIG. 3 is a simplified schematic diagram of a video summary of a video segment in the video time alignment method of the present application;
FIG. 4 is a schematic diagram showing the effect of time display in the video time calibration method of the present application;
FIG. 5 is a block diagram of a video time alignment apparatus shown in an exemplary embodiment of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of an electronic device of the present application;
FIG. 7 is a schematic diagram of an embodiment of a computer-readable storage medium of the present application.
Detailed Description
The following describes the embodiments of the present application in detail with reference to the drawings.
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, interfaces, techniques, etc., in order to provide a thorough understanding of the present application.
The term "and/or" herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist together, or B exists alone. The character "/" herein generally indicates an "or" relationship between the associated objects. Further, "a plurality" herein means two or more. The term "at least one" herein means any one of a plurality or any combination of at least two of a plurality; for example, "including at least one of A, B, C" may mean including any one or more elements selected from the group consisting of A, B and C.
Currently, most computer devices are equipped with an RTC battery, which powers the CMOS chip in the south bridge chipset while the computer is turned off so that real time is continuously kept and stored. When the RTC battery fails, the CMOS chip loses power, the system time is lost and becomes disordered, and the normal running of computer programs is affected; the RTC battery is therefore an extremely important computer component. The RTC battery is a button cell and may be rechargeable or non-rechargeable: a non-rechargeable RTC battery must be replaced periodically, while a rechargeable RTC battery has a shorter service life after repeated charging. Either way, corresponding hardware circuits and structures must be provided, which raises manufacturing cost and makes the structure difficult to optimize.
At present, the mature technical solution is to perform timing based on an NTP timing server or a cloud server on the network: the device periodically obtains the accurate time from a time server and synchronizes the calibration time carried in the timing instruction issued by the server to the local device. This approach, however, requires the device to be on a public network so that it can communicate with the remote server.
However, in the field of smart-home video monitoring, if the RTC battery of the image acquisition device fails, or the device has no RTC module, and the device's network fails or it is not connected to an external network, the time becomes abnormal. For example, the image acquisition device may be reset to a default time (e.g., 1970/01/01 00:00:00) after a power-off restart, and if the device does not reconnect to the network in time, the video time becomes disordered.
Referring to fig. 1, fig. 1 is a flow chart illustrating an exemplary embodiment of a video time calibration method of the present application. Specifically, the method may include the steps of:
step S110, determining the recording starting time of the video segment to be calibrated based on the acquired power-down time corresponding to the previous video segment of the video segment to be calibrated and the power-up starting time corresponding to the video segment to be calibrated.
It should be noted that the application scenario of this method embodiment is one in which one or more video segments were recorded before the video segment to be calibrated. Referring to fig. 2, a schematic diagram of an application scenario of the video time calibration method of the present application, the segment whose time is to be calibrated is the video segment to be calibrated, and one or more recorded video segments exist before it. A recording process here refers to the interval from device power-up to device power-down: power-up corresponds to the clock function of the image acquisition device being available, and power-down corresponds to it being unavailable. When the image acquisition device is powered down and then powered up (restarted), the video segment to be calibrated and its previous video segment in fig. 2 are formed; for convenience of description, the previous video segment of the video segment to be calibrated is simply called the previous video segment. It will be appreciated that the previous video segment may or may not have been time-calibrated.
Specifically, referring to fig. 2, the recording start time T0 of the video segment to be calibrated is determined from the power-down time of the previous video segment and the post-power-up start time T1 of the current video segment to be calibrated, so that the video segment to be calibrated is ordered after the previous video segment and temporal confusion between segments is avoided. The post-power-up start time refers to the duration from the moment the image acquisition device is powered up to the moment the device system finishes starting and begins recording video; for example, the system may take 1 minute to finish booting and start recording.
It can be understood that the time between the power-down of the previous video segment and the power-up of the video segment to be calibrated is temporarily unknowable. If the device is powered up immediately after power-down, the power-down time equals the power-up time; if it is powered up after some interval, the exact power-up time cannot be confirmed. Nevertheless, by adding the post-power-up start time of the system to the power-down time, the power-down time of the previous video segment is guaranteed to be earlier than the recording start time of the video segment to be calibrated. The recording start time of the video segment to be calibrated can therefore be regarded as a preset virtual time used to preserve the temporal order between the previous video segment and the video segment to be calibrated.
It should be noted that, referring to fig. 2, if the previous video segment was calibrated by the time in a timing instruction, its time is the accurate real time and its power-down time is a real end time; if it was not calibrated by a timing instruction, its time is an estimated virtual time and its power-down time is a virtual end time. Adding the post-power-up start time of the system to the real or virtual end time yields the recording start time of the video segment to be calibrated.
In addition, during recording of the video segment to be calibrated, if the segment is time-calibrated, its real end time can be determined from the calibrated time; if it is not time-calibrated, its virtual end time is determined from its recording start time and the recording duration counted by the system clock. The recording start time of the next video segment is then estimated from this real or virtual end time. The same scheme determines the real or virtual end time of the previous video segment, so that the time sequences of recorded segments follow one another in order and the problem of time disorder is avoided.
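The chain of estimates above (a real or virtual power-down time plus the system's boot duration gives a virtual recording start time, which plus the recorded duration gives the next virtual end time) can be sketched as follows. The function names and the 1-minute boot duration are illustrative assumptions, not specified by the present application.

```python
from datetime import datetime, timedelta

def estimate_recording_start(prev_power_down: datetime,
                             boot_duration: timedelta) -> datetime:
    """Place the new segment's start strictly after the previous
    segment's (real or virtual) power-down time."""
    return prev_power_down + boot_duration

def virtual_end_time(recording_start: datetime,
                     recorded_seconds: int) -> datetime:
    """Virtual end time of an uncalibrated segment: start time plus
    the recording duration counted by the system clock."""
    return recording_start + timedelta(seconds=recorded_seconds)

# Example: the previous segment ended at a (possibly virtual) time and
# the system takes 1 minute to boot and begin recording.
prev_end = datetime(1970, 1, 1, 0, 10, 0)
t0 = estimate_recording_start(prev_end, timedelta(minutes=1))
t3 = virtual_end_time(t0, 300)  # 5 minutes of recording
```

Chaining these two calls across segments keeps every segment ordered after its predecessor even when no absolute time reference is available.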
Step S120, if a timing instruction is received during recording of the video segment to be calibrated, the timing instruction is parsed to obtain the current calibration time.
The timing instruction may be an instruction issued by a time server after the network of the image acquisition device is restored (i.e., network timing), a timing instruction sent by another device terminal, or a timing instruction that directly time-calibrates the video segment to be calibrated (i.e., active timing).
Specifically, the current calibration time corresponding to the timing instruction is obtained by parsing the instruction. In general, the current calibration time is the accurate real time, i.e., it determines the time of the video segment to be calibrated at the current moment.
For example, referring to fig. 3, a simplified schematic diagram of the video summary of a video segment in the video time calibration method of the present application, an index field is added to the original video summary to record the time parameters of the segment: the parameter time indicates whether the segment has been calibrated, T0 is the recording start time of the segment, t is the running time (recording time) after the system powers up and starts recording, T1 is the current analysis time of the segment, and T2 is the time set by the user in a segment not calibrated by a timing instruction.
Further, after a timing instruction is received, the time parameter in the video summary of the video segment to be calibrated is set to 1, indicating that the segment has been accurately time-calibrated; the time parameter of a segment for which no timing-instruction calibration was received remains 0. The counter t starts when video recording begins after power-up and stops updating when a timing instruction is received or the device powers down again. If data is stored with a block storage method, t is updated once per second until the field stops updating when the network is restored, after which subsequent time is continuously updated through network timing. When blocks are split, t is handled correspondingly: the old storage block records how long the video stored on that block lasted, and t is updated every second on the new storage block, so that recording time remains continuous across a segment even if it is not calibrated before the device next powers down.
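The index fields and the per-second counter described above can be sketched as a small record. The field names mirror the parameters of fig. 3 (the time flag, T0, t, T1, T2), but the concrete types, the epoch-seconds representation, and the helper functions are assumed for illustration.

```python
from dataclasses import dataclass

@dataclass
class SegmentIndex:
    time_flag: int = 0   # 1 once an accurate timing instruction was applied
    t0: float = 0.0      # recording start time (epoch seconds, may be virtual)
    t: float = 0.0       # running time since recording started
    t1: float = 0.0      # current analysis time, if derived from content
    t2: float = 0.0      # user-set time for an uncalibrated segment

def tick(index: SegmentIndex) -> None:
    """Advance the per-second running-time counter; no longer called
    once a timing instruction arrives or the device powers down."""
    index.t += 1.0

def apply_timing_instruction(index: SegmentIndex, cal_time: float) -> None:
    """Mark the segment as accurately calibrated and rewrite its start
    time so that start time + running time equals the calibrated time."""
    index.time_flag = 1
    index.t0 = cal_time - index.t
```

When storage is split into blocks, a fresh `SegmentIndex` per block with the same update rule would keep recording time continuous across blocks, as the text describes.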
Step S130, if no timing instruction is received during recording of the video segment to be calibrated, time analysis is performed based on the video content of the segment to obtain the current analysis time.
If no timing instruction is received during recording of the video segment to be calibrated (for example, the network of the image acquisition device is not restored during recording), the video content of the segment is analyzed and the time within the segment is judged from that content to obtain the current analysis time, where the video content includes but is not limited to the audio, pictures, and event-trigger instructions recorded during video recording.
Taking audio as an example, if the video segment to be calibrated contains voice information related to time (such as a clock's voice broadcast or a timed alarm at a certain moment), the current analysis time is generated from that audio information to calibrate the segment's time. Taking pictures as an example, image acquisition devices in the monitoring field are usually equipped with an infrared lamp: when the ambient light falls below a certain threshold, the device switches to the infrared lamp for supplementary lighting. A typical scene is dusk, when the ambient light changes from strong to weak and the device turns on the infrared lamp, so the currently recorded segment changes from a color image to a black-and-white image; from this the current analysis time of the segment can be estimated. Alternatively, clock image information captured in the segment can be analyzed to obtain the current analysis time. Taking an event-trigger instruction as an example, the image acquisition device may be associated with an attendance machine: if the attendance machine is triggered by a card-punching attendance event, a corresponding event-trigger instruction is generated, and the time of the attendance event is taken as the current analysis time to calibrate the segment.
In addition, during recording of the video segment to be calibrated, if a preset calibration waiting threshold (for example, 5 minutes) has elapsed since the power-up time or the recording start time and no timing instruction has been received, time analysis is performed on the video content of the segment to obtain the current analysis time.
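The fallback decision (wait for a timing instruction up to the calibration waiting threshold, then fall back to content analysis) might look like the following sketch; the 5-minute default and the return labels are assumptions for illustration.

```python
def choose_time_source(received_timing_instruction: bool,
                       seconds_since_start: float,
                       wait_threshold: float = 300.0) -> str:
    """Decide where the segment's time should come from at this moment."""
    if received_timing_instruction:
        return "calibration"       # parse the timing instruction (step S120)
    if seconds_since_start >= wait_threshold:
        return "content_analysis"  # analyse video content (step S130)
    return "wait"                  # keep waiting for a timing instruction
```

The threshold trades off calibration accuracy (a timing instruction is exact) against how long a segment may run with only virtual time.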
Step S140, the recording start time is calibrated based on the current calibration time or the current analysis time to obtain the calibrated video time.
Following the above steps, once the current calibration time or the current analysis time is obtained, calibration is performed according to the relationships among the preset time parameters to obtain the calibrated video time.
It can be understood that the current calibration time is obtained through network timing or active timing, and its accuracy is higher than that of the current analysis time obtained by analyzing video content. Therefore, when both the current calibration time and the current analysis time exist for the same video segment, the current calibration time is taken as the reference for time calibration to obtain the calibrated video time.
For example, referring to fig. 2, if the network connection is restored while the video segment to be calibrated is recording, a timing instruction carrying the current calibration time is received and the segment's current time is thereby calibrated. Based on the recording time t (the running time after the device starts) elapsed since the recording start time T0, the recording start time is rewritten as the current calibration time minus t (T0 = current calibration time - t), yielding the calibrated recording start time. The time after the current calibration time may be updated in real time through network timing, or incremented from the current calibration time by the subsequent recording time (device running time); this is not limited here. If the segment is powered down and disconnected after the current calibration time, the power-down time of the segment (real end time T3) is obtained.
As can be seen, the present application determines the recording start time of the video segment to be calibrated from the power-down time of the previous video segment and the post-power-up start time of the video segment to be calibrated, thereby determining the time relationship between the two segments. If a timing instruction is received during recording, the segment is time-calibrated using the current calibration time parsed from the instruction; if no timing instruction is received, the current analysis time is obtained by analyzing the video content and the segment is time-calibrated based on it. Video time calibration is thus achieved both when a timing instruction is received and when it is not.
Based on the above embodiment, this embodiment of the application describes the step of performing time analysis on the video content of the video segment to be calibrated to obtain the current analysis time. Specifically, the method of this embodiment includes the following steps:
content analysis is carried out on the video content of the video segment to be calibrated, and a video event corresponding to the video content is obtained; matching the video event with a preset event template to obtain event time corresponding to the event template matched with the video event; and taking the event time as the current analysis time of the video segment to be calibrated.
Each preset event template has a corresponding event time, which may be preset or determined from one or more historical video segments with accurate time recorded before the video segment to be calibrated. For example, the time at which the infrared lamp turns on in response to falling ambient brightness may be set to 7:00 pm, and the time at which it turns off in response to rising ambient brightness to 7:00 am; thus, if the infrared lamp turns on or off due to a change in ambient brightness in the video segment to be calibrated, the corresponding current analysis time is obtained. Moreover, if an infrared-lamp on/off event occurred in a historical segment with accurate time, the event time of the template can be dynamically updated with the time of that event in the historical segment, making the current analysis time more accurate.
It should be noted that the video events analyzed from the video content include, but are not limited to, audio, picture content, and preset trigger commands; the above method of setting a template's event time applies equally to audio events, picture events, and trigger commands.
It can be appreciated that a video segment to be calibrated may contain several pieces of analyzable video content, so that time analysis of the segment yields multiple video events and multiple corresponding candidate analysis times. When multiple candidate analysis times exist, the current analysis time is selected from them based on the weight value of the video content corresponding to each candidate. Each event template is preset with a weight value that reflects the time accuracy of that template; during analysis, a video event in the content is matched against the preset templates, and the weight of the matching template is taken as the weight of the video event, i.e. of the video content. In this way, the current analysis time chosen from the candidates of the video segment to be calibrated is the comparatively accurate one.
For example, if the weight value of the infrared-lamp switching event is higher than that of the alarm event, and both events exist in the video segment to be calibrated, the candidate analysis time of the infrared-lamp switching event is taken as the current analysis time of the segment.
In addition, this embodiment may combine video events across several pieces of video content to determine a weight value. For example, a preset event template may specify that an alarm event can occur within one hour after the infrared lamp is turned off; if an alarm event is indeed detected within one hour after the lamp-off event in the video content of the segment to be calibrated, the weight of the alarm event is superimposed on the weight of the lamp-off event, so that the lamp-off event carries a higher weight and the time corresponding to it is treated as more accurate.
It can be seen that, in this embodiment, by analyzing the video content of the video segment to be calibrated, the video time of the segment is calibrated even when neither network time calibration nor an active timing instruction is available, and the accuracy of the current analysis time is improved.
On the basis of the above embodiments, the embodiments of the present application describe the step of performing calibration processing on the recording start time based on the current calibration time or the current analysis time to obtain the calibrated video time. Specifically, the method of the embodiment comprises the following steps:
calculating the difference between the current calibration time and the running time corresponding to the acquired video segment to be calibrated to obtain the current starting time; and rewriting the recording start time based on the current start time to obtain calibrated video time, wherein the calibrated video time comprises the calibrated recording start time.
In connection with the foregoing embodiment, reference may be made to fig. 2: let the current calibration time be t0 and the acquired running time of the video segment to be calibrated be t; the current start time is then t0 - t. The recording start time T0 of the segment is rewritten based on this current start time, i.e. T0 = t0 - t, yielding the calibrated recording start time. Correspondingly, the video time between the calibrated recording start time of the segment and the current calibration time is calibrated synchronously, giving the calibrated video time.
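The rewrite T0 = t0 - t, together with the synchronous shift of the intermediate video times, amounts to the following. Representing per-frame times as offsets from the start is an assumption made for the sketch.

```python
def rewrite_times(calibration_time: float, running_time: float,
                  frame_offsets: list[float]) -> tuple[float, list[float]]:
    """Rewrite the recording start time as T0 = t0 - t and place every frame
    timestamp (stored as an offset from the start) onto the calibrated axis."""
    start = calibration_time - running_time          # T0 = t0 - t
    return start, [start + off for off in frame_offsets]
```

Note the invariant: the frame whose offset equals the running time lands exactly on the calibration time, which is what makes the whole span "synchronously calibrated".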
On the basis of the above embodiments, the embodiments of the present application describe steps after obtaining the calibrated video time. Specifically, the method of the embodiment comprises the following steps:
if a playing instruction for the video segment to be calibrated is received, then in the process of playing the segment, the calibrated video time lying between the calibrated recording start time and the current calibration time is correspondingly superimposed onto the segment for display; and the time after the current calibration time is correspondingly written into the video segments following the segment to be calibrated for display.
The play instruction refers to an instruction to play the video segment.
Superimposition here refers to, for example, OSD (On-Screen Display) technology, which displays information on the screen of a television, monitor, or similar device without changing the original video data.
Specifically, for example, when a user queries and plays the video segment to be calibrated, time index data is extracted from the segment's data index and superimposed in the video picture by OSD. Because of the power-down and network loss, the video time of the segment between the recording start time and the current calibration time is uncertain; therefore, when the video data between the calibrated recording start time T0 and the current calibration time t0 is played, the times from T0 to t0 are correspondingly superimposed into the picture of the segment by OSD for display. The video time of the data after t0 has been calibrated and confirmed, so it can be written directly into the video data recorded after t0 without OSD superimposition, and the corresponding time is displayed directly when that data is played.
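The per-frame decision between OSD superimposition and in-stream times can be expressed as a small rule. The function name and string labels are illustrative assumptions.

```python
def display_plan(frame_time: float, start: float, calibration_time: float) -> str:
    """Decide, per frame, how its timestamp is presented on playback.

    Frames between the calibrated start and the calibration time had uncertain
    times, so their timestamps are OSD-overlaid at play time; frames after the
    calibration time carry confirmed times written into the video data, so no
    overlay is needed.
    """
    if start <= frame_time <= calibration_time:
        return "osd_overlay"        # uncertain span: superimpose at play time
    return "written_in_stream"      # confirmed span: time already in the data
```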
It can thus be understood that, while no play instruction has been received, the uncertain video time is neither written into nor superimposed on the video picture, which avoids reducing data-processing efficiency and consuming extra resources.
On the basis of the above embodiments, the embodiments of the present application describe the step of performing calibration processing on the recording start time based on the current calibration time or the current analysis time to obtain the calibrated video time. Specifically, the method of the embodiment comprises the following steps:
calculating the difference between the current analysis time and the running time corresponding to the acquired video segment to be calibrated to obtain estimated starting time; and rewriting the recording start time based on the estimated start time to obtain a calibrated video time, wherein the calibrated video time comprises the calibrated recording start time.
In combination with the foregoing embodiments, if no timing instruction is received during recording of the video segment to be calibrated and no time calibration has been performed, the video content of the segment is analyzed to obtain the current analysis time t1. The difference between t1 and the acquired running time t of the segment gives the estimated start time t1 - t; the recording start time is rewritten based on this estimate, yielding the calibrated recording start time T0 = t1 - t. The recording start time calibrated from the current analysis time is more accurate than the recording start time before calibration.
Similarly, the video time between the calibrated recording start time of the segment and the current analysis time is correspondingly calibrated in synchrony, giving the calibrated video time. In addition, the time after the current analysis time is extended from the current analysis time using the subsequent recording time or the device running time; for example, if the segment is powered down and disconnected after the current analysis time t1, the power-down time of the segment (a virtual end time t2) is obtained in this way. Here, the recording time refers to the acquisition (recording) time of the image acquisition device, while the running time refers to the time the device has run after completing power-up. If the device remains in the recording state throughout its operation, the device running time is generally equal to the recording time; if the device pauses or stops recording during operation, the running time is generally greater than the recording time.
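The backward estimate T0 = t1 - t and the forward extension to the virtual end time t2 can be sketched together. Using the device running time as the extension basis (rather than the recording time) is one of the two options the text allows; the function name is an assumption.

```python
def estimate_times(analysis_time: float, running_time_at_analysis: float,
                   running_time_at_power_down: float) -> tuple[float, float]:
    """Estimate the recording start (T0 = t1 - t) and the virtual end time t2
    reached at power-down, extending forward from the analysis point by the
    additional device running time."""
    start = analysis_time - running_time_at_analysis
    virtual_end = analysis_time + (running_time_at_power_down
                                   - running_time_at_analysis)
    return start, virtual_end
```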
On the basis of the above embodiments, the embodiments of the present application describe steps after obtaining the calibrated recording start time. Specifically, the method of the embodiment comprises the following steps:
And if a playing instruction for the video segment to be calibrated is received, then in the process of playing the segment, the calibrated video time lying between the calibrated recording start time and the current analysis time is correspondingly superimposed onto the segment for display.
In the above embodiments, if the video segment to be calibrated receives no timing instruction for accurate calibration, then the current analysis time obtained from content analysis, and the recording start time calibrated from it, are still not guaranteed to be accurate. Therefore, after a playing instruction for the segment is received, the video time calibrated from the current analysis time is displayed by OSD superimposition over the video picture, rather than being written into the video data.
It should be noted that, if the video content of the segment to be calibrated cannot be analyzed, for example because the video is encrypted or the system lacks the analysis capability, the time data is extended from the acquired recording start time T0 according to the recording time or the device running time, and the times from T0 onwards are correspondingly superimposed into the picture of the segment by OSD.
On the basis of the above embodiment, the steps after receiving the play instruction of the video segment to be calibrated are described in the embodiment of the present application. Specifically, the method of the embodiment comprises the following steps:
if a time setting instruction for the video segment to be calibrated is received, then in the process of playing the segment, the time corresponding to the time setting instruction and the calibrated video time lying between the calibrated recording start time and the current analysis time are each superimposed onto the video content of the segment for display in different display modes, where the different display modes comprise at least one of different display fonts or different display colors.
The time setting instruction is an instruction by which the user actively sets the time of a video segment that has not been calibrated by a timing instruction (i.e. a segment to be calibrated whose time was obtained by time analysis). When such a segment is obtained, a prompt is correspondingly given to the user and a time-setting interface is provided, so that the user can actively set the time of the segment through that interface.
For example, if the current analysis time t1 and the current setting time t2 are both obtained, t1 and t2 are displayed in the picture of the video segment to be calibrated in different display modes. As shown in fig. 4, a schematic view of the time-display effect in the video time calibration method of the present application, t1 may be displayed in the display mode of OSD1 and t2 in the display mode of OSD2, where the different display modes differ in at least one of display font or display color, so that the two times can be distinguished when the picture of the segment is played.
It can be understood that if the current analysis time t1 cannot be obtained but the current setting time t2 can, only the OSD corresponding to t2 is superimposed in the video segment to be calibrated; if t1 is obtained but t2 is not, the OSD corresponding to t1 is superimposed. If the segment has not undergone network time calibration through a timing instruction, the user may set its time multiple times, i.e. set several current setting times t2 on the segment.
It should be further noted that the execution subject of the video time calibration method may be a video time calibration apparatus; for example, the method may be executed by a terminal device, a server, or other processing device, where the terminal device may be user equipment (UE), a computer, a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, etc. In some possible implementations, the video time calibration method may be implemented by a processor invoking computer-readable instructions stored in a memory.
Fig. 5 is a block diagram of a video time alignment apparatus according to an exemplary embodiment of the present application. As shown in fig. 5, the exemplary video time alignment apparatus 500 includes: a time determination module 510, an instruction parsing module 520, a video analysis module 530, and a time calibration module 540. Specifically:
the time determining module 510 is configured to determine a recording start time of the video segment to be calibrated based on the acquired power-down time corresponding to a previous video segment of the video segment to be calibrated and the power-up start time corresponding to the video segment to be calibrated.
The instruction parsing module 520 is configured to parse the timing instruction to obtain the current calibration time if the timing instruction is received during the recording process of the video segment to be calibrated.
The video analysis module 530 is configured to perform time analysis based on the video content of the video segment to be calibrated if the time calibration instruction is not received during the recording process of the video segment to be calibrated, so as to obtain the current analysis time.
The time calibration module 540 is configured to calibrate the recording start time based on the current calibration time or the current analysis time, and obtain a calibrated video time.
In the above video time calibration apparatus, the recording start time of the video segment to be calibrated is determined from the acquired power-down time of the previous video segment and the post-power-up start time of the segment, thereby establishing the time relationship between the two segments. If a timing instruction is received during recording of the segment, the current calibration time parsed from the instruction is used to calibrate its time; if no timing instruction is received, the current analysis time obtained by analyzing the segment's video content is used instead. Video time calibration is therefore achieved both when a timing instruction is received and when it is not.
The functions of each module may be referred to an embodiment of the video time calibration method, which is not described herein.
Referring to fig. 6, fig. 6 is a schematic structural diagram of an embodiment of an electronic device of the present application. The electronic device 600 comprises a memory 601 and a processor 602, the processor 602 being adapted to execute program instructions stored in the memory 601 to implement the steps of any of the video time calibration method embodiments described above. In one particular implementation scenario, the electronic device 600 may include, but is not limited to, mobile devices such as a notebook computer and a tablet computer; no limitation is imposed herein.
In particular, the processor 602 is used to control itself and the memory 601 to implement the steps in any of the video time calibration method embodiments described above. The processor 602 may also be referred to as a CPU (Central Processing Unit) and may be an integrated circuit chip having signal processing capabilities. The processor 602 may also be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. In addition, the processor 602 may be jointly implemented by a plurality of integrated circuit chips.
According to the above scheme, the recording start time of the video segment to be calibrated is determined from the acquired power-down time of the previous video segment and the post-power-up start time of the segment, thereby establishing the time relationship between the two segments. If a timing instruction is received during recording of the segment, the current calibration time parsed from the instruction is used to calibrate its time; if no timing instruction is received, the current analysis time obtained by analyzing the segment's video content is used instead. Video time calibration is therefore achieved both when a timing instruction is received and when it is not.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of a computer readable storage medium of the present application. The computer readable storage medium 710 stores program instructions 711 executable by the processor, the program instructions 711 for implementing the steps in any of the video time alignment method embodiments described above.
According to the above scheme, the recording start time of the video segment to be calibrated is determined from the acquired power-down time of the previous video segment and the post-power-up start time of the segment, thereby establishing the time relationship between the two segments. If a timing instruction is received during recording of the segment, the current calibration time parsed from the instruction is used to calibrate its time; if no timing instruction is received, the current analysis time obtained by analyzing the segment's video content is used instead. Video time calibration is therefore achieved both when a timing instruction is received and when it is not.
In some embodiments, functions or modules included in an apparatus provided by the embodiments of the present disclosure may be used to perform a method described in the foregoing method embodiments, and specific implementations thereof may refer to descriptions of the foregoing method embodiments, which are not repeated herein for brevity.
The foregoing descriptions of the various embodiments focus on their differences from one another; for the parts that are the same or similar, the embodiments may be referred to each other, and details are not repeated herein for brevity.
In the several embodiments provided in the present application, it should be understood that the disclosed methods and apparatus may be implemented in other manners. For example, the apparatus embodiments described above are merely illustrative, e.g., the division of modules or units is merely a logical functional division, and there may be additional divisions of actual implementation, e.g., units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed with each other may be an indirect coupling or communication connection via some interfaces, devices or units, which may be in electrical, mechanical, or other forms.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or as software functional units. The integrated units, if implemented as software functional units and sold or used as stand-alone products, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, including several instructions to cause a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to perform all or part of the steps of the methods of the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media capable of storing program code.

Claims (10)

1. A method of video time alignment, the method comprising:
determining the recording starting time of the video segment to be calibrated based on the acquired power-down time corresponding to the previous video segment of the video segment to be calibrated and the power-up starting time corresponding to the video segment to be calibrated;
if a timing instruction is received in the recording process of the video segment to be calibrated, analyzing the timing instruction to obtain the current calibration time;
if the timing instruction is not received in the recording process of the video segment to be calibrated, performing time analysis based on the video content of the video segment to be calibrated to obtain the current analysis time;
and calibrating the recording starting time based on the current calibration time or the current analysis time to obtain calibrated video time.
2. The method according to claim 1, wherein the step of performing time analysis based on the video content of the video segment to be calibrated to obtain the current analysis time includes:
content analysis is carried out on the video content of the video segment to be calibrated, and a video event corresponding to the video content is obtained;
matching the video event with a preset event template to obtain event time corresponding to the event template matched with the video event;
And taking the event time as the current analysis time of the video segment to be calibrated.
3. The method of claim 1, wherein the step of calibrating the recording start time based on the current calibration time or the current analysis time to obtain a calibrated video time comprises:
calculating the difference between the current calibration time and the acquired running time corresponding to the video segment to be calibrated to obtain the current starting time;
and rewriting the recording starting time based on the current starting time to obtain the calibrated video time, wherein the calibrated video time comprises the calibrated recording starting time.
4. A method according to claim 3, wherein after the step of deriving the calibrated video time, the method further comprises:
if a playing instruction for the video segment to be calibrated is received, correspondingly superimposing the calibrated video time lying between the calibrated recording start time and the current calibration time onto the video segment to be calibrated for display in the process of playing the video segment to be calibrated;
and correspondingly writing the time after the current calibration time into the video segments following the video segment to be calibrated for display.
5. The method of claim 1, wherein the step of calibrating the recording start time based on the current calibration time or the current analysis time to obtain a calibrated video time comprises:
calculating the difference between the current analysis time and the acquired running time corresponding to the video segment to be calibrated to obtain estimated starting time;
and rewriting the recording starting time based on the estimated starting time to obtain the calibrated video time, wherein the calibrated video time comprises the calibrated recording starting time.
6. The method of claim 5, wherein after the step of obtaining a calibrated recording start time, the method further comprises:
and if a playing instruction of the video segment to be calibrated is received, correspondingly overlapping the calibrated video time from the calibrated recording starting time to the calibrated video time in the current analysis time to the video segment to be calibrated for display in the process of playing the video segment to be calibrated.
7. The method of claim 6, wherein after the step of receiving a play instruction for the video segment to be calibrated, the method further comprises:
And if a time setting instruction for the video segment to be calibrated is received, respectively superimposing the time corresponding to the time setting instruction and the calibrated video time lying between the calibrated recording start time and the current analysis time onto the video content of the video segment to be calibrated for display in different display modes in the process of playing the video segment to be calibrated, wherein the different display modes comprise at least one of different display fonts or different display colors.
8. A video time alignment apparatus, comprising:
the time determining module is used for determining the recording starting time of the video segment to be calibrated based on the power-down time corresponding to the previous video segment of the obtained video segment to be calibrated and the starting time after power-up corresponding to the video segment to be calibrated;
the instruction analysis module is used for analyzing the timing instruction to obtain the current calibration time if the timing instruction is received in the recording process of the video segment to be calibrated;
the video analysis module is used for carrying out time analysis based on the video content of the video segment to be calibrated if the time calibrating instruction is not received in the recording process of the video segment to be calibrated, so as to obtain the current analysis time;
And the time calibration module is used for calibrating the recording starting time based on the current calibration time or the current analysis time to obtain calibrated video time.
9. An electronic device comprising a memory and a processor for executing program instructions stored in the memory to implement the method of any one of claims 1 to 7.
10. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the method of any of claims 1 to 7.
CN202311051851.4A 2023-08-18 2023-08-18 Video time calibration method, device, equipment and storage medium Pending CN117336418A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311051851.4A CN117336418A (en) 2023-08-18 2023-08-18 Video time calibration method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN117336418A true CN117336418A (en) 2024-01-02

Family

ID=89294072


Country Status (1)

Country Link
CN (1) CN117336418A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination