CN101931775A - Video recording method and device - Google Patents

Video recording method and device

Info

Publication number
CN101931775A
Authority
CN
China
Prior art keywords
frame data
audio
video
synchronous control
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2010102713607A
Other languages
Chinese (zh)
Inventor
代康
陈有鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
ZTE Corp
Original Assignee
ZTE Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corp filed Critical ZTE Corp
Priority to CN2010102713607A priority Critical patent/CN101931775A/en
Publication of CN101931775A publication Critical patent/CN101931775A/en
Priority to PCT/CN2011/076228 priority patent/WO2012028021A1/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/76 Television signal recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/79 Processing of colour television signals in connection with recording
    • H04N 9/7921 Processing of colour television signals in connection with recording for more than one processing mode
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 9/00 Details of colour television systems
    • H04N 9/79 Processing of colour television signals in connection with recording
    • H04N 9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N 9/82 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
    • H04N 9/8205 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only involving the multiplexing of an additional signal and the colour video signal

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a video recording method, which comprises the following steps: respectively encoding the acquired audio and video data, and respectively assigning synchronization control identifiers to the encoded audio frame data and video frame data; storing the audio frame data, the video frame data and the corresponding synchronization control identifiers in a cache, comparing the synchronization control identifiers of the current audio frame and video frame in the cache, and then saving the cached audio and video frame data according to a preset storage rule. The invention also discloses a video recording device. The method and the device can keep the audio and video data acquired during recording synchronized, thereby avoiding the loss of audio/video synchronization caused by variation in the frame rate of the image sensor or by system scheduling delay, and improving the user experience.

Description

Video recording method and device
Technical Field
The present invention relates to video recording technologies in the multimedia field, and in particular, to a video recording method and apparatus.
Background
With the development of electronic technology and software technology, the proportion of multimedia in daily life is increasing, and meanwhile, the requirements of people on multimedia experience are increasing.
Through video recording, people can conveniently capture moments of daily life in images and sound, so multimedia devices with a video recording function are increasingly favored by consumers. The ultimate use of a video file is playback, and if picture and sound in the recorded file are not synchronized, the user experience is greatly degraded.
Ideally, the time interval between successive frames of audio and video data collected during recording is constant: the video frame interval is determined by the configured frame rate of the image sensor, and the audio frame interval is determined by the configured sampling interval. In the actual recording process, however, a drop in the image sensor's frame rate or a delay in system scheduling makes the video frame interval non-constant, so that errors accumulate in the time statistics of the audio and video data, that is, the audio and video fall out of synchronization. Moreover, as recording time elapses, the desynchronization grows more serious.
Chinese patent application No. 200610041631.3, entitled "A video recording method", proposes recording audio and video data in segments and saving them selectively by comparing the durations of the recorded audio and video data, so as to achieve audio/video synchronization. However, because frame rate reduction of the image sensor and system scheduling delay are still present during recording, that method still accrues errors in the time statistics of the audio and video; if these errors are not handled, the audio and video fall out of synchronization as the recording time elapses.
Disclosure of Invention
In view of the above, the present invention provides a video recording method and apparatus, which can solve the problem in the prior art that audio/video data cannot be synchronized due to frame rate variation of an image sensor or system scheduling delay.
In order to achieve the purpose, the technical scheme of the invention is realized as follows:
the invention provides a video recording method, which comprises the following steps:
respectively encoding the collected audio and video data, and respectively distributing synchronous control identifications for the encoded audio frame data and video frame data;
storing the audio frame data, the video frame data and the corresponding synchronous control identification into a cache, comparing the synchronous control identification of the current audio frame and the video frame in the cache, and then storing the audio and video frame data in the cache according to a preset storage rule.
In the foregoing solution, before allocating synchronization control identifiers to the encoded audio frame data and video frame data, respectively, the method further includes:
and respectively calculating the time intervals of the initial video frame data and the initial audio frame data according to the preset frame rate and audio sampling interval of the image sensor.
In the above scheme, the time interval of the audio frame data is used as a reference, and the allocating of the synchronization control identifier for the encoded audio frame data and the encoded video frame data is specifically:
respectively counting the number of audio frames and the number of video frames which have the same time interval and are currently coded, then matching the audio frames and the number of video frames with a preset analysis rule, and correcting the time interval of the video frame data by adopting a preset correction formula after the matching is determined;
setting the counted audio frame number and video frame number as zero, then distributing the synchronous control mark for the current video frame data according to the time interval of the corrected video frame data and the synchronous control mark distributed for the video frame data last time, correspondingly distributing the synchronous control mark for the current audio frame data according to the time interval of the audio frame data and the synchronous control mark distributed for the audio frame data last time.
In the above scheme, the method further comprises:
after the fact that the audio frame data cannot be matched with the preset analysis rule is determined, the synchronous control identification is distributed for the current video frame data according to the time interval of the currently adopted video frame data and the synchronous control identification distributed for the video frame data at the last time, and correspondingly, the synchronous control identification is distributed for the current audio frame data according to the time interval of the audio frame data and the synchronous control identification distributed for the audio frame data at the last time.
In the above scheme, if ΔT_V > ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_V; if ΔT_V < ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_A; wherein ΔT_V represents the time interval of the currently adopted video frame data, ΔT_A represents the time interval of the audio frame data, N·ΔT_V represents the accumulated time of all video frame data having the same time interval ΔT_V, and M·ΔT_A represents the accumulated time of the time intervals of all the audio frame data within N·ΔT_V;
Accordingly, if ΔT_V > ΔT_A, the correction formula is: ΔT_V′ = ⌊M·ΔT_A / N⌋;
if ΔT_V < ΔT_A, the correction formula is: ΔT_V′ = ⌈M·ΔT_A / N⌉;
wherein ⌊·⌋ denotes rounding down and ⌈·⌉ denotes rounding up.
In the above scheme, the time interval of the video frame data is used as a reference, and the allocating of the synchronization control identifier for the encoded audio frame data and the encoded video frame data is specifically:
respectively counting the number of audio frames and the number of video frames which have the same time interval and are currently encoded, then matching the audio frames and the number of video frames with a preset analysis rule, and correcting the time interval of the audio frame data by adopting a preset correction formula after the matching is determined;
setting the counted audio frame number and video frame number as zero, then distributing the synchronous control mark for the current audio frame data according to the time interval of the corrected audio frame data and the synchronous control mark distributed for the audio frame data last time, and correspondingly distributing the synchronous control mark for the current video frame data according to the time interval of the video frame data and the synchronous control mark distributed for the video frame data last time.
In the above scheme, the method further comprises:
after the fact that the audio frame data cannot be matched with the preset analysis rule is determined, the synchronous control identification is distributed for the current audio frame data according to the time interval of the currently adopted audio frame data and the synchronous control identification distributed for the audio frame data at the last time, and correspondingly, the synchronous control identification is distributed for the current video frame data according to the time interval of the video frame data and the synchronous control identification distributed for the video frame data at the last time.
In the above scheme, if ΔT_V > ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_V; if ΔT_V < ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_A.
Accordingly, if ΔT_V > ΔT_A, the correction formula is: ΔT_A′ = ⌊N·ΔT_V / M⌋;
if ΔT_V < ΔT_A, the correction formula is: ΔT_A′ = ⌈N·ΔT_V / M⌉.
in the above scheme, the storage rule is: and if the synchronous control identification of the audio frame is less than or equal to the synchronous control identification of the video frame, saving the audio frame data, and if the synchronous control identification of the audio frame is greater than the synchronous control identification of the video frame, saving the video frame data.
In the above scheme, the method further comprises:
and judging whether the current audio and video frame data and the corresponding synchronous control identification exist in the cache, and if yes, comparing the synchronous control identifications of the current audio frame and the video frame in the cache.
The invention also provides a video recording device, comprising: the device comprises a collecting unit, a coding unit, a synchronous control unit, a cache unit and a writing unit; wherein,
the acquisition unit is used for acquiring audio and video data from an audio and video data source and sending the acquired audio and video data to the coding unit;
the encoding unit is used for respectively encoding the acquired audio and video data after receiving the audio and video data transmitted by the acquisition unit and transmitting the encoded audio frame data and video frame data to the synchronous control unit;
the synchronous control unit is used for respectively distributing synchronous control identifications to the encoded audio frame data and the encoded video frame data after receiving the encoded audio frame data and the encoded video frame data sent by the encoding unit, then storing the audio frame data, the encoded video frame data and the corresponding synchronous control identifications into the cache unit, and triggering the write-in unit;
the buffer unit is used for storing audio frame data, video frame data and corresponding synchronous control identification;
and the writing unit is used for comparing the synchronous control identification of the current audio frame and the video frame in the cache unit after receiving the trigger information of the synchronous control unit, and then storing the audio and video frame data in the cache unit according to a preset storage rule.
In the foregoing solution, the synchronization control unit is further configured to, before allocating the synchronization control identifier for the encoded audio frame data and the encoded video frame data, calculate time intervals of the initial video frame data and the initial audio frame data according to a preset frame rate and an audio sampling interval of the image sensor.
In the above scheme, the write-in unit is further configured to determine whether the buffer unit currently has audio/video frame data and a corresponding synchronization control identifier, and compare the synchronization control identifiers of the current audio frame and the video frame in the buffer unit if the synchronization control identifiers are determined to be present.
In the above scheme, the apparatus further comprises: a setting unit for setting an analysis rule, a frame rate and an audio sampling interval of the image sensor, a correction formula, and a storage rule.
In the foregoing solution, the writing unit is further configured to store the audio frame data or the video frame data remaining in the buffer unit after the video recording is stopped.
With the video recording method and device of the invention, synchronization control identifiers are respectively assigned to the encoded audio frame data and video frame data; the audio frame data, the video frame data and the corresponding synchronization control identifiers are stored in a cache, the synchronization control identifiers of the current audio frame and video frame in the cache are compared, and the cached audio and video frame data is then saved according to a preset storage rule. In this way the audio and video data acquired during recording stays synchronized, the loss of synchronization caused by frame rate variation of the image sensor or system scheduling delay is avoided, and the user experience is improved.
In addition, the time stamp is used as the synchronous control identification, when the synchronous control identification is distributed, the time interval of the video frame data is corrected by using a correction formula, and the synchronous control identification is distributed to the current video frame data according to the time interval of the corrected video frame data; or, the time interval of the audio frame data is corrected by using a correction formula, and a synchronization control identifier is allocated to the current audio frame data according to the time interval of the corrected audio frame data, so that the synchronization of the audio and video data can be simply and effectively realized.
Drawings
FIG. 1 is a flow chart illustrating a video recording method according to the present invention;
FIG. 2 is a schematic flow chart of a method for allocating synchronization control marks according to the present invention when the time interval of audio frame data is used as a reference;
FIG. 3 is a schematic flow chart of a method for allocating synchronization control marks according to the present invention when a time interval of video frame data is used as a reference;
FIG. 4 is a schematic diagram of a video recording apparatus according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
The video recording method of the present invention, as shown in fig. 1, includes the following steps:
step 101: respectively encoding the collected audio and video data, and respectively distributing synchronous control identifications for the encoded audio frame data and video frame data;
here, the acquired audio/video data refers to audio/video data acquired from an audio/video data source;
the existing coding technology can be adopted to code the audio and video data respectively;
the invention adopts the time stamp as the synchronous control mark, namely: the synchronous control identification of the encoded audio frame data and the encoded video frame data is the starting time of the frame data;
before allocating the synchronization control identifier to the encoded audio frame data and the encoded video frame data, respectively, the method may further include:
respectively calculating the time intervals of initial video frame data and audio frame data according to the preset frame rate and audio sampling interval of the image sensor;
setting a frame rate and an audio sampling interval of an image sensor according to the performance of a camera; adopting the reciprocal of the frame rate of the image sensor as the time interval of the initial video frame data, and adopting the reciprocal of the audio sampling interval as the time interval of the initial audio frame data; in addition, the time interval of the audio frame data is taken as a reference object, so that the time interval of the audio frame data is a fixed value and cannot be changed in the whole video recording process;
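As a minimal sketch of the interval setup described above (the function name and millisecond units are illustrative, not from the patent), the initial frame intervals can be derived like this:

```python
def initial_intervals(sensor_fps, audio_frame_ms):
    """Return (video_interval_ms, audio_interval_ms).

    The initial video frame interval is the reciprocal of the configured
    image-sensor frame rate; the audio frame interval is treated as the
    fixed duration set by the audio sampling configuration, which stays
    constant for the whole recording when audio is the reference.
    """
    video_interval_ms = 1000.0 / sensor_fps
    audio_interval_ms = audio_frame_ms
    return video_interval_ms, audio_interval_ms
```

For example, a 25 fps sensor with 20 ms audio frames gives initial intervals of 40 ms and 20 ms.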
with the time interval of the audio frame data as a reference, respectively allocating a synchronization control identifier to the encoded audio frame data and the encoded video frame data, as shown in fig. 2, the method includes the following steps:
step 101 a: respectively counting the number of audio frames and the number of video frames which have the same time interval and are currently coded, and then executing a step 101 b;
here, two counters may be used to count, in acquisition order, the numbers of encoded audio frames and video frames having the same time interval: each time the encoding of an audio frame with the same time interval is completed, the audio-frame counter is incremented by 1; likewise, each time the encoding of a video frame with the same time interval is completed, the video-frame counter is incremented by 1. After the time interval of the video frames is corrected, both counters are cleared and counting restarts;
the video frame data with the same time interval refers to: the video frame data whose synchronization control identifiers were assigned using the same time interval as the currently encoded video frame data; correspondingly, the audio frame data with the same time interval refers to the encoded audio frame data falling within the accumulated time spanned by those video frames;
step 101 b: judging whether the counted frame numbers match a preset analysis rule; if they match, executing step 101c, otherwise executing step 101 f;
here, if ΔT_V > ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_V; if ΔT_V < ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_A; wherein ΔT_V represents the time interval of the currently adopted video frame data, ΔT_A represents the time interval of the audio frame data, N·ΔT_V represents the accumulated time of all video frame data having the same time interval ΔT_V, and M·ΔT_A represents the accumulated time of the time intervals of all the audio frame data within N·ΔT_V; M and N correspond to the counted audio frame number and video frame number respectively, that is, the values of the audio-frame counter and the video-frame counter;
in practical application, Δ T is generally setV>ΔTA
The basis for setting the analysis rule is as follows: in practical applications, the video frame data or the audio frame data is allowed to lag by a certain amount, but not by too much; that is, the allowable lag is the maximum of ΔT_V and ΔT_A. If the lag exceeds this maximum, the audio and video data will drift out of synchronization as recording time elapses.
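The analysis rule above reduces to a single comparison; this is an illustrative reading of the inequality, with `n_video` and `m_audio` standing for the counter values N and M:

```python
def needs_correction(n_video, m_audio, dt_v, dt_a):
    """Return True when the accumulated video time N*dt_v has drifted from
    the accumulated audio time M*dt_a by more than the allowable lag,
    i.e. the larger of the two frame intervals."""
    allowable_lag = max(dt_v, dt_a)
    return abs(n_video * dt_v - m_audio * dt_a) > allowable_lag
```

With dt_v = 40 ms and dt_a = 20 ms, ten video frames against twenty-five audio frames have drifted by 100 ms and trigger a correction, while ten against twenty-one (20 ms of drift) do not.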
Step 101 c: correcting the time interval of the video frame data by adopting a preset correction formula;
here, if ΔT_V > ΔT_A, the correction formula is: ΔT_V′ = ⌊M·ΔT_A / N⌋; if ΔT_V < ΔT_A, the correction formula is: ΔT_V′ = ⌈M·ΔT_A / N⌉; wherein ⌊·⌋ denotes rounding down and ⌈·⌉ denotes rounding up;
the basis for setting the correction formula is as follows: solving the inequality of the analysis rule yields the correction formula.
Step 101 d: setting the counted audio frame number and video frame number to zero respectively, and then executing the step 101 e;
here, specifically, the values of the two counters are respectively cleared; the purpose of zeroing the counted audio frame number and video frame number is to start a fresh count for assigning the synchronization control identifier to the subsequent video frame data.
Step 101 e: distributing a synchronous control identifier for the current video frame data according to the time interval of the corrected video frame data and the synchronous control identifier distributed for the video frame data last time, and correspondingly distributing a synchronous control identifier for the current audio frame data according to the time interval of the audio frame data and the synchronous control identifier distributed for the audio frame data last time;
here, for example, assume that the synchronization control identifier last assigned to video frame data is T_V and the corrected time interval of the video frame data is ΔT_V′; then the synchronization control identifier assigned to the current video frame data is T_V + ΔT_V′. Accordingly, assume that the synchronization control identifier last assigned to audio frame data is T_A and the time interval of the audio frame data is ΔT_A; then the synchronization control identifier assigned to the current audio frame data is T_A + ΔT_A.
Step 101 f: distributing a synchronous control identifier for the current video frame data according to the time interval of the currently adopted video frame data and the synchronous control identifier distributed for the video frame data last time, and correspondingly distributing a synchronous control identifier for the current audio frame data according to the time interval of the audio frame data and the synchronous control identifier distributed for the audio frame data last time;
here, for example, assume that the synchronization control identifier last assigned to video frame data is T_V and the currently adopted time interval of the video frame data is ΔT_V; then the synchronization control identifier assigned to the current video frame data is T_V + ΔT_V. Accordingly, assume that the synchronization control identifier last assigned to audio frame data is T_A and the time interval of the audio frame data is ΔT_A; then the synchronization control identifier assigned to the current audio frame data is T_A + ΔT_A.
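The timestamp bookkeeping of steps 101e and 101f amounts to one addition per stream; a hypothetical helper (names are illustrative):

```python
def next_timestamps(last_video_ts, last_audio_ts, dt_v, dt_a):
    """Each frame's synchronization control identifier (timestamp) is the
    identifier last assigned to that stream plus the frame interval
    currently in force for that stream (corrected or not)."""
    return last_video_ts + dt_v, last_audio_ts + dt_a
```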
Similarly, the time interval of the video frame data may be used as a reference, and the synchronization control flag may be respectively allocated to the encoded audio frame data and the encoded video frame data, at this time, the time interval of the video frame data is a fixed value and does not change, as shown in fig. 3, the method includes the following steps:
Step 101A: respectively counting the number of audio frames and the number of video frames which have the same time interval and are currently encoded, and then executing step 101B;
here, the statistical method is the same as that in the case where the time interval of the audio frame data is used as a reference, and detailed description thereof is omitted.
Step 101B: judging whether the counted frame numbers match a preset analysis rule; if they match, executing step 101C, otherwise executing step 101F;
here, the analysis rule is the same as that in the case where the time interval of the audio frame data is used as a reference, and will not be described in detail here.
Step 101C: correcting the time interval of the audio frame data by adopting a preset correction formula;
here, if ΔT_V > ΔT_A, the correction formula is: ΔT_A′ = ⌊N·ΔT_V / M⌋; if ΔT_V < ΔT_A, the correction formula is: ΔT_A′ = ⌈N·ΔT_V / M⌉.
step 101D: the counted audio frame number and video frame number are set to zero, respectively, and then step 101D is performed.
Step 101E: assigning the synchronization control identifier for the current audio frame data according to the corrected time interval of the audio frame data and the synchronization control identifier last assigned to audio frame data; correspondingly, assigning the synchronization control identifier for the current video frame data according to the time interval of the video frame data and the synchronization control identifier last assigned to video frame data.
Step 101F: assigning the synchronization control identifier for the current audio frame data according to the currently adopted time interval of the audio frame data and the synchronization control identifier last assigned to audio frame data; correspondingly, assigning the synchronization control identifier for the current video frame data according to the time interval of the video frame data and the synchronization control identifier last assigned to video frame data.
Step 102: storing the audio frame data, the video frame data and the corresponding synchronous control identification into a cache, comparing the synchronous control identification of the current audio frame and the video frame in the cache, and then storing the audio and video frame data in the cache according to a preset storage rule;
here, the storage rule is: if the synchronous control identification of the audio frame is less than or equal to the synchronous control identification of the video frame, saving audio frame data, and if the synchronous control identification of the audio frame is greater than the synchronous control identification of the video frame, saving video frame data;
the audio frame data or video frame data not yet saved remains in the cache. After new video frame data or audio frame data and its corresponding synchronization control identifier are stored in the cache, the synchronization control identifier of the unsaved frame is compared with that of the new frame, and the cached audio/video frame data is saved according to the storage rule. Specifically: if audio frame data is currently unsaved, then after new video frame data and its synchronization control identifier are stored in the cache, the identifier of the unsaved audio frame is compared with that of the new video frame, and the cached audio/video frame data is saved according to the storage rule; if video frame data is currently unsaved, then after new audio frame data and its synchronization control identifier are stored in the cache, the identifier of the unsaved video frame is compared with that of the new audio frame, and the cached audio/video frame data is saved according to the storage rule. This repeats until the whole recording process ends;
saving the audio and video frame data means forming a playable audio/video file; when the formed file is played, picture and sound remain synchronized;
before comparing the synchronization control identifier of the current audio frame and the video frame in the buffer, the method may further include:
and judging whether the current audio and video frame data and the corresponding synchronous control identification exist in the cache, if so, comparing the synchronous control identifications of the current audio frame and the video frame in the cache, and if not, not doing any operation.
After video recording stops, any audio frame data or video frame data remaining in the cache can be saved; because the time intervals of audio frames and video frames differ, only one of the two kinds of frame data remains in the cache when recording stops.
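The compare-and-save process described above can be sketched as follows. The patent operates frame by frame as data arrives in the cache, whereas this illustration drains two pre-filled caches, but the storage rule and the final flush are the same:

```python
from collections import deque

def interleave(audio_frames, video_frames):
    """Save cached frames in synchronization-identifier order.

    Frames are (timestamp, payload) tuples. Storage rule: save the audio
    frame when its synchronization control identifier is <= the video
    frame's, otherwise save the video frame; when recording stops,
    whatever remains in either cache is flushed.
    """
    audio, video = deque(audio_frames), deque(video_frames)
    saved = []
    while audio and video:
        if audio[0][0] <= video[0][0]:
            saved.append(('A', audio.popleft()))
        else:
            saved.append(('V', video.popleft()))
    saved.extend(('A', f) for f in audio)  # flush leftover audio frames
    saved.extend(('V', f) for f in video)  # flush leftover video frames
    return saved
```

Running this on 20 ms audio frames and 40 ms video frames yields an alternating pattern in which audio is written first whenever the identifiers tie.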
In order to implement the above method, the present invention further provides a video recording apparatus, as shown in fig. 4, the apparatus includes: a collection unit 41, an encoding unit 42, a synchronization control unit 43, a buffer unit 44, and a writing unit 45; wherein,
the acquisition unit 41 is configured to acquire audio and video data from an audio and video data source and send the acquired audio and video data to the encoding unit 42;
the encoding unit 42 is configured to encode the acquired audio and video data respectively after receiving the audio and video data sent by the acquisition unit 41, and send encoded audio frame data and video frame data to the synchronization control unit 43;
a synchronization control unit 43, configured to, after receiving the encoded audio frame data and the encoded video frame data sent by the encoding unit 42, allocate synchronization control identifiers to the encoded audio frame data and the encoded video frame data, store the audio frame data, the video frame data, and the corresponding synchronization control identifiers in a buffer unit 44, and trigger the write unit 45;
the buffer unit 44 is configured to store audio frame data, video frame data, and corresponding synchronization control identifiers;
and the writing unit 45 is configured to, after receiving the trigger information from the synchronization control unit 43, compare the synchronization control identifiers of the current audio frame and video frame in the buffer unit 44, and then store the audio/video frame data in the buffer unit 44 according to a preset storage rule.
Wherein, the apparatus may further include: a setting unit configured to set the analysis rule, the frame rate of the image sensor, the audio sampling interval, the correction formula, and the storage rule.
The synchronization control unit 43 is further configured to calculate the time intervals of the initial video frame data and the initial audio frame data respectively, according to the preset frame rate of the image sensor and the preset audio sampling interval, before assigning synchronization control identifiers to the encoded audio frame data and video frame data.
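For illustration only (the concrete numbers are hypothetical, not taken from the patent): the initial time intervals could be derived from the preset capture parameters like this, e.g. a 15 fps sensor and 160-sample audio frames at 8 kHz.

```python
# Hypothetical illustration: deriving the initial audio/video frame
# intervals (in milliseconds) from preset capture parameters.

def initial_intervals(video_fps, audio_samples_per_frame, audio_sample_rate):
    """Return (video_interval_ms, audio_interval_ms)."""
    delta_t_v = 1000.0 / video_fps                                  # video frame interval
    delta_t_a = 1000.0 * audio_samples_per_frame / audio_sample_rate  # audio frame interval
    return delta_t_v, delta_t_a

# Example: 15 fps video; 160-sample audio frames at 8 kHz
dtv, dta = initial_intervals(15, 160, 8000)
print(round(dtv, 1), dta)  # ~66.7 ms video interval, 20.0 ms audio interval
```

Because 66.7 ms and 20 ms never coincide exactly, the identifiers of the two streams drift apart, which is why the apparatus needs the comparison and correction machinery described here.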
The writing unit 45 is further configured to determine whether the buffer unit 44 currently contains both audio/video frame data and the corresponding synchronization control identifiers, and, if so, compare the synchronization control identifiers of the current audio frame and video frame in the buffer unit 44.
The writing unit 45 is further configured to perform no operation when the buffer unit 44 does not currently contain both audio/video frame data and the corresponding synchronization control identifiers.
The writing unit 45 is further configured to store the audio frame data or the video frame data remaining in the buffer unit 44 after the video recording is stopped.
Here, the specific processing procedure of the synchronization control unit in the apparatus of the present invention has been described in detail above and is not repeated.
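As a rough sketch of the synchronization control unit's drift check (claims 3 to 5): the analysis rule compares the accumulated times N·ΔT_V and M·ΔT_A. Note that the correction formula appears only as an image in the published text, so the floor/ceiling form below is an assumption reconstructed from the surrounding wording, and all names and numbers are illustrative.

```python
import math

# Hypothetical reconstruction of the audio-referenced drift check of
# claims 3-5. The exact correction formula is an image in the published
# text; the floor/ceil form used here is an assumption.

def check_and_correct(dtv, dta, n_video, m_audio):
    """Return a (possibly corrected) video frame interval in ms.

    dtv, dta : current video / audio frame intervals (ms)
    n_video  : video frames counted at interval dtv
    m_audio  : audio frames counted over the same stretch
    """
    drift = abs(n_video * dtv - m_audio * dta)
    threshold = dtv if dtv > dta else dta      # analysis-rule threshold
    if drift > threshold:                      # rule matched: correct dtv
        if dtv > dta:
            return math.floor(m_audio * dta / n_video)  # round down (assumed)
        return math.ceil(m_audio * dta / n_video)       # round up (assumed)
    return dtv                                 # no match: keep interval

# 15 video frames at 67 ms vs 50 audio frames at 20 ms:
# drift |1005 - 1000| = 5 ms < 67 ms, so no correction yet.
print(check_and_correct(67, 20, 15, 50))  # 67
```

After a correction fires, claim 3 has the counters N and M reset to zero, and each stream's next identifier is then assigned as the previously assigned identifier plus that stream's (corrected) interval.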
The above description is only exemplary of the present invention and is not intended to limit its scope; any modifications, equivalent substitutions, improvements, etc. made within the spirit and principle of the present invention shall be included within the protection scope of the present invention.

Claims (15)

1. A method for video recording, the method comprising:
encoding the collected audio data and video data respectively, and assigning synchronization control identifiers to the encoded audio frame data and video frame data respectively;
storing the audio frame data, the video frame data, and the corresponding synchronization control identifiers in a buffer, comparing the synchronization control identifiers of the current audio frame and video frame in the buffer, and then storing the audio/video frame data in the buffer according to a preset storage rule.
2. The method of claim 1, wherein prior to assigning synchronization control identifiers to the encoded audio frame data and the encoded video frame data, respectively, the method further comprises:
calculating the time intervals of the initial video frame data and the initial audio frame data respectively, according to the preset frame rate of the image sensor and the preset audio sampling interval.
3. The method according to claim 2, wherein, with the time interval of the audio frame data as a reference, the step of assigning synchronization control identifiers to the encoded audio frame data and video frame data respectively comprises:
counting the numbers of currently encoded audio frames and video frames having the same time interval respectively, matching the counts against a preset analysis rule, and, after a match is determined, correcting the time interval of the video frame data using a preset correction formula;
resetting the counted numbers of audio frames and video frames to zero, then assigning a synchronization control identifier to the current video frame data according to the corrected time interval of the video frame data and the synchronization control identifier last assigned to video frame data, and correspondingly assigning a synchronization control identifier to the current audio frame data according to the time interval of the audio frame data and the synchronization control identifier last assigned to audio frame data.
4. The method of claim 3, further comprising:
after determining that there is no match with the preset analysis rule, assigning a synchronization control identifier to the current video frame data according to the currently adopted time interval of the video frame data and the synchronization control identifier last assigned to video frame data, and correspondingly assigning a synchronization control identifier to the current audio frame data according to the time interval of the audio frame data and the synchronization control identifier last assigned to audio frame data.
5. The method according to claim 3 or 4,
if ΔT_V > ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_V; if ΔT_V < ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_A; wherein ΔT_V represents the time interval of the currently adopted video frame data, ΔT_A represents the time interval of the audio frame data, N·ΔT_V represents the accumulated time of all video frame data having the same time interval ΔT_V, and M·ΔT_A represents the accumulated time of the time intervals of all audio frame data within N·ΔT_V;
accordingly, if ΔT_V > ΔT_A, the correction formula is: ΔT_V = ⌊M·ΔT_A / N⌋;
if ΔT_V < ΔT_A, the correction formula is: ΔT_V = ⌈M·ΔT_A / N⌉;
wherein ⌊·⌋ denotes rounding down and ⌈·⌉ denotes rounding up.
6. The method according to claim 2, wherein, with the time interval of the video frame data as a reference, the step of assigning synchronization control identifiers to the encoded audio frame data and video frame data respectively comprises:
counting the numbers of currently encoded audio frames and video frames having the same time interval respectively, matching the counts against a preset analysis rule, and, after a match is determined, correcting the time interval of the audio frame data using a preset correction formula;
resetting the counted numbers of audio frames and video frames to zero, then assigning a synchronization control identifier to the current audio frame data according to the corrected time interval of the audio frame data and the synchronization control identifier last assigned to audio frame data, and correspondingly assigning a synchronization control identifier to the current video frame data according to the time interval of the video frame data and the synchronization control identifier last assigned to video frame data.
7. The method of claim 6, further comprising:
after determining that there is no match with the preset analysis rule, assigning a synchronization control identifier to the current audio frame data according to the currently adopted time interval of the audio frame data and the synchronization control identifier last assigned to audio frame data, and correspondingly assigning a synchronization control identifier to the current video frame data according to the time interval of the video frame data and the synchronization control identifier last assigned to video frame data.
8. The method according to claim 6 or 7,
if ΔT_V > ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_V; if ΔT_V < ΔT_A, the analysis rule is: |N·ΔT_V − M·ΔT_A| > ΔT_A;
accordingly, if ΔT_V > ΔT_A, the correction formula is: ΔT_A = ⌊N·ΔT_V / M⌋;
if ΔT_V < ΔT_A, the correction formula is: ΔT_A = ⌈N·ΔT_V / M⌉;
wherein ⌊·⌋ denotes rounding down and ⌈·⌉ denotes rounding up.
9. the method of claim 2, 3, 4, 6 or 7,
the storage rule is: if the synchronization control identifier of the audio frame is less than or equal to that of the video frame, saving the audio frame data; if the synchronization control identifier of the audio frame is greater than that of the video frame, saving the video frame data.
10. The method of claim 1, 2, 3, 4, 6 or 7, further comprising:
determining whether the buffer currently contains both audio/video frame data and the corresponding synchronization control identifiers, and, if so, comparing the synchronization control identifiers of the current audio frame and video frame in the buffer.
11. A video recording apparatus, comprising: an acquisition unit, an encoding unit, a synchronization control unit, a buffer unit, and a writing unit; wherein,
the acquisition unit is configured to acquire audio and video data from an audio and video data source and send the acquired audio and video data to the encoding unit;
the encoding unit is configured to, after receiving the audio and video data sent by the acquisition unit, encode the acquired audio data and video data respectively and send the encoded audio frame data and video frame data to the synchronization control unit;
the synchronization control unit is configured to, after receiving the encoded audio frame data and video frame data sent by the encoding unit, assign synchronization control identifiers to the encoded audio frame data and video frame data respectively, then store the audio frame data, the video frame data, and the corresponding synchronization control identifiers in the buffer unit, and trigger the writing unit;
the buffer unit is configured to store the audio frame data, the video frame data, and the corresponding synchronization control identifiers;
and the writing unit is configured to, after receiving the trigger information from the synchronization control unit, compare the synchronization control identifiers of the current audio frame and video frame in the buffer unit, and then store the audio/video frame data in the buffer unit according to a preset storage rule.
12. The apparatus of claim 11,
the synchronization control unit is further configured to calculate the time intervals of the initial video frame data and the initial audio frame data respectively, according to the preset frame rate of the image sensor and the preset audio sampling interval, before synchronization control identifiers are assigned to the encoded audio frame data and video frame data.
13. The apparatus of claim 11 or 12,
the writing unit is further configured to determine whether the buffer unit currently contains both audio/video frame data and the corresponding synchronization control identifiers, and, if so, compare the synchronization control identifiers of the current audio frame and video frame in the buffer unit.
14. The apparatus of claim 11 or 12, further comprising: a setting unit configured to set the analysis rule, the frame rate of the image sensor, the audio sampling interval, the correction formula, and the storage rule.
15. The apparatus of claim 11 or 12,
the writing unit is further configured to store the audio frame data or video frame data remaining in the buffer unit after video recording stops.
CN2010102713607A 2010-09-01 2010-09-01 Video recording method and device Pending CN101931775A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN2010102713607A CN101931775A (en) 2010-09-01 2010-09-01 Video recording method and device
PCT/CN2011/076228 WO2012028021A1 (en) 2010-09-01 2011-06-23 Method and device for video recording


Publications (1)

Publication Number Publication Date
CN101931775A true CN101931775A (en) 2010-12-29

Family

ID=43370661

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010102713607A Pending CN101931775A (en) 2010-09-01 2010-09-01 Video recording method and device

Country Status (2)

Country Link
CN (1) CN101931775A (en)
WO (1) WO2012028021A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012028021A1 (en) * 2010-09-01 2012-03-08 中兴通讯股份有限公司 Method and device for video recording
CN103686312A (en) * 2013-12-05 2014-03-26 中国航空无线电电子研究所 DVR multipath audio and video recording method
CN104023192A (en) * 2014-06-27 2014-09-03 深圳市中兴移动通信有限公司 Method and device for recording video
CN104092920A (en) * 2014-07-16 2014-10-08 浙江航天长峰科技发展有限公司 Audio and video synchronizing method
CN105141869A (en) * 2015-08-19 2015-12-09 中山市天启电子科技有限公司 Android system based segmented video data processing method
CN105979138A (en) * 2016-05-30 2016-09-28 努比亚技术有限公司 Video shooting apparatus and method, and mobile terminal
CN106101797A (en) * 2016-07-12 2016-11-09 青岛海信电器股份有限公司 A kind of screen recording method and touch TV
CN106412662A (en) * 2016-09-20 2017-02-15 腾讯科技(深圳)有限公司 Timestamp distribution method and device
CN110944225A (en) * 2019-11-20 2020-03-31 武汉长江通信产业集团股份有限公司 HTML 5-based method and device for synchronizing audio and video with different frame rates

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111246142B (en) * 2018-11-29 2022-07-01 杭州海康威视数字技术股份有限公司 Video file generation method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1222036A (en) * 1998-10-06 1999-07-07 国家科学技术委员会高技术研究发展中心 System controller in HDTV video decoder
CN1265508A (en) * 1998-11-09 2000-09-06 索尼公司 Data recording, recording and reproducing, reproducing and synchronous detecting device and method, recording medium
CN1491516A (en) * 2001-10-18 2004-04-21 ���µ�����ҵ��ʽ���� Video/Audio reproducing apparatus, and reproducing method and program and medium
US20050169532A1 (en) * 2004-01-16 2005-08-04 Matsushita Electric Industrial Co., Ltd. Signal processor
EP1829376A1 (en) * 2004-12-22 2007-09-05 British Telecommunications Public Limited Company Rate control with buffer underflow prevention
CN101394469A (en) * 2008-10-29 2009-03-25 北京创毅视讯科技有限公司 Audio and video synchronization method, device and a digital television chip
CN101710958A (en) * 2009-12-02 2010-05-19 北京中星微电子有限公司 Audio and video composite device and method and device for synchronizing audio and video thereof

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101076099B (en) * 2007-06-14 2010-06-09 北京中星微电子有限公司 Method and device for controlling video record and synchronized-controlling unit
CN101272501B (en) * 2008-05-07 2010-10-13 北京数码视讯科技股份有限公司 Video/audio encoding and decoding method and device
CN101931775A (en) * 2010-09-01 2010-12-29 中兴通讯股份有限公司 Video recording method and device


Also Published As

Publication number Publication date
WO2012028021A1 (en) 2012-03-08


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C12 Rejection of a patent application after its publication
RJ01 Rejection of invention patent application after publication

Application publication date: 20101229