CN105791939A - Audio and video synchronization method and apparatus - Google Patents


Info

Publication number
CN105791939A
CN105791939A · CN201610144802.9A · CN105791939B
Authority
CN
China
Prior art keywords
audio
video
time stamp
time
packets
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610144802.9A
Other languages
Chinese (zh)
Other versions
CN105791939B (en)
Inventor
禹业茂
王金宝
皮慧斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zed-3 Technology Co Ltd
Original Assignee
Beijing Zed-3 Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zed-3 Technology Co Ltd filed Critical Beijing Zed-3 Technology Co Ltd
Priority to CN201610144802.9A priority Critical patent/CN105791939B/en
Publication of CN105791939A publication Critical patent/CN105791939A/en
Application granted granted Critical
Publication of CN105791939B publication Critical patent/CN105791939B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Data Exchanges In Wide-Area Networks (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides an audio and video synchronization method applied to a first terminal. The first terminal receives an audio data packet and a video data packet sent by a second terminal; the audio data packet includes an audio time stamp and the video data packet includes a video time stamp. Because the audio time stamp and the video time stamp are generated by the second terminal based on the same system time, the two time stamps are associated with each other. When video data is played, the time corresponding to the video time stamp is compared with the time corresponding to the audio time stamp being played: if the time corresponding to the video time stamp is earlier, the video data packet is discarded; if it is later, playback of the video data is delayed; and if the two times are equal, the video data is played directly, thereby achieving audio and video synchronization. In addition, the time stamps are field data within the data packets, so the provided synchronization method occupies no extra bandwidth and saves bandwidth resources.

Description

Audio and video synchronization method and apparatus
Technical field
The application relates to the technical field of multimedia processing, and more specifically to an audio and video synchronization method and apparatus.
Background technology
Terminals such as mobile phones are equipped with an audio playback module and a video playback module and can therefore play audio and video simultaneously. For example, when two mobile phones are on a call, mobile phone A can simultaneously play the audio and video sent by mobile phone B.
However, network congestion and other causes may delay or jitter the audio data packets or video data packets, so that audio and video playback fall out of sync and the user experience is poor.
Summary of the invention
In view of this, the application provides an audio and video synchronization method for achieving synchronization between audio and video. The application further provides an audio and video synchronization apparatus, in order to ensure the application and realization of the method in practice.
To achieve this purpose, the technical solutions provided by the application are as follows:
A first aspect of the application provides an audio and video synchronization method applied to a first terminal, the method including:
receiving an audio data packet and a video data packet sent by a second terminal, wherein the audio data packet contains an audio time stamp, the video data packet contains a video time stamp, and the audio time stamp and the video time stamp are generated by the second terminal based on the same system time;
when playing the video data in the video data packet, comparing the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played;
if the time corresponding to the video time stamp is earlier than the time corresponding to the audio time stamp being played, discarding the video data packet;
if the time corresponding to the video time stamp is later than the time corresponding to the audio time stamp being played, determining the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp, and playing the video data in the video data packet after the time difference has elapsed; and
if the time corresponding to the video time stamp is equal to the time corresponding to the audio time stamp being played, playing the video data in the video data packet.
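The three-way decision rule of the first aspect can be sketched as follows. This is an illustrative sketch, not the patent's implementation; the function and variable names are assumptions, and times are taken to be in milliseconds.

```python
# Hypothetical sketch of the first-aspect decision rule: audio playback is the
# reference clock, and each arriving video packet is discarded, delayed, or
# played depending on how its timestamp time compares with the audio time.
def handle_video_packet(video_time_ms, playing_audio_time_ms):
    """Return the action the first terminal takes for one video packet."""
    if video_time_ms < playing_audio_time_ms:
        return "discard"                      # video lags audio: drop the packet
    if video_time_ms > playing_audio_time_ms:
        delay = video_time_ms - playing_audio_time_ms
        return ("delay", delay)               # video leads audio: wait, then play
    return "play"                             # times match: play immediately
```

For example, a video packet whose time is 120 ms while the audio being played is at 100 ms would be delayed by 20 ms before display.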
Optionally, the above audio and video synchronization method further includes, before receiving the audio data packet and the video data packet sent by the second terminal:
in the stage of establishing a call with the second terminal, receiving a call message sent by the second terminal, wherein the call message carries an audio sample rate A and a video sample rate V.
Correspondingly, comparing the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played includes:
obtaining the audio time stamp relative time according to the formula TA/(A/1000), and obtaining the video time stamp relative time according to the formula TV/(V/1000), where TA is the audio time stamp and TV is the video time stamp; and
comparing the video time stamp relative time with the audio time stamp relative time.
Determining the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp then includes:
determining the time difference between the video time stamp relative time and the audio time stamp relative time.
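The relative-time formulas TA/(A/1000) and TV/(V/1000) above can be sketched in a few lines. This is a hedged illustration; the function name and the concrete numbers are assumptions, not from the patent.

```python
# Sketch of the relative-time conversion: dividing a timestamp (counted in
# sample units) by samples-per-millisecond yields a time in milliseconds that
# is directly comparable across the audio and video streams.
def relative_time_ms(timestamp, sample_rate_hz):
    return timestamp / (sample_rate_hz / 1000)

audio_rel = relative_time_ms(1600, 8000)    # TA=1600 at A=8000 Hz -> 200.0 ms
video_rel = relative_time_ms(18000, 90000)  # TV=18000 at V=90000 Hz -> 200.0 ms
diff = video_rel - audio_rel                # 0.0 ms: the streams are in sync
```

The conversion normalizes away the different sample rates, which is what makes the subsequent comparison and time-difference computation meaningful.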
A second aspect of the application provides an audio and video synchronization method applied to a second terminal, the method including:
generating an audio time stamp and a video time stamp respectively based on the same system time;
encapsulating the audio time stamp in an audio data packet and encapsulating the video time stamp in a video data packet; and
sending the audio data packet and the video data packet to a first terminal.
Optionally, in the above audio and video synchronization method, the second terminal is provided with an audio module and a video module; the audio module samples voice according to an audio sample rate A, and the video module samples video according to a video sample rate V.
Correspondingly, generating the audio time stamp and the video time stamp respectively based on the same system time includes:
when receiving an access success confirmation packet sent by the first terminal, recording the current system time B of the second terminal;
when an audio data packet needs to be generated, obtaining the audio sampling time point SA at which the audio module is currently sampling, and obtaining the audio time stamp TA according to the preset audio time stamp generating algorithm (SA-B)*(A/1000); and
when a video data packet needs to be generated, obtaining the video sampling time point SV at which the video module is currently sampling, and obtaining the video time stamp TV according to the preset video time stamp generating algorithm (SV-B)*(V/1000).
Optionally, obtaining the audio sampling time point SA at which the audio module is currently sampling and obtaining the audio time stamp TA according to the preset audio time stamp generating algorithm (SA-B)*(A/1000) when an audio data packet needs to be generated includes:
when an audio data packet needs to be generated and the audio data packet is the first audio data packet, obtaining the audio sampling time point SA at which the audio module is currently sampling, and obtaining the audio time stamp TA of the audio data packet according to the preset audio time stamp generating algorithm (SA-B)*(A/1000); and
when an audio data packet needs to be generated and the audio data packet is not the first audio data packet, taking the sum of the audio time stamp of the first audio data packet and the sampling period of the audio module as the audio time stamp TA of the audio data packet.
A third aspect of the application provides an audio and video synchronization apparatus applied to a first terminal, the apparatus including:
a packet receiving module, configured to receive an audio data packet and a video data packet sent by a second terminal, wherein the audio data packet contains an audio time stamp, the video data packet contains a video time stamp, and the audio time stamp and the video time stamp are generated by the second terminal based on the same system time;
a time stamp comparison module, configured to, when the video data in a video data packet is played, compare the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played; trigger the discarding module if the time corresponding to the video time stamp is earlier than the time corresponding to the audio time stamp being played; trigger the time difference determining module if the time corresponding to the video time stamp is later than the time corresponding to the audio time stamp being played; and trigger the video data playing module if the time corresponding to the video time stamp is equal to the time corresponding to the audio time stamp being played;
a discarding module, configured to discard the video data packet;
a time difference determining module, configured to determine the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp;
a delayed playing module, configured to play the video data in the video data packet after the time difference has elapsed; and
a video data playing module, configured to play the video data in the video data packet.
Optionally, the above audio and video synchronization apparatus further includes:
a call message receiving module, configured to receive, in the stage of establishing a call with the second terminal, a call message sent by the second terminal, wherein the call message carries the audio sample rate A and the video sample rate V.
Correspondingly, the time stamp comparison module includes:
a relative time obtaining submodule, configured to obtain the audio time stamp relative time according to the formula TA/(A/1000) and the video time stamp relative time according to the formula TV/(V/1000), where TA is the audio time stamp and TV is the video time stamp; and
a relative time comparison submodule, configured to compare the video time stamp relative time with the audio time stamp relative time.
The time difference determining module then includes:
a time difference determining submodule, configured to determine the time difference between the video time stamp relative time and the audio time stamp relative time.
A fourth aspect of the application provides an audio and video synchronization apparatus applied to a second terminal, the apparatus including:
a time stamp generating module, configured to generate an audio time stamp and a video time stamp respectively based on the same system time;
a time stamp encapsulating module, configured to encapsulate the audio time stamp in an audio data packet and encapsulate the video time stamp in a video data packet; and
a sending module, configured to send the audio data packet and the video data packet to a first terminal.
Optionally, in the above audio and video synchronization apparatus, the second terminal is provided with an audio module and a video module; the audio module samples voice according to the audio sample rate A, and the video module samples video according to the video sample rate V.
The time stamp generating module includes:
an access success time recording submodule, configured to record the current system time B of the second terminal when an access success confirmation packet sent by the first terminal is received;
an audio time stamp generating submodule, configured to, when an audio data packet needs to be generated, obtain the audio sampling time point SA at which the audio module is currently sampling and obtain the audio time stamp TA according to the preset audio time stamp generating algorithm (SA-B)*(A/1000); and
a video time stamp generating submodule, configured to, when a video data packet needs to be generated, obtain the video sampling time point SV at which the video module is currently sampling and obtain the video time stamp TV according to the preset video time stamp generating algorithm (SV-B)*(V/1000).
Optionally, the audio time stamp generating submodule includes:
a first time stamp generating unit, configured to, when an audio data packet needs to be generated and the audio data packet is the first audio data packet, obtain the audio sampling time point SA at which the audio module is currently sampling and obtain the audio time stamp TA of the audio data packet according to the preset audio time stamp generating algorithm (SA-B)*(A/1000); and
a subsequent time stamp generating unit, configured to, when an audio data packet needs to be generated and the audio data packet is not the first audio data packet, take the sum of the audio time stamp of the first audio data packet and the sampling period of the audio module as the audio time stamp TA of the audio data packet.
It can be seen from the above technical solutions that the application has the following beneficial effects:
The application provides an audio and video synchronization method applied to a first terminal. The first terminal receives audio data packets and video data packets sent by a second terminal; each audio data packet contains an audio time stamp and each video data packet contains a video time stamp, and because the two time stamps are generated by the second terminal based on the same system time, they are associated with each other. Therefore, when playing video data, the first terminal can compare the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played: if earlier, the video data packet is discarded; if later, playback of the video data is delayed; and if equal, the video data is played directly, thereby achieving audio and video synchronization. In addition, the time stamps are field data within the data packets, so the synchronization method provided by the application adds no extra bandwidth occupation and saves bandwidth resources.
Brief description of the drawings
To describe the technical solutions in the embodiments of the application or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only embodiments of the application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of embodiment 1 of the audio and video synchronization method provided by the application;
Fig. 2 is a flow chart of the second terminal generating the audio time stamp and the video time stamp as provided by the application;
Fig. 3 is a schematic structural diagram of embodiment 1 of the audio and video synchronization apparatus provided by the application;
Fig. 4 is a schematic structural diagram of the time stamp generating module provided by the application.
Detailed description of the invention
The technical solutions in the embodiments of the application are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the application without creative effort fall within the protection scope of the application.
Referring to Fig. 1, which illustrates the flow of embodiment 1 of the audio and video synchronization method provided by the application. This embodiment is applied to a first terminal and, as shown in Fig. 1, may include steps S101 to S105.
Step S101: receive an audio data packet and a video data packet sent by the second terminal, wherein the audio data packet contains an audio time stamp, the video data packet contains a video time stamp, and the audio time stamp and the video time stamp are generated by the second terminal based on the same system time.
After the call establishment stage between the first terminal and the second terminal completes successfully, the first terminal can receive the audio data packets and video data packets sent by the second terminal; in this case the first terminal is the receiving end and the second terminal is the transmitting end. Of course, when the second terminal receives the audio and video data packets sent by the first terminal, the first terminal is the transmitting end and the second terminal is the receiving end. The audio data packet carries the audio data collected by the second terminal, and the video data packet carries the video data collected by the second terminal.
It should be noted that the audio data packet includes the audio time stamp, the video data packet includes the video time stamp, and both time stamps are generated by the second terminal according to the same system time, which may be the system time of a certain time point on the second terminal.
Because the audio time stamp and the video time stamp are both associated with the same system time, the two time stamps are associated with each other, and therefore so are the audio data packet and the video data packet. After receiving the audio data packets and video data packets, the first terminal can synchronize the audio data in the audio data packets with the video data in the video data packets according to the magnitude relationship between the time stamps in the packets.
In addition, the audio time stamp is carried as a field in the audio data packet and the video time stamp as a field in the video data packet, so each time stamp travels to the first terminal together with its media data. This avoids any extra bandwidth occupation, and thus solves the audio and video asynchrony problem without consuming additional bandwidth.
Step S102: when playing the video data in a video data packet, compare the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played. If the time corresponding to the video time stamp is earlier than the time corresponding to the audio time stamp being played, perform step S103; if it is later, perform step S104; if the two times are equal, perform step S105.
When playing the data in the data packets, the video data can be processed with audio playback as the reference; alternatively, the audio data can be processed with video playback as the reference. In terms of human viewing experience, the first approach yields a better playback effect, so this step preferably uses the first approach.
Specifically, in step S101 the first terminal receives in real time the video data packets and audio data packets sent by the second terminal, places the audio data packets in an audio buffer queue and the video data packets in a video buffer queue. The first terminal can create two threads, an audio thread and a video thread, to process the two buffer queues respectively. The audio thread takes audio data packets from the audio buffer queue, extracts the audio data and plays it; the video thread takes video data packets from the video buffer queue, extracts the video data and plays it.
Because audio playback is the reference, the audio thread fetches audio data packets from the audio buffer queue at its inherent playback rate and plays the audio data in each packet. Each time the video thread fetches a video data packet from the video queue, however, it must first perform a time comparison against the audio data packet currently being played, that is, compare the time corresponding to the video time stamp in the video data packet with the time corresponding to the audio time stamp in the audio data packet being played.
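The two-queue, audio-master arrangement described above can be sketched minimally as follows. The structure (dict packets, per-step functions, millisecond times) is an assumption for illustration, not the patent's implementation.

```python
# Minimal sketch of the two-thread buffering scheme: audio packets play on
# their own schedule and publish the reference clock; each dequeued video
# packet is checked against that clock before display.
import collections

audio_queue = collections.deque()
video_queue = collections.deque()

def audio_thread_step():
    """Play the next audio packet and publish its time as the reference clock."""
    pkt = audio_queue.popleft()
    return pkt["ts_ms"]          # becomes the current audio reference time

def video_thread_step(audio_clock_ms):
    """Dequeue one video packet; return None to discard, or a delay in ms."""
    pkt = video_queue.popleft()
    if pkt["ts_ms"] < audio_clock_ms:
        return None                               # late video: discard
    return max(0, pkt["ts_ms"] - audio_clock_ms)  # wait this long, then play

audio_queue.append({"ts_ms": 100})
video_queue.append({"ts_ms": 120})
clock = audio_thread_step()       # audio clock now at 100 ms
wait = video_thread_step(clock)   # 20 ms delay before showing the frame
```

In a real implementation the two steps would run in separate threads with proper locking around the shared clock; the sketch keeps them sequential for clarity.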
It should be noted that it is precisely because the audio time stamp and the video time stamp are both associated with the same system time that the times they represent can be compared. That is, when playing video data, the first terminal extracts the video time stamp from the video data packet and judges whether the time it represents equals the time represented by the audio time stamp: if equal, step S105 is performed; if not, it is further determined whether the video time is earlier or later. If earlier, the discard operation of step S103 is performed; if later, the delayed playback operation of step S104 is performed.
Step S103: discard the video data packet.
Step S104: determine the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp, and play the video data in the video data packet after the time difference has elapsed.
Step S105: play the video data in the video data packet.
It can be seen from the above technical solution that the application provides an embodiment of an audio and video synchronization method applied to a first terminal. The first terminal receives audio data packets and video data packets sent by a second terminal; each audio data packet contains an audio time stamp and each video data packet contains a video time stamp, and because the two time stamps are generated by the second terminal based on the same system time, they are associated with each other. When playing video data, the first terminal compares the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played: if earlier, the video data packet is discarded; if later, playback of the video data is delayed; and if equal, the video data is played directly, thereby achieving audio and video synchronization.
It should be noted that, over a period of time, the first terminal and the second terminal continuously receive the video data packets sent by each other. Each data packet contains the video data of only a few sampling points, so discarding a particular video data packet during synchronization does not affect normal video playback at the terminal.
In a specific implementation, the first terminal and the second terminal can transmit data over an IP-based network using popular protocols such as SIP (Session Initiation Protocol) and RTP (Real-time Transport Protocol). However, the time stamp field in an existing RTP packet is not associated with any absolute time; it can only be used to judge the time relationship between different data packets of the same medium, such as audio or video. Moreover, there is no association between the time stamps of an audio data packet and a video data packet, so the existing time stamps cannot be used directly for synchronization. In contrast, the time stamps in this embodiment are all associated with the same absolute system time, so the two time stamps are associated with each other.
In addition, the initial value of an existing time stamp is a random number, whereas the time stamps in this embodiment are generated from the system time of the second terminal rather than randomly; this system time may be the call start time of the first terminal and the second terminal.
It can be understood that only when the second terminal generates an audio time stamp and a video time stamp associated with the same system time can the first terminal use such time stamps to achieve synchronized audio and video playback. The audio and video synchronization method applied to the second terminal is described below; specifically, the method may include steps A1 to A3.
Step A1: generate an audio time stamp and a video time stamp respectively based on the same system time.
This system time may be the system time at which the call between the first terminal and the second terminal succeeds; the specific process is described below.
The second terminal is provided with an audio module, which samples voice at a certain audio sample rate, denoted A for ease of description. The second terminal is further provided with a video module, which samples video at a certain video sample rate, denoted V.
As shown in Fig. 2, the flow by which the second terminal generates the audio time stamp and the video time stamp based on the same system time may include steps S201 to S203.
Step S201: when receiving the access success confirmation packet sent by the first terminal, record the current system time B of the second terminal.
Before communicating with the first terminal, the second terminal goes through a call establishment stage. In that stage, if the second terminal receives the access success confirmation packet sent by the first terminal, the call establishment stage has completed successfully, and the second terminal records the current system time, denoted B for ease of description.
Step S202: when an audio data packet needs to be generated, obtain the audio sampling time point SA at which the audio module is currently sampling, and obtain the audio time stamp TA according to the preset audio time stamp generating algorithm (SA-B)*(A/1000).
It can be understood that when the second terminal needs to generate an audio data packet, voice must be sampled. When the audio module performs the first sampling, the second terminal records the system time; this system time is the audio sampling time point SA.
The audio module samples voice cyclically at a certain period. After each sampling, the audio data of that sampling can be included in an audio data packet and sent; alternatively, the audio data of multiple samplings can be included in the same audio data packet and sent.
When the same audio data packet contains the audio of multiple samplings, the system time of the first of those samplings is taken as the audio sampling time point SA.
Then, according to the audio time stamp generating algorithm (SA-B)*(A/1000), that is, (audio sampling time point - system time at access success) * (audio sample rate / 1000), the audio time stamp TA is calculated. The audio sample rate A may be, but is not limited to, 8000 Hz, i.e. 8000 samples per second; since one second is 1000 milliseconds, dividing by 1000 converts the sample rate to samples per millisecond.
After the audio time stamp is obtained, it is included in the audio data packet, which also contains the sampled audio data, and the audio data packet is sent to the first terminal.
Step S203: when a video data packet needs to be generated, obtain the video sampling time point SV at which the video module is currently sampling, and obtain the video time stamp TV according to the preset video time stamp generating algorithm (SV-B)*(V/1000).
Similarly, when a video data packet needs to be generated, the video module first collects the video data, and the system time at which the video module samples is recorded; this system time is the video sampling time point SV.
It should be noted that the video data of a single sampling can be sent in one video data packet, so the video sampling time point is the system time of that sampling.
After the video sampling time point SV is obtained, the video time stamp TV is calculated according to the video time stamp generating algorithm (SV-B)*(V/1000), that is, (video sampling time point - system time at access success) * (video sample rate / 1000). The video sample rate V may be, but is not limited to, 90000 Hz, i.e. 90000 samples per second; as with A/1000 above, dividing by 1000 converts the video sample rate to samples per millisecond.
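A worked example of both generating formulas, using the sample rates mentioned above (8000 Hz audio, 90000 Hz video). The concrete times are illustrative assumptions; the point is that both stamps encode the same instant relative to the same system time B.

```python
# Worked example of (SA-B)*(A/1000) and (SV-B)*(V/1000).
A, V = 8000, 90000          # audio and video sample rates in Hz
B = 5000                    # system time (ms) recorded at access success
SA = SV = 5200              # both media sampled 200 ms after access success

TA = (SA - B) * (A / 1000)  # 200 ms * 8 samples/ms  = 1600.0
TV = (SV - B) * (V / 1000)  # 200 ms * 90 samples/ms = 18000.0

# Converting back with TA/(A/1000) and TV/(V/1000) recovers the same 200 ms,
# which is what lets the first terminal compare the two stamps directly.
assert TA / (A / 1000) == TV / (V / 1000) == 200.0
```

Although TA and TV differ numerically, their relative times coincide, so the comparison of step S102 works across the two streams.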
Step A2: encapsulate the audio time stamp in an audio data packet, and encapsulate the video time stamp in a video data packet.
After the video time stamp is obtained, it is included in the video data packet together with the sampled video data, and the packet is sent to the first terminal.
In addition, since the time stamp is a field within the data packet, it does not increase the bandwidth occupied by transmitting the audio and video data packets, thereby saving bandwidth resources.
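The point about bandwidth can be seen in an RTP-style header, which the description's references to RTP packets and payload type fields suggest: the 32-bit timestamp is part of the fixed header every packet already carries, so conveying it costs nothing extra. A minimal sketch, assuming the RFC 3550 fixed-header layout (not spelled out in the patent itself):

```python
import struct

def pack_rtp_header(payload_type: int, seq: int, timestamp: int, ssrc: int) -> bytes:
    """Minimal 12-byte RTP fixed header: version 2, no padding, no extension,
    no CSRC list, marker bit clear. The timestamp occupies bytes 4..7."""
    vpxcc = 0x80                       # V=2, P=0, X=0, CC=0
    m_pt = payload_type & 0x7F         # M=0, 7-bit payload type
    return struct.pack("!BBHII", vpxcc, m_pt, seq, timestamp, ssrc)

# PCMU audio packet (payload type 0) carrying the timestamp TA = 160
hdr = pack_rtp_header(payload_type=0, seq=1, timestamp=160, ssrc=0x1234)
print(len(hdr))  # 12
```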
Step A3: send the audio data packet and the video data packet to the first terminal.
It should be noted that the description of the audio and video synchronization method applied in the second terminal cross-references the description above of the method applied in the first terminal, and is not repeated here.
In practice, the audio and video interaction between the first terminal and the second terminal normally lasts for a period of time, and the second terminal needs to generate an audio time stamp each time it generates an audio data packet.
Since the audio capture module in the second terminal samples audio data at a fixed sampling period, for example 20 ms, the first audio time stamp generated by the second terminal can be obtained according to the audio time stamp generating algorithm (SA - B) * (A / 1000) in step S202 above.
However, when generating time stamps for subsequently sampled audio data, the second terminal need not obtain the sampling time point SA and apply the generating algorithm again; instead, it adds the fixed sampling period to the first audio time stamp.
In this implementation, only the first audio time stamp needs to be recorded, and subsequent audio time stamps are obtained by adding the fixed sampling period to it. Compared with obtaining the audio sampling time point and applying the generating algorithm for every audio time stamp, this approach requires less computation and is simpler to implement.
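The incremental scheme above can be sketched as follows, assuming (as an illustration, not from the patent text) that the fixed sampling period is converted into sample units and added once per packet:

```python
def audio_timestamps(first_ts: int, period_ms: int, rate_hz: int, count: int) -> list:
    """First timestamp comes from the generating algorithm (SA - B) * (A/1000);
    each subsequent one adds the fixed sampling period in sample units,
    avoiding a per-packet recomputation."""
    step = period_ms * (rate_hz // 1000)   # e.g. 20 ms * 8 ticks/ms = 160
    return [first_ts + i * step for i in range(count)]

# First packet stamped 160; three more packets at a 20 ms period, 8000 Hz
print(audio_timestamps(160, 20, 8000, 4))  # [160, 320, 480, 640]
```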
After the first terminal receives audio and video data packets of the above form, it needs to compare the times corresponding to the time stamps in the packets. The specific comparison comprises step S1 and step S2.
Step S1: obtain the audio time stamp relative time according to the formula TA / (A / 1000), and obtain the video time stamp relative time according to the formula TV / (V / 1000).
The first terminal extracts the audio time stamp TA from the audio data packet and obtains the audio sampling rate A of the second terminal; substituting both into the preset formula TA / (A / 1000) yields the audio time stamp relative time, i.e., the time corresponding to the audio time stamp.
The audio sampling rate A may be stored in the first terminal in advance, or may be sent by the second terminal to the first terminal during the call establishment stage. For example, when the first terminal and the second terminal communicate using PCMU and the payload type field of an audio RTP packet is 0, the RTP packet carries a media stream in PCMU coded format with an audio sampling rate of 8000 Hz.
Likewise, the first terminal extracts the video time stamp TV from the video data packet and obtains the video sampling rate V of the second terminal; substituting both into the preset formula TV / (V / 1000) yields the video time stamp relative time, i.e., the time corresponding to the video time stamp.
The video sampling rate V may be stored in the first terminal in advance, or may be sent by the second terminal to the first terminal during the call establishment stage. For example, when the first terminal and the second terminal communicate using H.264 and the payload type field of a video RTP packet is 126, the RTP packet carries a media stream in H.264 coded format with a video sampling rate of 90000 Hz.
Step S2: compare the video time stamp relative time with the audio time stamp relative time.
After the above computation is performed on the video time stamp and the audio time stamp, the two results can be compared directly, and according to the comparison result, the discard operation of step S103, the delayed-play operation of step S104, or the direct-play operation of step S105 is performed.
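Steps S1 and S2, together with the three outcomes of steps S103 to S105, can be sketched as a single decision function. The function name and return strings are illustrative assumptions; the formulas and the three-way branch follow the description:

```python
def sync_action(tv: int, ta: int, v_rate: int = 90000, a_rate: int = 8000) -> str:
    """Convert both timestamps to millisecond relative times via
    TV/(V/1000) and TA/(A/1000), then decide the video packet's fate."""
    video_ms = tv / (v_rate / 1000)
    audio_ms = ta / (a_rate / 1000)
    if video_ms < audio_ms:
        return "discard"                 # step S103: video is behind, drop it
    if video_ms > audio_ms:
        delay = video_ms - audio_ms
        return f"delay {delay:g} ms"     # step S104: play after the time difference
    return "play"                        # step S105: in sync, play directly

print(sync_action(tv=1800, ta=160))   # both at 20 ms -> "play"
print(sync_action(tv=900,  ta=160))   # video 10 ms, audio 20 ms -> "discard"
print(sync_action(tv=2700, ta=160))   # video 30 ms, audio 20 ms -> "delay 10 ms"
```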
The audio and video synchronization apparatus provided by this application is introduced below. It should be noted that the relevant description may refer to the audio and video synchronization method provided above, and is not repeated below.
Corresponding to embodiment 1 of the audio and video synchronization method applied in the first terminal described above, this application provides embodiment 1 of an audio and video synchronization apparatus applied in the first terminal. As shown in Figure 3, this apparatus embodiment may specifically include: a packet receiving module 301, a time stamp comparison module 302, a discard module 303, a time difference determination module 304, a delayed play module 305, and a video data play module 306. Wherein:
The packet receiving module 301 is configured to receive the audio data packet and the video data packet sent by the second terminal, wherein the audio data packet contains an audio time stamp, the video data packet contains a video time stamp, and the audio time stamp and the video time stamp were generated by the second terminal based on the same system time;
The time stamp comparison module 302 is configured to, when playing the video data in the video data packet, compare the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played; if the time corresponding to the video time stamp is earlier than the time corresponding to the audio time stamp being played, trigger the discard module 303; if the time corresponding to the video time stamp is later than the time corresponding to the audio time stamp being played, trigger the time difference determination module 304; if the time corresponding to the video time stamp is equal to the time corresponding to the audio time stamp being played, trigger the video data play module 306;
The discard module 303 is configured to discard the video data packet;
The time difference determination module 304 is configured to determine the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp;
The delayed play module 305 is configured to play the video data in the video data packet after the time difference has elapsed;
The video data play module 306 is configured to play the video data in the video data packet.
From the above technical scheme, this application provides an embodiment of an audio and video synchronization apparatus applied in the first terminal. The packet receiving module 301 can receive the audio data packet and the video data packet sent by the second terminal; the audio data packet contains an audio time stamp, the video data packet contains a video time stamp, and since both time stamps are generated by the second terminal based on the same system time, there is an association between them. Accordingly, when video data is to be played, the time stamp comparison module 302 compares the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played: if it is earlier, the discard module 303 discards the video data packet; if it is later, the time difference determination module 304 has the video data played after a delay; if they are equal, the video data play module 306 plays the video data directly, thereby achieving audio and video synchronization.
It should be noted that, over a period of time, the first terminal and the second terminal continuously receive the audio and video data packets sent by the other side. In this process, each data packet contains only the audio data or video data of a small number of sampling points, so discarding a particular packet does not affect the normal playback of the audio or video.
Specifically, the audio and video synchronization apparatus applied in the first terminal may further include a call message receiving module.
The call message receiving module is configured to receive, during the stage of establishing a call with the second terminal, the call message sent by the second terminal, wherein the call message carries the audio sampling rate A and the video sampling rate V;
Correspondingly, the time stamp comparison module 302 in the audio and video synchronization apparatus may specifically include a relative time obtaining submodule and a relative time comparison submodule.
The relative time obtaining submodule is configured to obtain the audio time stamp relative time according to the formula TA / (A / 1000), and obtain the video time stamp relative time according to the formula TV / (V / 1000), wherein TA is the audio time stamp and TV is the video time stamp.
The relative time comparison submodule is configured to compare the video time stamp relative time with the audio time stamp relative time.
In addition, the time difference determination module 304 in the audio and video synchronization apparatus may specifically include a time difference determination submodule.
The time difference determination submodule is configured to determine the time difference between the video time stamp relative time and the audio time stamp relative time.
This application also provides an audio and video synchronization apparatus applied in the second terminal. This apparatus may specifically include: a time stamp generation module, a time stamp encapsulation module, and a time stamp sending module.
The time stamp generation module is configured to generate an audio time stamp and a video time stamp respectively, based on the same system time;
The time stamp encapsulation module is configured to encapsulate the audio time stamp in an audio data packet, and encapsulate the video time stamp in a video data packet;
The time stamp sending module is configured to send the audio data packet and the video data packet to the first terminal.
In practice, the second terminal is provided with an audio module and a video module; the audio module samples voice at the audio sampling rate A, and the video module samples video at the video sampling rate V.
Specifically, as shown in Figure 4, the time stamp generation module may include: an access success time recording submodule 401, an audio time stamp generation submodule 402, and a video time stamp generation submodule 403.
The access success time recording submodule 401 is configured to record the current system time B of the second terminal when the access success confirmation packet sent by the first terminal is received;
The audio time stamp generation submodule 402 is configured to, when an audio data packet needs to be generated, obtain the audio sampling time point SA at which the audio module currently samples, and obtain the audio time stamp TA according to the preset audio time stamp generating algorithm (SA - B) * (A / 1000);
The video time stamp generation submodule 403 is configured to, when a video data packet needs to be generated, obtain the video sampling time point SV at which the video module currently samples, and obtain the video time stamp TV according to the preset video time stamp generating algorithm (SV - B) * (V / 1000).
In a specific example, the audio time stamp generation submodule 402 may specifically include: a first time stamp generation unit and a subsequent time stamp generation unit.
The first time stamp generation unit is configured to, when an audio data packet needs to be generated and the packet is the first audio data packet, obtain the audio sampling time point SA at which the audio module currently samples, and obtain the audio time stamp TA of the packet according to the preset audio time stamp generating algorithm (SA - B) * (A / 1000);
The subsequent time stamp generation unit is configured to, when an audio data packet needs to be generated and the packet is not the first audio data packet, use the sum of the audio time stamp of the first audio data packet and the sampling period of the audio module as the audio time stamp TA of the packet.
In addition, this application also provides a terminal, which may specifically include: any one of the above audio and video synchronization apparatuses applied in the first terminal, any one of the above audio and video synchronization apparatuses applied in the second terminal, an audio play module, and a video play module, wherein the audio play module is used for playing audio data and the video play module is used for playing video data.
It should be noted that, in such a case, the first terminal and the second terminal may be considered the same terminal.
It should be noted that the embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and the identical or similar parts of the embodiments may refer to one another.
It should further be noted that, herein, relational terms such as first and second are used merely to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the statement "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The above description of the disclosed embodiments enables those skilled in the art to implement or use this application. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of this application. Therefore, this application is not intended to be limited to the embodiments shown herein, but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. An audio and video synchronization method, characterized in that it is applied to a first terminal and comprises:
receiving an audio data packet and a video data packet sent by a second terminal, wherein the audio data packet contains an audio time stamp, the video data packet contains a video time stamp, and the audio time stamp and the video time stamp were generated by the second terminal based on the same system time;
when playing video data in the video data packet, comparing the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played;
if the time corresponding to the video time stamp is earlier than the time corresponding to the audio time stamp being played, discarding the video data packet;
if the time corresponding to the video time stamp is later than the time corresponding to the audio time stamp being played, determining the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp, and playing the video data in the video data packet after the time difference has elapsed;
if the time corresponding to the video time stamp is equal to the time corresponding to the audio time stamp being played, playing the video data in the video data packet.
2. The audio and video synchronization method according to claim 1, characterized in that, before receiving the audio data packet and the video data packet sent by the second terminal, the method further comprises:
during the stage of establishing a call with the second terminal, receiving a call message sent by the second terminal, wherein the call message carries an audio sampling rate A and a video sampling rate V;
correspondingly, comparing the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played comprises:
obtaining an audio time stamp relative time according to the formula TA / (A / 1000), and obtaining a video time stamp relative time according to the formula TV / (V / 1000), wherein TA is the audio time stamp and TV is the video time stamp;
comparing the video time stamp relative time with the audio time stamp relative time;
and determining the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp comprises:
determining the time difference between the video time stamp relative time and the audio time stamp relative time.
3. An audio and video synchronization method, characterized in that it is applied to a second terminal and comprises:
based on the same system time, generating an audio time stamp and a video time stamp respectively;
encapsulating the audio time stamp in an audio data packet, and encapsulating the video time stamp in a video data packet;
sending the audio data packet and the video data packet to a first terminal.
4. The audio and video synchronization method according to claim 3, characterized in that the second terminal is provided with an audio module and a video module, the audio module samples voice at an audio sampling rate A, and the video module samples video at a video sampling rate V;
correspondingly, generating the audio time stamp and the video time stamp respectively based on the same system time comprises:
when an access success confirmation packet sent by the first terminal is received, recording the current system time B of the second terminal;
when the audio data packet needs to be generated, obtaining the audio sampling time point SA at which the audio module currently samples, and obtaining the audio time stamp TA according to a preset audio time stamp generating algorithm (SA - B) * (A / 1000);
when the video data packet needs to be generated, obtaining the video sampling time point SV at which the video module currently samples, and obtaining the video time stamp TV according to a preset video time stamp generating algorithm (SV - B) * (V / 1000).
5. The audio and video synchronization method according to claim 4, characterized in that obtaining the audio sampling time point SA at which the audio module currently samples when the audio data packet needs to be generated, and obtaining the audio time stamp TA according to the preset audio time stamp generating algorithm (SA - B) * (A / 1000), comprises:
when the audio data packet needs to be generated and the audio data packet is the first audio data packet, obtaining the audio sampling time point SA at which the audio module currently samples, and obtaining the audio time stamp TA of the audio data packet according to the preset audio time stamp generating algorithm (SA - B) * (A / 1000);
when the audio data packet needs to be generated and the audio data packet is not the first audio data packet, using the sum of the audio time stamp of the first audio data packet and the sampling period of the audio module as the audio time stamp TA of the audio data packet.
6. An audio and video synchronization apparatus, characterized in that it is applied to a first terminal and comprises:
a packet receiving module, configured to receive an audio data packet and a video data packet sent by a second terminal, wherein the audio data packet contains an audio time stamp, the video data packet contains a video time stamp, and the audio time stamp and the video time stamp were generated by the second terminal based on the same system time;
a time stamp comparison module, configured to, when playing video data in the video data packet, compare the time corresponding to the video time stamp with the time corresponding to the audio time stamp being played; if the time corresponding to the video time stamp is earlier than the time corresponding to the audio time stamp being played, trigger a discard module; if the time corresponding to the video time stamp is later than the time corresponding to the audio time stamp being played, trigger a time difference determination module; if the time corresponding to the video time stamp is equal to the time corresponding to the audio time stamp being played, trigger a video data play module;
the discard module, configured to discard the video data packet;
the time difference determination module, configured to determine the time difference between the time corresponding to the video time stamp and the time corresponding to the audio time stamp;
a delayed play module, configured to play the video data in the video data packet after the time difference has elapsed;
the video data play module, configured to play the video data in the video data packet.
7. The audio and video synchronization apparatus according to claim 6, characterized in that it further comprises:
a call message receiving module, configured to receive, during the stage of establishing a call with the second terminal, a call message sent by the second terminal, wherein the call message carries the audio sampling rate A and the video sampling rate V;
correspondingly, the time stamp comparison module comprises:
a relative time obtaining submodule, configured to obtain an audio time stamp relative time according to the formula TA / (A / 1000), and obtain a video time stamp relative time according to the formula TV / (V / 1000), wherein TA is the audio time stamp and TV is the video time stamp;
a relative time comparison submodule, configured to compare the video time stamp relative time with the audio time stamp relative time;
and the time difference determination module comprises:
a time difference determination submodule, configured to determine the time difference between the video time stamp relative time and the audio time stamp relative time.
8. An audio and video synchronization apparatus, characterized in that it is applied to a second terminal and comprises:
a time stamp generation module, configured to generate an audio time stamp and a video time stamp respectively, based on the same system time;
a time stamp encapsulation module, configured to encapsulate the audio time stamp in an audio data packet, and encapsulate the video time stamp in a video data packet;
a time stamp sending module, configured to send the audio data packet and the video data packet to a first terminal.
9. The audio and video synchronization apparatus according to claim 8, characterized in that the second terminal is provided with an audio module and a video module, the audio module samples voice at an audio sampling rate A, and the video module samples video at a video sampling rate V;
wherein the time stamp generation module comprises:
an access success time recording submodule, configured to record the current system time B of the second terminal when an access success confirmation packet sent by the first terminal is received;
an audio time stamp generation submodule, configured to, when the audio data packet needs to be generated, obtain the audio sampling time point SA at which the audio module currently samples, and obtain the audio time stamp TA according to a preset audio time stamp generating algorithm (SA - B) * (A / 1000);
a video time stamp generation submodule, configured to, when the video data packet needs to be generated, obtain the video sampling time point SV at which the video module currently samples, and obtain the video time stamp TV according to a preset video time stamp generating algorithm (SV - B) * (V / 1000).
10. The audio and video synchronization apparatus according to claim 9, characterized in that the audio time stamp generation submodule comprises:
a first time stamp generation unit, configured to, when the audio data packet needs to be generated and the audio data packet is the first audio data packet, obtain the audio sampling time point SA at which the audio module currently samples, and obtain the audio time stamp TA of the audio data packet according to the preset audio time stamp generating algorithm (SA - B) * (A / 1000);
a subsequent time stamp generation unit, configured to, when the audio data packet needs to be generated and the audio data packet is not the first audio data packet, use the sum of the audio time stamp of the first audio data packet and the sampling period of the audio module as the audio time stamp TA of the audio data packet.
CN201610144802.9A 2016-03-14 2016-03-14 The synchronous method and device of audio & video Active CN105791939B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610144802.9A CN105791939B (en) 2016-03-14 2016-03-14 The synchronous method and device of audio & video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610144802.9A CN105791939B (en) 2016-03-14 2016-03-14 The synchronous method and device of audio & video

Publications (2)

Publication Number Publication Date
CN105791939A true CN105791939A (en) 2016-07-20
CN105791939B CN105791939B (en) 2019-03-19

Family

ID=56393273

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610144802.9A Active CN105791939B (en) 2016-03-14 2016-03-14 The synchronous method and device of audio & video

Country Status (1)

Country Link
CN (1) CN105791939B (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658133A (en) * 2016-10-26 2017-05-10 广州市百果园网络科技有限公司 Audio and video synchronous playing method and terminal
CN107438192A (en) * 2017-07-26 2017-12-05 武汉烽火众智数字技术有限责任公司 The synchronous method of audio and video playing and related system and multimedia play terminal
CN107484010A (en) * 2017-10-09 2017-12-15 武汉斗鱼网络科技有限公司 A kind of video resource coding/decoding method and device
CN108632656A (en) * 2018-05-23 2018-10-09 中山全播网络科技有限公司 A kind of interaction recording and broadcasting system based on Data Synthesis
CN109729404A (en) * 2019-01-15 2019-05-07 晶晨半导体(上海)股份有限公司 A kind of synchronous modulation method and system based on Embedded player
CN110602542A (en) * 2019-08-13 2019-12-20 视联动力信息技术股份有限公司 Audio and video synchronization method, audio and video synchronization system, equipment and storage medium
CN112235597A (en) * 2020-09-17 2021-01-15 深圳市捷视飞通科技股份有限公司 Method and device for synchronous protection of streaming media live broadcast audio and video and computer equipment
CN113364646A (en) * 2021-06-03 2021-09-07 杭州朗和科技有限公司 Method, device and system for determining round-trip delay, storage medium and electronic equipment
CN113784073A (en) * 2021-09-28 2021-12-10 深圳万兴软件有限公司 Method, device and related medium for synchronizing sound and picture of sound recording and video recording
CN115412757A (en) * 2022-08-31 2022-11-29 海宁奕斯伟集成电路设计有限公司 Video playing method and device and electronic equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103369365A (en) * 2013-06-28 2013-10-23 东南大学 Audio and video synchronous recording device
CN103414957A (en) * 2013-07-30 2013-11-27 广东工业大学 Method and device for synchronization of audio data and video data
CN103686315A (en) * 2012-09-13 2014-03-26 深圳市快播科技有限公司 Synchronous audio and video playing method and device
US20140325559A1 (en) * 2010-07-01 2014-10-30 Comcast Cable Communications, Llc Alternate source programming

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140325559A1 (en) * 2010-07-01 2014-10-30 Comcast Cable Communications, Llc Alternate source programming
CN103686315A (en) * 2012-09-13 2014-03-26 深圳市快播科技有限公司 Synchronous audio and video playing method and device
CN103369365A (en) * 2013-06-28 2013-10-23 东南大学 Audio and video synchronous recording device
CN103414957A (en) * 2013-07-30 2013-11-27 广东工业大学 Method and device for synchronization of audio data and video data

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106658133B (en) * 2016-10-26 2020-04-14 广州市百果园网络科技有限公司 Audio and video synchronous playing method and terminal
CN106658133A (en) * 2016-10-26 2017-05-10 广州市百果园网络科技有限公司 Audio and video synchronous playing method and terminal
CN107438192A (en) * 2017-07-26 2017-12-05 武汉烽火众智数字技术有限责任公司 The synchronous method of audio and video playing and related system and multimedia play terminal
CN107484010A (en) * 2017-10-09 2017-12-15 武汉斗鱼网络科技有限公司 A kind of video resource coding/decoding method and device
CN108632656A (en) * 2018-05-23 2018-10-09 中山全播网络科技有限公司 A kind of interaction recording and broadcasting system based on Data Synthesis
CN109729404A (en) * 2019-01-15 2019-05-07 晶晨半导体(上海)股份有限公司 A kind of synchronous modulation method and system based on Embedded player
CN109729404B (en) * 2019-01-15 2021-06-04 晶晨半导体(上海)股份有限公司 Synchronous modulation method based on embedded player
CN110602542A (en) * 2019-08-13 2019-12-20 视联动力信息技术股份有限公司 Audio and video synchronization method, audio and video synchronization system, equipment and storage medium
CN110602542B (en) * 2019-08-13 2022-02-08 视联动力信息技术股份有限公司 Audio and video synchronization method, audio and video synchronization system, equipment and storage medium
CN112235597A (en) * 2020-09-17 2021-01-15 深圳市捷视飞通科技股份有限公司 Method and device for synchronous protection of streaming media live broadcast audio and video and computer equipment
CN113364646A (en) * 2021-06-03 2021-09-07 杭州朗和科技有限公司 Method, device and system for determining round-trip delay, storage medium and electronic equipment
CN113784073A (en) * 2021-09-28 2021-12-10 深圳万兴软件有限公司 Method, device and related medium for synchronizing sound and picture of sound recording and video recording
CN115412757A (en) * 2022-08-31 2022-11-29 海宁奕斯伟集成电路设计有限公司 Video playing method and device and electronic equipment

Also Published As

Publication number Publication date
CN105791939B (en) 2019-03-19

Similar Documents

Publication Publication Date Title
CN105791939B (en) The synchronous method and device of audio & video
JP5270567B2 (en) Method and system for synchronizing media streams across multiple devices
CN100579238C (en) Synchronous playing method for audio and video buffer
EP1773072A1 (en) Synchronization watermarking in multimedia streams
RU2392772C2 (en) Traffic formation in dormant state of user plane
CN111010614A (en) Method, device, server and medium for displaying live caption
US20060088000A1 (en) Terminal having plural playback pointers for jitter buffer
WO2005088931A1 (en) Timing of quality of experience metrics
WO2006137762A1 (en) Method for synchronizing the presentation of media streams in a mobile communication system and terminal for transmitting media streams
JP2009118487A (en) Network state capture and reproduction
RU2369978C2 (en) Method for transfer of packets in transfer system
CN103546662A (en) Audio and video synchronizing method in network monitoring system
CN109565466A Lip synchronization method and apparatus across multiple devices
CN108924631B Video generation method based on separate storage of audio and video streams
CN101272383B (en) Real-time audio data transmission method
CN101651815B (en) Visual telephone and method for enhancing video quality by utilizing same
US8320449B2 (en) Method for controlling video frame stream
CN101540871A (en) Method and terminal for synchronously recording sounds and images of opposite ends based on circuit domain video telephone
CN102932673B Method, system and device for transmitting and synthesizing a video signal and an audio signal
JP5092493B2 (en) Reception program, reception apparatus, communication system, and communication method
US20190089755A1 (en) Multiplexing data
US7817576B1 (en) Transitioning between multiple data streams of a media channel based on client conditions
CN114554242B (en) Live broadcast method and readable storage medium
Papadaki et al. Mobistream: Live multimedia streaming in mobile devices
KR100808981B1 (en) Timing of quality of experience metrics

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP02 Change in the address of a patent holder

Address after: 1110-08, 10th floor, No.8, Haidian North 2nd Street, Haidian District, Beijing 100080

Patentee after: BEIJING JIESIRUI TECHNOLOGY Co.,Ltd.

Address before: 100080, Beijing, Haidian Haidian District Road, 21, Zhongguancun intellectual property building, block B, 6

Patentee before: BEIJING JIESIRUI TECHNOLOGY Co.,Ltd.

CP02 Change in the address of a patent holder