CN112995730A - Sound and picture synchronous adjustment method and device, electronic equipment and medium - Google Patents

Sound and picture synchronous adjustment method and device, electronic equipment and medium

Info

Publication number
CN112995730A
CN112995730A (application CN202110338124.0A)
Authority
CN
China
Prior art keywords
signal
terminal
sound
video stream
audio signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110338124.0A
Other languages
Chinese (zh)
Inventor
王岁宏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wingtech Communication Co Ltd
Original Assignee
Wingtech Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wingtech Communication Co Ltd filed Critical Wingtech Communication Co Ltd
Priority to CN202110338124.0A priority Critical patent/CN112995730A/en
Publication of CN112995730A publication Critical patent/CN112995730A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • H04N21/4307Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/41Structure of client; Structure of client peripherals
    • H04N21/4104Peripherals receiving signals from specially adapted client devices
    • H04N21/4126The peripheral being portable, e.g. PDAs or mobile phones
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439Processing of audio elementary streams
    • H04N21/4398Processing of audio elementary streams involving reformatting operations of audio signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440218Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by transcoding between formats or standards, e.g. from MPEG-2 to MPEG-4

Abstract

The invention discloses a sound and picture synchronization adjustment method and device, an electronic device, and a medium. The method includes: parsing acquired video stream data to obtain an audio signal and a video signal, the video stream data being obtained by encoding and compressing a real-time video stream; acquiring a delay parameter T0 of a first terminal, the first terminal being an audio playback device; and sending the audio signal to the first terminal and playing the video signal after a delay of T0. By delaying the start of video playback, the audio signal and the video signal are synchronized, the synchronized playback of video is improved, and the user experience is further improved.

Description

Sound and picture synchronous adjustment method and device, electronic equipment and medium
Technical Field
The invention relates to the technical field of data transmission, and in particular to a sound and picture synchronization adjustment method and device, an electronic device, and a medium.
Background
True Wireless Stereo (TWS) is mainly applied to Bluetooth earphones and speakers and separates the left and right channels of a Bluetooth earphone wirelessly. As more and more people use TWS earphones, their high latency has become a persistent complaint. When a user plays a video on a smart terminal with a TWS earphone, the sound and the picture are noticeably out of sync: the speaker's mouth movements clearly differ from the voice being heard. How to keep sound and picture synchronized when a TWS earphone is used has therefore become an urgent problem to be solved.
Disclosure of Invention
Embodiments of the invention provide a sound and picture synchronization adjustment method and device, an electronic device, and a storage medium, which keep the audio and the picture synchronized in time, improve the synchronized playback of video, and further improve the user experience.
In order to solve the foregoing technical problem, an embodiment of the present application provides a method for adjusting audio and video synchronization, including:
analyzing the acquired video stream data to obtain an audio signal and a video signal, wherein the video stream data is data obtained by coding and compressing a real-time video stream;
acquiring a delay parameter T0 of a first terminal, wherein the first terminal is audio playing equipment;
and sending the audio signal to the first terminal, and playing the video signal after a delay of T0.
Optionally, parsing the acquired video stream data to obtain an audio signal and a video signal includes:
decoding the video stream data to obtain a decoded signal;
extracting the characteristics of the decoded signal to obtain a characteristic signal;
and separating based on the characteristic signal to obtain the audio signal and the video signal.
Optionally, the obtaining of the delay parameter T0 of the first terminal includes:
reading configuration parameters of a first terminal;
and analyzing and matching the configuration parameters to obtain the delay parameter T0.
Optionally, sending the audio signal to the first terminal includes:
performing voice compression on the audio signal using adaptive differential pulse code modulation to obtain a compressed signal, and sending the compressed signal to the first terminal.
Optionally, the method further comprises:
and communicating with the first terminal by adopting a first communication protocol and/or a second communication protocol, wherein the first communication protocol is a Bluetooth communication protocol, and the second communication protocol is a private communication protocol.
In order to solve the above technical problem, an embodiment of the present application further provides a device for adjusting audio and video synchronization, including:
the analysis module is used for analyzing the acquired video stream data to obtain an audio signal and a video signal;
an obtaining module, configured to obtain a delay parameter T0 of the first terminal;
and the sending module is used for sending the audio signal to the first terminal and playing the video signal after a delay of T0.
In order to solve the foregoing technical problem, an embodiment of the present application further provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the sound-picture synchronization adjustment method when executing the computer program.
In order to solve the above technical problem, an embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and the computer program, when executed by a processor, implements the steps of the sound-picture synchronization adjusting method.
According to the sound and picture synchronization adjustment method and device, the electronic device, and the storage medium described above, an audio signal and a video signal are obtained by parsing the acquired video stream data, the video stream data being obtained by encoding and compressing a real-time video stream; a delay parameter T0 of a first terminal is acquired, the first terminal being an audio playback device; and the audio signal is sent to the first terminal while the video signal is played after a delay of T0. Delaying the start of video playback synchronizes the audio signal and the video signal, improves the synchronized playback of video, and further improves the user experience.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed for describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the invention; a person of ordinary skill in the art can derive other drawings from them without inventive effort.
FIG. 1 is an exemplary system architecture diagram in which the present application may be applied;
FIG. 2 is a flow chart of one embodiment of a method for adjusting the synchronization of sound and picture of the present application;
FIG. 3 is a schematic structural diagram of an embodiment of a device for adjusting synchronization of sound and picture according to the present application;
FIG. 4 is a schematic diagram of a structure of one embodiment of an electronic device according to the present application.
Detailed Description
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs; the terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application; the terms "including" and "having," and any variations thereof, in the description and claims of this application and the description of the above figures are intended to cover non-exclusive inclusions. The terms "first," "second," and the like in the description and claims of this application or in the above-described drawings are used for distinguishing between different objects and not for describing a particular order.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
Referring to fig. 1, as shown in fig. 1, a system architecture 100 may include an electronic device 101 and a first terminal 102, where the electronic device 101 and the first terminal 102 are connected by a signal propagation medium 103.
The electronic device 101 may specifically be a mobile phone, a tablet computer, or other smart devices.
The first terminal 102 may specifically be a device for audio playing, such as a TWS headset.
The signal propagation medium 103 may include various connection types, such as Bluetooth and other wireless communication links.
It should be noted that the method for adjusting the audio-visual synchronization provided by the embodiment of the present application is executed by an electronic device.
Referring to fig. 2, fig. 2 shows a sound and picture synchronization adjustment method according to an embodiment of the present invention, described here by taking its application to the electronic device in fig. 1 as an example, as follows:
s201: analyzing the acquired video stream data to obtain an audio signal and a video signal, wherein the video stream data is obtained by coding and compressing a real-time video stream.
Specifically, a real-time video stream carries a large amount of data. To increase the transmission rate, this embodiment encodes and compresses the real-time video stream into video stream data, which is then transmitted to the electronic device, where it is parsed to obtain an audio signal and a video signal.
The audio signal refers to the sound frames in the video stream data, and the video signal refers to the picture frames. Specifically, the electronic device receives the video stream data and then separates it with a stream-separation tool, including but not limited to the ffmpeg command-line program, to obtain the audio signal and the video signal. It will be appreciated that parsing the acquired video stream data into an audio signal and a video signal separates the stream for subsequent processing of each signal.
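As a minimal sketch of this separation step (assuming the received stream has already been written to a local file and that ffmpeg is installed; the file names and "copy" codec options are illustrative, not specified by this embodiment):

```python
import subprocess

def separate_streams(stream_path: str,
                     audio_out: str = "audio.aac",
                     video_out: str = "video.mp4"):
    """Demux received video stream data into an audio signal file and a
    video signal file using the ffmpeg command-line program."""
    # -vn drops the video track, -an drops the audio track; "copy" avoids re-encoding.
    subprocess.run(
        ["ffmpeg", "-y", "-i", stream_path, "-vn", "-acodec", "copy", audio_out],
        check=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", stream_path, "-an", "-vcodec", "copy", video_out],
        check=True)
    return audio_out, video_out
```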
S202: acquiring a delay parameter T0 of the first terminal.
the first terminal refers to a device for playing video and audio signals, and the first terminal in this embodiment may be a TWS headset. The delay parameter T0 is a time preset by the first terminal to indicate a delay in playing the audio signal, and due to the limitation of stereo transmission of the first terminal, such as a TWS headset, there is a delay from the audio signal being emitted by the system to the beginning of playing the audio signal, while there is almost no delay in the video signal. Therefore, when the user plays the video, the user obviously feels that the sound and the picture are not synchronous, and the time difference exists between the human mouth shape and the heard voice, namely the numerical value of the delay parameter. Specifically, before the first terminal leaves the factory, the playing times of the audio signals of the plurality of second terminals may be tested a plurality of times, and the value of the delay parameter T0 may be determined. The delay parameter T0 is stored in the TWS headphone in advance, and when the video stream data is acquired, the delay parameter T0 of the first terminal is acquired. It is easy to understand that, the present embodiment is advantageous to ensure the synchronization of the audio signal and the video signal by obtaining the delay parameter T0, so as to control the playing time of the audio signal according to the delay parameter T0.
S203: sending the audio signal to the first terminal, and playing the video signal after a delay of T0.
Specifically, the audio signal is sent to the first terminal, and the start of video playback is pushed back by T0, for example by using a timer; the video signal is then played on the electronic device. In this way the audio signal and the video signal are played synchronously, the playback of the video stream data is improved, and the viewing experience of the user is improved.
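A minimal sketch of this step, assuming hypothetical send_audio() and play_video() callables that hand the audio to the first terminal and start local video playback (neither is defined by the embodiment):

```python
import threading

def play_with_delay(audio_signal, video_signal, t0_seconds,
                    send_audio, play_video):
    """Send the audio signal to the first terminal immediately, then start
    local video playback only after the delay parameter T0 has elapsed."""
    send_audio(audio_signal)  # audio leaves for the first terminal right away
    timer = threading.Timer(t0_seconds, play_video, args=(video_signal,))
    timer.start()             # video playback begins T0 seconds later
    return timer
```

Using a timer keeps the audio path free of extra buffering; only the locally rendered video is held back by T0.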
In this embodiment, an audio signal and a video signal are obtained by parsing the acquired video stream data; a delay parameter T0 of the first terminal is acquired; and the audio signal is sent to the first terminal while the video signal is played after a delay of T0. Delaying the start of video playback synchronizes the audio signal and the video signal, improves the playback of the video, and further improves the user experience.
In some optional implementations of this embodiment, in step S201, parsing the acquired video stream data to obtain an audio signal and a video signal includes:
decoding the video stream data to obtain a decoded signal;
extracting the characteristics of the decoded signal to obtain a characteristic signal;
and separating based on the characteristic signal to obtain the audio signal and the video signal.
In this embodiment, an FFMPEG decoder may be used to decode the video stream data into a decoded signal. Features are then extracted from the decoded signal to obtain a feature signal, the distances between the feature signal and a standard audio signal and a standard video signal are calculated, and the audio signal and the video signal are separated according to the calculated distances. Decoding the video stream data and extracting its features in this way enables fast separation of the stream and helps ensure the accuracy of the subsequent audio-video synchronization.
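As an illustrative sketch only (the embodiment specifies neither the feature type nor the distance metric), the distance-based assignment might look like this, with NumPy vectors standing in for the feature signal and the standard signals:

```python
import numpy as np

def assign_by_distance(feature_vectors, std_audio_feature, std_video_feature):
    """Assign each decoded feature vector to the audio or video signal
    according to its Euclidean distance to the standard templates."""
    audio_parts, video_parts = [], []
    for feature in feature_vectors:
        d_audio = np.linalg.norm(feature - std_audio_feature)
        d_video = np.linalg.norm(feature - std_video_feature)
        (audio_parts if d_audio <= d_video else video_parts).append(feature)
    return audio_parts, video_parts
```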
In some optional implementations of this embodiment, in step S202, acquiring the delay parameter T0 of the first terminal includes:
reading configuration parameters of a first terminal;
and analyzing and matching the configuration parameters to obtain the delay parameter T0.
In this embodiment, the configuration parameters are data identifying information about the first terminal; specifically, they may be obtained by identifying the nameplate of the first terminal. The configuration parameters are then parsed to determine the time-related parameters, the parameter items are matched according to a preset parameter naming rule, and the configuration parameter corresponding to the successfully matched item is taken as the delay parameter T0.
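A minimal sketch of this lookup, assuming the configuration parameters arrive as a key-value mapping and that the preset naming rule is simply "the key contains 'delay'" (both assumptions, not specified by the embodiment):

```python
def get_delay_parameter(config_params: dict, keyword: str = "delay") -> float:
    """Match configuration items against a naming rule and return the
    value of the successfully matched item as the delay parameter T0."""
    for key, value in config_params.items():
        if keyword in key.lower():
            return float(value)
    raise KeyError("no configuration item matches the delay-parameter naming rule")

# Hypothetical configuration read from the first terminal (values illustrative):
t0_seconds = get_delay_parameter({"model": "TWS-X1", "audio_delay_ms": 180}) / 1000.0
```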
In some optional implementations of this embodiment, in step S203, sending the audio signal to the first terminal includes:
performing voice compression on the audio signal using adaptive differential pulse code modulation to obtain a compressed signal, and sending the compressed signal to the first terminal.
To ensure communication efficiency, this embodiment transmits the audio data over a private protocol. Preferably, within this private protocol an adaptive differential pulse code modulation algorithm compresses the audio data for transmission; after transmission the data is decompressed and played, which improves the efficiency of audio data transmission.
Adaptive Differential Pulse Code Modulation (ADPCM) is a form of predictive coding. It improves on PCM by encoding the difference between the actual signal and a value predicted from previous samples. Its core ideas are: first, adapt the quantization step size, using a small step to encode small differences and a large step to encode large differences; second, estimate the next input sample from past samples so that the difference between the actual value and the predicted value stays as small as possible. The quantization interval used in ADPCM can also adapt automatically to the statistics of the difference signal to achieve near-optimal quantization, minimizing the distortion caused by quantization.
In this embodiment, voice compression and decompression use the relatively low-complexity ADPCM algorithm, so compression and decompression take less time, which reduces transmission delay, improves the efficiency of audio transmission, and improves communication quality.
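A compact ADPCM round trip can be sketched with Python's standard-library audioop module (available through Python 3.12, removed in 3.13), which implements IMA ADPCM; this only illustrates the compression idea and is not the transport format of the embodiment:

```python
import audioop  # standard library up to Python 3.12

def adpcm_compress(pcm_frames: bytes, sample_width: int = 2, state=None):
    """Compress 16-bit linear PCM to ADPCM (roughly 4:1) before sending."""
    adpcm_data, state = audioop.lin2adpcm(pcm_frames, sample_width, state)
    return adpcm_data, state

def adpcm_decompress(adpcm_data: bytes, sample_width: int = 2, state=None):
    """Restore linear PCM from ADPCM on the first terminal before playback."""
    pcm_frames, state = audioop.adpcm2lin(adpcm_data, sample_width, state)
    return pcm_frames, state
```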
Optionally, the sound and picture synchronization adjustment method further includes: communicating with the first terminal using a first communication protocol and/or a second communication protocol, where the first communication protocol is a Bluetooth communication protocol and the second communication protocol is a private communication protocol.
In an optional embodiment, the first communication protocol is a Bluetooth communication protocol and the second communication protocol is a private communication protocol whose channel bandwidth may differ from that of the Bluetooth communication protocol, so that the information streams do not interfere with each other. The communication frequency band of the private communication protocol may be the same as or different from that of the Bluetooth communication protocol; in some examples, the private communication protocol and the Bluetooth communication protocol both operate in the 2.4 GHz band.
Because the private communication protocol requires neither a large protocol stack nor a complicated transmission standard, wireless communication based on it generally has low latency, enabling data transmission with ultra-low delay.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present invention.
Fig. 3 is a schematic block diagram of a sound-picture synchronization adjusting apparatus corresponding to the sound-picture synchronization adjusting method according to the above embodiment. As shown in fig. 3, the device for adjusting the synchronization of sound and picture comprises an analysis module 31, an acquisition module 32 and a sending module 33. The functional modules are explained in detail as follows:
the analysis module 31 is configured to analyze the acquired video stream data to obtain an audio signal and a video signal, where the video stream data is obtained by encoding and compressing a real-time video stream;
the obtaining module 32 is configured to obtain a delay parameter T0 of a first terminal, where the first terminal is an audio playing device;
and the sending module 33 is configured to send the audio signal to the first terminal, and play the video signal after a delay of T0.
Optionally, the analysis module 31 includes:
the decoding unit is used for decoding the video stream data to obtain a decoded signal;
the extraction unit is used for extracting the characteristics of the decoded signal to obtain a characteristic signal;
and the separation unit is used for separating based on the characteristic signal to obtain the audio signal and the video signal.
Optionally, the obtaining module 32 includes:
the reading unit is used for reading the configuration parameters of the first terminal;
and the matching unit is used for parsing and matching the configuration parameters to obtain the delay parameter T0.
optionally, the sending module 33 includes: and the voice compression unit is used for performing voice compression on the audio signal in a self-adaptive differential pulse code modulation mode to obtain a compressed signal and sending the compressed signal to the first terminal.
For specific limitations of the sound and picture synchronization adjustment device, reference may be made to the limitations of the sound and picture synchronization adjustment method above, which are not repeated here. The modules of the device may be implemented wholly or partly in software, hardware, or a combination of the two. The modules may be embedded in or independent of the processor of the electronic device in hardware form, or stored in the memory of the electronic device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In order to solve the technical problem, an embodiment of the application further provides an electronic device. Referring to fig. 4, fig. 4 is a block diagram of a basic structure of the electronic device according to the embodiment.
The electronic device 4 comprises a memory 41, a processor 42, and a network interface 43 communicatively connected to each other via a system bus. Note that only an electronic device 4 with the components memory 41, processor 42, and network interface 43 is shown; it should be understood that not all of the illustrated components are required, and more or fewer components may be implemented instead. As understood by those skilled in the art, the electronic device here is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The electronic device may be a desktop computer, a notebook, a handheld computer, a mobile phone, or the like, and may interact with the user through a keyboard, a mouse, a remote controller, a touch panel, a voice-control device, or the like.
The memory 41 includes at least one type of readable storage medium, such as flash memory, a hard disk, a multimedia card, card-type memory (e.g., SD card memory), Random Access Memory (RAM), Static Random Access Memory (SRAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Programmable Read-Only Memory (PROM), magnetic memory, a magnetic disk, or an optical disk. In some embodiments, the memory 41 may be an internal storage unit of the electronic device 4, such as a hard disk or memory of the electronic device 4. In other embodiments, the memory 41 may also be an external storage device of the electronic device 4, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash Card provided on the electronic device 4. Of course, the memory 41 may include both an internal storage unit and an external storage device of the electronic device 4. In this embodiment, the memory 41 is generally used to store the operating system installed on the electronic device 4 and various application software, such as the program code for controlling electronic files; it may also be used to temporarily store data that has been output or is to be output.
The processor 42 may in some embodiments be a Central Processing Unit (CPU), a controller, a microcontroller, a microprocessor, or another data processing chip. The processor 42 is typically used to control the overall operation of the electronic device 4. In this embodiment, the processor 42 is configured to run the program code stored in the memory 41 or to process data, such as the program code for controlling electronic files.
The network interface 43 may comprise a wireless or wired network interface and is generally used to establish a communication connection between the electronic device 4 and other electronic devices.
The present application further provides another embodiment: a computer-readable storage medium storing a computer program executable by at least one processor, so that the at least one processor performs the steps of the sound and picture synchronization adjustment method described above.
From the above description of the embodiments, those skilled in the art will clearly understand that the methods of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and of course also by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application may be embodied in a software product stored on a storage medium (such as ROM/RAM, a magnetic disk, or an optical disk) that includes instructions for enabling a terminal device (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the methods of the embodiments of the present application.
It should be understood that the above embodiments are merely illustrative of some, not all, implementations of the invention, and that the accompanying drawings show preferred embodiments without limiting the scope of the patent. The application may be embodied in many different forms; these embodiments are provided so that the disclosure will be thorough. Although the application has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described above or substitute equivalents for some of their features. Any equivalent structure made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, falls within the protection scope of the present application.

Claims (10)

1. A sound and picture synchronization adjustment method applied to an electronic device, characterized by comprising:
analyzing the acquired video stream data to obtain an audio signal and a video signal, wherein the video stream data is data obtained by coding and compressing a real-time video stream;
acquiring a delay parameter T0 of a first terminal, wherein the first terminal is audio playing equipment;
and sending the audio signal to the first terminal, and playing the video signal after a delay of T0.
2. The method for adjusting audio-visual synchronization of claim 1, wherein parsing the acquired video stream data to obtain an audio signal and a video signal comprises:
decoding the video stream data to obtain a decoded signal;
extracting the characteristics of the decoded signal to obtain a characteristic signal;
and separating based on the characteristic signal to obtain the audio signal and the video signal.
3. The method for adjusting the synchronization of sound and picture according to claim 1, wherein said obtaining the delay parameter T0 of the first terminal comprises:
reading configuration parameters of a first terminal;
and analyzing and matching the configuration parameters to obtain the delay parameter T0.
4. The method for adjusting sound and picture synchronization as claimed in claim 1 or 2, wherein sending the audio signal to the first terminal comprises:
performing voice compression on the audio signal using adaptive differential pulse code modulation to obtain a compressed signal, and sending the compressed signal to the first terminal.
5. The method for adjusting the synchronization of sound and picture according to claim 1, wherein the method further comprises:
and communicating with the first terminal by adopting a first communication protocol and/or a second communication protocol, wherein the first communication protocol is a Bluetooth communication protocol, and the second communication protocol is a private communication protocol.
6. A sound and picture synchronous adjusting device is characterized by comprising:
the analysis module is used for analyzing the acquired video stream data to obtain an audio signal and a video signal, wherein the video stream data is data obtained by coding and compressing a real-time video stream;
the device comprises an acquisition module, a delay module and a processing module, wherein the acquisition module is used for acquiring a delay parameter T0 of a first terminal, and the first terminal is audio playing equipment;
and the sending module is used for sending the audio signal to the first terminal and playing the video signal after a delay of T0.
7. The device for synchronously adjusting sound and picture according to claim 6, wherein said obtaining module comprises:
the reading unit is used for reading the configuration parameters of the first terminal;
and the matching unit is used for analyzing and matching the configuration parameters to obtain the delay parameter T0.
8. The device for adjusting synchronization of sound and picture according to claim 6, wherein said transmitting module comprises:
and the voice compression unit is used for performing voice compression on the audio signal using adaptive differential pulse code modulation to obtain a compressed signal and sending the compressed signal to the first terminal.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method for adjusting the synchronization of sound and picture according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for adjusting the synchronization of sound and picture according to any one of claims 1 to 5.
CN202110338124.0A 2021-03-30 2021-03-30 Sound and picture synchronous adjustment method and device, electronic equipment and medium Pending CN112995730A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110338124.0A CN112995730A (en) 2021-03-30 2021-03-30 Sound and picture synchronous adjustment method and device, electronic equipment and medium

Publications (1)

Publication Number Publication Date
CN112995730A true CN112995730A (en) 2021-06-18

Family

ID=76338069

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110338124.0A Pending CN112995730A (en) 2021-03-30 2021-03-30 Sound and picture synchronous adjustment method and device, electronic equipment and medium

Country Status (1)

Country Link
CN (1) CN112995730A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109257642A (en) * 2018-10-12 2019-01-22 Oppo广东移动通信有限公司 Video resource playback method, device, electronic equipment and storage medium
CN109587542A (en) * 2018-12-27 2019-04-05 北京奇艺世纪科技有限公司 Audio, video data synchronizer, method, data processing equipment, medium
CN111988647A (en) * 2020-08-27 2020-11-24 广州视源电子科技股份有限公司 Sound and picture synchronous adjusting method, device, equipment and medium

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114302192A (en) * 2021-12-15 2022-04-08 广州小鹏汽车科技有限公司 Sound and picture synchronization method and device, vehicle and storage medium
CN114302192B (en) * 2021-12-15 2023-06-30 广州小鹏汽车科技有限公司 Sound and picture synchronization method and device, vehicle and storage medium
CN115942021A (en) * 2023-02-17 2023-04-07 央广新媒体文化传媒(北京)有限公司 Audio and video stream synchronous playing method and device, electronic equipment and storage medium
CN116958331A (en) * 2023-09-20 2023-10-27 四川蜀天信息技术有限公司 Sound and picture synchronization adjusting method and device and electronic equipment
CN116958331B (en) * 2023-09-20 2024-01-19 四川蜀天信息技术有限公司 Sound and picture synchronization adjusting method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN112995730A (en) Sound and picture synchronous adjustment method and device, electronic equipment and medium
US11109138B2 (en) Data transmission method and system, and bluetooth headphone
CN104980788B (en) Video encoding/decoding method and device
CN105306110B (en) A kind of method and system realized synchronous music and played
TWI513320B (en) Video conferencing device and lip synchronization method thereof
CN107682752B (en) Method, device and system for displaying video picture, terminal equipment and storage medium
CN112423075B (en) Audio and video timestamp processing method and device, electronic equipment and storage medium
CN108847248B (en) Bluetooth device audio processing method, system, readable storage medium and Bluetooth device
WO2020155964A1 (en) Audio/video switching method and apparatus, and computer device and readable storage medium
CN111885412B (en) HDMI signal screen transmission method and wireless screen transmission device
CN213906675U (en) Portable wireless bluetooth recording equipment
CN111352605A (en) Audio playing and sending method and device
CN111796794B (en) Voice data processing method, system and virtual machine
CN110022510B (en) Sound vibration file generation method, sound vibration file analysis method and related device
CN112565923A (en) Audio and video stream processing method and device, electronic equipment and storage medium
CN115767158A (en) Synchronous playing method, terminal equipment and storage medium
CN113422997B (en) Method and device for playing audio data and readable storage medium
CN115223577A (en) Audio processing method, chip, device, equipment and computer readable storage medium
CN111556406B (en) Audio processing method, audio processing device and earphone
CN114639392A (en) Audio processing method and device, electronic equipment and storage medium
CN113613221A (en) TWS master device, TWS slave device, audio device and system
CN113055706A (en) Video synthesis method and device, electronic equipment and storage medium
CN111385780A (en) Bluetooth audio signal transmission method and device
CN111355996A (en) Audio playing method and computing device
CN113261300B (en) Audio sending and playing method and smart television

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination