CN115767158A - Synchronous playing method, terminal equipment and storage medium - Google Patents

Synchronous playing method, terminal equipment and storage medium

Info

Publication number
CN115767158A
CN115767158A
Authority
CN
China
Prior art keywords
information
playing
audio
video
stream
Prior art date
Legal status
Pending
Application number
CN202211347078.1A
Other languages
Chinese (zh)
Inventor
张洪译
Current Assignee
Shenzhen Kaihong Digital Industry Development Co Ltd
Original Assignee
Shenzhen Kaihong Digital Industry Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Kaihong Digital Industry Development Co Ltd filed Critical Shenzhen Kaihong Digital Industry Development Co Ltd
Priority to CN202211347078.1A priority Critical patent/CN115767158A/en
Publication of CN115767158A publication Critical patent/CN115767158A/en
Pending legal-status Critical Current

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application provides a synchronous playing method, a terminal device and a storage medium, relating to the field of computer technologies. The method is applied to a terminal device connected to a plurality of playing devices through a distributed soft bus, and comprises the following steps: acquiring audio and video information, and device information and environment information of the plurality of playing devices; based on the distributed soft bus, synchronously processing the audio and video information according to the device information and the environment information of the plurality of playing devices to obtain target audio and video data corresponding to each playing device; and sending the target audio and video data to the corresponding playing devices, so that each playing device plays according to its corresponding target audio and video data. The embodiments of the application aim to realize audio and video synchronization across the playing devices and to improve the user experience of each playing device.

Description

Synchronous playing method, terminal equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a synchronous playing method, a terminal device, and a storage medium.
Background
With the development of intelligent terminals, intelligent playback devices keep emerging. In audio and video transmission systems such as home theaters, smart home theaters have begun to replace traditional speakers with smart speakers for playing stereo music alongside smart-TV video.
With the continuous development of the mobile internet, merely synchronizing audio and video across the playing devices of a smart home theater can no longer satisfy users' viewing expectations. In the prior art, environmental factors are not considered during audio and video synchronization; for example, when a device is too close to the user or the ambient noise is too loud, the user has to repeatedly adjust the playing settings of each playing device. Such operation is cumbersome, and the user experience is poor.
Disclosure of Invention
The application provides a synchronous playing method, a terminal device and a storage medium, which aim to improve the user experience of each playing device while realizing audio and video synchronization across the playing devices.
In a first aspect, the present application provides a synchronous playing method, which is applied to a terminal device, where the terminal device is connected to multiple playing devices through a distributed soft bus, and the method includes:
acquiring audio and video information, and device information and environment information of the plurality of playing devices;
based on the distributed soft bus, synchronously processing the audio and video information according to the device information and the environment information of the plurality of playing devices to obtain target audio and video data corresponding to each playing device;
and sending the target audio and video data to the corresponding playing devices, so that each playing device plays according to its corresponding target audio and video data.
In a second aspect, the present application provides a terminal device comprising a memory and a processor;
the memory for storing a computer program;
the processor is configured to execute the computer program and implement the synchronous playing method when executing the computer program.
In a third aspect, the present application provides a computer-readable storage medium storing a computer program, which when executed by a processor causes the processor to implement the synchronized playback method as described above.
The application provides a synchronous playing method, a terminal device and a storage medium. The synchronous playing method is applied to the terminal device, and the terminal device is connected with a plurality of playing devices through a distributed soft bus. The method comprises: acquiring audio and video information, and device information and environment information of the plurality of playing devices; based on the distributed soft bus, synchronously processing the audio and video information according to the device information and the environment information of the plurality of playing devices to obtain target audio and video data corresponding to each playing device; and sending the target audio and video data to the corresponding playing devices, so that each playing device plays according to its corresponding target audio and video data. Thus, while audio and video synchronization of the playing devices is realized, each playing device outputs audio and video adapted to its own synchronization requirements, so that the playing settings of each playing device meet the user's needs and the user experience of each playing device is improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description are merely examples of the present application; for those skilled in the art, other drawings can be obtained from these drawings without inventive effort.
Fig. 1 is a schematic view of an application scenario of a synchronous playing method provided in an embodiment of the present application;
fig. 2 is a schematic flowchart illustrating steps of a synchronous playing method according to an embodiment of the present application;
fig. 3 is a schematic application scenario diagram of another synchronous playing method provided in an embodiment of the present application;
fig. 4 is a schematic block diagram of a structure of a terminal device provided in an embodiment of the present application.
it is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The flow diagrams depicted in the figures are merely illustrative and do not necessarily include all of the elements and operations/steps, nor do they necessarily have to be performed in the order depicted. For example, some operations/steps may be decomposed, combined or partially combined, so that the actual execution sequence may be changed according to the actual situation.
It is to be understood that the terminology used in the description of the present application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. As used in the specification of the present application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that, for clarity in describing the technical solutions of the embodiments of the present application, the words "first", "second", and the like are used in the embodiments to distinguish items that are identical or similar in function and effect. For example, a first callback function and a second callback function are merely different callback functions, with no ordering implied. Those skilled in the art will appreciate that the words "first", "second", and the like do not limit quantity or execution order, nor do they denote importance.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
Referring to fig. 1, fig. 1 is a schematic view of an application scenario of a synchronous playing method according to an embodiment of the present application. An application scenario of the synchronous playing method will be described below with reference to fig. 1.
As shown in fig. 1, the synchronous playing method is applied to a terminal device, wherein the terminal device and a plurality of playing devices are connected through a distributed soft bus.
The communication connection between the terminal device 11 and the plurality of playback devices 12 may be wired or wireless, for example via Wi-Fi, Bluetooth, Ethernet, or 3G/4G/5G communication. The plurality of playback devices 12 may be independent of each other and disposed at different spatial locations, or may be integrated in the same device. The distributed soft bus provides a unified distributed communication capability for interconnection and intercommunication among different devices, with functions such as discovery, connection, networking/topology management, a task bus, and a data bus.
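For illustration only, the following Python sketch models the capabilities attributed to the distributed soft bus (discovery, connection, and data transport). Every class and method name here is an assumption made for this sketch, not the actual HarmonyOS soft bus API.

```python
# Hypothetical abstraction of the distributed soft bus described above.
# Class and method names are illustrative, not a real HarmonyOS API.
from dataclasses import dataclass, field


@dataclass
class SoftBus:
    """Models discovery, connection, and data transport over one bus."""
    connected: dict = field(default_factory=dict)  # device_id -> handler

    def discover(self) -> list[str]:
        # A real bus would broadcast and listen; here we return known peers.
        return list(self.connected)

    def connect(self, device_id: str, handler) -> None:
        # Register a receive callback for a peer device.
        self.connected[device_id] = handler

    def send(self, device_id: str, payload) -> None:
        # Deliver data to the connected peer's handler.
        self.connected[device_id](payload)
```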
In this embodiment, the terminal device 11 and the plurality of playback devices 12 are all configured with the Hongmeng operating system and provided with a distributed soft bus; the Hongmeng operating systems of the devices can be communicatively connected through the distributed soft bus, thereby implementing functions such as resource fusion, data sharing, and function sharing.
The playing device 12 may be a device for playing video or audio, such as a speaker, a display screen, a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, or a wearable device. The terminal device 11 may be a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, a wearable device, or the like, capable of synchronously processing audio and video information.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating a synchronous playing method according to an embodiment of the present application. The synchronous playing method can be applied to terminal equipment. The terminal equipment is connected with a plurality of playing equipment through a distributed soft bus. The synchronous playing method can output corresponding audio and video according to the synchronization requirements when different playing devices output audio and video while realizing the audio and video synchronization of the playing devices, thereby enabling the playing settings of the playing devices to meet the requirements of users and improving the use experience of the users for the playing devices.
As shown in fig. 2, the synchronized playback method may be applied to a terminal device, and the synchronized playback method includes steps S101 to S103.
S101, obtaining audio and video information, and device information and environment information of the plurality of playing devices.
The audio and video information is obtained by integrating audio information and video information. Specifically, the audio and video information may be acquired from outside the terminal device, such as the internet, or the audio information and the video information may be acquired from a plurality of recording devices and then integrated into the audio and video information. The device information of a playback device may include parameter information and interface information: the parameter information indicates parameters of the playback device, such as the highest supported frame rate and the highest supported frequency, while the interface information indicates the device interfaces the playback device supports, which may differ between playback devices. The environment information describes the environment in which the playing devices are located, and may specifically include position information of the user and of each playing device, ambient brightness information, ambient noise information, and the like.
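As an illustration of the information gathered in step S101, a minimal Python sketch of the device and environment records follows; the field names are assumptions, since the patent does not fix a concrete schema.

```python
# Illustrative data structures for the information gathered in S101.
# Field names are assumptions; the patent does not define a schema.
from dataclasses import dataclass


@dataclass
class DeviceInfo:
    device_id: str
    max_frame_rate: int   # highest supported video frame rate (fps)
    max_frequency: int    # highest supported audio sample rate (Hz)
    interface: str        # device interface identifier


@dataclass
class EnvironmentInfo:
    user_position: tuple[float, float]    # user position in the room (m)
    device_position: tuple[float, float]  # playback device position (m)
    ambient_brightness: float             # brightness at the device (lux)
    ambient_noise: float                  # noise at the device (dB)
```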
In some embodiments, the terminal device is further connected to a plurality of recording devices through a distributed soft bus to obtain audio information or video information of each recording device; and based on the distributed soft bus, carrying out synchronous processing on the audio information and the video information to generate audio and video information corresponding to the terminal equipment. Therefore, the audio information and the video information acquired from different recording devices can be synchronously processed, and the terminal device can conveniently output corresponding audio and video according to the synchronization requirement.
As shown in fig. 3, the terminal device 11 is further connected with a plurality of recording devices 13 through a distributed soft bus.
The communication connection between the terminal device 11 and the plurality of recording devices 13 may be wired or wireless, for example via Wi-Fi, Bluetooth, Ethernet, or 3G/4G/5G communication. The recording devices 13 may be independent of each other and disposed at different spatial positions, or may be integrated in the same device.
In the embodiment of the present application, the terminal device 11 and the plurality of recording devices 13 are all configured with the Hongmeng operating system and provided with a distributed soft bus; the Hongmeng operating systems of the devices can be communicatively connected through the distributed soft bus, thereby implementing functions such as resource fusion, data sharing, and function sharing.
The recording device 13 may be a microphone, a camera, a mobile phone, a tablet computer, a notebook computer, a desktop computer, a personal digital assistant, a wearable device, or another device capable of recording video or audio.
For example, in application scenarios such as live broadcast recording, a plurality of audio recording devices may be used for audio recording and a plurality of video recording devices for video recording to achieve a better recording effect; in particular, videos at different viewing angles may be recorded by different video recording devices. Based on the distributed soft bus provided by the application, the audio information acquired by the plurality of recording devices and the video information acquired by the plurality of recording devices are synchronized and synthesized to generate the audio and video information corresponding to the terminal device. In this way, the audio and video to be output or input can be synchronized according to the distributed characteristics of the Hongmeng system to obtain the required audio and video information.
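A minimal sketch of such multi-recorder synchronization, assuming each track carries only a name and a start timestamp, is shown below; a real system would also resample and trim the media itself.

```python
# Sketch of synchronizing streams from several recording devices:
# every track is shifted to the latest common start time. The track
# structure ({"name", "start"}) is an assumption for this sketch.

def align_recordings(audio_tracks, video_tracks):
    """Return, per track, the number of seconds it must skip so that
    all tracks begin at the same instant."""
    tracks = audio_tracks + video_tracks
    common_start = max(t["start"] for t in tracks)
    return {t["name"]: common_start - t["start"] for t in tracks}


# Example: the camera started 0.25 s after the microphone.
offsets = align_recordings(
    audio_tracks=[{"name": "mic-1", "start": 10.00}],
    video_tracks=[{"name": "cam-1", "start": 10.25}],
)
# offsets == {"mic-1": 0.25, "cam-1": 0.0}
```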
And S102, based on the distributed soft bus, according to the device information and the environment information of the plurality of playing devices, carrying out synchronous processing on the audio and video information to obtain target audio and video data corresponding to each playing device.
The target audio and video data is synchronized according to the playing setting information of each playing device. The playing setting information can be determined according to the device information and the environment information, so that the terminal device can synchronously process the audio and video information according to the playing setting information corresponding to each playing device.
In some embodiments, the playing setting information corresponding to each playing device is obtained according to the device information and the environment information of the plurality of playing devices; and according to the playing setting information, synchronously processing the audio stream and the video stream in the audio and video information to obtain target audio and video data corresponding to each playing device. Therefore, the playing setting information can be accurately determined according to the equipment information and the environment information, and the target audio and video data corresponding to each playing equipment can be accurately generated.
The playing setting information may include frame rate configuration information, frequency configuration information, playing mode, playing volume, and other information corresponding to the playing device. The frame rate configuration information may be used to indicate a frame rate that is most suitable for playing audio and video for the playing device; the frequency configuration information may be used to indicate a most suitable frequency for playing the audio/video for the playing device. Taking an audio playing device as an example, the playing mode may include a stereo mode, a surround sound mode, a pure human voice mode, and the like; taking a video playing device as an example, the playing mode may include a night mode, a live mode, a movie mode, and the like.
Specifically, the frame rate configuration information, frequency configuration information, playing mode, playing volume and other settings suitable for a playing device are determined according to its parameter information and the environment information; the audio and video information is decoded to obtain its audio stream and video stream; and the audio stream and the video stream are then synchronously processed according to the playing setting information to obtain the target audio and video data corresponding to each playing device.
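A minimal sketch of this per-device pipeline follows; the helper functions stand in for real demuxing and codec work, and all names are assumptions (DeviceInfo refers to the record sketched earlier).

```python
# Sketch of S102 as a pipeline: decode the audio/video information
# once, then adapt the streams to each playback device. The helpers
# are trivial stand-ins for real demuxing and codec work.

def decode(av_info):
    # A real implementation would demux and decode compressed data.
    return av_info["audio"], av_info["video"]

def convert_frame_rate(video, fps):
    return {"frames": video["frames"], "fps": fps}

def resample(audio, rate):
    return {"samples": audio["samples"], "rate": rate}

def synchronize(av_info, devices):
    """Produce target audio/video data for every playback device."""
    audio, video = decode(av_info)
    return {
        dev.device_id: {
            "video": convert_frame_rate(video, dev.max_frame_rate),
            "audio": resample(audio, dev.max_frequency),
        }
        for dev in devices
    }
```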
For example, if a playback device includes a bass speaker, it may be determined, according to the parameter information and the environment information of the playback device and the modes specific to the device, that the playback device should use a bass mode.
For example, if the application scene of the playing device is a voice broadcast scene such as radio playback, it may be determined, according to the parameter information and the environment information of the playing device and the user requirement, that the playing device should use the pure human voice mode.
For example, if the application scene of the playback device is a movie or a live broadcast, it may be determined, according to the parameter information and the environment information of the playback device and the user requirement, that the playback device should use the live broadcast mode or the movie mode.
In some embodiments, the environment information includes position information, environment brightness information, and environment noise information of a user and each of the playback devices; determining the distance between the user and each playing device according to the position information; and determining the playing setting information of each playing device according to the distance, the ambient brightness information and the ambient noise information. Thereby, the playing setting information of the playing device can be accurately determined according to the environment information.
The user and each playing device are located in a preset space within a certain range, the position information of the user is used for indicating the position of the user in the preset space, and the position information of the playing device is used for indicating the position of the playing device in the preset space. The ambient brightness information is the ambient brightness of the position where the playing device is located, and the ambient noise information is the noise level of the position where the playing device is located.
Specifically, the distance between the user and the playing device may be calculated according to the position information of the user and the position information of the playing device, and the playing setting information that the playing device best meets is determined from a plurality of preset playing modes according to the distance between the user and the playing device, the ambient brightness information, and the ambient noise information.
Illustratively, suppose there are a first playing device and a second playing device. If the distance between the user and the first playing device exceeds a preset distance threshold, the user is far from the first playing device and close to the second playing device. In this case, the playing volume of the first playing device can be set higher and that of the second playing device lower, so that the volumes the user receives from the two devices are the same; meanwhile, the playing time of the audio stream of the first playing device is advanced by a preset time, which prevents the audio played by the first and second playing devices from producing an echo that would affect the user experience.
The preset distance threshold may be any distance, such as 10m; the preset time may be any time, such as 0.1s, and is not limited in this respect.
For example, if the ambient brightness at the playback device is lower than a preset ambient brightness, the environment of the playback device is dark, and the screen brightness of the playback device may be turned down. This helps the user's eyes adapt to the darker environment and improves the viewing experience.
The preset ambient brightness may be any brightness value, and is not specifically limited herein.
For example, if the ambient noise level at the playback device is higher than a preset noise decibel, the environment of the playback device is noisy, and the playing volume of the playback device may be turned up. This prevents the noise from drowning out the audio played by the playback device.
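The distance, brightness and noise heuristics above can be sketched as follows; every threshold and value is an assumption, since the patent deliberately leaves them open (EnvironmentInfo refers to the record sketched earlier).

```python
# Sketch of the heuristics above: raise volume and advance audio for
# far devices, dim the screen in dark rooms, raise volume in noisy
# rooms. All thresholds and values below are assumptions.
import math

DISTANCE_THRESHOLD_M = 10.0   # the "preset distance threshold"
LEAD_TIME_S = 0.1             # the "preset time" for far devices
DARK_LUX = 50.0               # assumed "preset ambient brightness"
NOISY_DB = 60.0               # assumed "preset noise decibel"

def play_settings(env):
    distance = math.dist(env.user_position, env.device_position)
    settings = {"volume": 0.5, "audio_lead_s": 0.0, "brightness": 0.8}
    if distance > DISTANCE_THRESHOLD_M:
        settings["volume"] = 0.8                # far device plays louder
        settings["audio_lead_s"] = LEAD_TIME_S  # and starts earlier
    if env.ambient_brightness < DARK_LUX:
        settings["brightness"] = 0.4            # dim screen in the dark
    if env.ambient_noise > NOISY_DB:
        settings["volume"] = min(1.0, settings["volume"] + 0.2)
    return settings
```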
In some embodiments, after determining the playing setting information of each playing device, acquiring synchronization requirement information; determining a synchronization parameter corresponding to the playing device according to the synchronization requirement information; and modifying the playing setting information according to the synchronization parameters so as to synchronously process the audio stream and the video stream in the audio and video information according to the modified playing setting information. Therefore, the playing setting information can be corrected according to the synchronous demand information, and the target audio and video data which better meets the user demand can be obtained.
The synchronization requirement information may include requirement information input by the user. For example, if the requirement information indicates that the influence of distance should be ignored, the distance factor is not considered during synchronous processing; if the requirement information indicates that noise should be removed, only the human voice is retained during synchronous processing. The synchronization parameter is the item of playing setting information that needs to be corrected.
Illustratively, if the requirement information input by the user indicates that the influence of the distance is not considered, the synchronization parameter is the distance between the user and each of the playing devices, so as to correct the playing setting information, so that the distance factor is not considered when subsequently performing synchronization processing on the audio stream and the video stream in the audio and video information.
Illustratively, if the requirement information input by the user indicates that noise is set to be removed, the synchronization parameter is ambient noise information, so as to modify the playing setting information, so that the noise factor is not considered when the audio stream and the video stream in the audio and video information are synchronized subsequently.
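A minimal sketch of this correction step, assuming the requirement information arrives as a set of hypothetical flags, is shown below.

```python
# Sketch of correcting play settings with the user's synchronization
# requirement information: the named factor is simply neutralized
# before the streams are processed. Flag and key names are assumptions.

def apply_requirements(settings: dict, requirements: set) -> dict:
    corrected = dict(settings)
    if "ignore_distance" in requirements:
        # Drop the distance-derived corrections described above.
        corrected["volume"] = 0.5
        corrected["audio_lead_s"] = 0.0
    if "remove_noise" in requirements:
        # Keep only the human voice during synchronous processing.
        corrected["voice_only"] = True
    return corrected
```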
In some embodiments, the playing setting information includes frame rate configuration information and frequency configuration information, and according to the frame rate configuration information, frame division analysis and frame rate conversion processing are performed on a video stream in the audio and video information to obtain a target video stream corresponding to the playing device; according to the frequency configuration information, performing track splitting analysis and frequency conversion processing on the audio stream in the audio and video information to obtain a target audio stream corresponding to the playing device; and generating target audio and video data corresponding to the playing equipment according to the target video stream and the target audio stream. Therefore, frame rate conversion can be carried out on the video stream and frequency conversion can be carried out on the audio stream, and therefore target audio and video data which meet the requirements of users better can be obtained.
The target video stream is a video stream subjected to frame rate conversion, and the target audio stream is an audio stream subjected to frequency conversion.
Specifically, according to frame rate configuration information, each frame of picture in the video stream in the audio/video information is analyzed, and frame rate conversion processing is performed according to the frame rate supported by the playing device, so as to obtain a target video stream corresponding to the playing device.
Specifically, according to the frequency configuration information, the audio of each track in the audio stream of the audio and video information is analyzed, and frequency conversion processing is performed according to the frequencies supported by the playing device to obtain the target audio stream corresponding to the playing device.
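Naive illustrations of the two conversions are sketched below: frame-rate change by index mapping and audio resampling by linear interpolation. A production system would use proper codecs and filters; this only shows the shape of the computation.

```python
# Toy versions of the conversions in S102; not production quality.

def change_frame_rate(frames: list, src_fps: float, dst_fps: float) -> list:
    """Duplicate or drop frames to hit the target frame rate."""
    n_out = max(1, round(len(frames) * dst_fps / src_fps))
    return [frames[min(len(frames) - 1, int(i * src_fps / dst_fps))]
            for i in range(n_out)]

def change_sample_rate(samples: list, src_hz: int, dst_hz: int) -> list:
    """Linearly interpolate samples to the target sample rate."""
    n_out = max(1, round(len(samples) * dst_hz / src_hz))
    out = []
    for i in range(n_out):
        pos = i * (len(samples) - 1) / max(1, n_out - 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```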
In some embodiments, timestamp information of the target video stream and timestamp information of the target audio stream are obtained; and according to the timestamp information, performing time calibration and integration processing on the target video stream and the target audio stream to generate target audio and video data corresponding to the playing equipment. Therefore, the target video stream and the target audio stream of each playing device can be synchronized, and the condition that the sound and the picture are not synchronized is avoided.
The time stamp information of the target video stream includes the playing time of the first frame picture in the target video stream, and the time stamp information of the target audio stream includes the playing time of the audio corresponding to the first frame picture in the target audio stream.
Specifically, according to the timestamp information, time calibration is performed on the playing time of a first frame picture in the target video stream and the playing time of an audio corresponding to the first frame picture in the target audio stream, so that the playing time of the first frame picture in the target video stream is matched with the playing time of the audio corresponding to the first frame picture in the target audio stream, and finally, the frame pictures and the audio of multiple tracks are integrated to generate target audio and video data corresponding to the playing device.
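A minimal sketch of this time calibration, assuming each stream carries a start timestamp in seconds, is shown below.

```python
# Sketch of the time calibration above: compute the skew between the
# first video frame and its corresponding audio, then record the
# shift the audio stream needs. Field names are assumptions.

def calibrate_and_merge(video: dict, audio: dict) -> dict:
    """video and audio each carry a 'start_ts' in seconds."""
    skew = audio["start_ts"] - video["start_ts"]
    return {
        "start_ts": video["start_ts"],
        "frames": video["frames"],
        "samples": audio["samples"],
        # Positive skew means the audio starts late and must be
        # advanced by this amount to match the first frame.
        "audio_shift_s": -skew,
    }
```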
And S103, sending the target audio and video data to the corresponding playing devices so that each playing device plays according to the corresponding target audio and video data.
The target audio and video data are the audio and video data which are synchronously processed according to the playing devices, when the playing devices play the corresponding target audio and video data, not only can the sound and picture synchronization of the playing devices be realized, but also the audio synchronization and the video synchronization among a plurality of playing devices can be realized, meanwhile, the playing setting of each playing device meets the requirements of users, and the use experience of the users on each playing device is improved.
In some embodiments, before the target audio and video data is sent, a device interface corresponding to each playing device is determined according to the interface information, and the target audio and video data is then sent to the corresponding playing device through that device interface, so that each playing device plays according to its corresponding target audio and video data. Because the distributed synchronization and integration are built on the distributed soft bus framework, data flows quickly between the interfaces and synchronization is fast.
Specifically, different playing devices may correspond to different device interfaces, so that interface information corresponding to the playing device needs to be acquired, and a device interface corresponding to the playing device is determined according to the interface information, so that target audio/video data can be quickly and accurately sent to the corresponding playing device through the device interface, so that each playing device plays according to the corresponding target audio/video data, and thus, each playing device can play audio/video synchronously.
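A minimal sketch of this interface dispatch follows; the interface names and transports are assumptions, with the SoftBus sketch from earlier reused as one transport.

```python
# Sketch of S103: select the device interface from the interface
# information and send the target data through it.

def make_senders(bus):
    # 'bus' is the SoftBus sketch from earlier; other transports
    # are represented by placeholder callables.
    return {
        "softbus": bus.send,
        "hdmi": lambda device_id, payload: print(f"HDMI -> {device_id}"),
    }

def dispatch(senders, device_id: str, interface: str, payload) -> None:
    sender = senders.get(interface)
    if sender is None:
        raise ValueError(f"unsupported device interface: {interface}")
    sender(device_id, payload)
```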
Referring to fig. 4, fig. 4 is a schematic block diagram of a terminal device according to an embodiment of the present application. As shown in fig. 4, the terminal device 200 includes one or more processors 201 and a memory 202, and the processors 201 and the memory 202 are connected by a bus, such as an I2C (Inter-integrated Circuit) bus.
Wherein the one or more processors 201 work individually or collectively to perform the steps of the synchronized playback method provided by the above embodiments.
Specifically, the Processor 201 may be a Micro-controller Unit (MCU), a Central Processing Unit (CPU), a Digital Signal Processor (DSP), or the like.
Specifically, the Memory 202 may be a Flash chip, a Read-Only Memory (ROM), a magnetic disk, an optical disk, a USB disk, or a removable hard disk.
The processor 201 is configured to run a computer program stored in the memory 202, and when executing the computer program, implement the steps of the synchronized playing method provided by the foregoing embodiments.
Illustratively, the processor 201 is configured to run a computer program stored in the memory 202 and, when executing the computer program, to implement the steps of:
acquiring audio and video information, and device information and environment information of the plurality of playing devices; based on the distributed soft bus, synchronously processing the audio and video information according to the device information and the environment information of the plurality of playing devices to obtain target audio and video data corresponding to each playing device; and sending the target audio and video data to the corresponding playing devices, so that each playing device plays according to its corresponding target audio and video data.
In some embodiments, when implementing the synchronous processing of the audio and video information according to the device information and the environment information of the multiple playing devices to obtain the target audio and video data corresponding to each playing device, the processor is configured to: obtain the playing setting information corresponding to each playing device according to the device information and the environment information of the playing devices; and synchronously process the audio stream and the video stream in the audio and video information according to the playing setting information to obtain the target audio and video data corresponding to each playing device.
In some embodiments, the environment information includes position information, environment brightness information, and environment noise information of a user and each of the playback devices; the processor is configured to, when the playing setting information corresponding to each of the playing devices is obtained according to the device information and the environment information of the plurality of playing devices, implement: determining the distance between the user and each playing device according to the position information; and determining the playing setting information of each playing device according to the distance, the ambient brightness information and the ambient noise information.
In some embodiments, after said determining the playing setting information of each of said playing devices, said processor is configured to implement: acquiring synchronous demand information; determining a synchronization parameter corresponding to the playing device according to the synchronization requirement information; and modifying the playing setting information according to the synchronization parameters so as to synchronously process the audio stream and the video stream in the audio and video information according to the modified playing setting information.
In some embodiments, the playing setting information includes frame rate configuration information and frequency configuration information, and when implementing the synchronous processing of the audio stream and the video stream in the audio and video information according to the playing setting information to obtain the target audio and video data corresponding to each playing device, the processor is configured to: perform frame analysis and frame rate conversion processing on the video stream in the audio and video information according to the frame rate configuration information to obtain a target video stream corresponding to the playing device; perform track splitting analysis and frequency conversion processing on the audio stream in the audio and video information according to the frequency configuration information to obtain a target audio stream corresponding to the playing device; and generate the target audio and video data corresponding to the playing device according to the target video stream and the target audio stream.
In some embodiments, when the processor generates the target audio-video data corresponding to the playing device according to the target video stream and the target audio stream, the processor is configured to implement: acquiring timestamp information of the target video stream and timestamp information of the target audio stream; and according to the timestamp information, performing time calibration and integration processing on the target video stream and the target audio stream to generate target audio and video data corresponding to the playing equipment.
In some embodiments, the device information includes interface information, and before the target audio and video data is sent to the corresponding playing devices so that each playing device plays according to its corresponding target audio and video data, the processor is further configured to: determine a device interface corresponding to the playing device according to the interface information, so that the target audio and video data is sent to the corresponding playing device through the device interface.
In some embodiments, the terminal device is further connected to multiple recording devices through a distributed soft bus, and the processor is configured to implement, when implementing the acquiring of the audio/video information: acquiring audio information or video information of each recording device; and based on the distributed soft bus, carrying out synchronous processing on the audio information and the video information to generate audio and video information corresponding to the terminal equipment.
An embodiment of the present application further provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the processor is enabled to implement the steps of the synchronized playing method provided in the foregoing embodiment.
The computer-readable storage medium may be an internal storage unit of the terminal device described in any of the foregoing embodiments, for example, a hard disk or a memory of the terminal device. The computer-readable storage medium may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a Flash memory Card (Flash Card) provided on the terminal device.
While the invention has been described with reference to specific embodiments, the scope of the invention is not limited thereto, and those skilled in the art can easily conceive various equivalent modifications or substitutions within the technical scope of the invention. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A synchronous playing method is applied to a terminal device, the terminal device is connected with a plurality of playing devices through a distributed soft bus, and the method comprises the following steps:
acquiring audio and video information, and device information and environment information of the plurality of playing devices;
based on the distributed soft bus, synchronously processing the audio and video information according to the device information and the environment information of the plurality of playing devices to obtain target audio and video data corresponding to each playing device;
and sending the target audio and video data to the corresponding playing devices, so that each playing device plays according to its corresponding target audio and video data.
2. The method according to claim 1, wherein the synchronizing, based on the distributed soft bus, the audio/video information according to the device information and the environment information of the multiple playing devices to obtain target audio/video data corresponding to each of the playing devices comprises:
obtaining playing setting information corresponding to each playing device according to the device information and the environment information of the playing devices;
and according to the playing setting information, synchronously processing the audio stream and the video stream in the audio and video information to obtain target audio and video data corresponding to each playing device.
3. The method according to claim 2, wherein the environment information includes position information of a user and each of the playback devices, ambient brightness information, and ambient noise information; the obtaining of the playing setting information corresponding to each playing device according to the device information and the environment information of the multiple playing devices includes:
determining the distance between the user and each playing device according to the position information;
and determining the playing setting information of each playing device according to the distance, the ambient brightness information and the ambient noise information.
4. The method according to claim 3, wherein after the determining of the playing setting information of each of the playing devices, the method comprises:
acquiring synchronous demand information;
determining a synchronization parameter corresponding to the playing device according to the synchronization requirement information;
and according to the synchronization parameter, modifying the playing setting information so as to synchronously process the audio stream and the video stream in the audio and video information according to the modified playing setting information.
5. The method according to claim 2, wherein the playing setting information includes frame rate configuration information and frequency configuration information, and the performing synchronous processing on an audio stream and a video stream in the audio/video information according to the playing setting information to obtain target audio/video data corresponding to each of the playing devices includes:
according to the frame rate configuration information, performing frame analysis and frame rate conversion processing on the video stream in the audio and video information to obtain a target video stream corresponding to the playing device;
according to the frequency configuration information, performing track splitting analysis and frequency conversion processing on the audio stream in the audio and video information to obtain a target audio stream corresponding to the playing device;
and generating target audio and video data corresponding to the playing equipment according to the target video stream and the target audio stream.
6. The method according to claim 5, wherein the generating target audio/video data corresponding to the playing device according to the target video stream and the target audio stream comprises:
acquiring timestamp information of the target video stream and timestamp information of the target audio stream;
and according to the timestamp information, performing time calibration and integration processing on the target video stream and the target audio stream to generate target audio and video data corresponding to the playing equipment.
7. The method according to claim 1, wherein the device information includes interface information, and before the target audio/video data is sent to the corresponding playback devices so that each playback device plays according to the corresponding target audio/video data, the method further includes:
and determining an equipment interface corresponding to the playing equipment according to the interface information so as to send the target audio and video data to the corresponding playing equipment through the equipment interface.
8. The method according to claim 1, wherein the terminal device is further connected to a plurality of recording devices through a distributed soft bus, and the acquiring audio and video information includes:
acquiring audio information or video information of each recording device;
and based on the distributed soft bus, carrying out synchronous processing on the audio information and the video information to generate audio and video information corresponding to the terminal equipment.
9. A terminal device, characterized in that the terminal device comprises a memory and a processor;
the memory for storing a computer program;
the processor is configured to execute the computer program and implement the synchronized playback method according to any one of claims 1 to 8 when executing the computer program.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, causes the processor to implement the synchronized playback method of any one of claims 1 to 8.
CN202211347078.1A 2022-10-31 2022-10-31 Synchronous playing method, terminal equipment and storage medium Pending CN115767158A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211347078.1A CN115767158A (en) 2022-10-31 2022-10-31 Synchronous playing method, terminal equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115767158A true CN115767158A (en) 2023-03-07

Family

ID=85354627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211347078.1A Pending CN115767158A (en) 2022-10-31 2022-10-31 Synchronous playing method, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115767158A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116074571A (en) * 2023-04-06 2023-05-05 深圳开鸿数字产业发展有限公司 Control method of audio-video system, computer equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination