CN114501126A - Video playing method, system and storage medium - Google Patents


Info

Publication number
CN114501126A
CN114501126A CN202111605767.3A CN202111605767A
Authority
CN
China
Prior art keywords: video, playing, identifier, instruction, video playing
Prior art date
Legal status
Granted
Application number
CN202111605767.3A
Other languages
Chinese (zh)
Other versions
CN114501126B (en)
Inventor
张升辉 (Zhang Shenghui)
Current Assignee
Fibocom Wireless Inc
Original Assignee
Fibocom Wireless Inc
Priority date
Filing date
Publication date
Application filed by Fibocom Wireless Inc
Priority to CN202111605767.3A
Publication of CN114501126A
Priority to PCT/CN2022/103692 (WO2023115904A1)
Application granted
Publication of CN114501126B
Status: Active
Anticipated expiration

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/439: Processing of audio elementary streams
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N21/8455: Structuring of content, e.g. decomposing content into time segments involving pointers to the content, e.g. pointers to the I-frames of the video stream

Abstract

The application relates to a video playing method, system and storage medium. The video playing method includes: receiving a first video playing instruction and a second video playing instruction at different moments, where the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier; and, in response to the first video playing instruction and the second video playing instruction respectively, controlling the first playing device to play the first audio data in the first video file according to the first video identifier and the first device identifier, and controlling the second playing device to play the second audio data in the second video file according to the second video identifier and the second device identifier. With the video playing method, system and storage medium, the audio corresponding to each video picture can be played on a different playing device.

Description

Video playing method, system and storage medium
Technical Field
The present application relates to the field of multi-screen display technologies, and in particular, to a video playing method, a video playing system, and a storage medium.
Background
With the development of multi-screen display technology, multi-screen different display technology has emerged. The multi-screen different display technology refers to displaying different video pictures on different playing devices.
However, in the conventional multi-screen different display technology, the audio corresponding to different video frames is played by one playing device, and different audio cannot be played on different playing devices.
Disclosure of Invention
The embodiment of the application provides a video playing method, a video playing system and a storage medium, which can respectively play audio corresponding to video pictures on different playing devices.
In a first aspect, the present application provides a video playing method applied to a video playing system, where the video playing system includes a video processing device, a first playing device, and a second playing device, and the method includes:
receiving a first video playing instruction and a second video playing instruction at different moments; the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier;
and respectively responding to the first video playing instruction and the second video playing instruction: controlling the first playing device to play the first audio data in the first video file according to the first video identifier and the first device identifier, and controlling the second playing device to play the second audio data in the second video file according to the second video identifier and the second device identifier.
In one embodiment, before responding to the first video playing instruction and the second video playing instruction respectively, the method includes:
acquiring receiving interval time between the first video playing instruction and the second video playing instruction;
and respectively responding to the first video playing instruction and the second video playing instruction under the condition that the receiving interval time is greater than or equal to the preset interval time.
In one embodiment, the method further comprises:
and respectively responding to the first video playing instruction and the second video playing instruction, controlling the first playing device to play the first image data in the first video file according to the first video identifier and the first device identifier, and controlling the second playing device to play the second image data in the second video file according to the second video identifier and the second device identifier.
In one embodiment, before responding to the first video playing instruction and the second video playing instruction respectively, the method includes:
detecting whether the video playing system enters a multi-screen different display mode or not;
responding to the first video playing instruction and the second video playing instruction respectively, including:
and respectively responding to the first video playing instruction and the second video playing instruction under the condition that the video playing system is detected to enter a multi-screen different display mode.
In one embodiment, the step of controlling the playing device to play the audio data in response to the video playing instruction includes:
responding to a video playing instruction, and configuring an audio scene according to a video identifier and a device identifier carried by the video playing instruction, wherein the video playing instruction is at least one of the first video playing instruction and the second video playing instruction;
and playing the audio data according to the audio scene so as to control the playing device corresponding to the device identifier to play the audio data corresponding to the video identifier.
In one embodiment, after responding to the first video playing instruction and the second video playing instruction respectively, the method includes:
receiving a third video playing instruction, where the third video playing instruction carries a third video identifier and a third device identifier, and the third device identifier is one of the first device identifier and the second device identifier;
and responding to the third video playing instruction, and controlling a playing device corresponding to the third device identifier to play third audio data in a third video file according to the third video identifier and the third device identifier.
In one embodiment, controlling the playing device to play the audio data in response to the video playing instruction includes:
responding to a video playing instruction, and changing the device flag bit information associated with the device identifier carried by the video playing instruction from the initial device flag bit information to target device flag bit information, wherein the video playing instruction is the first video playing instruction or the second video playing instruction;
querying the device flag bit information of each playing device;
and under the condition that the target device flag bit information is queried, controlling the playing device corresponding to the device identifier associated with the target device flag bit information to play audio data, wherein the played audio data is the audio data corresponding to the video identifier carried by the video playing instruction.
In one embodiment, after controlling the playing device corresponding to the device identifier associated with the target device flag bit information to play the audio data, the method includes:
restoring the target device flag bit information to the initial device flag bit information;
the responding to the third video playing instruction and controlling a playing device corresponding to the third device identifier to play third audio data in a third video file according to the third video identifier and the third device identifier includes:
and changing the device flag bit information associated with the third device identifier from the initial device flag bit information to the target device flag bit information, so as to control the playing device corresponding to the third device identifier to play the third audio data in the third video file under the condition that the target device flag bit information is queried.
In a second aspect, the present application provides a video playing apparatus, which is applied to a video playing system, where the video playing system includes a video processing apparatus, a first playing apparatus and a second playing apparatus, and the apparatus includes:
the receiving module is used for receiving a first video playing instruction and a second video playing instruction at different moments; the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier;
and the response module is used for respectively responding to the first video playing instruction and the second video playing instruction, controlling the first playing device to play the first audio data in the first video file according to the first video identifier and the first device identifier, and controlling the second playing device to play the second audio data in the second video file according to the second video identifier and the second device identifier.
In a third aspect, the present application provides a processing apparatus comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method as described above when executing the computer program.
In a fourth aspect, the present application provides a video playback system comprising processing means, first playback means and second playback means, said processing means being adapted to perform the steps of the method as described above.
In a fifth aspect, the present application provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method as described above.
In a sixth aspect, the present application provides a computer program product comprising a computer program, characterized in that the computer program realizes the steps of the method as described above when executed by a processor.
In the video playing method, system and storage medium, the video playing method includes: receiving a first video playing instruction and a second video playing instruction at different moments, where the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier; responding to the first video playing instruction, and controlling the first playing device to play the first audio data in the first video file according to the first video identifier and the first device identifier; and responding to the second video playing instruction, and controlling the second playing device to play the second audio data in the second video file according to the second video identifier and the second device identifier. In this way, after the first video playing instruction is received, it can be responded to so as to control the first playing device to play the first audio data in the first video file; and after the second video playing instruction is received, it can be responded to so as to control the second playing device to play the second audio data in the second video file. The audio corresponding to different video pictures can therefore be played on different playing devices.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a diagram of an application environment of a video playing method in one embodiment;
FIG. 2 is a flowchart illustrating a video playing method according to an embodiment;
FIG. 3 is a flowchart illustrating an exemplary embodiment of controlling a playback device to play back audio data in response to a video playback command;
FIG. 4 is a flowchart illustrating a video playing method according to another embodiment;
FIG. 5 is a flowchart illustrating a video playing method according to another embodiment;
FIG. 6 is a flowchart illustrating a video playing method according to another embodiment;
FIG. 7 is a flowchart illustrating a video playing method according to another embodiment;
FIG. 8 is a schematic diagram of a video playing apparatus in one embodiment;
fig. 9 is a schematic diagram of an internal structure of the playback apparatus in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first video playing instruction may be referred to as a second video playing instruction, and similarly, a second video playing instruction may be referred to as a first video playing instruction, without departing from the scope of the present application. Both the first video playing instruction and the second video playing instruction are video playing instructions, but they are not the same video playing instruction. "A plurality" means two or more.
Fig. 1 is a schematic application environment diagram of a video playing method in an embodiment. As shown in fig. 1, the application environment includes a video processing apparatus 110, a first playback apparatus 120, and a second playback apparatus 130.
The video processing apparatus 110 has video processing capability. Specifically, the video processing apparatus 110 has the capability of processing image data in a video file and the capability of processing audio data in a video file. The first playing device 120 and the second playing device 130 are each equipped with a display screen and a speaker. The display screen is used for playing video pictures, such as movie pictures and game pictures. The speaker is used for playing audio, such as movie audio and game audio. Optionally, the video processing apparatus 110 is connected to the first playing device 120 and the second playing device 130 respectively.
During the use of the first playing device 120 and the second playing device 130, multi-screen different display can be performed, so that different video pictures are displayed on the first playing device 120 and the second playing device 130. However, in the conventional multi-screen different display technology, the audio corresponding to the different video pictures is played by one of the playing devices, and different audio cannot be played on different playing devices.
For example, the first playing device 120 and the second playing device 130 may display different video pictures, but the audio corresponding to the different video pictures is output from the second playing device 130. When the first playing device 120 plays a game picture, the game audio corresponding to the game picture is also played through the second playing device 130; at the same time, the movie audio of the movie picture played by the second playing device 130 is also played through the second playing device 130. The movie audio and the game audio are therefore mixed, and both the movie-watching experience and the gaming experience are impaired. A solution is therefore needed to play the audio corresponding to each video picture on a different playing device.
The following embodiments are described in terms of how to play audio corresponding to video frames on different playing devices.
Referring to fig. 2, fig. 2 is a flowchart illustrating a video playing method according to an embodiment. The video playing method in this embodiment is described by taking the video processing apparatus as an example. As shown in fig. 2, the video playing method in one embodiment includes steps 210 to 230.
Step 210, receiving a first video playing instruction and a second video playing instruction at different moments; the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier.
The video playing instruction refers to an instruction for selecting one of the playing devices in the video playing system to play the audio data of a video file. In this embodiment, the first video playing instruction is an instruction for selecting the first playing device to play the first audio, and the second video playing instruction is an instruction for selecting the second playing device to play the second audio. The audio data refers to the audio data in the video file to be played. The video identifier may be information characterizing the unique identity of the video file, or information characterizing the source of the audio data. In this embodiment, the first video identifier is the identifier corresponding to the first video file, the first device identifier is the identifier corresponding to the first playing device, the second video identifier is the identifier corresponding to the second video file, and the second device identifier is the identifier corresponding to the second playing device. The first playing device and the second playing device are different playing devices, and correspondingly, the first device identifier and the second device identifier are different. In this embodiment, after the first video playing instruction and the second video playing instruction are received at different moments, they are responded to respectively.
Optionally, the video identifier includes, but is not limited to, an application ID, a video ID, and the like. The device identifier may be, for example, an interface ID of the interface to which the playing device is connected.
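For a concrete picture of the information carried by a video playing instruction, the following Java sketch models such an instruction as a simple value object. The class and field names are illustrative assumptions only and are not defined by the present application.

```java
// Illustrative sketch only: the class and field names are assumptions, not part of the application.
public final class VideoPlayInstruction {
    private final String videoId;        // video identifier, e.g. an application ID or video ID
    private final String deviceId;       // device identifier, e.g. the interface ID the playing device is connected to
    private final long receivedAtMillis; // moment at which the instruction was received

    public VideoPlayInstruction(String videoId, String deviceId, long receivedAtMillis) {
        this.videoId = videoId;
        this.deviceId = deviceId;
        this.receivedAtMillis = receivedAtMillis;
    }

    public String videoId()        { return videoId; }
    public String deviceId()       { return deviceId; }
    public long receivedAtMillis() { return receivedAtMillis; }
}
```

The received moment is kept here only because later embodiments compare the receiving times of the two instructions.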
Step 220, responding to the first video playing instruction, and controlling the first playing device to play the first audio data in the first video file according to the first video identifier and the first device identifier.
In this embodiment, since the first video playing instruction carries the first video identifier and the first device identifier, where the first video identifier is the identifier corresponding to the first video file and the first device identifier is the identifier corresponding to the first playing device, when the first video playing instruction is responded to, the first playing device may be controlled to play the first audio data in the first video file according to the first video identifier and the first device identifier.
Optionally, the first audio data may be sent to the first playing device, so that the first audio data is played by the first playing device.
Step 230, responding to the second video playing instruction, and controlling the second playing device to play the second audio data in the second video file according to the second video identifier and the second device identifier.
In this embodiment, since the second video playing instruction carries the second video identifier and the second device identifier, where the second video identifier is the identifier corresponding to the second video file and the second device identifier is the identifier corresponding to the second playing device, when the second video playing instruction is responded to, the second playing device may be controlled to play the second audio data in the second video file according to the second video identifier and the second device identifier.
Optionally, the second audio data may be sent to a second playing device, so that the second audio data is played by the second playing device.
It can be understood that if the first video playing instruction is received, the first video playing instruction is responded to; and if the second video playing instruction is received, the second video playing instruction is responded to.
In this embodiment, after receiving the first video playing instruction, the processing device may respond to it to control the first playing device to play the first audio data in the first video file; after receiving the second video playing instruction, it may respond to it to control the second playing device to play the second audio data in the second video file. In this way, when the video playing system includes a plurality of playing devices, the processing device may respond to the first video playing instruction and the second video playing instruction respectively, and control different playing devices to play audio data according to the video identifiers and device identifiers carried in the instructions. This avoids the technical problem that the audio corresponding to different video pictures can only be played through one playing device, achieves the technical effect of playing the audio corresponding to each video picture on a different playing device, and thus realizes multi-screen different sound reproduction.
It should be noted that the first video file and the second video file may be the same video file or different video files, and this embodiment is not limited in particular.
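As a rough, non-limiting illustration of steps 220 and 230, the sketch below routes each received instruction to the playing device named by its device identifier. The PlaybackDevice interface, the device registry and the loadAudioData placeholder are assumptions introduced solely for this example.

```java
import java.util.HashMap;
import java.util.Map;

// Non-limiting sketch of the dispatch in steps 220/230; all names are illustrative assumptions.
public class VideoProcessingDeviceSketch {

    /** Minimal stand-in for a playing device that can receive audio data. */
    public interface PlaybackDevice {
        void playAudio(byte[] audioData);
    }

    private final Map<String, PlaybackDevice> devicesById = new HashMap<>();

    public void registerDevice(String deviceId, PlaybackDevice device) {
        devicesById.put(deviceId, device);
    }

    /** Responds to one video playing instruction: looks up the audio data of the video file
     *  named by the video identifier and sends it to the device named by the device identifier. */
    public void respond(String videoId, String deviceId) {
        byte[] audioData = loadAudioData(videoId);
        PlaybackDevice target = devicesById.get(deviceId);
        if (target != null) {
            target.playAudio(audioData);
        }
    }

    private byte[] loadAudioData(String videoId) {
        // Placeholder: a real system would demux the audio stream of the identified video file.
        return new byte[0];
    }
}
```

The first and second video playing instructions would then simply be two calls to respond with different video identifiers and device identifiers.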
In one possible implementation, the video playing method further includes:
and respectively responding to the first video playing instruction and the second video playing instruction, controlling the first playing device to play the first image data in the first video file according to the first video identifier and the first device identifier, and controlling the second playing device to play the second image data in the second video file according to the second video identifier and the second device identifier.
In this embodiment, when responding to the first video playing instruction, the first playing device is further controlled to play the first image data according to the first video identifier and the first device identifier, so that the first playing device plays the complete first video. Similarly, when responding to the second video playing instruction, the second playing device is also controlled to play the second image data according to the second video identifier and the second device identifier, so that the second playing device plays the complete second video.
Referring to fig. 3, fig. 3 is a flowchart illustrating a detailed process of controlling a playing device to play audio data in response to a video playing command in an embodiment. In one embodiment, as shown in fig. 3, the step of controlling the playing device to play the audio data in response to the video playing instruction includes steps 310 to 320.
And step 310, responding to a video playing instruction, and configuring an audio scene according to a video identifier and a device identifier carried by the video playing instruction, wherein the video playing instruction is at least one of the first video playing instruction and the second video playing instruction.
Here, the audio scene refers to a scene (use case) for playing audio. In software engineering or system engineering, a use case is a description of how a system reacts to an external request, and a technique for capturing requirements through the usage scenarios of a user. Each use case provides one or more scenarios that illustrate how the system interacts with the end user or another system, i.e., who can do what with the system, to achieve a clear business goal. The audio scene of this embodiment is the currently generated use case and provides an audio playing scenario. Specifically, each time new audio data needs to be played, a use case is generated. For example, when the first audio data is played, a use case n is generated, and the first audio data is played through use case n; when the second audio data is played, a use case m is generated, and the second audio data is played through use case m. Therefore, whenever audio data needs to be played, a use case for playing that audio data is generated, and an audio scene needs to be configured according to the video identifier and the device identifier carried by the video playing instruction, so as to change the audio scene. The audio scene is generated based on the audio data corresponding to the video identifier.
Specifically, for the first video playing instruction, a first audio scene is configured according to the first video identifier and the first device identifier carried by the first video playing instruction, and the first audio scene is generated based on the first audio data. For the second video playing instruction, a second audio scene is configured according to the second video identifier and the second device identifier carried by the second video playing instruction, and the second audio scene is generated based on the second audio data.
It should be noted that this step may be performed at the application layer of the video processing apparatus. Specifically, an APK (Android application package) may be installed in the video processing apparatus, and the application layer of the video processing apparatus responds to the video playing instruction to configure the audio scene.
And step 320, playing the audio data according to the audio scene to control the playing device corresponding to the device identifier to play the audio data corresponding to the video identifier.
It should be noted that the audio data is played according to the current audio playing scene; that is, the audio data corresponding to the video identifier may be sent, according to the audio scene, to the speaker of the playing device corresponding to the device identifier, and the speaker then decodes and plays the audio data.
Specifically, for a first audio scene, first audio data is sent to a first playing device; and for the second audio scene, sending the second audio data to a second playing device.
It should be noted that this step may be performed in the driver layer (Audio HAL) of the video processing apparatus. Specifically, controlling the playing device to play the audio data is generally performed in the driver layer, but the audio scene of the driver layer is generally fixed, and thus the playing device that plays the audio data is also fixed. In this embodiment, the application layer configures the audio scene according to the video identifier and the device identifier, so that the audio scene is changed, and the driver layer playing the audio data according to the changed audio scene is equivalent to playing it according to the video identifier and the device identifier.
According to the above technical solution, an audio scene can be configured according to the device identifier and the video identifier carried by the video playing instruction, and the audio data can then be played according to that audio scene, so that the playing device corresponding to the device identifier plays the audio data corresponding to the video identifier. Since the audio scene can be changed as required, the playing device that plays the audio data can also be changed, which avoids the problem that different audio cannot be played on different playing devices under a fixed audio scene, and allows the audio corresponding to each video picture to be played on a different playing device.
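The application-layer/driver-layer split described above can be illustrated with a minimal sketch: the application layer turns each video playing instruction into an audio scene (use case), and the driver layer plays audio according to whatever scene it is handed rather than a fixed one. All class names and the queue hand-off below are assumptions for illustration, not the actual Android implementation.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Rough two-layer sketch of steps 310/320; class names and the queue hand-off are assumptions.
public class AudioSceneSketch {

    /** An audio scene (use case): which audio source should be rendered on which playing device. */
    static final class AudioScene {
        final String videoId;
        final String deviceId;
        AudioScene(String videoId, String deviceId) {
            this.videoId = videoId;
            this.deviceId = deviceId;
        }
    }

    /** Application layer: configures a new audio scene for each video playing instruction. */
    static final class ApplicationLayer {
        private final Queue<AudioScene> scenes = new ArrayDeque<>();
        void onVideoPlayInstruction(String videoId, String deviceId) {
            scenes.add(new AudioScene(videoId, deviceId)); // one use case per new audio to play
        }
        AudioScene nextScene() {
            return scenes.poll();
        }
    }

    /** Driver layer: plays audio according to the configured scene rather than a fixed one. */
    static final class DriverLayer {
        void play(AudioScene scene) {
            System.out.println("Route audio of " + scene.videoId + " to speaker of device " + scene.deviceId);
        }
    }

    public static void main(String[] args) {
        ApplicationLayer app = new ApplicationLayer();
        DriverLayer driver = new DriverLayer();
        app.onVideoPlayInstruction("video-1", "DP");   // first video playing instruction
        app.onVideoPlayInstruction("video-2", "MIPI"); // second video playing instruction
        for (AudioScene scene; (scene = app.nextScene()) != null; ) {
            driver.play(scene);
        }
    }
}
```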
In one possible embodiment, the generating method of the video playing instruction includes:
and responding to a selection operation initiated on a visual interface of the playing device, and generating the video playing instruction, wherein the selection operation is an operation of selecting a video file and selecting the playing device.
In this embodiment, the user can select a video file to be played on the visual interface of the playing device and select the playing device according to the needs of the user, and then the processing device can respond to the selection operation initiated by the user on the visual interface of the playing device, so as to generate a video playing instruction carrying the video identifier and the device identifier.
In this embodiment, the user can select both the video to be played and the playing device that plays it, which enriches the options available to the user.
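A minimal sketch of this generating process is given below, assuming a simple callback from the visual interface to the processing device; the callback shape and the names are illustrative only.

```java
// Minimal sketch of the instruction generation, assuming a simple callback from the visual interface.
public class SelectionSketch {

    /** The generated instruction: which video file was selected, and on which playing device. */
    record Selection(String videoId, String deviceId) { }

    interface InstructionSink {
        void onVideoPlayInstruction(Selection selection);
    }

    /** Invoked when the user picks a video file and a playing device on the visual interface. */
    static void onUserSelection(String selectedVideoId, String selectedDeviceId, InstructionSink sink) {
        sink.onVideoPlayInstruction(new Selection(selectedVideoId, selectedDeviceId));
    }

    public static void main(String[] args) {
        onUserSelection("movie-42", "MIPI",
                selection -> System.out.println("Generated video playing instruction: " + selection));
    }
}
```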
Referring to fig. 4, fig. 4 is a flowchart illustrating a video playing method in another embodiment. As shown in fig. 4, the video playing method in another embodiment includes steps 410 to 450.
Step 410, receiving a first video playing instruction and a second video playing instruction at different moments.
And step 420, acquiring the receiving interval time between the first video playing instruction and the second video playing instruction.
The receiving interval time can be obtained by calculating a difference value between the receiving time of the first video playing instruction and the receiving time of the second video playing instruction.
Step 430, determining whether the receiving interval is less than a preset interval.
Specifically, one use case is generated each time new audio needs to be played. Assume that the user selecting the first playing device to play the first audio data generates use case 1, and the user selecting the second playing device to play the second audio data generates use case 2. If the receiving interval time between the first video playing instruction and the second video playing instruction is too short, the moment at which the user selects the first playing device to play the first audio data is very close to the moment at which the user selects the second playing device to play the second audio data. In this case, two or more use cases may be generated at almost the same time, and because the interval between use case 1 and use case 2 is too short, it cannot be known whether use case 1 was generated by the user selecting the first audio data or the second audio data, that is, it cannot be determined whether use case 1 or use case 2 should be configured according to the first video identifier and the first device identifier. The playing device that should play the audio data then cannot be accurately identified, which may cause an error in video playing.
In this step, if the receiving interval is less than the predetermined interval, step 440 is executed. If the receiving interval is greater than or equal to the predetermined interval, step 450 is executed.
Step 440, refusing to respond to the first video playing instruction and the second video playing instruction, and outputting a prompt that the first video playing instruction and the second video playing instruction need to be received again.
In this step, assume again that the user selecting the first playing device to play the first audio data generates use case 1, and the user selecting the second playing device to play the second audio data generates use case 2. If the receiving interval time is less than the preset interval time, the interval between use case 1 and use case 2 is considered too short, and it cannot be known whether use case 1 was generated by the user selecting the first audio data or the second audio data. To avoid playing the wrong audio, a prompt that the first video playing instruction and the second video playing instruction need to be received again is output, and the user may then initiate the selection operation on the visual interface of the first playing device again.
Optionally, the prompting manner includes, but is not limited to, voice prompting, text prompting on a visual interface, and the like, and the manner how to prompt is not limited in this embodiment.
And step 450, responding to the first video playing instruction and the second video playing instruction respectively.
In this step, assume that the user selecting the first playing device to play the first audio data generates use case 1, and the user selecting the second playing device to play the second audio data generates use case 2. If the receiving interval time is greater than or equal to the preset interval time, it can be accurately determined which audio data each use case was generated for, so the first video playing instruction can be responded to so that the first audio data is played by the first playing device, and the second video playing instruction can be responded to so that the second audio data is played by the second playing device.
It can be understood that, for the scheme of responding to a video playing instruction and playing the audio data through the playing device, reference may be made to the description of any of the above embodiments, and details are not repeated in this embodiment.
According to the above technical solution, when the video playing instructions include the first video playing instruction and the second video playing instruction, the receiving interval time between them is obtained, and whether to respond to the instructions is determined from this interval. This avoids the situation in which the playing device that should play the audio cannot be accurately identified because the receiving interval time between the first video playing instruction and the second video playing instruction is too short, and improves the accuracy of video playing.
It should be noted that, in a possible embodiment, there may be only one of step 440 and step 450.
Optionally, in this embodiment, the preset interval time is not a fixed value and is related to the processing capability of the processing apparatus. Specifically, the stronger the processing capability of the processing device, the faster the audio scene is generated, and correspondingly, the faster the video playback command is responded to, in which case the preset interval time is shorter.
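To make the interval check concrete, the following sketch compares the receiving interval time against a preset interval; the 500 ms threshold is an arbitrary assumption, since the text only states that the preset interval time depends on the processing capability.

```java
// Sketch of the interval check in steps 420 to 450; the 500 ms threshold is an arbitrary assumption.
public class IntervalGateSketch {

    /** Preset interval time in milliseconds; in practice it depends on the processing capability. */
    static final long PRESET_INTERVAL_MS = 500;

    /** Returns true if the two instructions are far enough apart to be responded to safely. */
    static boolean canRespond(long firstReceivedAtMs, long secondReceivedAtMs) {
        long receivingInterval = Math.abs(secondReceivedAtMs - firstReceivedAtMs);
        return receivingInterval >= PRESET_INTERVAL_MS;
    }

    public static void main(String[] args) {
        long first = 10_000, second = 10_120; // received 120 ms apart
        if (canRespond(first, second)) {
            System.out.println("Respond to both video playing instructions");
        } else {
            System.out.println("Refuse and prompt the user to send both instructions again");
        }
    }
}
```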
Referring to fig. 5, fig. 5 is a flowchart illustrating a video playing method in another embodiment. As shown in fig. 5, the video playing method in another embodiment includes steps 510 to 530.
Step 510, receiving a first video playing instruction and a second video playing instruction at different moments.
And step 520, detecting whether the video playing system enters a multi-screen different display mode.
And 530, respectively responding to the first video playing instruction and the second video playing instruction under the condition that the video playing system is detected to enter a multi-screen different display mode.
In this embodiment, the user can set, in the video playing system, whether to enter the multi-screen different display mode.
According to the above technical solution, whether the video playing system has entered the multi-screen different display mode is detected, and the first video playing instruction and the second video playing instruction are responded to only when the system is detected to have entered that mode, which improves the accuracy of video playing under multi-screen different display.
In one possible embodiment, detecting whether the video playing system enters the multi-screen different display mode includes:
inquiring mode flag bit information associated with a display mode of the video playing system, wherein the mode flag bit information comprises target mode flag bit information of the video playing system in the multi-screen different display mode; and under the condition that the mode flag bit information is the target mode flag bit information, confirming that the video playing system enters a multi-screen different display mode.
In this embodiment, the mode flag bit information refers to information that is currently available for querying and is associated with the display mode of the video playing system. Optionally, the display modes in this embodiment include a multi-screen simultaneous display mode and a multi-screen different display mode. If the mode flag bit information is the target mode flag bit information corresponding to the multi-screen different display mode, it is determined that the video playing system has entered the multi-screen different display mode.
Illustratively, the initial mode flag bit information of the video playing system is "Dscreen-display 0". When the video playing system enters the multi-screen different display mode, the mode flag bit information is changed to the target mode flag bit information "Dscreen-display 1". The mode flag bit information is then queried, and if it is "Dscreen-display 1", it is the mode flag bit information corresponding to the multi-screen different display mode, so it can be determined that the video playing system has entered the multi-screen different display mode.
It should be noted that the steps of this embodiment may be performed in the driver layer of the video processing apparatus. Specifically, the user sets the video playing system to enter the multi-screen different display mode; the application layer of the video processing device then changes the mode flag bit information to the target mode flag bit information, and the driver layer of the video processing device confirms that the video playing system has entered the multi-screen different display mode when it queries the mode flag bit information and finds the target mode flag bit information.
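A simplified sketch of this flag-based detection is shown below, assuming a plain key-value property store; the key name follows the "Dscreen-display 0/1" example values in the text, and the in-memory map is an assumption standing in for the real system-property mechanism.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of the mode flag check; the key name follows the "Dscreen-display 0/1" example in the text,
// and the in-memory map is an assumption standing in for the real system-property mechanism.
public class DisplayModeSketch {

    private static final Map<String, String> PROPERTIES =
            new ConcurrentHashMap<>(Map.of("dscreen-display", "0")); // initial: not in multi-screen different display mode

    /** Application layer: the user enables the multi-screen different display mode. */
    static void enterMultiScreenDifferentDisplayMode() {
        PROPERTIES.put("dscreen-display", "1"); // target mode flag bit information
    }

    /** Driver layer: query the mode flag bit information before responding to the instructions. */
    static boolean hasEnteredMultiScreenDifferentDisplayMode() {
        return "1".equals(PROPERTIES.get("dscreen-display"));
    }

    public static void main(String[] args) {
        System.out.println(hasEnteredMultiScreenDifferentDisplayMode()); // false
        enterMultiScreenDifferentDisplayMode();
        System.out.println(hasEnteredMultiScreenDifferentDisplayMode()); // true
    }
}
```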
Referring to fig. 6, fig. 6 is a flowchart illustrating a video playing method in another embodiment. As shown in fig. 6, the video playing method in one embodiment includes steps 610 to 640.
Step 610, receiving a first video playing instruction and a second video playing instruction at different moments; the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier.
And step 620, responding to the first video playing instruction and the second video playing instruction respectively.
Step 630, receiving a third video playing instruction, where the third video playing instruction carries a third video identifier and a third device identifier, and the third device identifier is one of the first device identifier and the second device identifier.
In this embodiment, the third video identifier refers to the identifier of a third video file. If the third device identifier is one of the first device identifier and the second device identifier, it indicates that, after the first playing device has been controlled to play the first video file and the second playing device has been controlled to play the second video file, one of the first playing device and the second playing device is selected again to play the third video file. In this case, if the playing device selected by the third video playing instruction has not yet finished playing its video file, the audio data it is playing will be replaced by the third audio data.
And step 640, responding to the third video playing instruction, and controlling a playing device corresponding to the third device identifier to play third audio data in a third video file according to the third video identifier and the third device identifier.
In this embodiment, since the third video playing instruction carries the third video identifier and the third device identifier, where the third video identifier is the identifier corresponding to the third video file and the third device identifier is the first device identifier or the second device identifier, when the third video playing instruction is responded to, the playing device corresponding to the third device identifier can be controlled to play the third audio data according to the third video identifier and the third device identifier.
In this embodiment, in the process of playing the first video by the first playing device and playing the second video by the second playing device, the first playing device and the second playing device may still be controlled to play other videos again.
In one possible embodiment, controlling the playing device to play the audio data in response to the video playing instruction includes:
responding to a video playing instruction, and changing the device flag bit information associated with the device identifier carried by the video playing instruction from the initial device flag bit information to target device flag bit information, wherein the video playing instruction is the first video playing instruction or the second video playing instruction; querying the device flag bit information of each playing device; and under the condition that the target device flag bit information is queried, controlling the playing device corresponding to the device identifier associated with the target device flag bit information to play audio data, wherein the played audio data is the audio data corresponding to the video identifier carried by the video playing instruction.
In this embodiment, when a playing device is selected, the device flag bit information corresponding to the selected playing device is changed to the target device flag bit information, while the device flag bit information corresponding to the other, unselected playing devices remains the initial device flag bit information. Taking a processing device running the Android system as an example: after receiving a video playing instruction, the processing device changes the device flag bit information associated with the device identifier carried in the instruction from the initial device flag bit information to the target device flag bit information. The audio hardware abstraction layer (Audio HAL) then queries the device flag bit information of each playing device; if the target device flag bit information is found, the playing device corresponding to the device identifier associated with that target device flag bit information has been selected to play audio data, so that playing device is controlled to play the audio data, where the played audio data is the audio data corresponding to the video identifier carried by the video playing instruction.
It should be noted that the step of responding to the video playing instruction and changing the device flag bit information associated with the device identifier carried in the instruction from the initial device flag bit information to the target device flag bit information may be executed in the application layer of the video processing device. The steps of querying the device flag bit information of each playing device and, when the target device flag bit information is found, controlling the playing device corresponding to the associated device identifier to play the audio data may be executed in the driver layer of the video processing device.
Specifically, playing the audio data is generally executed by the driver layer. In this embodiment, the application layer responds to the video playing instruction and changes the device flag bit information associated with the carried device identifier from the initial device flag bit information to the target device flag bit information, so that the driver layer can know which playing device has been selected and play the audio data through that playing device.
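The following sketch illustrates, under stated assumptions, the two halves of this mechanism: the application layer marks the selected device's flag, and the driver layer (the Audio HAL in the text) scans the flags and routes the audio accordingly. The flag keys reuse the DP/MIPI names from the later example; everything else is illustrative.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the device flag bit mechanism: the application layer marks the selected device,
// the driver layer scans the flags and routes the audio. Names and values are illustrative.
public class DeviceFlagSketch {

    private static final String INITIAL = "0";
    private static final String TARGET  = "1";

    /** One flag per playing device, keyed by a device identifier. */
    private final Map<String, String> deviceFlags = new LinkedHashMap<>();

    DeviceFlagSketch() {
        deviceFlags.put("DP-audio", INITIAL);
        deviceFlags.put("MIPI-audio", INITIAL);
    }

    /** Application layer: mark the device selected by the video playing instruction. */
    void onVideoPlayInstruction(String deviceFlagKey) {
        deviceFlags.put(deviceFlagKey, TARGET);
    }

    /** Driver layer (Audio HAL): query each flag and play on the device whose flag is the target value. */
    String queryAndRoute(String videoId) {
        for (Map.Entry<String, String> entry : deviceFlags.entrySet()) {
            if (TARGET.equals(entry.getValue())) {
                return "play audio of " + videoId + " on device " + entry.getKey();
            }
        }
        return "no device selected";
    }

    public static void main(String[] args) {
        DeviceFlagSketch sketch = new DeviceFlagSketch();
        sketch.onVideoPlayInstruction("DP-audio"); // the instruction selected the DP playing device
        System.out.println(sketch.queryAndRoute("video-1"));
    }
}
```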
In a possible embodiment, after controlling the playing device corresponding to the device identifier associated with the target device flag bit information to play the audio data, the method includes:
restoring the target device flag bit information to the initial device flag bit information.
The responding to the third video playing instruction and controlling a playing device corresponding to the third device identifier to play the third audio data in the third video file according to the third video identifier and the third device identifier includes:
changing the device flag bit information associated with the third device identifier from the initial device flag bit information to the target device flag bit information, so as to control the playing device corresponding to the third device identifier to play the third audio data in the third video file under the condition that the target device flag bit information is queried.
In this embodiment, after the playing device corresponding to the device identifier associated with the target device flag bit information is controlled to play the audio data, the target device flag bit information is restored to the initial device flag bit information, so that the initial device flag bit information can be updated to the target device flag bit information again the next time a playing device is selected to play audio, and the Audio HAL can know which playing device has been selected.
Illustratively, when a playing device plays audio data for the first time in the multi-screen different display mode, the target device flag bit information is "audio-1"; when the playing device is selected to play audio data again, that is, for the second or a subsequent time in the multi-screen different display mode, the target device flag bit information is "audio-2", and after the video playing instruction has been responded to, the flag bit information is restored to the initial device flag bit information "audio-1".
It should be noted that the step of restoring the target device flag information to the initial device flag information may be performed at an application layer of the video processing apparatus.
According to the technical solution of this embodiment, after the playing device corresponding to the device identifier associated with the target device flag bit information is controlled to play the audio data, the driver layer restores the application layer's target device flag bit information to the initial device flag bit information, so that the initial device flag bit information can be updated to the target device flag bit information again the next time a playing device plays audio, and the Audio HAL can know which playing device has been selected.
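A simplified sketch of this flag lifecycle is given here, loosely following the 0/1/2 values used in the DP/MIPI example below (0 for a device never selected, 1 after a first play, 2 as the target value for a repeated selection, which is then restored to 1); the control flow is an illustrative assumption, not the claimed implementation.

```java
// Simplified flag lifecycle, loosely following the 0/1/2 values of the DP/MIPI example below:
// 0 = device never selected, 1 = value after the first play, 2 = target value for a repeated selection.
public class FlagRestoreSketch {

    private int dpAudioFlag = 0; // "DP-audio 0": the DP playing device has not been selected yet

    void playOnDp(String videoId) {
        if (dpAudioFlag == 0) {
            dpAudioFlag = 1;      // first selection: 0 -> 1
            route(videoId);
        } else {
            dpAudioFlag = 2;      // selected again, e.g. by a third video playing instruction: 1 -> 2
            route(videoId);
            dpAudioFlag = 1;      // restore to the initial device flag bit information
        }
    }

    private void route(String videoId) {
        System.out.println("Audio HAL sees DP-audio=" + dpAudioFlag + " and routes " + videoId + " to the DP device");
    }

    public static void main(String[] args) {
        FlagRestoreSketch sketch = new FlagRestoreSketch();
        sketch.playOnDp("video-1"); // first video playing instruction
        sketch.playOnDp("video-3"); // third video playing instruction reuses the DP playing device
    }
}
```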
Referring to fig. 7, fig. 7 is a flowchart illustrating a video playing method in another embodiment. In this embodiment, two playback devices are exemplified. One of the two playing devices is a DP playing device, the other one is an MIPI playing device, and the processing device is a processor embedded in the DP playing device. In the following embodiments, a DP playback device is used as a first playback device, and an MIPI playback device is used as a second playback device. As shown in fig. 7, another video playing method includes steps 710 to 780.
Step 710, when the system is started, defaulting to the non-dual-screen different display mode and initializing the system attributes.
Illustratively, the initialized system attributes may be MIPI-audio 0, DP-audio 0, and dsgreen-display 0, where MIPI-audio 0 indicates that the MIPI playing device is not playing audio, DP-audio 0 indicates that the DP playing device is not playing audio, and dsgreen-display 0 indicates the non-dual-screen different display mode.
Step 720, switching to the dual-screen different display mode.
In this step, the application layer may set dsgreen-display 0 to dsgreen-display 1, so that the driver layer can confirm whether the dual-screen different display mode has been entered.
Step 730, receiving a first video playing instruction, where the first video playing instruction carries a first video identifier and a DP device identifier.
The first video playing instruction carries a DP device identifier of a DP playing device playing the first audio data. Before this step, if the DP playing device is selected for playing the audio for the first time, the DP-audio 0 is set as DP-audio 1. If the DP playing device is selected to play the audio for the second time, the DP-audio 1 is set as DP-audio 2.
Step 740, configuring the first audio scene according to the first video identifier and the DP device identifier.
Step 750, controlling the DP playing device to play the first audio data according to the first audio scene.
Step 760, receiving a second video playing instruction, where the second video playing instruction carries a second video identifier and an MIPI device identifier.
And the second video playing instruction carries the MIPI device identification of the MIPI playing device playing the second audio data. Before the step, if the MIPI playing device is selected to play the audio for the first time, the MIPI-audio 0 is set as MIPI-audio 1. And if the MIPI playing device is selected to play the audio for the second time, the MIPI-audio 1 is set as the MIPI-audio 2.
Step 770, configuring a second audio scene according to the second video identifier and the MIPI device identifier.
Step 780, controlling the MIPI playing device to play the second audio data according to the second audio scene.
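Putting steps 710 to 780 together, a compact end-to-end sketch under the stated assumptions (three attributes dsgreen-display, DP-audio and MIPI-audio with the flag values shown above, and a print statement standing in for the actual audio scene configuration and playback) could look like this:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// End-to-end sketch of steps 710 to 780 under the stated assumptions; a print statement stands in
// for the actual audio scene configuration and playback.
public class DualScreenFlowSketch {

    private final Map<String, Integer> props = new LinkedHashMap<>();

    DualScreenFlowSketch() {
        // Step 710: boot defaults, non-dual-screen different display mode, no device playing audio.
        props.put("dsgreen-display", 0);
        props.put("DP-audio", 0);
        props.put("MIPI-audio", 0);
    }

    /** Step 720: switch to the dual-screen different display mode. */
    void enterDualScreenDifferentDisplayMode() {
        props.put("dsgreen-display", 1);
    }

    /** Steps 730 to 780: respond to one instruction and route its audio to the named device. */
    void respond(String videoId, String deviceKey) {
        if (props.get("dsgreen-display") != 1) {
            return; // only respond once the dual-screen different display mode has been entered
        }
        props.merge(deviceKey, 1, Integer::sum); // 0 -> 1 on the first selection, 1 -> 2 on the next
        System.out.println("Configure audio scene for " + videoId + " and play it on " + deviceKey
                + " (flag=" + props.get(deviceKey) + ")");
    }

    public static void main(String[] args) {
        DualScreenFlowSketch flow = new DualScreenFlowSketch();
        flow.enterDualScreenDifferentDisplayMode();
        flow.respond("video-1", "DP-audio");   // first video playing instruction -> DP playing device
        flow.respond("video-2", "MIPI-audio"); // second video playing instruction -> MIPI playing device
    }
}
```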
It should be noted that, in any of the above embodiments, apart from the configuration of the audio scene according to the video identifier and the device identifier, the remaining parts of playback may be performed in a playing manner known in the prior art, which is not described in detail in this embodiment.
It should be understood that, although the various steps in the flowcharts of figs. 2 to 7 are shown in an order indicated by the arrows, these steps are not necessarily performed in that order. Unless explicitly stated otherwise, the order of execution of these steps is not strictly limited, and they may be performed in other orders. Moreover, at least some of the steps in figs. 2 to 7 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and the order of performing these sub-steps or stages is not necessarily sequential; they may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Referring to fig. 8, fig. 8 is a schematic diagram of a video playing apparatus in one embodiment. The video playing apparatus in this embodiment is described by taking the first playing device in fig. 1 as an example. The first playing device is a playing device with processing capability in the video playing system. As shown in fig. 8, the video playing apparatus in one embodiment includes a receiving module 810 and a response module 820, wherein:
a receiving module 810, configured to receive a first video playing instruction and a second video playing instruction at different times; the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier; a response module 820, configured to respectively respond to the first video playing instruction and the second video playing instruction, control the first playing device to play the first audio data in the first video file according to the first video identifier and the first device identifier, and control the second playing device to play the second audio data in the second video file according to the second video identifier and the second device identifier.
In one embodiment, the apparatus further comprises: the acquisition module is used for acquiring the receiving interval time between the first video playing instruction and the second video playing instruction; the response module 820 is specifically configured to respectively respond to the first video playing instruction and the second video playing instruction when the receiving interval time is greater than or equal to the preset interval time.
In one embodiment, the response module 820 is further configured to control the first playing device to play the first image data in the first video file according to the first video identifier and the first device identifier and control the second playing device to play the second image data in the second video file according to the second video identifier and the second device identifier in response to the first video playing instruction and the second video playing instruction, respectively.
In one embodiment, the apparatus further comprises: the detection module is used for detecting whether the video playing system enters the multi-screen different display mode; the response module 820 is specifically configured to respond to the first video playing instruction and the second video playing instruction respectively when it is detected that the video playing system has entered the multi-screen different display mode.
In one embodiment, the response module 820 includes: the response unit is used for responding to the video playing instruction, and configuring an audio scene according to the video identifier and the device identifier carried by the video playing instruction, wherein the video playing instruction is at least one of the first video playing instruction and the second video playing instruction; and the playing unit is used for playing the audio data according to the audio scene so as to control the playing device corresponding to the device identifier to play the audio data corresponding to the video identifier.
In an embodiment, the receiving module 810 is further configured to receive a third video playing instruction, where the third video playing instruction carries a third video identifier and a third device identifier, and the third device identifier is one of the first device identifier and the second device identifier; the response module 820 is further configured to respond to the third video playing instruction, and control the playing device corresponding to the third device identifier to play the third audio data in the third video file according to the third video identifier and the third device identifier.
In one embodiment, the response module 820 is specifically configured to: respond to a video playing instruction by changing the device flag bit information associated with the device identifier carried in the video playing instruction from initial device flag bit information to target device flag bit information, where the video playing instruction is the first video playing instruction or the second video playing instruction; query the device flag bit information of each playing device; and, when the target device flag bit information is found, control the playing device corresponding to the device identifier associated with the target device flag bit information to play audio data, where the played audio data is the audio data corresponding to the video identifier carried by the video playing instruction.
In one embodiment, the response module 820 is further configured to restore the target device flag bit information to the initial device flag bit information after controlling the playing device corresponding to the device identifier associated with the target device flag bit information to play the audio data; and to change the device flag bit information associated with the third device identifier from the initial device flag bit information to the target device flag bit information, so that the playing device corresponding to the third device identifier plays the third audio data in the third video file when the target device flag bit information is found.
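One reading of the device flag bit mechanism above is a set-query-restore cycle per instruction. The following sketch illustrates that reading with made-up flag values; it is an assumption-laden example, not the patented implementation:

```python
INITIAL = "initial"   # hypothetical initial device flag bit information
TARGET = "target"     # hypothetical target device flag bit information


class FlagBitRouter:
    """Routes audio by marking one device's flag bit as the target, playing, then restoring it."""

    def __init__(self, device_ids):
        self.flags = {device_id: INITIAL for device_id in device_ids}

    def respond(self, video_id: str, device_id: str) -> None:
        # Change the flag associated with the carried device identifier to the target value.
        self.flags[device_id] = TARGET
        # Query every device's flag bit information.
        for dev, flag in self.flags.items():
            if flag == TARGET:
                # The device whose flag holds the target value plays this instruction's audio.
                print(f"device {dev}: playing audio of {video_id}")
                # Restore the flag so a later (e.g. third) instruction can repeat the cycle.
                self.flags[dev] = INITIAL


router = FlagBitRouter(["dev-1", "dev-2"])
router.respond("video-A", "dev-1")   # first instruction
router.respond("video-B", "dev-2")   # second instruction
router.respond("video-C", "dev-1")   # third instruction reuses the same cycle
```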
The division of each module in the video playing apparatus is only for illustration, and in other embodiments, the video playing apparatus may be divided into different modules as needed to complete all or part of the functions of the video playing apparatus.
For specific limitations of the video playing apparatus, reference may be made to the limitations of the video playing method above, which are not repeated here. Each module in the video playing apparatus may be implemented wholly or partially by software, hardware, or a combination of the two. The modules may be embedded in, or independent of, a processor in the computer device in the form of hardware, or may be stored in a memory of the computer device in the form of software, so that the processor can call and execute the operations corresponding to the modules.
Fig. 9 is a schematic diagram of the internal structure of a playing device in one embodiment. As shown in fig. 9, the playing device includes a processor and a memory connected by a system bus. The processor provides computing and control capability to support the operation of the entire playing device. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, and the computer program can be executed by the processor to implement the video playing method provided in the embodiments of the present application. The internal memory provides a cached execution environment for the operating system and the computer program in the non-volatile storage medium. The playing device may be any terminal device such as a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a Point of Sale (POS) terminal, a vehicle-mounted computer, or a wearable device. The playing device is also provided with a display screen and a loudspeaker: the display screen is used for displaying video pictures, and the loudspeaker is used for playing audio.
In one embodiment, a video playing system is also provided, which includes a plurality of playing devices. A playing device with processing capability among the plurality of playing devices is configured to execute the steps of the method in any of the above embodiments.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method of any of the above embodiments.
Each module in the video playing apparatus provided in the embodiments of the present application may be implemented in the form of a computer program. The computer program may run on a terminal or a server. The program modules constituted by the computer program may be stored on the memory of the playing device. When the computer program is executed by a processor, the steps of the method described in the embodiments of the present application are performed.
Embodiments of the present application also provide a computer-readable storage medium, namely one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of the video playing method.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), and direct Rambus dynamic RAM (DRDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the present application. It should be noted that a person skilled in the art can make several variations and modifications without departing from the concept of the present application, and these fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A video playing method is applied to a video playing system, wherein the video playing system comprises a video processing device, a first playing device and a second playing device, and the method comprises the following steps:
receiving a first video playing instruction and a second video playing instruction at different moments; the first video playing instruction carries a first video identifier and a first device identifier, and the second video playing instruction carries a second video identifier and a second device identifier;
and respectively responding to the first video playing instruction and the second video playing instruction, controlling the first playing device to play the first audio data in the first video file according to the first video identifier and the first device identifier, and controlling the second playing device to play the second audio data in the second video file according to the second video identifier and the second device identifier.
2. The method of claim 1, wherein before responding to the first video playing instruction and the second video playing instruction respectively, the method comprises:
acquiring receiving interval time between the first video playing instruction and the second video playing instruction;
and respectively responding to the first video playing instruction and the second video playing instruction under the condition that the receiving interval time is greater than or equal to the preset interval time.
3. The method of claim 1, further comprising:
and respectively responding to the first video playing instruction and the second video playing instruction, controlling the first playing device to play the first image data in the first video file according to the first video identifier and the first device identifier, and controlling the second playing device to play the second image data in the second video file according to the second video identifier and the second device identifier.
4. The method of claim 1, wherein before responding to the first video playing instruction and the second video playing instruction respectively, the method comprises:
detecting whether the video playing system enters a multi-screen different display mode or not;
responding to the first video playing instruction and the second video playing instruction respectively, including:
and respectively responding to the first video playing instruction and the second video playing instruction under the condition that the video playing system is detected to enter a multi-screen different display mode.
5. The method of claim 1, wherein the step of controlling the playing device to play the audio data in response to the video playing instruction comprises:
responding to a video playing instruction, and configuring an audio scene according to a video identifier and a device identifier carried by the video playing instruction, wherein the video playing instruction is at least one of the first video playing instruction and the second video playing instruction;
and playing the audio data according to the audio scene so as to control the playing device corresponding to the device identifier to play the audio data corresponding to the video identifier.
6. The method according to any one of claims 1 to 5, wherein after responding to the first video playing instruction and the second video playing instruction respectively, the method comprises:
receiving a third video playing instruction, where the third video playing instruction carries a third video identifier and a third device identifier, and the third device identifier is one of the first device identifier and the second device identifier;
and responding to the third video playing instruction, and controlling a playing device corresponding to the third device identifier to play third audio data in a third video file according to the third video identifier and the third device identifier.
7. The method of claim 6, wherein controlling the playing device to play the audio data in response to the video playing instruction comprises:
responding to a video playing instruction, and changing the device flag bit information associated with the device identifier carried by the video playing instruction from initial device flag bit information to target device flag bit information, wherein the video playing instruction is the first video playing instruction or the second video playing instruction;
querying the device flag bit information of each playing device;
and, when the target device flag bit information is queried, controlling the playing device corresponding to the device identifier associated with the target device flag bit information to play audio data, wherein the played audio data is the audio data corresponding to the video identifier carried by the video playing instruction.
8. The method according to claim 7, wherein after controlling the playing device corresponding to the device identifier associated with the target device flag bit information to play the audio data, the method comprises:
restoring the target device flag bit information to the initial device flag bit information;
the responding to the third video playing instruction, and controlling a playing device corresponding to the third device identifier to play third audio data in a third video file according to the third video identifier and the third device identifier includes:
and changing the device flag bit information associated with the third device identifier from the initial device flag bit information to the target device flag bit information, so as to control the playing device corresponding to the third device identifier to play the third audio data in the third video file when the target device flag bit information is queried.
9. A video playing system, comprising a video processing device, a first playing device and a second playing device, wherein the video processing device is configured to perform the steps of the method according to any one of claims 1 to 8.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method according to any one of claims 1 to 8.
CN202111605767.3A 2021-12-25 2021-12-25 Video playing method, system and storage medium Active CN114501126B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111605767.3A CN114501126B (en) 2021-12-25 2021-12-25 Video playing method, system and storage medium
PCT/CN2022/103692 WO2023115904A1 (en) 2021-12-25 2022-07-04 Video playing method and system, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111605767.3A CN114501126B (en) 2021-12-25 2021-12-25 Video playing method, system and storage medium

Publications (2)

Publication Number Publication Date
CN114501126A true CN114501126A (en) 2022-05-13
CN114501126B CN114501126B (en) 2024-03-15

Family

ID=81495596

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111605767.3A Active CN114501126B (en) 2021-12-25 2021-12-25 Video playing method, system and storage medium

Country Status (2)

Country Link
CN (1) CN114501126B (en)
WO (1) WO2023115904A1 (en)

Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102137304A (en) * 2010-10-29 2011-07-27 华为终端有限公司 Television signal transmission method, set top box and television
CN105657492A (en) * 2015-12-29 2016-06-08 广州视源电子科技股份有限公司 Television signal processing method, television signal processing device and television playing control system
US20160165296A1 (en) * 2013-06-10 2016-06-09 Viaccess Terminal identification method in a system for providing multimedia content
CN108958693A (en) * 2018-06-22 2018-12-07 联想(北京)有限公司 A kind of audio control method and electronic equipment
CN109032555A (en) * 2018-07-06 2018-12-18 广州视源电子科技股份有限公司 Throw screen sound intermediate frequency data processing method, device, storage medium and electronic equipment
CN109165004A (en) * 2018-08-16 2019-01-08 努比亚技术有限公司 Double screen terminal audio frequency output method, terminal and computer readable storage medium
CN109246545A (en) * 2018-09-04 2019-01-18 福建星网智慧科技股份有限公司 A kind of double screen audio-frequency inputting method
CN109640125A (en) * 2018-12-21 2019-04-16 广州酷狗计算机科技有限公司 Video content processing method, device, server and storage medium
CN110267091A (en) * 2019-05-07 2019-09-20 北京奇艺世纪科技有限公司 Play instance processes method, apparatus and computer readable storage medium
CN110764724A (en) * 2019-09-25 2020-02-07 东软集团股份有限公司 Display equipment control method, device, equipment and storage medium
CN111045631A (en) * 2019-11-29 2020-04-21 深圳市宏电技术股份有限公司 Method and device for realizing double-screen different display and different sound and storage medium
CN111405337A (en) * 2020-02-17 2020-07-10 福州瑞芯微电子股份有限公司 Method, device, equipment and medium for realizing double-screen abnormal sound of android system
CN214151678U (en) * 2020-12-16 2021-09-07 南京南方电讯有限公司 Dual-screen different display system under android system
US20210281888A1 (en) * 2020-03-09 2021-09-09 Beijing Dajia Internet Information Technology Co., Ltd. Method, device, and storage medium for transmitting data
CN113453054A (en) * 2021-06-30 2021-09-28 深圳市斯博科技有限公司 Audio and video frame loss method and device, computer equipment and storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10719291B2 (en) * 2016-08-31 2020-07-21 Arris Enterprises Llc Simultaneous output of multiple audio tracks
CN107027053A (en) * 2017-05-08 2017-08-08 深圳Tcl数字技术有限公司 Audio frequency playing method, terminal and computer-readable recording medium
CN114501126B (en) * 2021-12-25 2024-03-15 深圳市广和通无线股份有限公司 Video playing method, system and storage medium

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023115904A1 (en) * 2021-12-25 2023-06-29 深圳市广和通无线股份有限公司 Video playing method and system, and storage medium

Also Published As

Publication number Publication date
CN114501126B (en) 2024-03-15
WO2023115904A1 (en) 2023-06-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant