CN114513626A - Multimedia playing method and device, electronic equipment and computer readable storage medium


Info

Publication number
CN114513626A
CN114513626A (Application CN202011279087.2A)
Authority
CN
China
Prior art keywords
multimedia information
video frames
video
frame
multimedia
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011279087.2A
Other languages
Chinese (zh)
Inventor
赵瑞祥
马永昌
卢小亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsingoal Beijing Technology Co ltd
Original Assignee
Tsingoal Beijing Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsingoal Beijing Technology Co ltd
Priority to CN202011279087.2A
Publication of CN114513626A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/231 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • H04N 21/23106 Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion involving caching operations
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/647 Control signaling between network components and server or clients; Network processes for video distribution between server and clients, e.g. controlling the quality of the video stream, by dropping packets, protecting content from unauthorised alteration within the network, monitoring of network load, bridging between two different networks, e.g. between IP and wireless
    • H04N 21/64784 Data processing by the network
    • H04N 21/64792 Controlling the complexity of the content stream, e.g. by dropping packets
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/854 Content authoring
    • H04N 21/8547 Content authoring involving timestamps for synchronizing content

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a multimedia playing method and apparatus, an electronic device and a computer-readable storage medium. When a trigger condition is detected, it is detected whether the video frames in the multimedia information stored in a buffer meet a preset frame loss condition, the multimedia information comprising a plurality of video frames; when a video frame in the multimedia information meets the preset frame loss condition, frame loss processing is performed on the multimedia information to discard expired multimedia information, and the multimedia information after frame loss processing is then played. Compared with the prior art, performing frame loss processing on the video frames stored in the buffer discards the outdated multimedia information, ensures better real-time performance of the subsequently played multimedia information, reduces the video playing delay, and facilitates real-time supervision by management personnel.

Description

Multimedia playing method and device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of internet technologies, and in particular, to a multimedia playing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
Video surveillance is an important component of security systems. A conventional monitoring system comprises front-end monitoring equipment and a video monitoring platform. Owing to its intuitiveness, accuracy, timeliness and rich information content, video surveillance is widely applied in many scenarios.
In the prior art, the front-end monitoring equipment transmits the captured surveillance video to the video monitoring platform, and managers can view the real-time surveillance video on the video monitoring platform. However, when a communication fault occurs between the video monitoring platform and the front-end monitoring equipment, the surveillance video captured by the front-end monitoring equipment cannot be pushed to the video monitoring platform in time. When the video monitoring platform later receives and plays these video frames, the played video may therefore contain partly historical footage, the video playing delay is large, and real-time supervision by managers is hindered.
Disclosure of Invention
In view of the above, the present application provides a multimedia playing method and apparatus, an electronic device and a computer-readable storage medium, and mainly aims to solve at least one of the above technical problems in the prior art.
In a first aspect, a multimedia playing method is provided, including:
when the trigger condition is detected, detecting whether video frames in the multimedia information stored in the cache region meet a preset frame loss condition or not, wherein the multimedia information comprises a plurality of video frames;
when a video frame in the multimedia information meets a preset frame loss condition, performing frame loss processing on the multimedia information to discard the overdue multimedia information;
and playing the multimedia information after frame loss processing.
In a possible implementation manner of the embodiment of the present application, the preset frame loss condition includes at least one of the following conditions:
the total number of video frames of the multimedia information is greater than a first preset threshold value;
the multimedia information contains a video frame whose timestamp is earlier than the current playing time.
In another possible implementation manner of the embodiment of the present application, when the total number of video frames in the multimedia information is greater than the first preset threshold,
the frame loss processing is carried out on the multimedia information, and comprises the following steps:
determining video frames to be discarded in the multimedia information based on the total number of the video frames in the multimedia information;
and discarding the determined video frames to be discarded so that the frame number of the video frames in the buffer area after the frame loss processing is not greater than a first preset threshold value.
Another possible implementation manner of the embodiment of the present application, determining a video frame to be discarded in multimedia information based on the total number of video frames in the multimedia information, includes:
determining all video frames before an Nth video frame in the multimedia information as video frames to be discarded, wherein the Nth video frame is the starting frame of the last first-preset-threshold number of video frames retained in the multimedia information, and N is a positive integer greater than 1.
In another possible implementation manner of the embodiment of the present application, when the multimedia information contains a video frame whose timestamp is earlier than the current playing time,
the frame loss processing is carried out on the multimedia information, and comprises the following steps:
and discarding the video frames with the time stamps earlier than the current playing time in the multimedia information.
In another possible implementation manner of the embodiment of the present application, the trigger condition includes at least one of:
receiving first communication fault information sent by a pull streaming service, wherein the first communication fault information is used for indicating that a communication fault has occurred between the pull streaming service and a multimedia acquisition device, and the pull streaming service is used for pulling video frames from the multimedia acquisition device and storing the video frames in the buffer;
receiving second communication fault information sent by the pull streaming service, wherein the second communication fault information is used for indicating that the number of video frames pulled by the pull streaming service from the multimedia acquisition device is greater than a second preset threshold;
a preset detection period is reached.
In another possible implementation manner of the embodiment of the present application, after the frame loss processing is performed on the multimedia information, the method further includes:
storing the discarded video frames.
In another possible implementation manner of the embodiment of the present application, the method further includes:
determining time stamps corresponding to all video frames in the discarded video frames respectively;
determining a time period corresponding to the discarded video frame based on the timestamp corresponding to each video frame;
determining whether alarm information exists within the time period, wherein the alarm information is stored in a database;
and if alarm information exists within the time period, playing the discarded video frames based on the generation time of the alarm information.
In another possible implementation manner of the embodiment of the present application, playing the discarded video frames based on the generation time of the alarm information includes:
determining a video frame with matched time from the discarded video frames based on the generation time of the alarm information and the time stamps corresponding to the video frames in the discarded video frames;
playing the time-matched dropped video frames.
Another possible implementation manner of the embodiment of the present application, a manner of playing a dropped video frame, includes at least one of the following:
switching and playing the discarded video frames based on a switching instruction triggered by a user;
and playing the discarded video frames through a specific window, wherein the specific window is different from the window for playing the multimedia information after the frame loss processing.
In another possible implementation manner of the embodiment of the present application, before detecting whether a video frame in the multimedia information stored in the buffer meets the preset frame loss condition, the method further includes:
pulling a video frame from the pull streaming service, wherein the video frame is pulled from the multimedia acquisition equipment by the pull streaming service;
and storing the pulled video frame in a buffer area.
In another possible implementation manner of the embodiment of the present application, the method further includes:
establishing a communication link with the pull streaming service, the established communication link comprising: a first communication link and a second communication link;
sending a pull stream request through the first communication link, wherein the pull stream request is used for instructing the pull streaming service to pull video frames from the multimedia acquisition device;
wherein, pull the video frame from pulling the stream service, including:
the video frames are pulled from the pull streaming service over the second communication link.
In a second aspect, a multimedia playing apparatus is provided, which includes:
the detection module is used for detecting whether video frames in the multimedia information stored in the cache region meet a preset frame loss condition or not when the trigger condition is detected, wherein the multimedia information comprises a plurality of video frames;
the frame loss module is used for performing frame loss processing on the multimedia information to discard overdue multimedia information when a video frame in the multimedia information meets a preset frame loss condition;
and the playing module is used for playing the multimedia information after the frame loss processing.
In a possible implementation manner of the embodiment of the present application, the preset frame loss condition includes at least one of the following conditions:
the total number of the video frames of the multimedia information is greater than a first preset threshold;
the multimedia information contains a video frame whose timestamp is earlier than the current playing time.
In another possible implementation manner of the embodiment of the present application, when the total number of video frames in the multimedia information is greater than a first preset threshold,
the frame loss module comprises:
the determining unit is used for determining video frames to be discarded in the multimedia information based on the total number of the video frames in the multimedia information;
and the discarding unit is used for discarding the determined video frames to be discarded so as to enable the frame number of the video frames in the buffer area after the frame loss processing not to be larger than a first preset threshold value.
In another possible implementation manner of the embodiment of the present application, the determining unit is specifically configured to determine all video frames before an Nth video frame in the multimedia information as video frames to be discarded, where the Nth video frame is the starting frame of the last first-preset-threshold number of video frames retained in the multimedia information, and N is a positive integer greater than 1.
In another possible implementation manner of the embodiment of the present application, when the multimedia information contains a video frame whose timestamp is earlier than the current playing time,
and the discarding unit is specifically configured to discard a video frame of which the timestamp is earlier than the current playing time in the multimedia information.
In another possible implementation manner of the embodiment of the present application, the trigger condition includes at least one of:
receiving first communication fault information sent by a pull streaming service, wherein the first communication fault information is used for indicating that a communication fault has occurred between the pull streaming service and a multimedia acquisition device, and the pull streaming service is used for pulling video frames from the multimedia acquisition device and storing the video frames in the buffer;
receiving second communication fault information sent by the pull streaming service, wherein the second communication fault information is used for indicating that the number of video frames pulled by the pull streaming service from the multimedia acquisition device is greater than a second preset threshold;
a preset detection period is reached.
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
and the storage module is used for storing the discarded video frames.
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
the time stamp determining module is used for determining the time stamp corresponding to each video frame in the discarded video frames;
the time period determining module is used for determining the time period corresponding to the discarded video frame based on the timestamp corresponding to each video frame;
the judging module is used for determining whether alarm information exists within the time period, wherein the alarm information is stored in a database;
and the playing module is also used for playing the discarded video frames based on the generation time of the alarm information when alarm information exists within the time period.
In another possible implementation manner of the embodiment of the present application, the playing module includes:
the matching unit is used for determining a video frame with matched time from the discarded video frames based on the generation time of the alarm information and the time stamps corresponding to the video frames in the discarded video frames;
and the playing unit is used for playing the discarded video frames with the matched time.
In another possible implementation manner of the embodiment of the present application, when the playing unit plays the discarded video frame, the playing unit is further specifically configured to at least one of:
switching and playing the discarded video frames based on a switching instruction triggered by a user;
and playing the discarded video frames through a specific window, wherein the specific window is different from the window for playing the multimedia information after the frame loss processing.
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
the pull stream module is used for pulling a video frame from the pull stream service, and the video frame is pulled from the multimedia acquisition equipment by the pull stream service;
and the cache module is used for storing the pulled video frames in a cache region.
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
a communication link establishing module, configured to establish a communication link with the pull streaming service, where the established communication link includes: a first communication link and a second communication link;
the communication module is used for sending a pull stream request through the first communication link, wherein the pull stream request is used for instructing the pull streaming service to pull video frames from the multimedia acquisition device;
the pull streaming module is specifically configured to pull the video frame from the pull streaming service through the second communication link.
In a third aspect, an electronic device is provided, including:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the multimedia playing method described above.
In a fourth aspect, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the multimedia playback method described above.
The technical solutions provided by the present application bring the following beneficial effects:
the application provides a multimedia playing method, a multimedia playing device, electronic equipment and computer readable storage, when a trigger condition is detected, whether video frames in multimedia information stored in a cache region meet a preset frame loss condition or not is detected, wherein the multimedia information comprises a plurality of video frames; and then when the video frame in the multimedia information meets the preset frame loss condition, performing frame loss processing on the multimedia information to discard the overdue multimedia information, and then playing the multimedia information after the frame loss processing. Compared with the prior art, the frame loss processing is carried out on the video frames in the multimedia information stored in the buffer area, so that the outdated multimedia information is discarded, the real-time performance of the subsequently played multimedia information can be ensured to be higher, the video playing delay is reduced, and the real-time supervision of management personnel is facilitated.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 illustrates an application scenario diagram of a multimedia playing method provided in an embodiment of the present application;
fig. 2 is an application scenario diagram illustrating another multimedia playing method provided by an embodiment of the present application;
fig. 3 is a flowchart illustrating a multimedia playing method according to an embodiment of the present application;
fig. 4 is a flowchart illustrating another multimedia playing method provided by an embodiment of the present application;
FIG. 5 shows a flowchart of step S404 in FIG. 4;
FIG. 6 shows a flow chart before step S301 of FIG. 3;
fig. 7 is a block diagram illustrating a multimedia playing apparatus according to an embodiment of the present application;
fig. 8 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present application will be described in more detail below with reference to the accompanying drawings. While exemplary embodiments of the present application are shown in the drawings, it should be understood that the present application may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
It should be noted that the terms "first," "second," and the like in the description and claims of this application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that such uses are interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in other sequences than those illustrated or described herein. Furthermore, the term "include" and its variants are to be read as open-ended terms meaning "including, but not limited to".
The multimedia playing method provided by the present application may be applied to the application environment shown in fig. 1, and may also be applied to the application environment shown in fig. 2, which is not limited in the embodiments of the present application.
As shown in fig. 1, the multimedia capturing device 102 and the first electronic device 103 are located in a network 101, and the multimedia capturing device 102 performs data interaction with the first electronic device 103 through the network 101. The multimedia capturing device 102 may be a camera or another image capture device, and may be mounted on the first electronic device 103 or used on its own; the first electronic device 103 is installed with a pull streaming service and a browser for playing video, and may be, but is not limited to, an electronic device such as a mobile phone, a personal computer or a server. In the embodiment of the present application, the pull streaming service is used for pulling video frames from the multimedia capturing device 102, and the browser is used for playing them.
As shown in fig. 2, a multimedia acquisition device 202, a first electronic device 203, and a second electronic device 204 are located in a network 201, and data interaction is performed among the multimedia acquisition device 202, the first electronic device 203, and the second electronic device 204 through the network 201, where the multimedia acquisition device 202 may be a camera or other image pickup devices, and the multimedia acquisition device 202 may be mounted on the first electronic device 203 or may be used alone; the first electronic device 203 is installed with a pull streaming service, the second electronic device 204 is installed with a browser for playing video, and the first electronic device 203 and the second electronic device 204 may be, but are not limited to, electronic devices such as a mobile phone, a personal computer, a server, and the like.
In combination with the above application environment, an embodiment of the present application provides a multimedia playing method, which is executed by the first electronic device 103 in fig. 1 or the second electronic device 204 in fig. 2, as shown in fig. 3, and the method includes:
step S301: when the trigger condition is detected, whether the video frames in the multimedia information stored in the buffer area meet the preset frame loss condition or not is detected, and the multimedia information comprises a plurality of video frames.
The multimedia information may be a video played in an online live-streaming room, whose content may be, for example, a host's live performance or a live event broadcast; it may also be an online television live broadcast, a real-time surveillance video or the like, which is not limited in this application.
The first electronic device 103 in fig. 1 or the second electronic device 204 in fig. 2 is installed with a browser program for playing live video. The buffer may be a video frame buffer created by the browser for caching video frames, which separates the received video stream from the played video stream, so that the video frames added to the buffer and the video frames taken out of the buffer and played can be kept as synchronized as possible, improving the real-time performance of picture playback.
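To make the buffer concrete, a minimal TypeScript sketch is given below; the `BufferedFrame` shape and helper names are illustrative assumptions rather than the patent's implementation, and the later sketches in this description reuse this shape.

```typescript
/** Illustrative shape of a buffered video frame (an assumption, not the patent's format). */
interface BufferedFrame {
  timestamp: number; // capture time, milliseconds since the epoch
  data: Uint8Array;  // encoded frame payload
}

// The buffer decouples receiving from playing: the pull side appends
// frames, the playing side takes them off the front for rendering.
const frameBuffer: BufferedFrame[] = [];

function enqueueFrame(frame: BufferedFrame): void {
  frameBuffer.push(frame); // called when a frame arrives from the pull streaming service
}

function nextFrameToPlay(): BufferedFrame | undefined {
  return frameBuffer.shift(); // called by the player at its own pace
}
```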
Step S302: and when the video frame in the multimedia information meets the preset frame loss condition, performing frame loss processing on the multimedia information to discard the overdue multimedia information.
Specifically, in the embodiment of the present application, the preset frame loss condition may specifically be referred to in the following embodiments, and details are not described herein.
Step S303: and playing the multimedia information after frame loss processing.
For the embodiment of the application, the multimedia information after frame dropping processing is generally the multimedia information corresponding to the current time, so that the real-time performance of playing the multimedia information after frame dropping processing is relatively high.
The embodiment of the application provides a multimedia playing method. When a trigger condition is detected, it is detected whether the video frames in the multimedia information stored in the buffer meet a preset frame loss condition, wherein the multimedia information comprises a plurality of video frames; when a video frame in the multimedia information meets the preset frame loss condition, frame loss processing is performed on the multimedia information to discard expired multimedia information, and the multimedia information after frame loss processing is then played. Compared with the prior art, performing frame loss processing on the video frames stored in the buffer discards the outdated multimedia information, ensures better real-time performance of the subsequently played multimedia information, reduces the video playing delay, and facilitates real-time supervision by management personnel.
In the embodiment of the present application, a possible implementation manner is provided, where the preset frame loss condition in step S301 includes at least one of the following conditions:
frame loss condition one: the total number of the video frames of the multimedia information is greater than a first preset threshold value.
The first preset threshold may be set by a worker, or may be determined according to a connection condition of a current network, which is not limited in this embodiment of the application.
For example, the first preset threshold may be set to 20 frames, 30 frames, etc. Assuming the first preset threshold is 20 frames: if the total number of video frames in the multimedia information cached in the buffer is 25, the video frames in the multimedia information satisfy the frame loss condition; if the total number is 15, they do not.
Frame loss condition two: the multimedia information contains a video frame whose timestamp is earlier than the current playing time.
The timestamp of a video frame may be the point in time at which the frame was captured, with which the frame is marked. In this embodiment, the timestamp of a video frame may be the time point at which the frame was shot; for example, if a video frame was shot at 10:07:05 on December 3, 2016, its timestamp may be marked as 10:07:05 on December 3, 2016. Of course, the timestamp may be any information that characterizes the time at which the video frame was formed.
For example, assuming the current playing time is 16:07:05 on December 3, 2016, if the multimedia information contains a video frame whose timestamp is before 16:07:05 on December 3, 2016, frame loss condition two is satisfied; otherwise it is not.
For example, if the buffer stores 25 video frames and the timestamps of the first 5 video frames are all before the current playing time, it is determined that the multimedia information stored in the buffer satisfies frame loss condition two.
Another possible implementation manner is provided in this embodiment of the present application. When the total number of video frames in the multimedia information is greater than the first preset threshold (that is, frame loss condition one), performing frame loss processing on the multimedia information includes: determining the video frames to be discarded in the multimedia information based on the total number of video frames in the multimedia information; and discarding the determined video frames so that the number of video frames left in the buffer after the frame loss processing is not greater than the first preset threshold.
Specifically, all video frames before the Nth video frame in the multimedia information are determined as the video frames to be discarded, and all video frames before the Nth video frame are then dropped.
The Nth video frame is the starting frame of the last first-preset-threshold number of video frames retained in the multimedia information, and N is a positive integer greater than 1.
For example, assuming the first preset threshold is 30 frames and the total number of video frames in the multimedia information is 35, all video frames before the 6th video frame are determined as the frames to be discarded; that is, the 1st (starting) to 5th video frames in the buffer are discarded and the last 30 video frames in the buffer are retained.
Further, when the total number of video frames in the multimedia information stored in the buffer is greater than the first preset threshold, frame loss processing is performed on the multimedia information to discard the expired multimedia information. Using the total number of buffered video frames exceeding the first preset threshold as the frame loss condition makes determining the frames to be dropped simple and fast, reduces the playing delay introduced by this determination, and further reduces the consumption of computing capacity.
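A minimal sketch of the count-based trimming described above (TypeScript, reusing the `BufferedFrame` shape from the earlier sketch); the threshold value and function name are illustrative assumptions, not the patent's implementation.

```typescript
const FIRST_PRESET_THRESHOLD = 30; // assumed value for illustration

/**
 * Drops every frame before the Nth frame so that at most
 * FIRST_PRESET_THRESHOLD frames (the newest ones) remain in the buffer.
 * Returns the dropped frames so they can be archived later.
 */
function trimByCount(buffer: BufferedFrame[]): BufferedFrame[] {
  if (buffer.length <= FIRST_PRESET_THRESHOLD) {
    return []; // frame loss condition one is not met
  }
  // n is the number of leading frames to discard; the frame at index n
  // is the starting frame of the retained frames (the "Nth" frame).
  const n = buffer.length - FIRST_PRESET_THRESHOLD;
  return buffer.splice(0, n); // removes and returns the expired frames
}
```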
In another possible implementation manner of the embodiment of the present application, when the multimedia information contains a video frame whose timestamp is earlier than the current playing time (that is, frame loss condition two), performing frame loss processing on the multimedia information may specifically include: discarding the video frames in the multimedia information whose timestamps are earlier than the current playing time.
Further, in the embodiment of the present application, when it is determined whether the timestamps of the video frames in the multimedia information stored in the buffer are earlier than the current playing time, the current playing time is first obtained. The specific manner of obtaining the current playing time is not described in detail in this embodiment.
Illustratively, assuming the current playing time is 16:07:05 on December 3, 2016, if the timestamps of the first to tenth video frames in the multimedia information are all before 16:07:05 on December 3, 2016, the first to tenth video frames are discarded.
Further, when the video frames in the multimedia information stored in the buffer include frames whose timestamps are earlier than the current playing time, frame loss processing is performed on the multimedia information to discard the expired multimedia information. Using the relationship between the timestamp carried by a video frame and the current time as the frame loss condition improves the accuracy of frame dropping and thus further improves the real-time performance of multimedia playback.
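A companion sketch for the timestamp-based condition, under the same illustrative assumptions as the sketch above:

```typescript
/**
 * Drops every buffered frame whose timestamp is earlier than the
 * current playing time (frame loss condition two).
 */
function trimByTimestamp(
  buffer: BufferedFrame[],
  currentPlayingTime: number, // milliseconds since the epoch
): BufferedFrame[] {
  const dropped: BufferedFrame[] = [];
  while (buffer.length > 0 && buffer[0].timestamp < currentPlayingTime) {
    dropped.push(buffer.shift()!); // frames are assumed to arrive in timestamp order
  }
  return dropped;
}
```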
In the above embodiments, whether the video frames stored in the buffer meet the preset frame loss condition is detected in order to guarantee the real-time performance of playback. To improve the accuracy of this detection while avoiding the waste of computing resources caused by continuous real-time detection and the inaccuracy caused by random detection, the embodiments of the present application provide several trigger conditions for the detection. However, the embodiments of the present application are not limited to the trigger conditions mentioned here: detecting in real time whether the buffered video frames meet the preset frame loss condition, detecting it at random, or detecting it at specific time intervals all fall within the protection scope of the embodiments of the present application.
The trigger condition in step S301 includes at least one of the following:
Trigger condition one: first communication fault information sent by the pull streaming service is received, wherein the first communication fault information is used for indicating that a communication fault has occurred between the pull streaming service and the multimedia acquisition device, and the pull streaming service is used for pulling video frames from the multimedia acquisition device and storing them in the buffer.
The communication fault may be, but is not limited to, network fluctuation, a delay in rendering the picture caused by the graphics card's rendering mechanism, or an imbalance in data throughput. The communication link established between the pull streaming service and the multimedia acquisition device may be a TCP long connection, so that the pull streaming service can still pull video frames from the multimedia acquisition device when a communication fault occurs, thereby avoiding loss of video frames. When a communication fault occurs between the pull streaming service and the multimedia acquisition device, the pull streaming service generates the corresponding first communication fault information and sends it to the browser. Receiving the first communication fault information indicates to the browser that the video frames pulled by the pull streaming service have accumulated, so the video frames stored in the buffer may include partially expired video frames.
Further, the connection relationship among the browser, the pull stream service, and the multimedia acquisition device is described in detail in the following embodiments, and is not described herein again.
Trigger condition two: second communication fault information sent by the pull streaming service is received, wherein the second communication fault information is used for indicating that the number of video frames pulled by the pull streaming service from the multimedia acquisition device is greater than a second preset threshold.
The second preset threshold may be set by a worker, for example, may be set to 20 frames, 30 frames, or the like, and may also be determined based on a historical pull stream condition, or may also be determined based on an acquisition rule of a multimedia acquisition device, and the like, which is not limited in this embodiment of the application.
For the embodiment of the application, when the number of video frames pulled by the pull streaming service from the multimedia acquisition device (e.g., a camera) is greater than the second preset threshold, the pull streaming service generates second communication fault information and sends it to the browser. For example, assuming the second preset threshold is 30 frames and the pull streaming service has pulled 40 video frames from the multimedia acquisition device, the pull streaming service generates second communication fault information and pushes it to the browser. When the browser receives the second communication fault information, the video frames stored in the buffer may include expired video frames, so the browser can start detecting whether the video frames in the multimedia information stored in the buffer meet the preset frame loss condition.
Trigger condition three: a preset detection period is reached.
The preset detection period may be a preset time point or a preset time interval. For example, if the preset detection time point is 3:00 p.m., then at 3:00 p.m. it is detected whether the video frames in the multimedia information stored in the buffer meet the preset frame loss condition; if the preset detection interval is 15 minutes, then every 15 minutes it is detected whether the video frames in the multimedia information stored in the buffer meet the preset frame loss condition.
When trigger condition one, trigger condition two, trigger condition three, or any combination of them is met, detecting whether the video frames in the multimedia information stored in the buffer meet the preset frame loss condition improves the accuracy of the detection and further reduces the power consumption spent on detection.
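As a rough illustration of how a browser-side player might wire these triggers together, the sketch below listens for fault messages from the pull streaming service and also runs a periodic check; the message format, field names and 15-minute interval are assumptions, not part of the patent.

```typescript
type FaultMessage =
  | { type: "first_communication_fault" }                  // trigger condition one
  | { type: "second_communication_fault"; frames: number } // trigger condition two
  ;

function installTriggers(
  controlSocket: WebSocket,           // link to the pull streaming service
  onTrigger: () => void,              // runs the frame-loss-condition check
  detectionPeriodMs = 15 * 60 * 1000, // trigger condition three (assumed interval)
): void {
  controlSocket.addEventListener("message", (event) => {
    const msg = JSON.parse(event.data as string) as FaultMessage;
    if (
      msg.type === "first_communication_fault" ||
      msg.type === "second_communication_fault"
    ) {
      onTrigger();
    }
  });
  setInterval(onTrigger, detectionPeriodMs); // periodic detection
}
```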
The embodiment of the present application provides another possible implementation manner: after frame loss processing is performed on the multimedia information, the method may further include storing the discarded video frames.
Specifically, in the embodiment of the present application, the discarded video frames may be stored locally, or they may be sent to another device for storage. The other device may be a storage device such as a server, a mobile terminal, a USB flash disk or a cloud storage.
For the embodiment of the application, since some important information may exist in the discarded video frames, storing them locally or on another device avoids losing that information and also facilitates subsequent query and use of the content of the discarded frames.
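A small sketch of this storage step, again reusing the `BufferedFrame` shape; holding the frames in an in-memory archive is only one of the options the text mentions (local storage, another device, a server, a cloud), chosen here for brevity.

```typescript
const droppedArchive: BufferedFrame[] = [];

/** Keeps discarded frames so they can be inspected or replayed later. */
function archiveDroppedFrames(dropped: BufferedFrame[]): void {
  droppedArchive.push(...dropped);
  // Alternatively, the frames could be written to IndexedDB or sent to
  // a storage server here instead of being held in memory.
}
```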
In the embodiment of the present application, another possible implementation manner is provided, as shown in fig. 4, where the method further includes:
step 401: and determining the time stamp corresponding to each video frame in the discarded video frames.
Illustratively, the discarded video frames comprise 3 video frames: the first has a timestamp of 16:07:05 on December 3, 2016, the second 16:07:06 on December 3, 2016, and the third 16:07:07 on December 3, 2016.
Step 402: and determining the time period corresponding to the discarded video frame based on the timestamp corresponding to each video frame.
For the embodiment of the present application, the time period spanned by the timestamps of the video frames discarded in step S401 is determined as the time period corresponding to the discarded video frames.
As an example, from the timestamps of the 3 video frames above, it can be determined that the time period corresponding to the discarded video frames is 16:07:05 to 16:07:07 on December 3, 2016.
Step 403: determining whether alarm information exists in a time period, wherein the alarm information is stored in a database; if the alarm information is included, step 404 is performed.
The alarm information may be, but is not limited to, alarm information generated by a monitoring device. For example, the monitoring device may be a wearable device worn by a person, such as a bracelet with a built-in positioning device; when the person's positioning information goes beyond a preset positioning range, corresponding alarm information is generated and stored in a database. The monitoring device can also generate heart rate alarms, crowd alarms and the like. In the embodiment of the application, a heart rate alarm is alarm information reported by a bracelet or similar device worn by a person when the person's heart rate is abnormal, and a crowd alarm is an alarm generated and reported when too many people gather in a certain area. The alarm information may be stored in one database or in at least two databases.
For example, assuming the time period corresponding to the discarded video frames is determined to be 16:07:05 to 16:07:07 on December 3, 2016, if the generation time of alarm information stored in the database falls within that period, it is determined that alarm information exists in the time period corresponding to the discarded video frames.
Searching for alarm information within the time period corresponding to the discarded video frames makes it possible to automatically determine whether the discarded video frames contain important information, and helps the user find the discarded video frames that contain such information.
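A hedged sketch of steps 401 to 403: compute the time period spanned by the discarded frames and look up alarm information inside it. The `AlarmRecord` shape and the in-memory alarm list stand in for the database mentioned in the text, and the `BufferedFrame` shape is reused from the earlier sketch.

```typescript
interface AlarmRecord {
  generatedAt: number; // alarm generation time, milliseconds since the epoch
  message: string;
}

/** Step 402: the period spanned by the discarded frames' timestamps. */
function periodOf(dropped: BufferedFrame[]): [number, number] | null {
  if (dropped.length === 0) return null;
  const times = dropped.map((f) => f.timestamp);
  return [Math.min(...times), Math.max(...times)];
}

/** Step 403: alarms whose generation time falls inside the period. */
function alarmsInPeriod(
  alarms: AlarmRecord[],
  [start, end]: [number, number],
): AlarmRecord[] {
  return alarms.filter((a) => a.generatedAt >= start && a.generatedAt <= end);
}
```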
Step 404: playing the discarded video frame based on the generation time of the alarm information.
Specifically, as shown in fig. 5, step 404 may further include: step S4041-step S4042, wherein,
step 4041: and determining the video frames with matched time from the discarded video frames based on the generation time of the alarm information and the time stamps corresponding to the video frames in the discarded video frames respectively.
For the embodiment of the application, the video frames matching the generation time of the alarm information are determined from the discarded video frames, and those matching frames are then played. Of course, all the discarded video frames may be played as long as alarm information exists within their time period; the embodiments of the present application are not limited in this respect. A video frame matching the generation time of the alarm information may be a frame whose timestamp equals the generation time, and may also include frames within a preset time before and after that frame; the embodiments of the present application are not limited in this respect either.
Matching a video frame's timestamp against the generation time of the alarm information is not limited to the two being identical; a frame also matches when the difference between its timestamp and the generation time is within a preset time window. This broadens the match between the alarm information and the discarded video frames, so that a more complete video picture can be recovered.
Illustratively, if the discarded video frames include 3 video frames with timestamps 16:07:05, 16:07:06 and 16:07:07 on December 3, 2016 respectively, and the alarm information was generated at 16:07:05 on December 3, 2016, the first video frame is determined to be the frame matching the generation time of the alarm information. Alternatively, with the same three frames and alarm information generated at 16:07:00 on December 3, 2016, if the preset tolerance between the alarm generation time and a frame timestamp is set to 5 seconds, the first video frame is determined to be the frame matching the generation time of the alarm information.
Step 4042: playing the time-matched dropped video frames.
For the embodiment of the application, playing the discarded video frames whose time matches makes it convenient for the user to view the video content related to the alarm information and to understand what happened when the alarm information was generated. For example, when a person's positioning information goes beyond the preset positioning range, corresponding alarm information is generated; playing the discarded video frames matching the alarm generation time shows how the person left the preset positioning range.
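A sketch of the matching in step 4041 under the assumption of a 5-second tolerance, as in the example above; the tolerance value and function name are illustrative, and the `BufferedFrame` and `AlarmRecord` shapes are reused from the earlier sketches.

```typescript
const MATCH_TOLERANCE_MS = 5_000; // assumed 5-second window

/** Step 4041: discarded frames whose timestamps match an alarm's generation time. */
function framesMatchingAlarm(
  dropped: BufferedFrame[],
  alarm: AlarmRecord,
): BufferedFrame[] {
  return dropped.filter(
    (f) => Math.abs(f.timestamp - alarm.generatedAt) <= MATCH_TOLERANCE_MS,
  );
}
```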
Further, in the embodiment of the present application, steps S401 to S404 may be executed after step S302, after step S303, or simultaneously with step S302 or step S303; they may also be executed when or after the video frames to be discarded are determined, or after the discarded video frames are stored. Any feasible execution order of steps S401 to S404 falls within the scope of the embodiments of the present application.
Further, the embodiment of the present application also includes: detecting the discarded video frames to determine whether they contain video frames that satisfy an alarm condition, and playing the discarded video frames if such frames exist. In this embodiment, the video frames satisfying the alarm condition may include at least one of: a video frame containing an unfamiliar face, a video frame containing a certain specific object, and a video frame containing dangerous behavior. The discarded video frames may be detected by a video frame detection model or in other ways; the training of the video frame detection model is not described in detail here. Further, the manner of playing the discarded video frames when a frame satisfying the alarm condition is detected is described in the following embodiments and is not repeated here.
Specifically, the manner of playing the dropped video frames includes at least one of the following:
the first playing mode is as follows: and switching to play the discarded video frames based on a switching instruction triggered by a user.
The user may trigger the switching instruction by sliding, clicking or a similar operation, and the discarded video frames matching the generation time of the alarm information are played in response to it. The switching instruction is not limited to switching to the discarded frames matching the alarm generation time; all discarded video frames may be played, and the user may also select which discarded frames to play. The embodiments of the present application are not limited in this respect.
And a second playing mode: the dropped video frames are played through a specific window.
Wherein, the specific window is different from the window for playing the multimedia information after the frame loss processing.
For example, the dropped video frames may be played through a small window or the like.
For the embodiment of the present application, playing the discarded video frames in this manner is likewise not limited to the frames matching the generation time of the alarm information; all discarded video frames may be played, and the user may also select which discarded frames to play. The embodiments of the present application are not limited in this respect.
For the embodiment of the application, playing the discarded video frames matching the alarm generation time in a separate dedicated window allows the discarded frames to be viewed while the multimedia information after frame loss processing continues to play, so important information contained in the discarded frames is not missed and the real-time monitoring picture can still be watched.
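A rough browser-side sketch of the second playing mode: rendering discarded frames into a small auxiliary window (here a floating canvas) while the main player keeps running. It assumes the discarded frames have already been decoded into ImageBitmap objects; the window size and frame rate are arbitrary choices.

```typescript
/** Plays discarded frames in a small floating window next to the main player. */
async function playInAuxiliaryWindow(
  frames: ImageBitmap[],
  frameIntervalMs = 40, // assumed ~25 fps playback
): Promise<void> {
  const canvas = document.createElement("canvas");
  canvas.width = 320;
  canvas.height = 180;
  canvas.style.cssText = "position:fixed;right:16px;bottom:16px;z-index:10;";
  document.body.appendChild(canvas);

  const ctx = canvas.getContext("2d")!;
  for (const frame of frames) {
    ctx.drawImage(frame, 0, 0, canvas.width, canvas.height);
    await new Promise((resolve) => setTimeout(resolve, frameIntervalMs));
  }
  canvas.remove(); // close the auxiliary window when playback finishes
}
```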
In the embodiment of the present application, as shown in fig. 6, before the step S301, a further possible implementation manner is provided, where:
step S601: and pulling the video frame from the pull streaming service, wherein the video frame is pulled from the multimedia acquisition equipment by the pull streaming service.
For the embodiment of the present application, a communication link is established with the pull streaming service, the established communication link including a first communication link and a second communication link. A pull stream request is sent through the first communication link, the pull stream request being used for instructing the pull streaming service to pull video frames from the multimedia acquisition device. After the communication link with the pull streaming service is established, pulling video frames from the pull streaming service includes: pulling the video frames from the pull streaming service over the second communication link.
For the embodiment of the present application, establishing the communication link with the pull streaming service may be performed before step S601. Both the first communication link and the second communication link may be WebSocket links, and the second communication link is used to receive the video stream of the camera or other device pushed by the pull streaming service.
Further, in this embodiment of the application, the pull streaming service, acting as a Transmission Control Protocol (TCP) client, establishes a TCP link with the multimedia acquisition device (such as a camera), transmits a pull stream request and the like to the multimedia acquisition device after the TCP link is successfully established, and the multimedia acquisition device then pushes the video stream to the pull streaming service through the TCP link.
Specifically, in the embodiment of the present application, because both the TCP link and the WebSocket link are long-connection protocols, data loss due to network fluctuation is avoided, and picture corruption and frame drops are also avoided when the front-end browser plays the real-time video stream.
Step S602: and storing the pulled video frame in a buffer area.
For the embodiment of the present application, the video frames pulled from the pull streaming service in step S601 are stored in the buffer in step S602, so that the browser plays the video frames stored in the buffer.
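The sketch below illustrates, under assumed URLs and message formats, how a browser might use the two WebSocket links described above: the first carries the pull stream request, the second receives video frames and appends them to the buffer (reusing `BufferedFrame` and the buffer from the earlier sketch).

```typescript
function connectToPullService(
  buffer: BufferedFrame[],
  controlUrl = "ws://pull-service.example/control", // assumed endpoint
  dataUrl = "ws://pull-service.example/frames",     // assumed endpoint
): { control: WebSocket; data: WebSocket } {
  // First communication link: carries the pull stream request.
  const control = new WebSocket(controlUrl);
  control.addEventListener("open", () => {
    control.send(JSON.stringify({ type: "pull_stream", camera: "cam-01" }));
  });

  // Second communication link: receives video frames pushed by the service.
  const data = new WebSocket(dataUrl);
  data.binaryType = "arraybuffer";
  data.addEventListener("message", (event) => {
    buffer.push({
      timestamp: Date.now(), // a real frame would carry its own capture timestamp
      data: new Uint8Array(event.data as ArrayBuffer),
    });
  });

  return { control, data };
}
```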
In the above embodiments, a method for multimedia playing is introduced from the perspective of method flow, and in the following embodiments, a multimedia playing apparatus is introduced from the perspective of a virtual module and/or a virtual unit, and the multimedia playing apparatus described in the following embodiments is applicable to the above method embodiments.
An embodiment of the present application provides a multimedia playing apparatus, as shown in fig. 7, including:
a detecting module 701, configured to detect, when a trigger condition is detected, whether a video frame in multimedia information stored in a buffer meets a preset frame loss condition, where the multimedia information includes multiple video frames;
a frame dropping module 702, configured to perform frame dropping processing on the multimedia information to drop out outdated multimedia information when a video frame in the multimedia information meets a preset frame dropping condition;
the playing module 703 is configured to play the multimedia information after frame loss processing.
The embodiment of the application provides a multimedia playing device. When a trigger condition is detected, the device detects whether the video frames in the multimedia information stored in the buffer meet a preset frame loss condition, where the multimedia information includes a plurality of video frames; when the video frames in the multimedia information meet the preset frame loss condition, frame loss processing is performed on the multimedia information to discard the outdated multimedia information, and the multimedia information after frame loss processing is then played. Compared with the prior art, performing frame loss processing on the video frames stored in the buffer discards the outdated multimedia information, so that the subsequently played multimedia information stays close to real time, the video playing delay is reduced, and real-time supervision by management personnel is facilitated.
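The following is a minimal sketch, in TypeScript, of how the detecting module, the frame dropping module, and the playing module could fit together; onTrigger, dropFrames, and play are illustrative placeholder names, not names used by this application.

```typescript
// Minimal sketch of how the three modules could be wired together; the two
// checks mirror the two preset frame loss conditions described below.

interface Frame { timestamp: number }

function onTrigger(buffer: Frame[], firstThreshold: number, currentPlayTime: number): void {
  // Detecting module: does any preset frame loss condition hold?
  const overLimit = buffer.length > firstThreshold;                  // total frames above threshold
  const hasStale  = buffer.some(f => f.timestamp < currentPlayTime); // frames older than play time
  if (overLimit || hasStale) {
    // Frame dropping module: discard the outdated multimedia information.
    dropFrames(buffer, firstThreshold, currentPlayTime);
  }
  // Playing module: play the multimedia information after frame loss processing.
  play(buffer);
}

declare function dropFrames(buffer: Frame[], firstThreshold: number, currentPlayTime: number): void;
declare function play(frames: readonly Frame[]): void;
```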
In another possible implementation manner of the embodiment of the present application, the preset frame loss condition includes at least one of the following conditions:
the total number of video frames of the multimedia information is greater than a first preset threshold value;
the multimedia information contains a video frame whose timestamp is earlier than the current playing time.
Specifically, when the total number of video frames in the multimedia information is greater than a first preset threshold,
the frame loss module 702 includes:
the determining unit is used for determining video frames to be discarded in the multimedia information based on the total number of the video frames in the multimedia information;
and the discarding unit is used for discarding the determined video frames to be discarded, so that the number of video frames remaining in the buffer after the frame loss processing does not exceed the first preset threshold.
In another possible implementation manner of the embodiment of the present application, the determining unit is specifically configured to determine all video frames before an Nth video frame in the multimedia information as the video frames to be discarded, where the Nth video frame is the starting frame of the last first-preset-threshold video frames retained in the multimedia information, and N is a positive integer greater than 1.
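A minimal sketch of the determining unit and the discarding unit for this condition, under the interpretation that the Nth frame is simply the first of the newest first-preset-threshold frames kept in the buffer; dropToThreshold is an illustrative name.

```typescript
// Drop everything before the Nth frame so only the newest `firstThreshold`
// frames remain; the dropped prefix is returned so it can be stored later.

interface Frame { timestamp: number }

function dropToThreshold(buffer: Frame[], firstThreshold: number): Frame[] {
  if (buffer.length <= firstThreshold) return [];  // condition not met, nothing to drop
  const n = buffer.length - firstThreshold;        // index of the Nth (first retained) frame
  return buffer.splice(0, n);                      // discard all frames before the Nth frame
}
```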
In another possible implementation manner of the embodiment of the present application, when the multimedia information contains a video frame whose timestamp is earlier than the current playing time,
and the discarding unit is specifically configured to discard a video frame of which the timestamp is earlier than the current playing time in the multimedia information.
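A minimal sketch of the discarding unit for this condition, assuming currentPlayTime is the timestamp of the frame currently being rendered; dropStaleFrames is an illustrative name, and the dropped frames are returned so they can be stored for later alarm playback.

```typescript
// Discard buffered frames whose timestamp is earlier than the current
// playing time, keeping only the up-to-date frames in place.

interface Frame { timestamp: number }

function dropStaleFrames(buffer: Frame[], currentPlayTime: number): Frame[] {
  const dropped = buffer.filter(f => f.timestamp < currentPlayTime);
  const kept    = buffer.filter(f => f.timestamp >= currentPlayTime);
  buffer.length = 0;
  buffer.push(...kept);
  return dropped;
}
```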
In another possible implementation manner of the embodiment of the present application, the trigger condition includes at least one of the following conditions (a brief sketch follows the list):
receiving first communication fault information sent by the pull stream service, wherein the first communication fault information is used for indicating that a communication fault occurs between the pull stream service and the multimedia acquisition device, and the pull stream service is used for pulling video frames from the multimedia acquisition device and storing the video frames in the buffer;
receiving second communication fault information sent by the pull stream service, wherein the second communication fault information is used for indicating that the number of video frames pulled by the pull stream service from the multimedia acquisition device is greater than a second preset threshold;
a preset detection period is reached.
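A brief sketch of the three trigger conditions as a discriminated union, assuming the pull stream service reports faults as typed messages; the message type strings shown are assumptions made only for illustration.

```typescript
// The three triggers that start a frame loss check.

type Trigger =
  | { kind: "first-fault" }    // communication fault between pull service and acquisition device
  | { kind: "frame-backlog" }  // pulled-frame count above the second preset threshold
  | { kind: "period" };        // preset detection period reached

function messageToTrigger(msg: { type: string }): Trigger | null {
  switch (msg.type) {
    case "first-fault":  return { kind: "first-fault" };
    case "second-fault": return { kind: "frame-backlog" };
    default:             return null; // the periodic trigger comes from a local timer, not a message
  }
}
```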
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
and the storage module is used for storing the discarded video frames.
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
the time stamp determining module is used for determining the time stamp corresponding to each video frame in the discarded video frames;
the time period determining module is used for determining the time period corresponding to the discarded video frame based on the timestamp corresponding to each video frame;
the judging module is used for determining whether alarm information exists in the time period, where the alarm information is stored in a database;
the playing module 703 is further configured to, when alarm information exists in the time period, play the discarded video frames based on the generation time of the alarm information.
In another possible implementation manner of the embodiment of the present application, the playing module includes the following units (a brief matching sketch follows the list):
the matching unit is used for determining a video frame with matched time from the discarded video frames based on the generation time of the alarm information and the time stamps corresponding to the video frames in the discarded video frames;
and the playing unit is used for playing the discarded video frames with the matched time.
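A minimal sketch of the matching unit and the playing unit, under the assumption that "matched time" means the discarded frame whose timestamp is nearest to the alarm generation time; playInSeparateWindow stands for whatever separate-window player the front end provides and is not an API defined by this application.

```typescript
// Pick the discarded frame closest in time to the alarm and hand it to a
// playback window distinct from the live-view window.

interface Frame { timestamp: number }

function playAlarmFrame(dropped: readonly Frame[], alarmTime: number): Frame | undefined {
  if (dropped.length === 0) return undefined;
  // Matching unit: the discarded frame whose timestamp is nearest to the alarm time.
  const matched = dropped.reduce((best, f) =>
    Math.abs(f.timestamp - alarmTime) < Math.abs(best.timestamp - alarmTime) ? f : best
  );
  // Playing unit: render it in a separate window.
  playInSeparateWindow(matched);
  return matched;
}

declare function playInSeparateWindow(frame: Frame): void;
```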
In another possible implementation manner of the embodiment of the present application, the playing unit, when playing the discarded video frames, is further specifically configured to perform at least one of the following:
switching and playing the discarded video frames based on a switching instruction triggered by a user;
and playing the discarded video frames through a specific window, wherein the specific window is different from the window for playing the multimedia information after the frame loss processing.
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
the pull stream module is used for pulling a video frame from the pull stream service, and the video frame is pulled from the multimedia acquisition equipment by the pull stream service;
and the cache module is used for storing the pulled video frames in a cache region.
In another possible implementation manner of the embodiment of the present application, the apparatus further includes:
a communication link establishing module, configured to establish a communication link with a pull service, where the established communication link includes: a first communication link and a second communication link;
the communication module is used for sending a pull stream request through a first communication link, wherein the pull stream request is used for indicating a pull stream service to pull a video frame from the multimedia acquisition equipment;
the pull streaming module is specifically configured to pull the video frame from the pull streaming service through the second communication link.
In the above embodiments, a multimedia playing method and a multimedia playing apparatus are introduced from the perspectives of a method flow and virtual modules, respectively; in the following embodiments, an electronic device is introduced to perform the operations shown in the above method embodiments.
In a third aspect, an electronic device is provided, including:
one or more processors;
a memory;
one or more application programs, wherein the one or more application programs are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the multimedia playing method described above.
An embodiment of the present application provides a computer-readable storage medium, on which a computer program is stored, and the computer program, when executed by a processor, implements the multimedia playing method described above.
Fig. 8 is a schematic structural diagram of an electronic device according to an embodiment of the present application, and the specific embodiment of the present application does not limit a specific implementation of the electronic device.
As shown in fig. 8, the electronic device may include: a processor 801, a communication interface 802, a memory 803, and a communication bus 804.
Wherein: the processor 801, the communication interface 802, and the memory 803 communicate with each other via a communication bus 804.
The communication interface 802 is configured to communicate with network elements of other devices, such as clients or other servers.
The processor 801 is configured to execute the program 808, and may specifically perform relevant steps in the above-described embodiment of the multimedia playing method.
In particular, the program 808 may include program code that includes computer operational instructions.
The processor 801 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present application. The electronic device includes one or more processors, which may be processors of the same type, such as one or more CPUs, or processors of different types, such as one or more CPUs and one or more ASICs.
The memory 803 stores a program 808. The memory 803 may comprise high-speed RAM, and may also include non-volatile memory, such as at least one disk storage.
It is clear to those skilled in the art that the specific working processes of the above-described systems, devices, modules and units may refer to the corresponding processes in the foregoing method embodiments, and for the sake of brevity, further description is omitted here.
Those of ordinary skill in the art will understand that: the technical solution of the present application may be embodied, in whole or in part, in the form of a software product, where the computer software product is stored in a storage medium and includes program instructions for enabling an electronic device (for example, a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of the present application when the program instructions are executed. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Alternatively, all or part of the steps of the foregoing method embodiments may be implemented by hardware (an electronic device such as a personal computer, a server, or a network device) executing program instructions; the program instructions may be stored in a computer-readable storage medium, and when they are executed by a processor of the electronic device, the electronic device performs all or part of the steps of the methods described in the embodiments of the present application.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments can be modified or some or all of the technical features can be equivalently replaced within the spirit and principle of the present application; such modifications or substitutions do not depart from the scope of the present application.

Claims (10)

1. A multimedia playing method, comprising:
when a trigger condition is detected, detecting whether video frames in multimedia information stored in a cache region meet a preset frame loss condition or not, wherein the multimedia information comprises a plurality of video frames;
when the video frame in the multimedia information meets the preset frame loss condition, performing frame loss processing on the multimedia information to discard the overdue multimedia information;
and playing the multimedia information after frame loss processing.
2. The method of claim 1, wherein the preset frame loss condition comprises at least one of:
the total number of video frames of the multimedia information is greater than a first preset threshold value;
and the multimedia information contains a video frame whose timestamp is earlier than the current playing time.
3. The method of claim 2, wherein when the total number of video frames in the multimedia information is greater than a first preset threshold,
the frame loss processing of the multimedia information comprises:
determining video frames to be discarded in the multimedia information based on the total number of the video frames in the multimedia information;
and discarding the determined video frames to be discarded, so that the number of video frames in the buffer area after the frame loss processing is not greater than the first preset threshold.
4. The method of claim 3, wherein the determining the video frames to be dropped in the multimedia information based on the total number of video frames in the multimedia information comprises:
determining all video frames before an Nth video frame in the multimedia information as the video frames to be discarded, wherein the Nth video frame is the starting frame of the last first-preset-threshold video frames retained in the multimedia information, and N is a positive integer greater than 1.
5. The method of claim 3 or 4, wherein when the multimedia information contains a video frame whose timestamp is earlier than the current playing time,
the frame loss processing of the multimedia information comprises:
and discarding the video frame with the timestamp earlier than the current playing time in the multimedia information.
6. The method of claim 1, wherein the trigger condition comprises at least one of:
receiving first communication fault information sent by a pull stream service, wherein the first communication fault information is used for indicating that a communication fault occurs between the pull stream service and a multimedia acquisition device, and the pull stream service is used for pulling a video frame from the multimedia acquisition device and storing the video frame in a cache region;
receiving second communication fault information sent by the pull stream service, wherein the second communication fault information is used for indicating that the number of video frames pulled from the multimedia acquisition device by the pull stream service is greater than a second preset threshold;
a preset detection period is reached.
7. The method of claim 1, wherein after the frame loss processing of the multimedia information, the method comprises:
the dropped video frames are stored.
8. A multimedia playback apparatus, comprising:
the detection module is used for detecting whether video frames in multimedia information stored in a cache region meet a preset frame loss condition or not when a trigger condition is detected, wherein the multimedia information comprises a plurality of video frames;
the frame loss module is used for performing frame loss processing on the multimedia information to discard overdue multimedia information when a video frame in the multimedia information meets a preset frame loss condition;
and the playing module is used for playing the multimedia information after the frame loss processing.
9. An electronic device, comprising:
one or more processors;
a memory;
one or more applications, wherein the one or more applications are stored in the memory and configured to be executed by the one or more processors, and the one or more programs are configured to perform the multimedia playing method according to any one of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, the program, when executed by a processor, implementing the multimedia playback method of any one of claims 1 to 7.
CN202011279087.2A 2020-11-16 2020-11-16 Multimedia playing method and device, electronic equipment and computer readable storage medium Pending CN114513626A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011279087.2A CN114513626A (en) 2020-11-16 2020-11-16 Multimedia playing method and device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011279087.2A CN114513626A (en) 2020-11-16 2020-11-16 Multimedia playing method and device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN114513626A true CN114513626A (en) 2022-05-17

Family

ID=81547095

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011279087.2A Pending CN114513626A (en) 2020-11-16 2020-11-16 Multimedia playing method and device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN114513626A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115174978A (en) * 2022-06-08 2022-10-11 聚好看科技股份有限公司 3D digital person sound and picture synchronization method and electronic equipment
CN115174978B (en) * 2022-06-08 2023-11-24 聚好看科技股份有限公司 Sound and picture synchronization method for 3D digital person and electronic equipment

Similar Documents

Publication Publication Date Title
US11218382B2 (en) Quality of service monitoring method, device, and system
CN107743228A (en) Video quality detection method, monitoring device and storage medium
CN104202576B (en) A kind of intelligent video analysis system
CN101465857A (en) Method and equipment for monitoring network multimedia information
CN111787256B (en) Management method, device, medium and electronic equipment for pre-alarm video
CN111178241A (en) Intelligent monitoring system and method based on video analysis
CN114513626A (en) Multimedia playing method and device, electronic equipment and computer readable storage medium
CN111479161B (en) Live broadcast quality data reporting method and device
CN113472858A (en) Buried point data processing method and device and electronic equipment
CN111263113B (en) Data packet sending method and device and data packet processing method and device
CN111741007B (en) Financial business real-time monitoring system and method based on network layer message analysis
TW201303753A (en) Dispersing-type algorithm system applicable to image monitoring platform
CN114598622B (en) Data monitoring method and device, storage medium and computer equipment
CN106603977B (en) Video acquisition method and device based on Linux multi-core environment
CN115103156A (en) Dynamic video stream transmission method
CN113177883B (en) Arrangement transmission system based on data queue
CN112004161B (en) Address resource processing method and device, terminal equipment and storage medium
CN101754045B (en) Method for distinguishing stay-dead picture in monitoring system, monitoring system and device
CN113038261A (en) Video generation method, device, equipment, system and storage medium
CN111639133A (en) Live broadcast monitoring method and system based on block chain
CN111111211A (en) Method, device, system, equipment and storage medium for reporting game data
CN116546191B (en) Video link quality detection method, device and equipment
CN113268422A (en) Classification quantization based stuck detection method, device, equipment and storage medium
CN117636220A (en) Event identification method, device, equipment and storage medium based on video stream
CN114040247A (en) Network video stream processing method, electronic equipment and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination