CN113038254A - Video playing method, device and storage medium - Google Patents

Video playing method, device and storage medium Download PDF

Info

Publication number
CN113038254A
Authority
CN
China
Prior art keywords
video
edge
server
video stream
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911360821.5A
Other languages
Chinese (zh)
Other versions
CN113038254B (en)
Inventor
Zhang Zhiyuan (张志远)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Original Assignee
China Mobile Communications Group Co Ltd
China Mobile Communications Ltd Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by China Mobile Communications Group Co Ltd and China Mobile Communications Ltd Research Institute
Priority to CN201911360821.5A
Publication of CN113038254A
Application granted
Publication of CN113038254B
Legal status: Active (current)
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/437Interfacing the upstream path of the transmission network, e.g. for transmitting client requests to a VOD server
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/854Content authoring
    • H04N21/8547Content authoring involving timestamps for synchronizing content
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/14Systems for two-way working
    • H04N7/15Conference systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

An embodiment of the disclosure discloses a video playing method applied to a central service server. The method comprises the following steps: acquiring an access address of a first edge video server in an area range where a first terminal is located based on a video viewing request message sent by the first terminal; sending a video playing request message to the first edge video server based on the access address of the first edge video server, wherein the video playing request message includes an access address of a second edge video server; receiving a success message sent by the first edge video server when the video stream is successfully acquired from the second edge video server; and sending, to the first terminal according to the success message, a video access address for acquiring a video stream from the first edge video server.

Description

Video playing method, device and storage medium
Technical Field
The disclosed embodiments relate to the field of video technologies, and in particular, to a video playing method and apparatus, and a storage medium.
Background
At present, cloud video intelligent analysis systems, built on the powerful data processing capacity of cloud servers, are widely applied in the field of video analysis. In a cloud video intelligent analysis system, a terminal needs to upload a video stream to the cloud server. The cloud server then parses the received video stream frame by frame and stores the frames as pictures, analyzes the pictures to obtain an analysis result, superimposes the analysis result on the content of the video stream to obtain a processed video stream, and returns the processed video stream to the terminal.
In the cloud video intelligent analysis system, the video stream sent by the terminal must undergo uploading, decoding, frame extraction, image preprocessing, intelligent analysis, returning of the video stream, and other processing steps. After receiving the returned processed video stream, the terminal further needs to post-process and re-encode the images to form a complete video stream. Because the processing chain is complex, the cloud intelligent video analysis system introduces a long time delay and suffers from poor real-time performance.
Disclosure of Invention
The embodiment of the disclosure provides a video playing method, a video playing device and a storage medium. The technical scheme of the embodiment of the disclosure is realized as follows:
in a first aspect, an embodiment of the present disclosure provides a video playing method, which is applied to a central service server, and the method includes:
acquiring an access address of a first edge video server in an area range where a first terminal is located based on a video viewing request message sent by the first terminal;
sending a video playing request message to the first edge video server based on the access address of the first edge video server; wherein the video playing request message includes: an access address of a second edge video server;
receiving a success message sent by the first edge video server when the video stream is successfully acquired from the second edge video server;
and sending a video access address for acquiring a video stream from the first edge video server to the first terminal according to the success message.
In one embodiment, before sending the video playing request message to the first edge video server based on the access address of the first edge video server, the method further includes:
acquiring an access address of a second edge video server in the area range of a second terminal based on a video playing request message sent by the second terminal;
sending a start message to the second edge video server based on the access address of the second edge video server; the start message is used for triggering the second edge video server to acquire the video stream from the second terminal.
In a second aspect, an embodiment of the present disclosure further provides a video playing method, which is applied to a first edge video server, and the method includes:
receiving a video playing request message sent by a central service server; wherein the video playing request message includes: an access address of a second edge video server;
acquiring the video stream from the second edge video server based on the access address of the second edge video server;
and when the video stream is successfully acquired, sending a success message to the central service server.
In one embodiment, the method further comprises:
and when the video stream is successfully acquired, sending a message for acquiring the video stream from the first edge video server to a first terminal.
In a third aspect, an embodiment of the present disclosure further provides a video playing method, which is applied to a second edge video server, and the method includes:
receiving a starting message sent by a central service server;
acquiring a first video stream from the second terminal according to the starting message;
and performing video analysis processing on the first video stream to obtain a processed second video stream.
In one embodiment, the performing video analysis processing on the first video stream to obtain a processed second video stream includes:
acquiring a first video stream according to the identity of the video stream;
and calling a video analysis process to process the first video stream to obtain a processed second video stream.
In one embodiment, the invoking a video analytics process to process the first video stream comprises:
decoding the first video stream, and sequentially obtaining picture frames carrying timestamps;
according to the timestamp, carrying out image characteristic analysis processing on the picture frame at the preset acquisition moment to obtain a first picture frame carrying image characteristic information;
determining image characteristic information of a second picture frame by using the image characteristic information of the first picture frame with the acquisition time difference with the second picture frame within a set threshold range according to the timestamp, and obtaining the second picture frame carrying the image characteristic information;
and coding a first picture frame carrying the image characteristic information and a second picture frame carrying the image characteristic information into the second video stream according to the time stamp.
In a fourth aspect, an embodiment of the present disclosure further provides a video playing apparatus, which is applied to a central service server, where the apparatus includes a first obtaining module, a first receiving module, and a first sending module; wherein,
the first obtaining module is used for obtaining an access address of a first edge video server in the area range of the first terminal based on a video viewing request message sent by the first terminal;
the first sending module is configured to send a video playing request message to the first edge video server based on the access address of the first edge video server; wherein the video playing request message includes: an access address of a second edge video server;
the first receiving module is configured to receive a success message sent by the first edge video server when the video stream is successfully acquired from the second edge video server;
the first sending module is further configured to send, to the first terminal, a video access address for acquiring a video stream from the first edge video server according to the success message.
In a fifth aspect, an embodiment of the present disclosure further provides a video playing apparatus, which is applied to a first edge video server, where the apparatus includes a second receiving module, a second obtaining module, and a second sending module; wherein,
the second receiving module is used for receiving a video playing request message sent by the central service server; wherein the video playing request message includes: an access address of a second edge video server;
the second obtaining module is configured to obtain the video stream from the second edge video server based on an access address of the second edge video server;
and the second sending module is used for sending a success message to the central service server when the video stream is successfully acquired.
In a sixth aspect, an embodiment of the present disclosure further provides a video playing apparatus, which is applied to a second edge video server, where the apparatus includes a third receiving module, a third obtaining module, and a processing module; wherein,
the third receiving module is used for receiving a starting message sent by the central service server;
the third obtaining module is configured to obtain a first video stream from the second terminal according to the start message;
and the processing module is used for performing video analysis processing on the first video stream to obtain a processed second video stream.
In a seventh aspect, an embodiment of the present disclosure further provides a video playing apparatus, including: a processor and a memory for storing a computer program capable of running on the processor; wherein the processor is configured to implement the method according to any of the embodiments of the present disclosure when running the computer program.
In an eighth aspect, embodiments of the present disclosure further provide a storage medium, where a computer program is stored in the storage medium, and when the computer program is executed by a processor, the method according to any embodiment of the present disclosure is implemented.
In the embodiment of the disclosure, based on a video viewing request message sent by a first terminal, an access address of a first edge video server in the area range where the first terminal is located is acquired, and a video playing request message is sent to the first edge video server based on that access address, where the video playing request message includes the access address of a second edge video server. Since the first edge video server is an edge video server within the area range where the first terminal is located, it is an edge video server deployed near the location of the first terminal. The distance between the first terminal and the first edge video server is therefore relatively short, and the data transmission delay when the first terminal exchanges data with the first edge video server is low. Because the video playing request message includes the access address of the second edge video server, the first edge video server can, after receiving the video playing request message, acquire the video stream from the second edge video server based on that access address. When the first edge video server successfully acquires the video stream from the second edge video server, it sends a success message to the central service server, which then sends the first terminal a video access address for acquiring the video stream from the first edge video server. The central service server can thus promptly notify the first terminal of the video access address after receiving the success message, so that the first terminal can promptly acquire the video stream from the first edge video server based on that address, reducing the time delay.
Drawings
Fig. 1 is a schematic structural diagram of a video intelligent analysis system according to an embodiment of the present disclosure.
Fig. 2 is a schematic structural diagram of a video intelligent analysis system according to another embodiment of the present disclosure.
Fig. 3 is a schematic flowchart of a video playing method according to an embodiment of the present disclosure.
Fig. 4 is a schematic structural diagram of a video intelligent analysis system according to another embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a video playing method according to another embodiment of the disclosure.
Fig. 6 is a flowchart illustrating a video playing method according to another embodiment of the disclosure.
Fig. 7 is a flowchart illustrating a video playing method according to another embodiment of the disclosure.
Fig. 8 is a flowchart illustrating a video playing method according to another embodiment of the disclosure.
Fig. 9 is a flowchart illustrating a video playing method according to another embodiment of the disclosure.
Fig. 10 is a flowchart illustrating a video playing method according to another embodiment of the disclosure.
Fig. 11 is a schematic diagram of picture frame processing according to an embodiment of the present disclosure.
Fig. 12 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present disclosure.
Fig. 13 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present disclosure.
Fig. 14 is a schematic structural diagram of a video playing apparatus according to an embodiment of the present disclosure.
Fig. 15 is a schematic structural diagram of a video intelligent analysis system according to another embodiment of the present disclosure.
Fig. 16 is a flowchart illustrating a video playing method according to another embodiment of the present disclosure.
Fig. 17 is a schematic structural diagram of a video intelligent analysis system according to another embodiment of the present disclosure.
Fig. 18 is a flowchart illustrating a video playing method according to another embodiment of the disclosure.
Fig. 19 is a flowchart illustrating a video stream processing in the second edge video server according to an embodiment of the present disclosure.
Fig. 20 is a flowchart illustrating a video stream processing in the second edge video server according to another embodiment of the present disclosure.
Fig. 21 is a flowchart illustrating a video stream processing in a second edge video server according to another embodiment of the present disclosure.
Fig. 22 is a flowchart illustrating a video stream processing in the second edge video server according to another embodiment of the present disclosure.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments will be described clearly and completely below with reference to the drawings. It is apparent that the described embodiments are some, but not all, of the embodiments of the present disclosure. All other embodiments obtained by a person skilled in the art from the disclosed embodiments without creative effort shall fall within the protection scope of the present disclosure.
In order to facilitate understanding of the technical scheme of the present disclosure, first, an application scenario of video intelligent analysis is exemplarily illustrated by two video intelligent analysis systems.
As shown in fig. 1, one embodiment of the present disclosure provides a video intelligent analysis system. The video intelligent analysis system comprises a cloud server 11, an intelligent analysis server 12, and a terminal 13. When the terminal 13 needs to process a local video stream, it may upload the video stream to be processed to the cloud server 11, which has strong computing capability. After processing the video stream, the cloud server 11 sends the processed video stream to the intelligent analysis server 12 for further processing; after the intelligent analysis server 12 finishes processing, the video stream is sent to the terminal 13. In this embodiment, since the video stream needs to be processed by the cloud server 11, the intelligent analysis server 12, and other components deployed far away before returning to the terminal 13, a long time delay is introduced. Especially in a live scene, this seriously affects the user's experience.
As shown in fig. 2, another embodiment of the present disclosure provides a video intelligent analysis system. The intelligent video analysis system comprises an intelligent analysis server 12 and a terminal 13. When the terminal 13 needs to process a local video stream, it may first send the video stream to the intelligent analysis server 12 for processing. After the intelligent analysis server 12 processes the video stream, it returns the processed video stream to the terminal 13. Here, the intelligent analysis server 12 is generally deployed near the video stream capture front end of the terminal 13, so as to perform intelligent analysis on the video stream captured by the terminal 13 in real time. In this embodiment, although the time to upload the video stream to a remote cloud server is saved, the analysis model of the intelligent analysis server 12 is typically fixed to a specific functional requirement at deployment time and cannot be changed at will, so its intelligent analysis capability is limited and its flexibility is poor. It should be noted that the video stream processing in the above embodiments may include decoding, frame extraction, image preprocessing, post-processing (e.g., face recognition), encoding, and other steps.
As shown in fig. 3, an embodiment of the present disclosure provides a video playing method applied to a central service server, including:
Step 31, acquiring an access address of a first edge video server in the area range of the first terminal based on the video viewing request message sent by the first terminal.
In one embodiment, an application for viewing video is running on the first terminal. For example, an application for viewing live video or an application for viewing on-demand video. Here, the first terminal may send the video viewing request message to the central service server after the application receives an operation instruction of a user to view a video.
In one embodiment, the area range may be a geographic area range. When first edge video servers are deployed, each first edge video server may correspond to a geographic area range and provide services for the terminals within that range. A first edge video server may be provided with different functional modules based on different video stream analysis requirements; for example, it may be provided with an analysis module with a face recognition function to meet the requirement of face recognition analysis. Referring to fig. 4, the first edge video server 41 provides a face recognition analysis service to terminal 1 in area A; the first edge video server 42 provides a human body posture recognition service to terminal 2 in area B; the first edge video server 43 provides an iris recognition service to terminal 3 in area C. Taking terminal 1 in area A as the first terminal as an example, the first edge video server of the area where terminal 1 is located may be the first edge video server 41 in area A.
In one embodiment, the correspondence between each first edge video server and the area where it is located may be stored in the central service server in the form of a list. For example, referring to Table 1, in the list, first edge video server A corresponds to area A, and first edge video server B corresponds to area B.
First edge video server      Corresponding area
First edge video server A    Area A
First edge video server B    Area B
Table 1
In an embodiment, after receiving the video viewing request message sent by the first terminal, the central service server may obtain the network access address used by the first terminal during communication and resolve the geographic location at which the first terminal accesses the network. The first edge video server of the area where the first terminal is located is then determined according to the geographic location of the first terminal and the stored correspondence between first edge video servers and their areas. For example, if the central service server determines that the first terminal is in area A, it may determine that the first edge video server of the area where the first terminal is located is first edge video server A.
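For illustration only, this lookup can be pictured with a minimal sketch. The server names, address format, and prefix-based area resolution below are all assumptions; the patent does not specify an implementation:

```python
# Hypothetical sketch of the area-to-edge-server lookup described above.
# Table contents, address format, and area resolution are illustrative.

EDGE_SERVER_TABLE = {
    "area_a": "http://edge-a.example.com:8080",  # first edge video server A
    "area_b": "http://edge-b.example.com:8080",  # first edge video server B
}

def resolve_area(network_access_address: str) -> str:
    """Map a terminal's network access address to a geographic area.

    A real deployment might use a GeoIP database; a naive prefix rule
    stands in for that here.
    """
    return "area_a" if network_access_address.startswith("10.1.") else "area_b"

def edge_server_for_terminal(network_access_address: str) -> str:
    """Return the access address of the edge video server for this terminal."""
    return EDGE_SERVER_TABLE[resolve_area(network_access_address)]
```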
In one embodiment, the access address of the first edge video server may be a network address for accessing the first edge video server.
Step 32, sending a video playing request message to the first edge video server based on the access address of the first edge video server; wherein the video playing request message includes: an access address of the second edge video server.
Here, when the first edge video server does not store the video stream, a video playing request message requesting it to acquire the video stream from the second edge video server may be sent to the first edge video server. In one embodiment, after the video playing request message is sent to the first edge video server, the first edge video server acquires the video stream, which it does not store locally, from the second edge video server based on the access address of the second edge video server carried in the message. In this way, the first terminal can acquire the video stream directly from the first edge video server once it learns the video access address.
Here, the video stream may be a video stream uploaded by the second terminal to the second edge video server in real time. For example, the second terminal is a live terminal, and the video stream may be a video stream uploaded by the live terminal to the second edge video server. Here, an application for capturing a video stream may be running on the second terminal, and the video stream may be a video stream captured by that application.
In one embodiment, the access address of the second edge video server may be a network address for accessing the second edge video server.
Step 33, receiving a success message sent by the first edge video server when the video stream is successfully acquired from the second edge video server.
In one embodiment, the first edge video server may send the success message to the central service server upon successfully receiving video stream data sent by the second edge video server. In this way, the central service server can promptly notify the first terminal to acquire the video stream from the first edge video server. This embodiment can be applied to video playing in live scenes with high real-time requirements.
In another embodiment, the first edge video server may send the success message to the central service server only after successfully receiving all the video stream data sent by the second edge video server. This embodiment can be applied to video playing in non-live scenes with low real-time requirements.
Step 34, sending a video access address for acquiring a video stream from the first edge video server to the first terminal according to the success message.
In one embodiment, the video access address for acquiring the video stream from the first edge video server may be the address of a folder in the first edge video server, where the video stream is stored in that folder. The address information of the folder may include the network address information of the first edge video server and the storage address information of the folder within the first edge video server. In this way, the first terminal can locate the first edge video server by addressing its network address information and, after finding the first edge video server, address the folder stored on it based on the storage address information and acquire the video stream from the folder.
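As a rough illustration, the two-part address can be pictured like this (a minimal sketch; the URL layout and paths are assumptions, not taken from the patent):

```python
# Sketch of the two-part video access address described above: the network
# address of the first edge video server plus the storage path of the
# folder holding the video stream. The URL layout is an assumption.

from urllib.parse import urljoin

def build_video_access_address(server_address: str, folder_path: str) -> str:
    """Combine the server's network address with the folder's storage path."""
    return urljoin(server_address, folder_path)

# The first terminal would address the server first, then the folder:
address = build_video_access_address(
    "http://edge-b.example.com:8080", "/streams/live/001/")
print(address)  # http://edge-b.example.com:8080/streams/live/001/
```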
In the embodiment of the disclosure, since the first edge video server is an edge video server within the area range where the first terminal is located, the acquired first edge video server is an edge video server deployed near the location of the first terminal. The distance between the first terminal and the first edge video server is therefore relatively short, and the data transmission delay when the first terminal exchanges data with the first edge video server is low. Since the video playing request message includes the access address of the second edge video server, the first edge video server can, after receiving the video playing request message, acquire the video stream from the second edge video server based on that access address. When the first edge video server successfully acquires the video stream from the second edge video server, it sends the success message to the central service server. After receiving the success message, the central service server can promptly notify the first terminal of the video access address for acquiring the video stream from the first edge video server, so that the first terminal can promptly acquire the video stream from the first edge video server based on that address, reducing the time delay.
As shown in fig. 5, another embodiment of the present disclosure provides a video playing method; before step 32 of sending a video playing request message to the first edge video server based on the access address of the first edge video server, the method further includes:
Step 51, acquiring an access address of a second edge video server in the area range of the second terminal based on the video playing request message sent by the second terminal.
In one embodiment, an application for recording video is running on the second terminal. For example, an application for publishing live video. Here, after the application receives an operation instruction of uploading a video by a user, the second terminal may send the video playing request message to the central service server.
In one embodiment, the area range may be a geographic area range. When second edge video servers are deployed, each second edge video server may correspond to a geographic area range and provide services for the terminals within that range. A second edge video server may be provided with different functional modules based on different video stream analysis requirements; for example, it may be provided with an analysis module with a face recognition function to meet the requirement of face recognition analysis.
In one embodiment, the correspondence between each second edge video server and the area where it is located may be stored in the central service server in the form of a list.
In an embodiment, after receiving the video playing request message sent by the second terminal, the central service server may obtain the network access address used by the second terminal during communication and resolve the geographic location at which the second terminal accesses the network. The second edge video server of the area where the second terminal is located is then determined according to the geographic location of the second terminal and the stored correspondence between second edge video servers and their areas. For example, if the central service server determines that the second terminal is in area A, it may determine that the second edge video server of the area where the second terminal is located is second edge video server A.
In one embodiment, the access address of the second edge video server may be a network address for accessing the second edge video server.
Step 52, sending a start message to the second edge video server based on the access address of the second edge video server; the start message is used for triggering the second edge video server to acquire the video stream from the second terminal.
In one embodiment, the central service server may send the access address of the second edge video server to the second terminal, and the second terminal sends the video stream based on that access address. For example, the second terminal is a live terminal, and the video stream may be a video stream uploaded by the live terminal to the second edge video server. The second edge video server receives and stores the video stream sent by the second terminal based on the start message.
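One way to picture step 52 is the sketch below. HTTP, the endpoint path, and the payload fields are all assumptions; the patent does not fix a transport or message format:

```python
# Hypothetical sketch of the central service server sending a start
# message to the second edge video server at its access address.

import requests

def send_start_message(second_edge_server_address: str, stream_id: str) -> bool:
    """Trigger the second edge video server to acquire the video stream
    from the second terminal."""
    response = requests.post(
        f"{second_edge_server_address}/start",   # assumed endpoint
        json={"stream_id": stream_id},           # assumed payload
        timeout=5,
    )
    return response.ok
```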
As shown in fig. 6, another embodiment of the present disclosure provides a video playing method applied to a first edge video server, where the method includes:
Step 61, receiving a video playing request message sent by a central service server; wherein the video playing request message includes: an access address of the second edge video server.
In one embodiment, a video playing request message requesting the first edge video server to obtain a video stream from the second edge video server may be sent to the first edge video server. Here, the video stream may be a video stream uploaded by the second terminal to the second edge video server in real time. For example, the second terminal is a live terminal, and the video stream may be a video stream uploaded by the live terminal to the second edge video server. Here, an application for capturing video data may be running on the second terminal, and the video stream may be a video stream captured by the application.
Step 62, obtaining the video stream from the second edge video server based on the access address of the second edge video server.
In one embodiment, the access address of the second edge video server may be the address of a folder in the second edge video server, where the video stream is stored in that folder. The address information of the folder may include the network address information of the second edge video server and the storage address information of the folder within the second edge video server. In this way, the first edge video server can locate the second edge video server by addressing its network address information and, after finding the second edge video server, address the folder stored on it based on the storage address information and acquire the video stream from the folder.
Step 63, when the video stream is successfully acquired, sending a success message to the central service server.
In one embodiment, the first edge video server may send the success message to the central service server upon successfully receiving video stream data sent by the second edge video server.
In this way, the central service server can promptly notify the first terminal to acquire the video stream from the first edge video server. This embodiment is suitable for video playing in live scenes with high real-time requirements.
In another embodiment, the first edge video server may send the success message to the central service server only after successfully receiving all the video stream data sent by the second edge video server. This embodiment is suitable for video playing in non-live scenes with low real-time requirements.
As shown in fig. 7, another embodiment of the present disclosure provides a video playing method, where the method further includes:
Step 71, when the video stream is successfully acquired, sending a message for acquiring the video stream from the first edge video server to the first terminal.
Here, when the video stream is successfully acquired, a message for acquiring the video stream from the first edge video server is sent to the first terminal, so that the first terminal can be promptly notified to acquire the video stream from the first edge video server.
As shown in fig. 8, another embodiment of the present disclosure provides a video playing method applied to a second edge video server, where the method includes:
Step 81, receiving a start message sent by a central service server;
In one embodiment, the start message is used to trigger the second edge video server to obtain a video stream from the second terminal. For example, the second terminal is a live terminal, and the video stream may be a video stream uploaded by the live terminal to the second edge video server.
Step 82, acquiring a first video stream from the second terminal according to the starting message;
In one embodiment, the central service server may send the access address of the second edge video server to the second terminal, and the second terminal sends the video stream to the second edge video server based on that access address. The second edge video server receives and stores the video stream sent by the second terminal based on the start message.
Step 83, performing video analysis processing on the first video stream to obtain a processed second video stream.
In one embodiment, the video analysis processing on the first video stream may be processing, according to requirements, of the picture frames parsed from the first video stream; the processing of the picture frames may differ according to specific requirements. For example, when face recognition is performed on a person image in a picture frame parsed from the video stream, a neural network algorithm model may be used to analyze the position of the face in the picture and obtain the position information of the face in the picture frame; the position information of the face is then labeled in the picture frame, and after the picture frame is encoded, a second video stream labeled with the face position information is obtained.
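As a concrete illustration of this kind of processing (not the patent's own model, which leaves the neural network unspecified), a sketch that finds and labels face positions with OpenCV's bundled Haar cascade might look like this:

```python
# Illustrative face-position analysis on a decoded picture frame.
# A Haar cascade stands in for the unspecified neural network model.

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def annotate_faces(frame):
    """Detect faces and draw their bounding boxes on the frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in cascade.detectMultiScale(gray, scaleFactor=1.1,
                                                 minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    return frame  # annotated frames would then be re-encoded into the stream
```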
As shown in fig. 9, another embodiment of the present disclosure provides a video playing method, where in step 83, performing video analysis processing on the first video stream to obtain a processed second video stream includes:
Step 91, acquiring a first video stream according to the identity of the video stream;
in one embodiment, each video stream obtained by the second edge video server from the second terminal may correspond to an identity. For example, a video stream corresponding to a shot nature scene is identified as "001"; one video stream corresponding to shooting a sporting event is identified as "010".
Step 92, calling a video analysis process to process the first video stream to obtain a processed second video stream.
In one embodiment, a video analysis process may be an application program used to perform a particular kind of processing on the picture frames parsed from the first video stream, such as a face detection application, a face recognition application, a pose recognition application, or a license plate recognition application. According to the user's requirements, different stages of processing of the pictures parsed from the first video stream can be completed by invoking different application programs. For example, a first application program is called to rotate a person image in a picture frame parsed from the video, obtaining a rotated picture frame; a second application program is then called to perform passivation processing on the person image in the picture frame. Different processing of the first video stream can thus be realized by calling video analysis processes. Compared with performing the different processing steps with various hardware functional modules, this reduces the time delay caused by encoding and decoding data for transmission between hardware modules and improves the efficiency of video stream processing. Meanwhile, different requirements can be served by calling different video analysis processes, which gives better flexibility.
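A minimal sketch of this per-request dispatch might look as follows. The registry, function names, and use of Python's multiprocessing are assumptions for illustration only:

```python
# Sketch of launching an independent video analysis process per request.
# The analysis functions are placeholders; real ones would pull the
# stream by its identity and process its frames.

from multiprocessing import Process

def face_recognition(stream_id: str) -> None: ...
def pose_recognition(stream_id: str) -> None: ...
def plate_recognition(stream_id: str) -> None: ...

ANALYSES = {
    "face": face_recognition,
    "pose": pose_recognition,
    "plate": plate_recognition,
}

def start_analysis(kind: str, stream_id: str) -> Process:
    """Spawn a separate process so each analysis runs independently."""
    p = Process(target=ANALYSES[kind], args=(stream_id,))
    p.start()
    return p
```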
As shown in fig. 10, another embodiment of the present disclosure provides a video playing method, and in step 92, the invoking a video analysis process to process the first video stream includes:
Step 101, decoding the first video stream, and sequentially obtaining picture frames carrying timestamps.
In one embodiment, the picture frames may be obtained sequentially at a set sampling frequency. For example, if 100 picture frames per second are decoded from the first video stream, they may be acquired at a sampling frequency of 90 frames per second, yielding 90 picture frames per second.
In one embodiment, the timestamp of each picture frame may be the time at which the picture frame was captured when the video stream was recorded. The timestamp may be a character sequence that uniquely identifies a moment in time. For example, if a picture frame was captured at 10:10 a.m., its timestamp may correspond to "10:10:00". In another embodiment, the timestamp may instead be the time at which the picture frame is decoded from the first video stream. For example, if a B picture frame is decoded at 1:10 p.m., its timestamp may correspond to "01:10:00".
Step 102, according to the timestamp, carrying out image characteristic analysis processing on the picture frame at the preset acquisition moment to obtain a first picture frame carrying image characteristic information.
In one embodiment, the hardware module in the second edge video server has limited capability for image feature analysis processing per unit time and therefore performs image feature analysis processing only on picture frames at preset acquisition times. For example, suppose the timestamp of picture frame A is "10:01:01", that of picture frame B is "10:01:02", that of picture frame C is "10:01:03", that of picture frame D is "10:01:04", and that of picture frame E is "10:01:05", and the preset acquisition times are 10:01:01, 10:01:03, and 10:01:05; the second edge video server then processes only picture frame A, picture frame C, and picture frame E, and does not process picture frame B or picture frame D.
Here, the image feature information may be face position information (e.g., the position of a face bounding box), pose information, color information, texture information, or the like. In one embodiment, the first picture frame carrying the image feature information may be a person picture frame carrying a face bounding box: the face in the picture frame is identified by the bounding box, and the four corners of the bounding box carry position coordinate information.
Step 103, according to the timestamp, determining the image characteristic information of a second picture frame by using the image characteristic information of a first picture frame whose acquisition time difference from the second picture frame is within a set threshold range, and obtaining the second picture frame carrying the image characteristic information.
In one embodiment, referring to fig. 11, picture frame 113 may be a picture frame awaiting processing in a queue. Picture frame 111 is a picture frame at a preset acquisition time, with the corresponding timestamp "10:01:01"; image feature analysis processing is performed on picture frame 111 to obtain a first picture frame carrying image feature information (bounding box information). Picture frame 112 is not at a preset acquisition time; its corresponding timestamp is "10:01:02", and picture frame 112 is a second picture frame. The acquisition time difference is 00:00:01 and the set threshold range is 00:00:01 to 00:00:02, so the image feature information carried by the first picture frame can be determined as the image feature information of the second picture frame. The second picture frame carrying the image feature information is thus obtained directly, without performing image feature analysis processing on it, which reduces the time delay brought by picture processing and analysis.
In one embodiment, the image feature information is the position information of the four corners of a face bounding box. When the set threshold range is small, this position information does not change greatly between picture frames with different timestamps, so applying the position information of the four corners of the face bounding box of the first picture frame to the second picture frame is imperceptible to the user, while improving the processing speed of the picture frames and reducing the time delay.
Step 104, encoding the first picture frame carrying the image characteristic information and the second picture frame carrying the image characteristic information into the second video stream according to the timestamp.
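Putting steps 101 to 104 together, a minimal end-to-end sketch could look like the following. The data types, the stand-in analysis function, and the one-sided threshold test are simplifying assumptions:

```python
# Sketch of steps 101-104: decode frames with timestamps, analyse only
# the frames at preset acquisition times, copy the feature info of the
# nearest analysed frame to the remaining frames when the timestamp
# difference is within a set threshold, then re-order by timestamp
# for encoding.

from dataclasses import dataclass, field

@dataclass
class Frame:
    timestamp: float          # seconds since the start of the stream
    pixels: object            # decoded image data (placeholder)
    features: dict = field(default_factory=dict)

def analyse(frame: Frame) -> dict:
    """Stand-in for image feature analysis, e.g. a face bounding box."""
    return {"face_box": (10, 10, 100, 100)}

def process(frames: list[Frame], acquisition_times: set[float],
            threshold: float = 2.0) -> list[Frame]:
    # Step 102: analyse only the frames at preset acquisition times.
    analysed = [f for f in frames if f.timestamp in acquisition_times]
    for f in analysed:
        f.features = analyse(f)
    # Step 103: propagate feature info to the remaining frames.
    for f in frames:
        if f.features:
            continue
        nearest = min(analysed, default=None,
                      key=lambda k: abs(k.timestamp - f.timestamp))
        if nearest and abs(nearest.timestamp - f.timestamp) <= threshold:
            f.features = nearest.features
    # Step 104: encode in timestamp order (encoding itself omitted).
    return sorted(frames, key=lambda f: f.timestamp)
```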
As shown in fig. 12, an embodiment of the present disclosure provides a video playing apparatus applied to a central service server, where the apparatus includes a first obtaining module 121, a first receiving module 122, and a first sending module 123; wherein,
the first obtaining module 121 is configured to obtain, based on a video viewing request message sent by a first terminal, an access address of a first edge video server in an area range where the first terminal is located;
the first sending module 122 is configured to send a video playing request message to the first edge video server based on the access address of the first edge video server; wherein the video playing request message includes: an access address of a second edge video server;
the first receiving module 123 is configured to receive a success message sent by the first edge video server when the video stream is successfully acquired from the second edge video server;
the first sending module 122 is further configured to send, to the first terminal, a video access address for acquiring a video stream from the first edge video server according to the success message.
In an embodiment, the first sending module 122 is further configured to obtain, based on a video playing request message sent by a second terminal, an access address of a second edge video server within a range of an area where the second terminal is located;
sending a start message to the second edge video server based on the access address of the second edge video server; the start message is used for triggering the second edge video server to acquire the video stream from the second terminal.
As shown in fig. 13, another embodiment of the present disclosure provides a video playing apparatus applied to a first edge video server, where the apparatus includes a second receiving module 131, a second obtaining module 132, and a second sending module 133; wherein,
the second receiving module 131 is configured to receive a video playing request message sent by a central service server; wherein the video playing request message includes: an access address of a second edge video server;
the second obtaining module 132 is configured to obtain the video stream from the second edge video server based on the access address of the second edge video server;
the second sending module 133 is configured to send a success message to the central service server when the video stream is successfully acquired.
In an embodiment, the second sending module 133 is further configured to send, to the first terminal, a message for obtaining the video stream from the first edge video server when the video stream is successfully obtained.
As shown in fig. 14, an embodiment of the present disclosure provides a video playing apparatus applied to a second edge video server, where the apparatus includes a third receiving module 141, a third obtaining module 142, and a processing module 143; wherein,
the third receiving module 141 is configured to receive a start message sent by the central service server;
the third obtaining module 142 is configured to obtain a first video stream from the second terminal according to the start message;
the processing module 143 is configured to perform video analysis processing on the first video stream to obtain a processed second video stream.
In an embodiment, the third obtaining module 142 is further configured to obtain the first video stream according to the identity of the video stream;
the processing module 143 is further configured to invoke a video analysis process to process the first video stream, so as to obtain a processed second video stream.
In an embodiment, the processing module 143 is further configured to decode the first video stream, and sequentially obtain picture frames carrying timestamps; according to the timestamp, carrying out image characteristic analysis processing on the picture frame at the preset acquisition moment to obtain a first picture frame carrying image characteristic information; determining image characteristic information of a second picture frame by using the image characteristic information of the first picture frame with the acquisition time difference with the second picture frame within a set threshold range according to the timestamp, and obtaining the second picture frame carrying the image characteristic information; and coding a first picture frame carrying the image characteristic information and a second picture frame carrying the image characteristic information into the second video stream according to the time stamp.
An embodiment of the present disclosure further provides a video playing device, including: a processor and a memory for storing a computer program capable of running on the processor; wherein the processor is configured to implement the method according to any of the embodiments of the present disclosure when running the computer program.
The embodiment of the present disclosure further provides a storage medium, in which a computer program is stored, and when the computer program is executed by a processor, the method according to any embodiment of the present disclosure is implemented.
In order to facilitate understanding of the technical solution of the present disclosure, the video playing method of the present disclosure is further illustrated by the following three examples.
Example 1:
the method in this example is applied to a video playing system, please refer to fig. 15, which includes a central service server 151, a live terminal 154, an a-edge video server 152, a B-edge video server 153, and a user terminal 155. Wherein, the live terminal 154 and the a-edge video server 152 are both located in the a area; the B-edge video server 153 and the user terminal 155 are both located in the B-zone. As shown in fig. 16, another embodiment of the present disclosure provides a video playing method, which is applied to a live video scene, and the user terminal 155 watches a video played by the live terminal 154 based on the video playing system. The method comprises the following steps:
in step a1, the live terminal 154 sends a video playing request message to the central service server 151.
Step a2, the central service server 151 returns the access address of the a-edge video server 152 to the live terminal 154.
Step a3, the central service server 151 sends a start message to the a-edge video server 152; wherein the start message is used to trigger the a-edge video server 152 to obtain a video stream from the live terminal 154.
Step a4, the live terminal 154 pushes the live video stream to the a-edge video server 152, and the a-edge video server 152 receives the video stream;
step a5, the user terminal 155 sends a video viewing request message to the central service server 151;
step a6, the central service server 151 obtains the access address of the B-edge video server 153 based on the video viewing request message;
step a7, sending a video playing request message to the B-edge video server 153 based on the access address of the B-edge video server 153; wherein the video playing request message includes: the access address of the a-edge video server 152;
step A8, the B-edge video server 153 starts a live broadcast service, and acquires the video stream from the a-edge video server 152 based on the access address of the a-edge video server 152;
in step a9, when the B-edge video server 153 successfully acquires the video stream, a success message is sent to the central service server 151.
Step a10, the central service server 151 sends a video access address for acquiring a video stream from the B-edge video server 153 to the user terminal 155 according to the success message.
In this example, since the B-edge video server 153 is an edge video server within the area where the user terminal 155 is located, the B-edge video server 153 is an edge video server deployed near the user terminal 155, so the time delay when the user terminal 155 exchanges data with the B-edge video server 153 is short. Since the video playing request message includes the access address of the A-edge video server 152, the B-edge video server 153 can acquire the video stream from the A-edge video server 152 based on that address after receiving the video playing request message. When the B-edge video server 153 successfully acquires the video stream from the A-edge video server 152, it sends the success message to the central service server 151, which, after receiving the success message, can promptly notify the user terminal 155 of the video access address for acquiring the video stream from the B-edge video server 153, so that the user terminal 155 can promptly acquire the video stream from the B-edge video server 153 based on that address, reducing the time delay.
Example 2:
In the application scenario of a video conference, privacy protection needs to be applied to the conference background of each video participant during the call: the real background is replaced with a single preset video conference background, so that only the face image is displayed. Referring to fig. 17, the video conference system includes an A conference terminal 171, a B conference terminal 172, an A-site edge computing platform 173, and a B-site edge computing platform 174. Here, the A-site edge computing platform 173 is the second edge video server, and the B-site edge computing platform 174 is the first edge video server.
As shown in fig. 18, another embodiment of the present disclosure provides a video playing method applied to the video conference system. The method includes:
Step B1: The A conference terminal 171 initiates a conference, the B conference terminal 172 joins the video conference, and the two parties conduct a video call.
Step B2: The user selects the background replacement function on the A conference terminal 171, which requires a prefabricated conference background template. Assuming the background template is template M, the A conference terminal 171 sends a background replacement request to the central service video server and specifies template M as the conference template.
Step B3: The central service video server sends a message that the conference background is set to template M to the A-site edge computing platform 173 where the A conference terminal 171 is located and to the B-site edge computing platform 174 where the B conference terminal 172 is located.
Step B4: The B conference terminal 172 receives the message that the background is set to template M and automatically displays template M.
Step B5: The A conference terminal 171 pushes the original video stream to the A-site edge computing platform 173, which processes the video stream and segments the picture frames parsed from it, extracting video information that contains only the portrait.
Step B6: The A-site edge computing platform 173 pushes the portrait-only video information to the B-site edge computing platform 174; it also performs background replacement locally and outputs the video stream with the replaced background.
Step B7: The A conference terminal 171 obtains the replaced and transcoded ultra-high-definition video stream from the A-site edge computing platform 173.
Step B8: The B-site edge computing platform 174 receives the portrait-only video stream, performs image synthesis at the B site to obtain the processed complete ultra-high-definition video stream, and notifies the B conference terminal 172 to pull the stream.
Step B9: The B conference terminal 172 pulls the processed video stream from the B-site edge computing platform 174 and switches the terminal interface from the background template to the conference video stream.
In this example, the video stream sent by the A conference terminal is processed by the A-site edge computing platform, pulled to the B-site edge computing platform, and sent to the B conference terminal after being processed there. This end-to-end technical scheme reduces the time delay that would be caused by sending the video stream to a cloud server for processing, and improves the user experience of the video conference system.
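As a hedged illustration of steps B5 and B8, the sketch below shows how portrait extraction on the A side and template composition on the B side might operate on raw RGBA frames. It assumes a portrait segmentation mask has already been produced by some model; the disclosure does not specify the segmentation method, and both function names are illustrative.

```python
import numpy as np

def extract_portrait(frame_rgba: np.ndarray, portrait_mask: np.ndarray) -> np.ndarray:
    """A-site, step B5: keep only the portrait by writing the segmentation
    mask into the alpha channel (mask > 0 marks portrait pixels)."""
    out = frame_rgba.copy()
    out[..., 3] = np.where(portrait_mask > 0, 255, 0)
    return out

def composite_on_template(portrait_rgba: np.ndarray, template_m: np.ndarray) -> np.ndarray:
    """B-site, step B8: alpha-blend the portrait-only frame over template M."""
    alpha = portrait_rgba[..., 3:4].astype(np.float32) / 255.0
    fg = portrait_rgba[..., :3].astype(np.float32)
    bg = template_m[..., :3].astype(np.float32)
    return (alpha * fg + (1.0 - alpha) * bg).astype(np.uint8)
```

Note that under this split only the portrait pixels cross the network between the two platforms; template M itself is already present on both sides after step B3.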
Example 3:
In this example, a live broadcast scenario is taken as an example. A video stream is either pushed by the live terminal itself or pulled by the system to the second edge video server, and each video stream received by the second edge video server has a corresponding identity. Based on requirements, a user can select a particular video stream for analysis by calling an intelligent analysis interface. For each video stream's analysis request, the system independently creates a video analysis process to analyze the video, as sketched below.
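A minimal sketch of this process-per-request model follows; the function and registry names are hypothetical, and the worker body is a placeholder.

```python
import multiprocessing as mp
from typing import Dict

def analyze_stream(stream_id: str) -> None:
    """Placeholder worker: would run the processing flow of fig. 19 for one stream."""

analysis_procs: Dict[str, mp.Process] = {}

def handle_analysis_request(stream_id: str) -> None:
    """Create one independent analysis process per video stream, keyed by identity."""
    if stream_id not in analysis_procs or not analysis_procs[stream_id].is_alive():
        proc = mp.Process(target=analyze_stream, args=(stream_id,), daemon=True)
        proc.start()
        analysis_procs[stream_id] = proc
```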
As shown in fig. 19, an embodiment of the present disclosure provides a video stream processing flow in the second edge video server. The flow sequentially comprises: video stream access C1, video decoding C2, video pre-processing (video stream format conversion C3 and video stream size scaling C4), intelligent analysis C5, video post-processing (video image redrawing C6, and video stream format conversion or size scaling C7), video encoding or transcoding C8, and video stream distribution. Pictured as code, this chain amounts to feeding every decoded frame through a fixed sequence of stages, as in the sketch below.
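The stage functions here are empty placeholders standing in for the modules of fig. 19; only the chaining structure is meaningful.

```python
from typing import Callable, Dict, Iterable, Iterator, List

Frame = Dict  # a decoded picture frame together with its metadata

def run_pipeline(frames: Iterable[Frame],
                 stages: List[Callable[[Frame], Frame]]) -> Iterator[Frame]:
    """Pass every decoded frame through the stages C3..C8 in order."""
    for frame in frames:
        for stage in stages:
            frame = stage(frame)
        yield frame

# Placeholder stages; each would wrap the corresponding module in fig. 19
def to_rgba(f: Frame) -> Frame: return f        # C3: format conversion
def scale_224(f: Frame) -> Frame: return f      # C4: size scaling
def analyse(f: Frame) -> Frame: return f        # C5: intelligent analysis
def redraw(f: Frame) -> Frame: return f         # C6: image redrawing
def restore_size(f: Frame) -> Frame: return f   # C7: conversion / scaling back
def encode(f: Frame) -> Frame: return f         # C8: encoding or transcoding

pipeline = [to_rgba, scale_224, analyse, redraw, restore_size, encode]
```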
The following describes this flow through a processing method for identifying face positions in a video stream; the method comprises the following steps:

Step S1: Referring to fig. 20, when the video stream access module of the second edge video server receives a video stream in the form of an H.264/HEVC bitstream, the stream is decoded by the decoding module, and the decoded picture frames are in NV12 format. To provide the RGBA-format picture frames required by the subsequent processing modules, format conversion and size scaling are performed on the NV12 video frame information in S1, converting it into 224 × 224 RGBA picture frames that are stored in memory. At this point, an analysis label must be added to the metadata, and the current time must be recorded. The metadata structure is as follows:
Metadata1: bounding boxes
Metadata2: timestamp
Metadata3: analysis label
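One possible in-memory form of this metadata is sketched below. The field names follow Metadata1-3 above; the original_size field is included because post-processing later scales redrawn frames back to the source picture size, and its default value is illustrative.

```python
import time
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class FrameMetadata:
    # Metadata1: bounding boxes (x, y, width, height) produced by analysis
    bounding_boxes: List[Tuple[int, int, int, int]] = field(default_factory=list)
    # Metadata2: timestamp recorded when the frame enters the pipeline
    timestamp: float = field(default_factory=time.time)
    # Metadata3: analysis label marking the frame for the analysis module
    analysis_label: Optional[str] = None
    # Kept so post-processing can scale redrawn frames back to the source size
    original_size: Tuple[int, int] = (1920, 1080)
```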
Step S2: Referring to fig. 21, the RGBA picture frames stored in memory are input to the analysis module of the second edge video server. Note that the processing time information of each frame image (Metadata2) has already been added to the metadata. Face detection is performed on the input image information, and after face comparison against the video stream, the key face position information and the face recognition result are output. Meanwhile, the face position information (the bounding box information, Metadata1) and the face recognition result are added to the metadata, updating the face information of the video stream. The analysis module outputs the analysis result and, once the metadata information has changed, calls back the image post-processing module within the video post-processing module through a callback interface, as sketched below.
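The callback wiring can be as simple as the following sketch, in which the image post-processing module registers a handler that the analysis module invokes whenever it updates a frame's metadata; the class and method names are illustrative.

```python
from typing import Callable, List

class AnalysisModule:
    """Sketch of the callback interface between analysis and post-processing."""
    def __init__(self) -> None:
        self._callbacks: List[Callable] = []

    def register_callback(self, handler: Callable) -> None:
        # The image post-processing module registers its handler here
        self._callbacks.append(handler)

    def _publish(self, frame, metadata) -> None:
        # Called after detection/recognition results are written into metadata
        for handler in self._callbacks:
            handler(frame, metadata)
```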
It should be noted that the face recognition analysis module needs to be initialized. Since model loading is time-consuming, initialization can be performed asynchronously in another thread. The initialization process mainly comprises initializing the inference engine, loading the neural network model, and configuring the input and output of the neural network.
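A hedged sketch of that asynchronous initialization follows; the loader function is a stand-in for whatever inference engine and model format the server actually uses, and its delay and dummy output are fabricated for illustration.

```python
import threading
import time

def _load_model(model_path: str):
    """Hypothetical stand-in for engine initialization, neural network model
    loading, and input/output configuration."""
    time.sleep(2.0)  # model loading is slow, hence the background thread
    return lambda rgba_frame: {"faces": []}  # dummy inference function

class FaceAnalyser:
    def __init__(self, model_path: str):
        self._ready = threading.Event()
        self._model = None
        # Initialization runs in another thread so stream access is not blocked
        threading.Thread(target=self._init, args=(model_path,), daemon=True).start()

    def _init(self, model_path: str) -> None:
        self._model = _load_model(model_path)
        self._ready.set()

    def analyse(self, rgba_frame):
        self._ready.wait()  # blocks only if a frame arrives before loading finishes
        return self._model(rgba_frame)
```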
Step S3: Owing to the limited processing capability of the analysis model of the second edge video server, for a video with 1080p picture frames transmitted at 30 frames per second, the analysis module can only process 20 frames of information per second, so analyzing the live video stream would introduce live broadcast delay.
To avoid affecting the live broadcast delay of the video, after the video post-processing module in the second edge video server receives the analysis result generated by the analysis module, the analysis result of the first frame (for example, image feature information) is superimposed on the video information (picture frame information) of the first frame and redrawn. To absorb the time delay of model analysis, the image information of the second and third frames in the analysis module's queue is discarded, and the information of the fourth frame is analyzed directly. It should be noted that a video stream containing only 4 frames of image information is used here purely as an example.
Since the second and third frames are discarded, part of the video frame information is never analyzed, and this part needs to be corrected in the image post-processing module of video post-processing. After the metadata that produced the analysis result is obtained, the analysis result information is taken out of it. Because the time delay is within 60 ms, the range within which the target can move is small, so the analysis results for the second and third frame images can be treated as unchanged: the metadata of the first frame image or of the fourth frame image is applied to the second and third frame images for superimposed redrawing. After each picture frame is redrawn, the picture is scaled back to the size of the original video picture according to the original picture size recorded in the metadata.
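This skip-and-reuse policy can be sketched as follows: frames the 20 fps model never analyzed borrow the bounding boxes of the nearest analyzed frame whose timestamp lies within the 60 ms window. The data layout is illustrative.

```python
REUSE_WINDOW_MS = 60  # within 60 ms the target's movement is treated as negligible

def assign_metadata(frames, analysed):
    """frames: [{'ts_ms': int, ...}] in display order.
    analysed: {ts_ms: bounding_boxes} for the frames the model processed.
    Skipped frames reuse the nearest analysed result within the window."""
    out = []
    for f in frames:
        if f["ts_ms"] in analysed:
            boxes = analysed[f["ts_ms"]]
        else:
            near = [t for t in analysed if abs(t - f["ts_ms"]) <= REUSE_WINDOW_MS]
            boxes = analysed[min(near, key=lambda t: abs(t - f["ts_ms"]))] if near else []
        out.append({**f, "bounding_boxes": boxes})
    return out

# Frames 1-4 of a 30 fps stream; only frames 1 and 4 were analysed (step S3)
frames = [{"ts_ms": t} for t in (0, 33, 66, 100)]
analysed = {0: [(40, 60, 120, 160)], 100: [(42, 62, 120, 160)]}
print(assign_metadata(frames, analysed))  # frames 2 and 3 borrow from 1 and 4
```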
Step S4: Referring to fig. 22, the redrawn video code stream data is transmitted to the encoding module of the video post-processing module, and the video stream information is encoded into the required formats. After the video is re-encoded, the video stream is packaged and distributed through the video distribution module of the second edge video server.
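For the re-encoding step, one hedged possibility is to hand the redrawn raw frames to the ffmpeg command-line tool over a pipe, as in the sketch below. Resolution, frame rate, codec, and output path are placeholders, and a production edge server would more likely use an in-process codec library.

```python
import subprocess

def open_h264_encoder(width: int, height: int, fps: int, out_path: str) -> subprocess.Popen:
    """Start an ffmpeg process that reads raw RGBA frames from stdin and
    writes an H.264 stream; write each redrawn frame to .stdin as bytes."""
    return subprocess.Popen(
        [
            "ffmpeg", "-y",
            "-f", "rawvideo", "-pix_fmt", "rgba",
            "-s", f"{width}x{height}", "-r", str(fps),
            "-i", "-",                      # raw frames arrive on stdin
            "-c:v", "libx264", "-pix_fmt", "yuv420p",
            out_path,
        ],
        stdin=subprocess.PIPE,
    )
```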
The above description covers only preferred embodiments of the present disclosure and is not intended to limit the protection scope of the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and scope of the present disclosure shall be included in its protection scope.

Claims (12)

1. A video playing method, applied to a central service server, characterized by comprising:
acquiring an access address of a first edge video server in an area range where a first terminal is located based on a video watching request message sent by the first terminal;
sending a video playing request message to the first edge video server based on the access address of the first edge video server; wherein the video playing request message includes: an access address of a second edge video server;
receiving a success message sent by the first edge video server when the video stream is successfully acquired from the second edge video server;
and sending a video access address for acquiring a video stream from the first edge video server to the first terminal according to the success message.
2. The method according to claim 1, wherein, before sending the video playing request message to the first edge video server based on the access address of the first edge video server, the method further comprises:
acquiring an access address of a second edge video server in the area range of a second terminal based on a video playing request message sent by the second terminal;
sending a start message to the second edge video server based on the access address of the second edge video server; the start message is used for triggering the second edge video server to acquire the video stream from the second terminal.
3. A video playing method, applied to a first edge video server, characterized by comprising:
receiving a video playing request message sent by a central service server; wherein the video playing request message includes: an access address of a second edge video server;
acquiring the video stream from the second edge video server based on the access address of the second edge video server;
and when the video stream is successfully acquired, sending a success message to the central service server.
4. The method of claim 3, further comprising:
and when the video stream is successfully acquired, sending a message for acquiring the video stream from the first edge video server to a first terminal.
5. A video playing method, applied to a second edge video server, characterized by comprising:
receiving a start message sent by a central service server;
acquiring a first video stream from a second terminal according to the start message;
and performing video analysis processing on the first video stream to obtain a processed second video stream.
6. The method of claim 5, wherein performing video analysis processing on the first video stream to obtain a processed second video stream comprises:
acquiring a first video stream according to the identity of the video stream;
and calling a video analysis process to process the first video stream to obtain a processed second video stream.
7. The method of claim 6, wherein the invoking of the video analysis process to process the first video stream comprises:
decoding the first video stream, and sequentially obtaining picture frames carrying timestamps;
according to the timestamp, carrying out image characteristic analysis processing on the picture frame at the preset acquisition moment to obtain a first picture frame carrying image characteristic information;
determining image characteristic information of a second picture frame by using the image characteristic information of the first picture frame with the acquisition time difference with the second picture frame within a set threshold range according to the timestamp, and obtaining the second picture frame carrying the image characteristic information;
and coding a first picture frame carrying the image characteristic information and a second picture frame carrying the image characteristic information into the second video stream according to the time stamp.
8. A video playing device, applied to a central service server, characterized by comprising a first obtaining module, a first receiving module, and a first sending module; wherein:
the first obtaining module is used for obtaining an access address of a first edge video server in the area range of the first terminal based on a video watching request message sent by the first terminal;
the first sending module is configured to send a video playing request message to the first edge video server based on the access address of the first edge video server; wherein the video playing request message includes: an access address of a second edge video server;
the first receiving module is configured to receive a success message sent by the first edge video server when the video stream is successfully acquired from the second edge video server;
the first sending module is further configured to send, to the first terminal, a video access address for acquiring a video stream from the first edge video server according to the success message.
9. A video playing device, applied to a first edge video server, characterized by comprising a second receiving module, a second obtaining module, and a second sending module; wherein:
the second receiving module is used for receiving a video playing request message sent by the central service server; wherein the video playing request message includes: an access address of a second edge video server;
the second obtaining module is configured to obtain the video stream from the second edge video server based on an access address of the second edge video server;
and the second sending module is used for sending a success message to the central service server when the video stream is successfully acquired.
10. A video playing device, applied to a second edge video server, characterized by comprising a third receiving module, a third obtaining module, and a processing module; wherein:
the third receiving module is used for receiving a starting message sent by the central service server;
the third obtaining module is configured to obtain a first video stream from the second terminal according to the start message;
and the processing module is used for performing video analysis processing on the first video stream to obtain a processed second video stream.
11. A video playing device, comprising: a processor and a memory for storing a computer program capable of running on the processor, wherein
the processor, when running the computer program, performs the steps of the method of any one of claims 1 to 2, 3 to 4, or 5 to 7.
12. A storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, performs the steps of the method according to any one of claims 1 to 2, 3 to 4, or 5 to 7.
CN201911360821.5A 2019-12-25 2019-12-25 Video playing method, device and storage medium Active CN113038254B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911360821.5A CN113038254B (en) 2019-12-25 2019-12-25 Video playing method, device and storage medium

Publications (2)

Publication Number Publication Date
CN113038254A true CN113038254A (en) 2021-06-25
CN113038254B CN113038254B (en) 2023-03-31

Family

ID=76458450

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911360821.5A Active CN113038254B (en) 2019-12-25 2019-12-25 Video playing method, device and storage medium

Country Status (1)

Country Link
CN (1) CN113038254B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105656876A (en) * 2015-11-26 2016-06-08 乐视云计算有限公司 Live video play method, device and system
CN107222468A (en) * 2017-05-22 2017-09-29 北京邮电大学 Augmented reality processing method, terminal, cloud server and edge server
US20180376177A1 (en) * 2013-10-23 2018-12-27 Vidillion, Inc. System and methods for individualized digital video program insertion
CN110177310A (en) * 2019-06-28 2019-08-27 三星电子(中国)研发中心 A kind of content distribution system and method
CN110267058A (en) * 2019-07-18 2019-09-20 世纪龙信息网络有限责任公司 Live broadcasting method, gateway, device clusters, system and device

Also Published As

Publication number Publication date
CN113038254B (en) 2023-03-31

Similar Documents

Publication Publication Date Title
US11729465B2 (en) System and method providing object-oriented zoom in multimedia messaging
CN105991962B (en) Connection method, information display method, device and system
US20080235724A1 (en) Face Annotation In Streaming Video
CN107682714B (en) Method and device for acquiring online video screenshot
CN110430441B (en) Cloud mobile phone video acquisition method, system, device and storage medium
US20150208103A1 (en) System and Method for Enabling User Control of Live Video Stream(s)
US20140139619A1 (en) Communication method and device for video simulation image
US20140146877A1 (en) Method for dynamically adapting video image parameters for facilitating subsequent applications
CN103581705A (en) Method and system for recognizing video program
CN113542875B (en) Video processing method, device, electronic equipment and storage medium
US20210368214A1 (en) Method and mobile terminal for processing data
CN113225585A (en) Video definition switching method and device, electronic equipment and storage medium
CN111263183A (en) Singing state identification method and singing state identification device
CN110139128B (en) Information processing method, interceptor, electronic equipment and storage medium
CN112235600B (en) Method, device and system for processing video data and video service request
CN108881119B (en) Method, device and system for video concentration
CN114139491A (en) Data processing method, device and storage medium
CN113473165A (en) Live broadcast control system, live broadcast control method, device, medium and equipment
CN113038254B (en) Video playing method, device and storage medium
US20230328308A1 (en) Synchronization of multiple content streams
CN113068059B (en) Video live broadcasting method, device, equipment and storage medium
CN115514989A (en) Data transmission method, system and storage medium
Kim et al. Overview of Cloud-Based High Quality Media Production System
CN113766342B (en) Subtitle synthesizing method and related device, electronic equipment and storage medium
WO2023170679A1 (en) Synchronization of multiple content streams

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant