CN111787418A - Audio and video stream docking processing method based on artificial intelligence AI and related equipment


Info

Publication number
CN111787418A
Authority
CN
China
Prior art keywords
audio
control platform
video
interception
artificial intelligence
Prior art date
Legal status
Granted
Application number
CN202010585004.6A
Other languages
Chinese (zh)
Other versions
CN111787418B (en)
Inventor
Yu Qiang (余强)
Current Assignee
Shenzhen Lian Intellectual Property Service Center
Shenzhen Siyou Technology Co ltd
Original Assignee
Ping An Puhui Enterprise Management Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Puhui Enterprise Management Co Ltd
Priority to CN202010585004.6A
Publication of CN111787418A
Application granted
Publication of CN111787418B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/643 Communication protocols
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/90 Details of database functions independent of the retrieved data types
    • G06F 16/95 Retrieval from the web
    • G06F 16/955 Retrieval from the web using information identifiers, e.g. uniform resource locators [URL]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/02 Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/01 Protocols
    • H04L 67/06 Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04L TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L 67/00 Network arrangements or protocols for supporting network services or applications
    • H04L 67/14 Session management
    • H04L 67/141 Setup of application sessions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/439 Processing of audio elementary streams
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N 21/63 Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
    • H04N 21/643 Communication protocols
    • H04N 21/6437 Real-time Transport Protocol [RTP]
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/85 Assembly of content; Generation of multimedia applications
    • H04N 21/858 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N 21/8586 Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL

Abstract

The invention relates to the technical field of artificial intelligence and provides an audio and video stream docking processing method based on artificial intelligence (AI), which comprises the following steps: receiving an address acquisition request sent by a control platform; calling a load balancing interface and determining, from a plurality of servers corresponding to the audio and video processing platform, the address of a target server in an idle state; sending the address to the control platform; receiving the URL address of an RTMP stream sent by the control platform; and sending a screenshot interception instruction to the control platform, wherein the screenshot interception instruction is used for instructing the control platform to perform picture interception and audio/video file interception on the real-time RTMP stream of the client side indicated by the URL address, and to send the intercepted target picture and target audio/video file to the target server. The invention also relates to blockchain technology: the control platform can upload the target picture and the target audio/video file to a blockchain. The method can be applied to smart government and smart community scenarios, thereby promoting the construction of smart cities.

Description

Audio and video stream docking processing method based on artificial intelligence AI and related equipment
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an audio and video stream docking processing method based on artificial intelligence AI and related equipment.
Background
In artificial intelligence (AI) video interview review, risk-control requirements make it necessary to apply a variety of AI techniques to the client, including face detection, background detection, voiceprint recognition and liveness detection. All of these AI techniques share one precondition: the audio and video stream on the client side must be acquired. An AI video interview involves multiple video calls, the client-side audio and video streams produce a large amount of highly concurrent interaction, and real-time interaction may be interrupted.
Therefore, in AI video interviews, how to dock with the audio and video stream on the client side while ensuring the stability of real-time interaction is an urgent technical problem to be solved.
Disclosure of Invention
In view of the foregoing, there is a need to provide an artificial intelligence (AI)-based audio and video stream docking processing method and related equipment that can dock with the audio and video stream on the client side while ensuring the stability of real-time interaction.
A first aspect of the present invention provides an audio and video stream docking processing method based on artificial intelligence AI, which is applied to an audio and video processing platform and comprises the following steps:
receiving an address acquisition request sent by a control platform;
calling a load balancing interface, and determining the address of a target server in an idle state from a plurality of servers corresponding to the audio and video processing platform;
sending the address to the control platform to enable the control platform to establish a hypertext transfer protocol (HTTP) connection with the target server according to the address;
receiving a Uniform Resource Locator (URL) address of a Real-Time Messaging Protocol (RTMP) stream sent by the control platform;
and sending a screenshot interception instruction to the control platform, wherein the screenshot interception instruction is used for instructing the control platform to perform picture interception and audio/video file interception on the real-time RTMP stream of the client side indicated by the URL address, and to send the intercepted target picture and target audio/video file to the target server, and wherein the picture interception frequency and the audio/video file interception frequency are different frequencies respectively set according to the lowest supported interception frequency of the control platform.
In a possible implementation manner, the invoking a load balancing interface, and determining an address of a target server currently in an idle state from a plurality of servers corresponding to the audio/video processing platform includes:
acquiring current index parameters of a plurality of servers corresponding to the audio and video processing platform, wherein the index parameters comprise the number of currently accessed video calls, the memory usage and the central processing unit (CPU) occupancy rate;
according to a preset load balancing algorithm, performing weighted calculation on the current index parameter of each server to obtain a weighted value;
and determining a target server in an idle state from the plurality of servers according to the weighted value, and acquiring the address of the target server.
In a possible implementation manner, the method for processing audio/video stream docking based on artificial intelligence AI further includes:
acquiring the lowest supported interception frequency of the control platform;
setting a normal interception frequency and an abnormal interception frequency according to the lowest supported interception frequency;
the sending of the screenshot intercepting instruction to the control platform comprises:
and sending a screenshot interception instruction carrying the normal interception frequency and the abnormal interception frequency to the control platform, so that the control platform can carry out interception operation on the real-time RTMP flow with the face according to the normal interception frequency and carry out interception operation on the real-time RTMP flow without the face according to the abnormal interception frequency.
In a possible implementation manner, the method for processing audio/video stream docking based on artificial intelligence AI further includes:
receiving an HTTP DELETE request sent by the control platform, wherein the DELETE request carries an RTMP stream identifier;
and sending a closing instruction to the control platform, wherein the closing instruction is used for instructing the control platform to close the screenshot interception operation of the audio and video corresponding to the RTMP stream identifier.
The second aspect of the present invention provides an audio and video stream docking processing method based on artificial intelligence AI, which is applied to a control platform, and the audio and video stream docking processing method based on artificial intelligence AI includes:
after detecting the incoming call of the user, sending an address acquisition request to an audio and video processing platform;
receiving a server address returned by the audio and video processing platform, sending a connection request to a target server corresponding to the server address, and establishing a hypertext transfer protocol (HTTP) connection with the target server, wherein the connection request carries a Uniform Resource Locator (URL) address of a Real-Time Messaging Protocol (RTMP) stream;
when a screenshot interception instruction sent by the audio and video processing platform is received, responding to the screenshot interception instruction, and carrying out picture interception and audio and video file interception on the real-time RTMP stream of the client side indicated by the URL address;
and sending the intercepted target picture and the target audio and video file to the target server.
In a possible implementation manner, the method for processing audio/video stream docking based on artificial intelligence AI further includes:
acquiring the interaction duration of the audio and video stream of the control platform and the audio and video processing platform;
judging whether the interaction duration is greater than a preset duration or not;
if the interaction time length is longer than the preset time length, a new component is established in the control platform and used for separating the audio and video stream receiving function from the screenshot intercepting function;
and carrying out interception operation according to the received normal interception frequency and abnormal interception frequency, wherein the normal interception frequency and the abnormal interception frequency are set according to the lowest supported interception frequency of the control platform.
A third aspect of the present invention provides an audio/video processing platform, where the audio/video processing platform includes a processor and a memory, and the processor is configured to execute a computer program stored in the memory to implement the method for docking the audio/video stream based on artificial intelligence AI.
A fourth aspect of the present invention provides a control platform, where the control platform includes a processor and a memory, and the processor is configured to execute a computer program stored in the memory to implement the method for docking the audio/video stream based on artificial intelligence AI.
A fifth aspect of the present invention provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the method for interfacing audio and video streams based on artificial intelligence AI.
In the above technical solution, the audio and video stream between the control platform and the audio and video processing platform is docked in an HTTP + RTMP stream mode, so that the audio and video processing platform can acquire audio and video related pictures and files from the client side, meeting the various screenshot interception requirements of flexible business processing. At the same time, pictures, sound and video files are acquired from the client-side audio and video stream at different frequencies, which ensures the stability of the control platform while well supporting high concurrency and high availability of the service.
Drawings
Fig. 1 is a flowchart of a preferred embodiment of a method for docking audio/video streams based on artificial intelligence AI according to the present disclosure.
Fig. 2 is a flowchart of another preferred embodiment of the method for docking audio/video streams based on artificial intelligence AI disclosed in the present invention.
Fig. 3 is a functional block diagram of a docking processing apparatus according to a preferred embodiment of the present disclosure.
Fig. 4 is a functional block diagram of another preferred embodiment of the docking processing apparatus disclosed in the present invention.
Fig. 5 is a schematic structural diagram of an audio/video processing platform according to a preferred embodiment of the method for implementing artificial intelligence AI-based audio/video stream docking processing.
Fig. 6 is a schematic structural diagram of a control platform according to a preferred embodiment of the method for implementing artificial intelligence AI-based audio/video stream docking processing.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terms "comprises," "comprising," and "having," and any variations thereof, in the description and claims of this application, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
In addition, technical solutions between various embodiments may be combined with each other, but must be realized by a person skilled in the art, and when the technical solutions are contradictory or cannot be realized, such a combination should not be considered to exist, and is not within the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart of a preferred embodiment of a method for docking audio/video streams based on artificial intelligence AI according to the present disclosure. The audio and video stream docking processing method based on artificial intelligence AI is applied to an audio and video processing platform; the order of the steps in the flowchart can be changed according to different requirements, and some steps can be omitted.
And S11, receiving an address acquisition request sent by the control platform.
And S12, calling a load balancing interface, and determining the address of the target server in the idle state from a plurality of servers corresponding to the audio and video processing platform.
Because the audio and video stream is point-to-point in nature, it is necessary to ensure that only two servers, one upstream and one downstream, interact during a video incoming call. That is, once the client-side audio and video stream reaches server A1 of the control platform, it is transmitted to server B1 of the audio and video processing platform; the video cannot interact between server A2 of the control platform and server B2 of the audio and video processing platform. Therefore, a load balancing scheme needs to be adopted to balance the plurality of servers corresponding to the audio and video processing platform.
Specifically, the invoking of the load balancing interface and the determining of the address of the target server currently in the idle state from the plurality of servers corresponding to the audio/video processing platform includes:
acquiring current index parameters of a plurality of servers corresponding to the audio and video processing platform, wherein the index parameters comprise the number of currently accessed video calls, the memory usage and the central processing unit (CPU) occupancy rate;
according to a preset load balancing algorithm, performing weighted calculation on the current index parameter of each server to obtain a weighted value;
and determining a target server in an idle state from the plurality of servers according to the weighted value, and acquiring the address of the target server.
The load balancing scheme ensures that the number of video calls accessed by each server of the audio and video processing platform is approximately the same.
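By way of illustration only, the following is a minimal sketch of how such a weighted selection might be computed; the metric names, the weight values and the idle-state criterion (lowest weighted score) are assumptions for the example and are not prescribed by the invention.

```python
# Minimal sketch of the weighted load-balancing selection described above.
# The metric names, weights and server list are illustrative assumptions.

def weighted_load(server: dict, weights=(0.5, 0.25, 0.25)) -> float:
    """Combine a server's current metrics into a single load score (lower = more idle)."""
    w_calls, w_mem, w_cpu = weights
    return (w_calls * server["active_video_calls"] / server["max_video_calls"]
            + w_mem * server["memory_usage"]   # fraction in [0, 1]
            + w_cpu * server["cpu_usage"])     # fraction in [0, 1]

def pick_idle_server(servers: list) -> str:
    """Return the address of the server with the lowest weighted load."""
    target = min(servers, key=weighted_load)
    return target["address"]

if __name__ == "__main__":
    servers = [
        {"address": "10.0.0.11:8080", "active_video_calls": 40, "max_video_calls": 100,
         "memory_usage": 0.70, "cpu_usage": 0.65},
        {"address": "10.0.0.12:8080", "active_video_calls": 12, "max_video_calls": 100,
         "memory_usage": 0.30, "cpu_usage": 0.20},
    ]
    print(pick_idle_server(servers))   # -> 10.0.0.12:8080
```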
S13, sending the address to the control platform, so that the control platform establishes a hypertext transfer protocol (HTTP) connection with the target server according to the address.
And S14, receiving the URL address of the RTMP stream sent by the control platform.
RTMP (Real-Time Messaging Protocol) is a network protocol designed for real-time data communication, and is mainly used for audio/video and data communication between the Flash/AIR platform and streaming media/interactive servers that support the RTMP protocol. A URL (Uniform Resource Locator), i.e. a network address, is the uniform resource locator of the WWW.
And S15, sending a screenshot interception instruction to the control platform, wherein the screenshot interception instruction is used for instructing the control platform to perform picture interception and audio/video file interception on the real-time RTMP stream of the client side indicated by the URL address, and to send the intercepted target picture and target audio/video file to the target server, and wherein the picture interception frequency and the audio/video file interception frequency are different frequencies respectively set according to the lowest supported interception frequency of the control platform.
The screen capture and video capture operations can be executed using the open-source FFmpeg technology, with its librtmp library and codec libraries. Specifically, the librtmp library and codec libraries of FFmpeg can be called to capture pictures and video clips from the real-time RTMP stream, generating jpg/mp4 files that are stored on the target server.
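As an illustration, the following sketch drives the FFmpeg command-line tool from Python to capture frames and a short audio/video clip from a live RTMP stream; the stream URL, durations and output paths are assumed values, and the exact FFmpeg invocation used by the platform may differ.

```python
# Minimal sketch of capturing pictures and a short A/V clip from a live RTMP stream
# by invoking the FFmpeg command-line tool. The URL, durations and paths are assumptions.
import subprocess

RTMP_URL = "rtmp://control-platform.example.com/live/stream123"   # hypothetical stream URL

def capture_pictures(rtmp_url: str, seconds: int, per_second: int, out_pattern: str = "frame_%04d.jpg"):
    """Grab `per_second` JPEG frames every second for `seconds` seconds."""
    subprocess.run([
        "ffmpeg", "-y",
        "-i", rtmp_url,
        "-t", str(seconds),
        "-vf", f"fps={per_second}",   # sample the video at the requested frame rate
        out_pattern,
    ], check=True)

def capture_clip(rtmp_url: str, seconds: int, out_file: str = "clip.mp4"):
    """Record an N-second audio/video clip (sound included) from the stream."""
    subprocess.run([
        "ffmpeg", "-y",
        "-i", rtmp_url,
        "-t", str(seconds),
        "-c", "copy",                 # keep the original encoding where the codecs allow it
        out_file,
    ], check=True)

if __name__ == "__main__":
    capture_pictures(RTMP_URL, seconds=5, per_second=2)   # 5 x 2 = 10 pictures
    capture_clip(RTMP_URL, seconds=5)
```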
Setting the screenshot interception frequency according to the lowest supported interception frequency of the control platform ensures the stability of the control platform. In addition, different frequencies are adopted for picture interception and audio/video file interception, which well supports high concurrency and high availability of the service and ensures the stability of the real-time interaction of the audio and video streams.
The audio and video processing platform needs to store a large number of picture, video and audio files. Given the stringent response-time requirements, the business scenario in which files are stored frequently but read rarely, and the requirement that a large number of files be smaller than 500 KB, this scheme adopts a shared NAS storage scheme with read-write separation. After receiving the target picture and the target audio/video file, the target server can send them to a server in the NAS cluster for storage. This storage mode ensures the security of the files.
The screenshot interception instruction may also carry operation attribute information, for example: within N seconds, M pictures are captured every second, N x M pictures in total, and an N-second video file (containing sound) is captured.
The audio/video stream itself is not intercepted; the intercepted picture/audio/video files can be used directly for AI face detection. Compared with intercepting the audio/video stream, intercepting picture/audio/video files is more efficient and also saves storage space.
The audio and video stream docking processing method based on artificial intelligence AI further comprises the following steps:
acquiring the lowest supported interception frequency of the control platform;
setting a normal interception frequency and an abnormal interception frequency according to the lowest supported interception frequency;
the sending of the screenshot intercepting instruction to the control platform comprises:
and sending a screenshot interception instruction carrying the normal interception frequency and the abnormal interception frequency to the control platform, so that the control platform can carry out interception operation on the real-time RTMP flow with the face according to the normal interception frequency and carry out interception operation on the real-time RTMP flow without the face according to the abnormal interception frequency.
In the AI face detection scene, AI face detection needs to be performed every second, which requires that pictures, videos and sound files of a video are acquired every second. Meanwhile, a plurality of video calls have a large number of high-concurrency interactions, and stability needs to be ensured.
If AI face detection is normal, pictures only need to be acquired at a fixed frequency; if AI face detection is abnormal, pictures need to be acquired quickly, so the frequency is increased, for example to once every 1000 ms. During testing, however, once the frequency was increased, the control platform went down.
Because the control platform cannot well support changes in the detection frequency, and the stability of the control platform during interaction needs to be ensured, the normal interception frequency and the abnormal interception frequency need to be set according to the lowest supported interception frequency of the control platform. Specifically, the normal interception frequency and the abnormal interception frequency can be set to multiples of the lowest supported interception frequency. For example, if the lowest supported frequency is once every 2000 ms, the normal interception frequency is once every 4000 ms and the abnormal interception frequency is once every 2000 ms.
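A minimal sketch of this frequency selection is given below, using the 2000 ms example above; the multiplier values and the face-detection flag are illustrative assumptions.

```python
# Minimal sketch of deriving the two interception frequencies from the control
# platform's lowest supported interception frequency (2000 ms example above).
# The multipliers and the face-detection hook are illustrative assumptions.

LOWEST_SUPPORTED_MS = 2000                        # lowest supported interception interval

NORMAL_INTERVAL_MS = 2 * LOWEST_SUPPORTED_MS      # 4000 ms when face detection is normal
ABNORMAL_INTERVAL_MS = 1 * LOWEST_SUPPORTED_MS    # 2000 ms when no face is detected

def next_interval_ms(face_detected: bool) -> int:
    """Pick the interception interval for the next capture cycle."""
    return NORMAL_INTERVAL_MS if face_detected else ABNORMAL_INTERVAL_MS

if __name__ == "__main__":
    print(next_interval_ms(True))    # 4000
    print(next_interval_ms(False))   # 2000
```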
In the embodiment, the picture, the sound and the video file are acquired from the audio and video stream of the client side according to different frequencies, so that the stability of the control platform is ensured, and high concurrency and high availability of services are better supported.
The audio and video stream docking processing method based on artificial intelligence AI further comprises the following steps:
receiving a manual transfer request carrying an RTMP stream identifier sent by the control platform;
and responding to the manual transfer request, and sending a screenshot interception ending instruction for the real-time RTMP stream corresponding to the RTMP stream identifier to the control platform, so that the control platform stops the screenshot interception operation.
After the control platform detects the transfer-to-manual event, that is, after the customer is successfully transferred to a manual agent, the control platform notifies the audio and video processing platform through an HTTP request. Specifically, the control platform sends the RTMP stream identifier and the timestamp of the transfer to the audio and video processing platform, which makes it convenient for the audio and video processing platform to record the transfer time and to instruct the control platform to end the screenshot interception of the video. It should be noted that, at this time, the resources of the control platform are still occupied; only the interaction between the control platform and the audio and video processing platform stops.
The audio and video stream docking processing method based on artificial intelligence AI further comprises the following steps:
receiving an HTTP DELETE request sent by the control platform, wherein the DELETE request carries an RTMP stream identifier;
and sending a closing instruction to the control platform, wherein the closing instruction is used for instructing the control platform to close the screenshot interception operation of the audio and video corresponding to the RTMP stream identifier.
When the screenshot interception is to be stopped, the control platform sends an HTTP DELETE request carrying the RTMP stream identifier to the audio and video processing platform; after receiving the closing instruction returned by the audio and video processing platform, the control platform closes the video, and the screenshot interception operation for that video is ended.
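For illustration, the following sketch shows how the control platform might issue such an HTTP DELETE request; the host, endpoint path and response format are assumptions, since the invention does not define a concrete interface.

```python
# Minimal sketch of the control platform ending the interception via an HTTP DELETE
# request carrying the RTMP stream identifier. Endpoint, field names and host are
# illustrative assumptions.
import requests

AV_PLATFORM = "http://av-platform.example.com"    # hypothetical audio/video processing platform

def stop_interception(stream_id: str) -> None:
    resp = requests.delete(
        f"{AV_PLATFORM}/streams/{stream_id}",     # DELETE request identifying the RTMP stream
        timeout=5,
    )
    resp.raise_for_status()
    # The platform replies with a closing instruction; the control platform then
    # closes the video and stops the screenshot interception for this stream.
    print("closing instruction received:", resp.json())

if __name__ == "__main__":
    stop_interception("rtmp-stream-0001")
```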
In the method flow described in fig. 1, an HTTP + RTMP manner is adopted to realize the docking of the audio and video streams. At the same time, a load balancing interface is provided, and different screenshot frequencies and audio/video interception frequencies are set according to the lowest supported interception frequency of the control platform, which not only balances the load of each server of the audio and video processing platform, but also better supports high concurrency and high availability of the service and ensures the stability of the real-time interaction of the audio and video streams. In addition, pictures/audio/video files are intercepted rather than the audio/video stream; compared with intercepting the audio/video stream, this is more efficient and also saves storage space.
Referring to fig. 2, fig. 2 is a flowchart of another preferred embodiment of the method for docking audio/video streams based on artificial intelligence AI according to the disclosure. The audio and video stream docking processing method based on the artificial intelligence AI is applied to a control platform, the sequence of steps in the flow chart can be changed according to different requirements, and some steps can be omitted.
And S21, after detecting the incoming call of the user, sending an address acquisition request to the audio and video processing platform.
And S22, receiving a server address returned by the audio and video processing platform, sending a connection request to a target server corresponding to the server address, and establishing a hypertext transfer protocol (HTTP) connection with the target server, wherein the connection request carries a Uniform Resource Locator (URL) address of a Real-Time Messaging Protocol (RTMP) stream.
Optionally, the connection request further carries a calling number (such as an extension number address), an application number (mainly service attribute information, such as a customer information number), an incoming call time, and a routing number.
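The sketch below illustrates one possible shape of this connection request as an HTTP POST with a JSON body; the endpoint, field names and example values are assumptions for illustration only.

```python
# Minimal sketch of the connection request sent by the control platform to the target
# server, carrying the RTMP stream URL plus the optional caller fields listed above.
# The endpoint, field names and values are illustrative assumptions.
import requests

def open_session(target_server: str, rtmp_url: str) -> None:
    payload = {
        "rtmp_url": rtmp_url,                      # URL address of the client's RTMP stream
        "calling_number": "8001",                  # e.g. an extension number address
        "application_number": "CUST-20200624-001", # service attribute info, e.g. customer information number
        "incoming_call_time": "2020-06-24T10:15:00",
        "routing_number": "route-07",
    }
    resp = requests.post(f"http://{target_server}/sessions", json=payload, timeout=5)
    resp.raise_for_status()                        # HTTP connection with the target server established

if __name__ == "__main__":
    open_session("10.0.0.12:8080", "rtmp://control-platform.example.com/live/stream123")
```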
And S23, responding to the screenshot interception instruction when receiving the screenshot interception instruction sent by the audio and video processing platform, and intercepting pictures and audio and video files of the real-time RTMP stream of the client side indicated by the URL address.
The screenshot interception instruction may also carry operation attribute information, for example: within N seconds, M pictures are captured every second, N x M pictures in total, and an N-second video file (containing sound) is captured. The control platform then captures M pictures every second for N seconds, N x M pictures in total.
And S24, sending the intercepted target picture and the target audio and video file to the target server.
Optionally, the method further includes:
and uploading the target picture and the target audio and video file to the block chain.
In order to ensure the privacy and the security of data, the target picture and the target audio/video file can be uploaded to the block chain for storage.
Specifically, the control platform may encode the frame-extracted picture/video/audio file with base64, and send the encoded frame-extracted picture/video/audio file to a target server of the audio/video processing platform in JSON format.
And then, the target server sends the target picture and the target audio and video file to a server in the NAS cluster for storage. By the storage mode, the security of the file can be ensured.
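For illustration, the following sketch base64-encodes a captured file and posts it to the target server in JSON format, as described above; the endpoint and field names are assumptions.

```python
# Minimal sketch of base64-encoding a captured picture/clip and sending it to the
# target server in JSON format. Endpoint and field names are illustrative assumptions.
import base64
import requests

def upload_capture(target_server: str, stream_id: str, file_path: str, kind: str) -> None:
    with open(file_path, "rb") as fh:
        encoded = base64.b64encode(fh.read()).decode("ascii")   # base64-encode the binary file
    body = {
        "stream_id": stream_id,
        "type": kind,              # "picture", "video" or "audio"
        "filename": file_path,
        "content": encoded,
    }
    resp = requests.post(f"http://{target_server}/captures", json=body, timeout=10)
    resp.raise_for_status()        # the target server then forwards the file to the NAS cluster

if __name__ == "__main__":
    upload_capture("10.0.0.12:8080", "rtmp-stream-0001", "frame_0001.jpg", "picture")
```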
The audio and video stream docking processing method based on artificial intelligence AI further comprises the following steps:
acquiring the interaction duration of the audio and video stream of the control platform and the audio and video processing platform;
judging whether the interaction duration is greater than a preset duration or not;
if the interaction time length is longer than the preset time length, a new component is established in the control platform and used for separating the audio and video stream receiving function from the screenshot intercepting function;
and carrying out interception operation according to the received normal interception frequency and abnormal interception frequency, wherein the normal interception frequency and the abnormal interception frequency are set according to the lowest supported interception frequency of the control platform.
The control platform and the audio and video processing platform interact to acquire pictures and audio and video files. In the test environment, problems of picture lag and slow response of the control platform occurred, so the original approach could not be applied in a production environment.
The interaction duration is the time from when the audio and video processing platform issues the instruction to when it acquires the picture. If the interaction duration is longer than the preset duration, the response is delayed and relevant measures need to be taken.
In this scheme, a new component is created in the control platform to separate the audio and video stream receiving function from the screenshot interception function: the audio and video stream is received only once when a call comes in and is kept for each current incoming call, instead of being pulled again for every screenshot as at the beginning.
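The sketch below illustrates this separation under stated assumptions: a single long-lived FFmpeg process per incoming call keeps receiving the stream and writing frames locally, while the screenshot function only reads the newest local frame; the paths and the FFmpeg invocation are illustrative, not the concrete component of the invention.

```python
# Minimal sketch: one receiver per incoming call pulls the RTMP stream once and keeps
# writing frames to local storage; the screenshot function picks up the newest local
# frame instead of re-pulling the stream for every capture. Paths are assumptions.
import glob
import os
import subprocess
from typing import Optional

class CallStreamReceiver:
    """Started once per incoming call; stopped when the call ends."""

    def __init__(self, rtmp_url: str, work_dir: str):
        self.work_dir = work_dir
        os.makedirs(work_dir, exist_ok=True)
        # Single long-lived FFmpeg process that receives the stream for the whole call.
        self.proc = subprocess.Popen([
            "ffmpeg", "-y", "-i", rtmp_url,
            "-vf", "fps=1",                                # write one frame per second locally
            os.path.join(work_dir, "frame_%06d.jpg"),
        ])

    def latest_frame(self) -> Optional[str]:
        """Screenshot function: return the newest local frame without re-pulling the stream."""
        frames = sorted(glob.glob(os.path.join(self.work_dir, "frame_*.jpg")))
        return frames[-1] if frames else None

    def stop(self) -> None:
        """Called when the call ends; release the receiver."""
        self.proc.terminate()
        self.proc.wait()

if __name__ == "__main__":
    receiver = CallStreamReceiver("rtmp://control-platform.example.com/live/stream123", "/tmp/call-0001")
    # ... later, each screenshot request simply reads the newest frame:
    print(receiver.latest_frame())
    receiver.stop()
```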
If AI face detection is normal, pictures only need to be acquired at a fixed frequency; if AI face detection is abnormal, pictures need to be acquired quickly, so the frequency is increased, for example to once every 1000 ms. During testing, however, once the frequency was increased, the control platform went down.
Because the control platform cannot well support changes in the detection frequency, and the stability of the control platform during interaction needs to be ensured, the audio and video processing platform needs to set the normal interception frequency and the abnormal interception frequency according to the lowest supported interception frequency of the control platform and send them to the control platform.
Specifically, the normal interception frequency and the abnormal interception frequency can be set to multiples of the lowest supported interception frequency. For example, if the lowest supported frequency is once every 2000 ms, the normal interception frequency is once every 4000 ms and the abnormal interception frequency is once every 2000 ms.
Through these measures, except for the first screenshot, whose interception delay is greater than 3000 ms, the interception response time of every other screenshot during a video call is controlled at about 1000 ms, which better supports the requirements of the business scenario as well as high concurrency and high availability.
In the method flow described in fig. 2, the audio/video stream interfacing between the control platform and the audio/video processing platform is realized in an HTTP + RTMP stream manner, and the audio/video processing platform can acquire audio/video related pictures and files at the client side, thereby satisfying various screenshot interception requirements in a flexible processing service, and simultaneously acquiring pictures, sounds, and video files from the audio/video stream at the client side according to different frequencies, thereby ensuring the stability of the control platform and better supporting high concurrency and high availability of the service.
The above description is only a specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and it will be apparent to those skilled in the art that modifications may be made without departing from the inventive concept of the present invention, and these modifications are within the scope of the present invention.
Referring to fig. 3, fig. 3 is a functional block diagram of a docking processing apparatus according to a preferred embodiment of the present invention. In some embodiments, the docking processing means operates in an audio video processing platform. The docking processing means may comprise a plurality of functional modules consisting of program code segments. Program codes of each program segment in the docking processing apparatus may be stored in the memory and executed by at least one processor to perform part or all of the steps in the method for docking audio/video streams based on artificial intelligence AI described in fig. 1, for which reference is specifically made to the relevant description in fig. 1, which is not repeated herein.
In this embodiment, the docking processing apparatus may be divided into a plurality of functional modules according to the functions executed by the docking processing apparatus. The functional module may include: a receiving module 301, a determining module 302 and a sending module 303. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory.
The receiving module 301 is configured to receive an address obtaining request sent by the control platform.
The determining module 302 is configured to invoke a load balancing interface, and determine an address of a target server currently in an idle state from a plurality of servers corresponding to the audio/video processing platform.
A sending module 303, configured to send the address to the control platform, so that the control platform establishes a hypertext transfer protocol HTTP connection with the target server according to the address.
The receiving module 301 is further configured to receive a uniform resource locator URL address of an RTMP stream sent by the control platform.
The sending module 303 is further configured to send a screenshot intercepting instruction to the control platform, where the screenshot intercepting instruction is used to instruct the control platform to perform picture interception and audio/video file interception on the real-time RTMP stream of the client side indicated by the URL address, and send an intercepted target picture and an intercepted target audio/video file to the target server, where the picture interception frequency and the audio/video file interception frequency are different frequencies respectively set according to the lowest supported intercepting frequency of the control platform.
In the docking processing apparatus described in fig. 3, an HTTP + RTMP manner is adopted to realize the docking of the audio and video streams. At the same time, a load balancing interface is provided, and different screenshot frequencies and audio/video interception frequencies are set according to the lowest supported interception frequency of the control platform, which not only balances the load of each server of the audio and video processing platform, but also better supports high concurrency and high availability of the service and ensures the stability of the real-time interaction of the audio and video streams. In addition, pictures/audio/video files are intercepted rather than the audio/video stream; compared with intercepting the audio/video stream, this is more efficient and also saves storage space.
Referring to fig. 4, fig. 4 is a functional block diagram of another docking processing apparatus according to another preferred embodiment of the present disclosure. In some embodiments, the docking processing device runs in a control platform. The docking processing means may comprise a plurality of functional modules consisting of program code segments. Program codes of the program segments in the docking processing apparatus may be stored in the memory and executed by at least one processor to perform part or all of the steps in the method for docking audio/video streams based on artificial intelligence AI described in fig. 2, for which reference is specifically made to the relevant description in fig. 2, which is not repeated herein.
In this embodiment, the docking processing apparatus may be divided into a plurality of functional modules according to the functions executed by the docking processing apparatus. The functional module may include: a transmission module 401, a setup module 402 and an intercept module 403. The module referred to herein is a series of computer program segments capable of being executed by at least one processor and capable of performing a fixed function and is stored in memory.
The transmission module 401 is configured to send an address acquisition request to the audio/video processing platform after detecting an incoming call of a user.
The transmission module 401 is further configured to receive a server address returned by the audio/video processing platform, and send a connection request to a target server corresponding to the server address.
An establishing module 402, configured to establish a hypertext transfer protocol (HTTP) connection with the target server, where the connection request carries a uniform resource locator (URL) address of an RTMP stream.
And an intercepting module 403, configured to respond to the screenshot intercepting instruction when receiving the screenshot intercepting instruction sent by the audio/video processing platform, and perform picture interception and audio/video file interception on the real-time RTMP stream on the client side indicated by the URL address.
The transmission module 401 is further configured to send the captured target picture and the target audio/video file to the target server.
In the docking processing device described in fig. 4, the docking of the audio and video stream between the control platform and the audio and video processing platform is realized in an HTTP + RTMP stream manner, the audio and video processing platform can acquire the audio and video related pictures and files at the client side, and meet the requirements of flexibly processing various screenshots in the service, and at the same time, the pictures, the sounds and the video files can be acquired from the audio and video stream at the client side according to different frequencies, which not only ensures the stability of the control platform, but also better supports the high concurrency and the high availability of the service.
As shown in fig. 5, fig. 5 is a schematic structural diagram of an audio/video processing platform according to a preferred embodiment of the method for implementing docking processing of audio/video streams based on artificial intelligence AI according to the present invention. The audio/video processing platform 5 comprises a memory 51, at least one processor 52, a computer program 53 stored in the memory 51 and executable on the at least one processor 52, and at least one communication bus 54.
Those skilled in the art will appreciate that the schematic diagram shown in fig. 5 is merely an example of the audiovisual processing platform 5, and does not constitute a limitation of the audiovisual processing platform 5, and may include more or less components than those shown, or combine some components, or different components, for example, the audiovisual processing platform 5 may further include an input/output device, a network access device, and the like.
The at least one Processor 52 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The processor 52 may be a microprocessor or the processor 52 may be any conventional processor, and the processor 52 is a control center of the audio/video processing platform 5 and connects various parts of the whole audio/video processing platform 5 by using various interfaces and lines.
The memory 51 may be configured to store the computer program 53 and/or the module/unit, and the processor 52 implements various functions of the audio/video processing platform 5 by running or executing the computer program and/or the module/unit stored in the memory 51 and calling data stored in the memory 51. The memory 51 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program (such as a sound playing function, an image playing function, etc.) required by at least one function, and the like; the storage data area may store data (such as audio data) created according to the use of the audio-video processing platform 5, and the like. Further, the memory 51 may include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid state storage device.
With reference to fig. 1, the memory 51 in the audio/video processing platform 5 stores a plurality of instructions to implement a method for interfacing audio/video streams based on artificial intelligence AI, and the processor 52 can execute the plurality of instructions to implement:
receiving an address acquisition request sent by a control platform;
calling a load balancing interface, and determining the address of a target server in an idle state from a plurality of servers corresponding to the audio and video processing platform;
sending the address to the control platform to enable the control platform to establish a hypertext transfer protocol (HTTP) connection with the target server according to the address;
receiving a Uniform Resource Locator (URL) address of a Real-Time Messaging Protocol (RTMP) stream sent by the control platform;
and sending a screenshot interception instruction to the control platform, wherein the screenshot interception instruction is used for instructing the control platform to perform picture interception and audio/video file interception on the real-time RTMP stream of the client side indicated by the URL address, and to send the intercepted target picture and target audio/video file to the target server, and wherein the picture interception frequency and the audio/video file interception frequency are different frequencies respectively set according to the lowest supported interception frequency of the control platform.
Specifically, the processor 52 may refer to the description of the relevant steps in the embodiment corresponding to fig. 1, and details thereof are not repeated herein.
In the audio/video processing platform 5 described in fig. 5, an HTTP + RTMP manner is adopted to realize the docking of the audio and video streams. At the same time, a load balancing interface is provided, and different screenshot frequencies and audio/video interception frequencies are set according to the lowest supported interception frequency of the control platform, which not only balances the load of each server of the audio and video processing platform, but also better supports high concurrency and high availability of the service and ensures the stability of the real-time interaction of the audio and video streams. In addition, pictures/audio/video files are intercepted rather than the audio/video stream; compared with intercepting the audio/video stream, this is more efficient and also saves storage space.
As shown in fig. 6, fig. 6 is a schematic structural diagram of a control platform according to a preferred embodiment of the method for implementing artificial intelligence AI-based audio/video stream docking processing. The control platform 6 comprises a memory 61, at least one processor 62, a computer program 63 stored in the memory 61 and executable on the at least one processor 62, and at least one communication bus 64.
Those skilled in the art will appreciate that the schematic diagram shown in fig. 6 is merely an example of the control platform 6, and does not constitute a limitation of the control platform 6, and may include more or less components than those shown, or combine some components, or different components, for example, the control platform 6 may further include input and output devices, network access devices, and the like.
The at least one Processor 62 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic, discrete hardware components, etc. The processor 62 may be a microprocessor or the processor 62 may be any conventional processor or the like, and the processor 62 is a control center of the control platform 6 and connects various parts of the entire control platform 6 by various interfaces and lines.
The memory 61 may be used for storing the computer programs 63 and/or modules/units, and the processor 62 may implement various functions of the control platform 6 by running or executing the computer programs and/or modules/units stored in the memory 61 and calling data stored in the memory 61. The memory 61 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data) created according to the use of the control platform 6, and the like. Further, the memory 61 may include a non-volatile memory, such as a hard disk, a memory, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), at least one magnetic disk storage device, a Flash memory device, or other non-volatile solid state storage device.
With reference to fig. 2, the memory 61 in the control platform 6 stores a plurality of instructions to implement a method for docking artificial intelligence AI-based audio/video streams, and the processor 62 can execute the plurality of instructions to implement:
after detecting the incoming call of the user, sending an address acquisition request to an audio and video processing platform;
receiving a server address returned by the audio and video processing platform, sending a connection request to a target server corresponding to the server address, and establishing a hypertext transfer protocol (HTTP) connection with the target server, wherein the connection request carries a Uniform Resource Locator (URL) address of a Real-Time Messaging Protocol (RTMP) stream;
when a screenshot interception instruction sent by the audio and video processing platform is received, responding to the screenshot interception instruction, and carrying out picture interception and audio and video file interception on the real-time RTMP stream of the client side indicated by the URL address;
and sending the intercepted target picture and the target audio and video file to the target server.
Specifically, the processor 62 may refer to the description of the relevant steps in the embodiment corresponding to fig. 2, which is not repeated herein.
In the control platform 6 described in fig. 6, the audio/video stream interfacing between the control platform and the audio/video processing platform is realized in an HTTP + RTMP stream manner, and the audio/video processing platform can acquire audio/video related pictures and files at the client side, thereby satisfying various screenshot interception requirements in a flexible processing service, and simultaneously acquiring pictures, sounds, and video files from the audio/video stream at the client side according to different frequencies, thereby ensuring the stability of the control platform and better supporting high concurrency and high availability of the service.
The integrated modules/units of the audio/video processing platform 5/control platform 6 may be stored in a computer readable storage medium if they are implemented in the form of software functional units and sold or used as independent products. Based on such understanding, all or part of the flow of the method according to the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium, and when the computer program is executed by a processor, the steps of the method embodiments may be implemented. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, U-disk, removable hard disk, magnetic disk, optical disk, computer memory, and Read-only memory (ROM).
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is only one logical functional division, and other divisions may be realized in practice.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional module.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference signs in the claims shall not be construed as limiting the claim concerned. The units or means recited in the system claims may also be implemented by software or hardware.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit them. Although the present invention has been described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the present invention without departing from the spirit and scope of those technical solutions.

Claims (10)

1. An audio and video stream docking processing method based on artificial intelligence AI, applied to an audio and video processing platform, characterized in that the method comprises the following steps:
receiving an address acquisition request sent by a control platform;
calling a load balancing interface, and determining the address of a target server in an idle state from a plurality of servers corresponding to the audio and video processing platform;
sending the address to the control platform to enable the control platform to establish a hypertext transfer protocol (HTTP) connection with the target server according to the address;
receiving a Uniform Resource Locator (URL) address of a Real-Time Messaging Protocol (RTMP) stream sent by the control platform;
and sending a screenshot interception instruction to the control platform, wherein the screenshot interception instruction is used for instructing the control platform to perform picture interception and audio/video file interception on the real-time RTMP stream of the client side indicated by the URL address and to send an intercepted target picture and target audio/video file to the target server, and wherein the picture interception frequency and the audio/video file interception frequency are different frequencies respectively set according to the lowest supported interception frequency of the control platform.
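For orientation only, the claim 1 sequence on the audio and video processing platform side can be pictured as two small HTTP handlers. The sketch below is a minimal illustration, not the patented implementation: the Flask framework, the endpoint paths, the JSON field names and the fixed interception intervals are all assumptions.

    # Minimal sketch of the claim 1 flow on the audio/video processing platform side.
    # Framework choice, routes, field names and interval values are assumptions.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    def choose_idle_server():
        # Stand-in for the load balancing interface of claim 2.
        return "10.0.0.12:8080"

    @app.route("/address", methods=["POST"])
    def address_acquisition():
        # Receive the address acquisition request from the control platform and
        # answer with the address of a target server in an idle state.
        return jsonify({"target_server": choose_idle_server()})

    @app.route("/stream", methods=["POST"])
    def register_rtmp_stream():
        # Receive the URL address of the client-side RTMP stream and reply with a
        # screenshot interception instruction carrying two different frequencies.
        rtmp_url = request.get_json()["rtmp_url"]
        return jsonify({
            "action": "intercept",
            "rtmp_url": rtmp_url,
            "picture_interval_s": 2,    # picture interception frequency (assumed)
            "av_file_interval_s": 30,   # audio/video file interception frequency (assumed)
        })

    if __name__ == "__main__":
        app.run(port=5000)
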
2. The audio and video stream docking processing method based on artificial intelligence AI according to claim 1, wherein said calling a load balancing interface and determining the address of a target server in an idle state from a plurality of servers corresponding to the audio and video processing platform comprises:
acquiring current index parameters of a plurality of servers corresponding to the audio and video processing platform, wherein the index parameters comprise a current number of accessed videos, memory usage and a central processing unit (CPU) occupancy rate;
performing a weighted calculation on the current index parameters of each server according to a preset load balancing algorithm to obtain a weighted value;
and determining a target server in an idle state from the plurality of servers according to the weighted value, and acquiring the address of the target server.
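One way to picture the weighted calculation of claim 2 is sketched below; the metric names, the normalisation to the range [0, 1] and the weight values are illustrative assumptions, not figures from the patent.

    # Sketch of the weighted idle-server selection in claim 2 (weights are assumed).
    def pick_idle_server(servers, weights=(0.5, 0.3, 0.2)):
        # Each server reports its current index parameters normalised to [0, 1]:
        # number of accessed videos, memory usage and CPU occupancy rate.
        w_videos, w_mem, w_cpu = weights

        def weighted_load(s):
            return (w_videos * s["video_ratio"]
                    + w_mem * s["memory_ratio"]
                    + w_cpu * s["cpu_ratio"])

        # The server with the smallest weighted value is treated as "idle".
        best = min(servers, key=weighted_load)
        return best["address"]

    servers = [
        {"address": "10.0.0.11:8080", "video_ratio": 0.8, "memory_ratio": 0.6, "cpu_ratio": 0.7},
        {"address": "10.0.0.12:8080", "video_ratio": 0.2, "memory_ratio": 0.3, "cpu_ratio": 0.1},
    ]
    print(pick_idle_server(servers))   # -> 10.0.0.12:8080
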
3. The audio and video stream docking processing method based on artificial intelligence AI according to claim 1, wherein the method further comprises:
acquiring the lowest supported interception frequency of the control platform;
setting a normal interception frequency and an abnormal interception frequency according to the lowest supported interception frequency;
the sending of the screenshot intercepting instruction to the control platform comprises:
and sending a screenshot interception instruction carrying the normal interception frequency and the abnormal interception frequency to the control platform, so that the control platform performs the interception operation on a real-time RTMP stream in which a face is present according to the normal interception frequency and performs the interception operation on a real-time RTMP stream in which no face is present according to the abnormal interception frequency.
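Claim 3 ties both capture frequencies to the lowest interception frequency the control platform can support and switches between them depending on whether a face is visible. The sketch below reflects one possible reading; the multiplier between the two frequencies and the face-detection hook are assumptions.

    # Illustrative derivation of the normal and abnormal interception frequencies.
    def set_interception_intervals(lowest_supported_hz, abnormal_factor=5):
        shortest_interval = 1.0 / lowest_supported_hz    # fastest pace the platform sustains
        normal_interval = shortest_interval               # face present: capture at full pace
        abnormal_interval = shortest_interval * abnormal_factor   # no face: capture less often
        return normal_interval, abnormal_interval

    def next_interval(face_present, normal_interval, abnormal_interval):
        return normal_interval if face_present else abnormal_interval

    normal, abnormal = set_interception_intervals(lowest_supported_hz=0.5)
    print(next_interval(True, normal, abnormal))    # 2.0 seconds between captures
    print(next_interval(False, normal, abnormal))   # 10.0 seconds between captures
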
4. The audio and video stream docking processing method based on artificial intelligence AI according to claim 1, wherein the method further comprises:
receiving a manual transfer request carrying an RTMP stream identifier sent by the control platform;
and in response to the manual transfer request, sending a screenshot interception ending instruction for the real-time RTMP stream corresponding to the RTMP stream identifier to the control platform, so that the control platform stops the screenshot interception operation.
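The manual-transfer handling of claim 4 amounts to looking up the identified stream and telling the control platform to stop capturing it. The dictionaries and message layout below are assumptions used only to make the exchange concrete.

    # Sketch of claim 4: end screenshot interception when a call is transferred manually.
    active_streams = {"rtmp-42": {"capturing": True}}

    def handle_manual_transfer(stream_id):
        # Processing platform side: build the interception-ending instruction.
        if stream_id in active_streams:
            return {"action": "end_interception", "stream_id": stream_id}
        return {"action": "noop", "stream_id": stream_id}

    def apply_instruction(instruction):
        # Control platform side: stop the screenshot interception for that stream.
        if instruction["action"] == "end_interception":
            active_streams[instruction["stream_id"]]["capturing"] = False

    apply_instruction(handle_manual_transfer("rtmp-42"))
    print(active_streams)   # {'rtmp-42': {'capturing': False}}
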
5. The audio and video stream docking processing method based on artificial intelligence AI according to claim 1, wherein the method further comprises:
receiving an HTTP DELETE request sent by the control platform, wherein the DELETE request carries an RTMP stream identifier;
and sending a closing instruction to the control platform, wherein the closing instruction is used for instructing the control platform to close the screenshot interception operation on the audio and video corresponding to the RTMP stream identifier.
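Claim 5 maps the shutdown of a stream onto an ordinary HTTP DELETE. Below is a minimal control-platform-side sketch; the route, the base URL and the shape of the returned closing instruction are hypothetical, since the claim only states that the request carries the RTMP stream identifier.

    # Sketch of claim 5: ask the processing platform to close interception via HTTP DELETE.
    import requests

    def request_interception_close(platform_base_url, stream_id):
        # "/streams/<id>" is an assumed route; only the carried identifier comes from the claim.
        resp = requests.delete(f"{platform_base_url}/streams/{stream_id}", timeout=5)
        resp.raise_for_status()
        return resp.json()   # expected to contain the closing instruction

    # Hypothetical usage:
    # instruction = request_interception_close("http://av-platform.example", "rtmp-42")
    # if instruction.get("action") == "close_interception":
    #     stop_capture("rtmp-42")   # stop_capture is a placeholder on the control platform
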
6. An audio and video stream docking processing method based on artificial intelligence AI, applied to a control platform, characterized in that the method comprises the following steps:
after detecting an incoming call from a user, sending an address acquisition request to an audio and video processing platform;
receiving a server address returned by the audio and video processing platform, and sending a connection request to a target server corresponding to the server address to establish a hypertext transfer protocol (HTTP) connection with the target server, wherein the connection request carries a Uniform Resource Locator (URL) address of a Real-Time Messaging Protocol (RTMP) stream;
when a screenshot interception instruction sent by the audio and video processing platform is received, responding to the screenshot interception instruction, and carrying out picture interception and audio and video file interception on the real-time RTMP stream of the client side indicated by the URL address;
and sending the intercepted target picture and the target audio and video file to the target server.
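Put together, the control-platform steps of claim 6 form a short pipeline: fetch a server address, hand over the RTMP URL over HTTP, then capture on instruction. The sketch below illustrates that pipeline with assumed endpoints and uses ffmpeg for the actual capture, which the patent does not prescribe.

    # Sketch of the control platform flow in claim 6 (endpoints and ffmpeg use are assumptions).
    import subprocess
    import requests

    def on_incoming_call(av_platform_url, rtmp_url):
        # 1. Send the address acquisition request after the user's incoming call is detected.
        addr = requests.post(f"{av_platform_url}/address", timeout=5).json()["target_server"]
        # 2. Establish the HTTP connection and pass along the RTMP stream's URL address.
        instr = requests.post(f"http://{addr}/stream",
                              json={"rtmp_url": rtmp_url}, timeout=5).json()
        # 3. On the screenshot interception instruction, grab a picture and a short clip.
        if instr.get("action") == "intercept":
            subprocess.run(["ffmpeg", "-y", "-i", rtmp_url,
                            "-vframes", "1", "target_picture.jpg"], check=True)
            subprocess.run(["ffmpeg", "-y", "-i", rtmp_url,
                            "-t", "10", "-c", "copy", "target_clip.flv"], check=True)
            # 4. The resulting files would then be uploaded to the target server.
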
7. The audio and video stream docking processing method based on artificial intelligence AI according to claim 6, wherein the method further comprises:
acquiring the interaction duration of the audio and video stream of the control platform and the audio and video processing platform;
judging whether the interaction duration is greater than a preset duration;
if the interaction duration is greater than the preset duration, establishing a new component in the control platform for separating the audio and video stream receiving function from the screenshot interception function;
and carrying out interception operation according to the received normal interception frequency and abnormal interception frequency, wherein the normal interception frequency and the abnormal interception frequency are set according to the lowest supported interception frequency of the control platform.
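Claim 7 separates stream reception from screenshot interception once the interaction has run longer than a preset duration. The thread-based sketch below is only one way to realise such a split; the threshold value and the placeholders inside the loop are assumptions.

    # Sketch of claim 7: move interception into its own component after a long interaction.
    import threading
    import time

    PRESET_DURATION_S = 600   # assumed threshold; the patent only says "preset duration"

    def intercept_screenshots(stop, normal_interval, abnormal_interval, face_present):
        while not stop.is_set():
            time.sleep(normal_interval if face_present() else abnormal_interval)
            # placeholder for the actual picture / audio-video file interception

    def maybe_split(started_at, stop, face_present):
        # If the interaction duration exceeds the preset duration, hand interception
        # to a new worker so that stream reception runs undisturbed.
        if time.time() - started_at > PRESET_DURATION_S:
            worker = threading.Thread(target=intercept_screenshots,
                                      args=(stop, 2, 10, face_present), daemon=True)
            worker.start()
            return worker
        return None
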
8. An audio and video processing platform, characterized in that the audio and video processing platform comprises a processor and a memory, wherein the processor is configured to execute a computer program stored in the memory to implement the audio and video stream docking processing method based on artificial intelligence AI according to any one of claims 1 to 5.
9. A control platform, characterized in that the control platform comprises a processor and a memory, wherein the processor is configured to execute a computer program stored in the memory to implement the audio and video stream docking processing method based on artificial intelligence AI according to claim 6 or 7.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores at least one instruction which, when executed by a processor, implements the audio and video stream docking processing method based on artificial intelligence AI according to any one of claims 1 to 7.
CN202010585004.6A 2020-06-23 2020-06-23 Audio and video stream docking processing method based on artificial intelligence AI and related equipment Active CN111787418B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010585004.6A CN111787418B (en) 2020-06-23 2020-06-23 Audio and video stream docking processing method based on artificial intelligence AI and related equipment

Publications (2)

Publication Number Publication Date
CN111787418A true CN111787418A (en) 2020-10-16
CN111787418B CN111787418B (en) 2023-09-22

Family

ID=72759692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010585004.6A Active CN111787418B (en) 2020-06-23 2020-06-23 Audio and video stream docking processing method based on artificial intelligence AI and related equipment

Country Status (1)

Country Link
CN (1) CN111787418B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103793932A * 2014-02-18 2014-05-14 UCWeb Inc. Method and device for storing image and text in mobile terminal browser
CN104023251A * 2014-06-13 2014-09-03 Tencent Technology (Shenzhen) Co., Ltd. Interaction method and system based on video
CN107124453A * 2016-11-30 2017-09-01 Xi'an Datang Telecom Co., Ltd. Load balancing system with stacked deployment of platform interworking gateways and video call method
CN107911737A * 2017-11-28 2018-04-13 Tencent Technology (Shenzhen) Co., Ltd. Media content display method and apparatus, computing device and storage medium

Also Published As

Publication number Publication date
CN111787418B (en) 2023-09-22

Similar Documents

Publication Publication Date Title
EP3417380B1 (en) Recording web conferences
US10244204B2 (en) Dynamic projection of communication data
WO2019114330A1 (en) Video playback method and apparatus, and terminal device
CN108200444B (en) Video live broadcast method, device and system
CN111614954A (en) Index acquisition processing method and device for streaming media, computer and storage medium
CN110113298B (en) Data transmission method, device, signaling server and computer readable medium
CN113766146B (en) Audio and video processing method and device, electronic equipment and storage medium
CN108337556B (en) Method and device for playing audio-video file
CN114257572B (en) Data processing method, device, computer readable medium and electronic equipment
CN113473165A (en) Live broadcast control system, live broadcast control method, device, medium and equipment
CN111541905B (en) Live broadcast method and device, computer equipment and storage medium
CN112565877A (en) Screen projection method and system, electronic equipment and storage medium
US11843569B2 (en) Filtering group messages
CN111885351A (en) Screen display method and device, terminal equipment and storage medium
CN111787418B (en) Audio and video stream docking processing method based on artificial intelligence AI and related equipment
CN113329080B (en) Video playing method and device based on WebSocket, electronic equipment and storage medium
JP7220859B2 (en) Systems and methods for exchanging ultra-short media content
CN112291573B (en) Live stream pushing method and device, electronic equipment and computer readable medium
CN114077409A (en) Screen projection method and device, electronic equipment and computer readable medium
CN105657442A (en) Video file generation method and system
CN110072149B (en) Data processing method and device for video network
KR20220132391A (en) Method, Apparatus and System of managing contents in Multi-channel Network
CN113411634A (en) Video stream operation method and device, storage medium and electronic device
CN115915382A (en) Communication method and related equipment for multimedia stream synchronization and communication system
KR20230115526A (en) A system for distributing a presentation that can be searched and reproduced by dividing it by semantic unit of the presentation and its operation method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230818

Address after: Building 1021, Building 3, Xunmei Science and Technology Plaza, No. 8 Keyuan Road, Science and Technology Park Community, Yuehai Street, Nanshan District, Shenzhen City, Guangdong Province, 518000

Applicant after: SHENZHEN SIYOU TECHNOLOGY CO.,LTD.

Address before: 518000 Room 202, block B, aerospace micromotor building, No.7, Langshan No.2 Road, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Applicant before: Shenzhen LIAN intellectual property service center

Effective date of registration: 20230818

Address after: 518000 Room 202, block B, aerospace micromotor building, No.7, Langshan No.2 Road, Xili street, Nanshan District, Shenzhen City, Guangdong Province

Applicant after: Shenzhen LIAN intellectual property service center

Address before: 518000 Room 201, building A, No. 1, Qian Wan Road, Qianhai Shenzhen Hong Kong cooperation zone, Shenzhen, Guangdong (Shenzhen Qianhai business secretary Co., Ltd.)

Applicant before: PING AN PUHUI ENTERPRISE MANAGEMENT Co.,Ltd.

TA01 Transfer of patent application right
GR01 Patent grant
GR01 Patent grant