Disclosure of Invention
In order to overcome at least the above disadvantages in the prior art, an object of the present invention is to provide a media data processing method and a big data platform based on remote interaction and cloud computing, which can update the coding control process of the current data coding control deep learning model based on a coding window with an association relationship, so that the coding process can be adaptively adjusted in the next coding control, and the coding effect is further improved.
In a first aspect, an embodiment of the present invention provides a media data processing method based on remote interaction and cloud computing, which is applied to a big data platform, where the big data platform is in communication connection with a plurality of online video service terminals and in communication connection with a video service interaction terminal for performing video service interaction at each online video service terminal, and the method includes:
performing data coding control on online interactive video information in a video interactive channel established between the online video service terminal and the corresponding video service interactive terminal through a data coding control deep learning model in a video coding plug-in of each interactive service element;
determining a first coding window and a second coding window in a coding video frame region of the online video service terminal currently performing online video interaction in a data coding control process, wherein the coding video frame region is a coding region corresponding to a preset interaction progress label in a video interaction process;
in the video interaction process, determining a third coding window corresponding to the first coding window and matched with the video interaction process, and a fourth coding window corresponding to the second coding window and matched with the video interaction process;
and determining coding linkage window information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window and the interaction progress label, and updating the coding control process of the data coding control deep learning model according to the coding linkage window information in the video interaction process.
In a possible implementation manner of the first aspect, the step of determining, according to the first encoding window, the second encoding window, the third encoding window, the fourth encoding window, and the interaction progress label, encoding linkage window information in the video interaction process includes:
and determining coding linkage detection information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window and the coding video frame region, wherein the coding video frame region comprises an interaction progress label, and the coding linkage detection information comprises coding linkage window information.
In a possible implementation manner of the first aspect, the step of determining, according to the first encoding window, the second encoding window, the third encoding window, the fourth encoding window, and the encoded video frame region, encoding linkage detection information in the video interaction process includes:
updating the coding window of the coding video frame region according to a first coding association relation between the first coding window and the second coding window and a second coding association relation between the third coding window and the fourth coding window, so that the first coding association relation is equal to the second coding association relation;
and determining coding linkage detection information in the video interaction process according to the updated coding video frame region of the coding window, the third coding window and the fourth coding window.
In a possible implementation manner of the first aspect, the step of determining the coding linkage detection information in the video interaction process according to the updated coding video frame region of the coding window, the third coding window, and the fourth coding window includes:
calculating coding linkage detection information in the video interaction process according to the coding offset information corresponding to the first coding window and the second coding window in the coding video frame region after the coding window update, the interaction progress label in the coding video frame region after the coding window update, the edge point positions of the coding video frame region after the coding window update, the third coding window and the fourth coding window; or
updating, in the video interaction process, the coding video frame region after the coding window update, so that the extension element corresponding to the first coding window in the updated coding video frame region coincides with the extension element corresponding to the third coding window and the extension element corresponding to the second coding window in the updated coding video frame region coincides with the extension element corresponding to the fourth coding window, and determining the coding linkage detection information in the video interaction process according to the offset information of the extension elements.
In a possible implementation manner of the first aspect, the determining a first encoding window and a second encoding window in an encoding video frame region where the online video service terminal currently performs online video interaction in the data encoding control process includes:
and determining, in the coding video frame region of the online video service terminal currently performing online video interaction in the data coding control process, a first coding window and a second coding window that have a cooperative coding relationship, or a first coding window and a second coding window that have a coding timing context relationship.
In a possible implementation manner of the first aspect, the determining, in the video interaction process, a third encoding window corresponding to the first encoding window and matching with the video interaction process, and a fourth encoding window corresponding to the second encoding window and matching with the video interaction process includes:
in the video interaction process, determining a third encoding window whose encoding tag type corresponds to the encoding tag type of the first encoding window and which matches the video interaction process, and a fourth encoding window whose encoding tag type corresponds to the encoding tag type of the second encoding window and which matches the video interaction process.
In a possible implementation manner of the first aspect, the updating, according to the information of the coding linkage window in the video interaction process, the coding control process of the data coding control deep learning model includes:
acquiring a coding linkage window object represented by the coding linkage window information in the video interaction process, and acquiring, for the coding control object of each corresponding coding control node in the coding control process of the data coding control deep learning model, first reconstruction coding speed information of the coding control object of the coding control node, wherein the first reconstruction coding speed information is used for representing the coding mode configuration and the coding compression component configuration of the coding control sub-process of the coding control node;
performing linkage updating on the first reconstruction coding speed information according to each coding complexity characteristic associated with the coding linkage window object, to obtain first coding mode linkage information and coding compression component linkage information corresponding to the first coding mode linkage information;
acquiring first coding control scene characteristics and scene variable characteristics of a coding control object of the coding control node, and extracting scene characteristic vectors of the first coding control scene characteristics, wherein the scene characteristic vectors of the first coding control scene characteristics comprise specific scene characteristic vectors;
acquiring specific scene feature vectors of historical coding control objects related to the coding control process of the data coding control deep learning model, and updating the specific scene feature vectors of the first coding control scene features according to the specific scene feature vectors of the historical coding control objects, so that the correlation sequence between the specific scene feature vectors in the first coding control scene features is matched with the correlation sequence between the specific scene feature vectors in the preset historical coding control objects;
after the updating is finished, obtaining a scene feature vector of a second coding control scene feature, and generating a second coding control scene feature according to the scene feature vector of the second coding control scene feature;
according to the scene variable feature and the scene feature vector of the second coding control scene feature, searching and obtaining coding compression component linkage information matched with the scene variable feature and first coding mode linkage information corresponding to the coding compression component linkage information, and updating the first coding mode linkage information corresponding to the coding compression component linkage information according to the scene feature vector of the second coding control scene feature to obtain second coding mode linkage information;
performing fusion processing on the second coding mode linkage information and the second coding control scene characteristics to obtain fusion coding reference characteristics matched with each coding complexity characteristic associated with the coding control object of the coding control node and the coding linkage window object;
performing linkage updating on the fusion coding reference characteristics of each coding control node to obtain a linkage updating result of each coding control node, where the linkage updating result comprises linkage updating content information of the coding control object of the coding control node matched with each coding complexity characteristic associated with the coding linkage window object;
and updating the linkage updating result of each coding control node to the coding control process of the data coding control deep learning model.
In a possible implementation manner of the first aspect, the step of performing data coding control on online interactive video information in a video interactive channel established between the online video service terminal and a corresponding video service interactive terminal through a data coding control deep learning model in a video coding plug-in of each interactive service element includes:
acquiring a video interaction service process sent by the video service interaction terminal and established for a target online video service terminal, and acquiring corresponding interactive service element information and interactive video stream information from the video interaction service process;
extracting a preset video capture response control of each interactive service element in the interactive service element information relative to the target online video service terminal, determining a data coding control deep learning model corresponding to the interactive video stream information according to the preset video capture response control, and, after each data coding control deep learning model is respectively associated to the video coding plug-in of the corresponding interactive service element, transmitting the video coding configuration information of the video service interaction terminal to the target online video service terminal and enabling the target online video service terminal to record the video coding configuration information of the video service interaction terminal into a coding cooperation terminal list corresponding to a video service interaction list, wherein the video service interaction list comprises a plurality of video sharing terminals available for the video service interaction terminal to perform coding cooperation;
when a video interaction service request, sent by the video service interaction terminal, for a target interactive service element corresponding to the target online video service terminal is received, requesting the target online video service terminal to establish a video interaction channel between the video service interaction terminal and a target video sharing terminal corresponding to the video interaction service request, and performing data coding control on the online interactive video information in the video interaction channel through the data coding control deep learning model in the video coding plug-in of the target interactive service element.
In a possible implementation manner of the first aspect, the step of extracting a preset video capture response control of each interactive service element in the interactive service element information with respect to the target online video service terminal, and determining a data coding control deep learning model corresponding to the interactive video stream information according to the preset video capture response control includes:
extracting a preset video capture response control of each interactive service element in the interactive service element information relative to the target online video service terminal from a preset video capture response control library of the target online video service terminal;
determining a macroblock coding mode of the interactive video stream information in a video rendering thread control according to the preset video capture response control;
and determining a data coding control deep learning model corresponding to the interactive video stream information according to the macroblock coding mode and the intra-frame prediction direction of each video rendering thread node in the video rendering thread control.
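By way of a purely illustrative, non-limiting sketch, the model determination from the macroblock coding mode and the intra-frame prediction directions of the rendering-thread nodes could be realized as a table lookup. The table contents and every name below (`select_model`, `MODEL_TABLE`, the mode and direction strings) are assumptions for illustration, not the claimed mapping:

```python
# Hypothetical mapping from (macroblock mode, dominant intra prediction
# direction) to a coding-control model identifier. Illustrative only.
MODEL_TABLE = {
    ("intra_16x16", "vertical"): "model_low_motion",
    ("intra_4x4", "horizontal"): "model_detail",
    ("inter_p", "dc"): "model_motion",
}

def select_model(macroblock_mode: str, prediction_directions: list[str]) -> str:
    """Pick a model id from the macroblock coding mode and the dominant
    intra-frame prediction direction across the rendering-thread nodes."""
    if not prediction_directions:
        return "model_default"
    # The most frequent prediction direction among the thread nodes wins.
    dominant = max(set(prediction_directions), key=prediction_directions.count)
    return MODEL_TABLE.get((macroblock_mode, dominant), "model_default")
```

In this sketch an unknown mode/direction pair simply falls back to a default model rather than failing.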
In a second aspect, an embodiment of the present invention further provides a media data processing apparatus based on remote interaction and cloud computing, which is applied to a big data platform, where the big data platform is communicatively connected to a plurality of online video service terminals and is communicatively connected to a video service interaction terminal for performing video service interaction at each online video service terminal, and the apparatus includes:
the coding control module is used for controlling data coding of online interactive video information in a video interactive channel established between the online video service terminal and the corresponding video service interactive terminal through a data coding control deep learning model in the video coding plug-in of each interactive service element;
a first determining module, configured to determine a first coding window and a second coding window in a coding video frame region of the online video service terminal currently performing online video interaction in the data coding control process, where the coding video frame region is a coding region corresponding to a preset interaction progress label in the video interaction process;
a second determining module, configured to determine, in the video interaction process, a third encoding window corresponding to the first encoding window and matching with the video interaction process, and a fourth encoding window corresponding to the second encoding window and matching with the video interaction process;
and the updating module is used for determining coding linkage window information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window and the interaction progress label, and updating the coding control process of the data coding control deep learning model according to the coding linkage window information in the video interaction process.
In a third aspect, an embodiment of the present invention further provides a media data processing system based on remote interaction and cloud computing, where the media data processing system based on remote interaction and cloud computing includes a big data platform and a plurality of online video service terminals in communication connection with the big data platform, and the big data platform is further in communication connection with a video service interaction terminal for performing video service interaction at each online video service terminal;
the big data platform is used for carrying out data coding control on online interactive video information in a video interactive channel established between the online video service terminal and the corresponding video service interactive terminal through a data coding control deep learning model in a video coding plug-in of each interactive service element;
the big data platform is used for determining a first coding window and a second coding window in a coding video frame region of the online video service terminal for performing online video interaction currently in the data coding control process, wherein the coding video frame region is a coding region corresponding to a preset interaction proceeding label in the video interaction process;
the big data platform is used for determining a third coding window corresponding to the first coding window and matched with the video interaction process and a fourth coding window corresponding to the second coding window and matched with the video interaction process in the video interaction process;
and the big data platform is used for determining coding linkage window information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window and the interaction progress label, and updating the coding control process of the data coding control deep learning model according to the coding linkage window information in the video interaction process.
In a fourth aspect, an embodiment of the present invention further provides a big data platform, where the big data platform includes a processor, a machine-readable storage medium, and a network interface, where the machine-readable storage medium, the network interface, and the processor are connected through a bus system, the network interface is configured to be communicatively connected to at least one online video service terminal, the machine-readable storage medium is configured to store a program, an instruction, or a code, and the processor is configured to execute the program, the instruction, or the code in the machine-readable storage medium to perform the method for processing media data based on remote interaction and cloud computing in the first aspect or any possible design of the first aspect.
In a fifth aspect, an embodiment of the present invention provides a computer-readable storage medium, where instructions are stored, and when executed, cause a computer to perform the method for processing media data based on remote interaction and cloud computing in the first aspect or any one of the possible designs of the first aspect.
Based on any one of the above aspects, in the video interaction process, by determining a third coding window corresponding to the first coding window and matching the video interaction process, and a fourth coding window corresponding to the second coding window and matching the video interaction process, the coding linkage window information in the video interaction process is determined according to the first coding window, the second coding window, the third coding window, the fourth coding window and the interaction progress label, and the coding control process of the data coding control deep learning model is updated accordingly. Therefore, the coding control process of the current data coding control deep learning model can be updated based on the coding windows having an association relation, so that the coding process can be adaptively adjusted in the next coding control, and the coding effect is further improved.
Detailed Description
The present invention is described in detail below with reference to the drawings, and the detailed operation method in the following method embodiments can also be applied to the device embodiments or the system embodiments.
Fig. 1 is an interaction diagram of a media data processing system 10 based on remote interaction and cloud computing according to an embodiment of the present invention. The media data processing system 10 may include a big data platform 100 and a plurality of online video service terminals 200 (only two are shown in Fig. 1) communicatively connected to the big data platform 100; the big data platform 100 is further communicatively connected to video service interaction terminals 300 (only two are shown in Fig. 1) for performing video service interaction at each online video service terminal 200. The media data processing system 10 shown in Fig. 1 is merely one possible example; in other possible embodiments, the system may include only some of the components shown in Fig. 1 or may further include other components.
In this embodiment, the online video service terminal 200 may be configured to provide a video interaction channel between the video service interaction terminal 300 and a related video sharing terminal in a certain area range, so as to facilitate management of the video sharing terminals at an area level.
In this embodiment, the video service interaction terminal 300 may include a mobile device, a tablet computer, a laptop computer, or any combination thereof. In some embodiments, the mobile device may include a smart home device, a wearable device, a smart mobile device, a virtual reality device, an augmented reality device, or the like, or any combination thereof. In some embodiments, the smart home devices may include control devices of smart electrical devices, smart monitoring devices, smart televisions, smart cameras, and the like, or any combination thereof. In some embodiments, the wearable device may include a smart bracelet, smart shoelaces, smart glasses, a smart helmet, a smart watch, a smart garment, a smart backpack, a smart accessory, or the like, or any combination thereof. In some embodiments, the smart mobile device may include a smartphone, a personal digital assistant, a gaming device, and the like, or any combination thereof. In some embodiments, the virtual reality device and/or the augmented reality device may include a virtual reality helmet, virtual reality glasses, a virtual reality patch, an augmented reality helmet, augmented reality glasses, an augmented reality patch, or the like, or any combination thereof. For example, the virtual reality device and/or augmented reality device may include various virtual reality products and the like.
In this embodiment, the big data platform 100, the online video service terminal 200, and the video service interaction terminal 300 in the media data processing system 10 based on remote interaction and cloud computing may cooperatively perform the media data processing method based on remote interaction and cloud computing described in the following method embodiments, and the detailed description of the method embodiments below may be referred to in the specific steps of the big data platform 100, the online video service terminal 200, and the video service interaction terminal 300.
Based on the inventive concept of the technical solution provided by the present application, the big data platform 100 can be applied to scenes in which a big data technology or a cloud computing technology is used, such as smart medical care, smart city management, smart industrial internet, and general service monitoring management, and can also be applied to scenes such as, but not limited to, new energy automobile system management, smart cloud office, cloud platform data processing, cloud game data processing, cloud live broadcast processing, cloud automobile management platforms, and blockchain financial data service platforms.
To solve the technical problem in the foregoing background art, fig. 2 is a schematic flowchart of a media data processing method based on remote interaction and cloud computing according to an embodiment of the present invention, and the media data processing method based on remote interaction and cloud computing according to the embodiment may be executed by the big data platform 100 shown in fig. 1, and the media data processing method based on remote interaction and cloud computing is described in detail below.
Step S110, performing data coding control on the online interactive video information in the video interactive channel established between the online video service terminal 200 and the corresponding video service interactive terminal 300 through the data coding control deep learning model in the video coding plug-in of each interactive service element.
Step S120, determining a first coding window and a second coding window in a coding video frame region where the online video service terminal 200 performs online video interaction currently in the data coding control process.
Step S130, in the video interaction process, a third encoding window corresponding to the first encoding window and matching with the video interaction process, and a fourth encoding window corresponding to the second encoding window and matching with the video interaction process are determined.
And step S140, determining coding linkage window information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window and the interaction progress label, and updating the coding control process of the data coding control deep learning model according to the coding linkage window information in the video interaction process.
In this embodiment, the coding video frame region may be a coding region corresponding to an interaction progress label preset in the video interaction process, and the interaction progress label may be a classification label of the video service displayed during the video interaction service process.
In this embodiment, each encoding window may be used to represent a window region formed by a plurality of grid points in an encoding process, and may also refer to a grid point region pointed by a certain hardware encoder, which is not limited in detail herein.
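By way of a purely illustrative, non-limiting sketch, a coding window understood as a region of grid points could be modeled as follows; the class name and fields are hypothetical stand-ins, not part of the claimed embodiment:

```python
# Minimal sketch: a coding window as an axis-aligned region of grid points.
from dataclasses import dataclass

@dataclass(frozen=True)
class CodingWindow:
    x: int  # left grid column
    y: int  # top grid row
    w: int  # width in grid points
    h: int  # height in grid points

    def grid_points(self) -> int:
        """Number of grid points the window covers."""
        return self.w * self.h

    def center(self) -> tuple[float, float]:
        """Geometric center of the window, useful for offset computations."""
        return (self.x + self.w / 2, self.y + self.h / 2)
```

A window pointed to by a particular hardware encoder could likewise carry an encoder identifier; that detail is omitted here.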
Based on the above steps, in the video interaction process, by determining a third coding window corresponding to the first coding window and matched with the video interaction process and a fourth coding window corresponding to the second coding window and matched with the video interaction process, the coding linkage window information in the video interaction process is determined according to the first coding window, the second coding window, the third coding window, the fourth coding window and the interaction progress label, and the coding control process of the data coding control deep learning model is updated accordingly. Therefore, the coding control process of the current data coding control deep learning model can be updated based on the coding windows having an association relation, so that the coding process can be adaptively adjusted in the next coding control, and the coding effect is further improved.
In a possible implementation manner, for step S140, in the process of determining the coding linkage window information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window and the interaction progress label, the following exemplary sub-steps may be implemented, which are described in detail below.
And a substep S141 of determining coding linkage detection information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window and the coding video frame region.
As one possible example, the encoded video frame region may include an interactive progress flag, and the encoded linkage detection information may include encoded linkage window information.
For example, the encoding window update may be performed on the encoded video frame region according to a first encoding association relationship between the first encoding window and the second encoding window, and a second encoding association relationship between the third encoding window and the fourth encoding window, so that the first encoding association relationship is equal to the second encoding association relationship.
Then, the coding linkage detection information in the video interaction process can be determined according to the coding video frame region updated by the coding window, the third coding window and the fourth coding window.
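The window update that makes the first coding association relation equal to the second can be sketched as follows, under the simplifying assumption that an association relation between two windows is the ratio of their grid-point counts; this choice of relation and all names are illustrative assumptions only:

```python
# Illustrative sketch: equalize two window association relations, where a
# relation is modeled (hypothetically) as an area ratio. Windows are (w, h).

def area(win: tuple[int, int]) -> int:
    w, h = win
    return w * h

def update_windows(first, second, third, fourth):
    """Rescale the second window's height so that the first/second area
    ratio equals the third/fourth area ratio (the second relation)."""
    target_ratio = area(third) / area(fourth)    # second association relation
    current_ratio = area(first) / area(second)   # first association relation
    w2, h2 = second
    # After this adjustment, area(first) / area(second') == target_ratio.
    h2_new = h2 * current_ratio / target_ratio
    return first, (w2, h2_new)
```

Real embodiments would presumably update the whole coding video frame region rather than a single window; the sketch only shows the equalization idea.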
For example, in a possible design, the coding linkage detection information in the video interaction process can be calculated according to the coding offset information corresponding to the first coding window and the second coding window in the coding video frame region after the updating of the coding window, the interaction progress tag in the coding video frame region after the updating of the coding window, the edge point position of the coding video frame region after the updating of the coding window, the third coding window and the fourth coding window.
For another example, in another possible design, the encoded video frame region after the update of the encoding window may also be updated in the video interaction process, so that the extension element corresponding to the first encoding window in the encoded video frame region after the update of the encoding window coincides with the extension element corresponding to the third encoding window, and the extension element corresponding to the second encoding window in the encoded video frame region after the update of the encoding window coincides with the extension element corresponding to the fourth encoding window, and the encoding linkage detection information in the video interaction process is determined according to the offset information of the extension element.
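The second design above can be sketched minimally by modeling each extension element as a single anchor point; the offsets that bring corresponding anchor points into coincidence then serve as the coding linkage detection information. The point-based model and every name here are illustrative assumptions:

```python
# Illustrative sketch: linkage detection via extension-element offsets.

def extension_offset(src: tuple[float, float], dst: tuple[float, float]):
    """Offset that moves the source extension element onto the target one."""
    return (dst[0] - src[0], dst[1] - src[1])

def linkage_detection(first_pt, second_pt, third_pt, fourth_pt):
    """Per-window offsets; equal offsets suggest the two window pairs move
    as one rigid linkage."""
    off1 = extension_offset(first_pt, third_pt)    # first -> third
    off2 = extension_offset(second_pt, fourth_pt)  # second -> fourth
    return {"offset_first": off1, "offset_second": off2,
            "rigid": off1 == off2}
```

The "rigid" flag is one hypothetical way the offset information could be summarized into detection information.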
In a possible implementation manner, for step S120, in the process of determining the first coding window and the second coding window in the coding video frame region where the online video service terminal 200 currently performs online video interaction in the data coding control process, a first coding window and a second coding window that have a cooperative coding relationship, or a first coding window and a second coding window that have a coding timing context relationship, may be determined in that coding video frame region.
It should be noted that the cooperative coding relationship indicates that the two windows are coded cooperatively in common, and the coding timing context relationship indicates that one window precedes the other in the coding sequence.
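A hedged sketch of how the two relationships might be distinguished, assuming each window carries a cooperative-group identifier and a coding timestamp (both hypothetical fields introduced only for illustration):

```python
# Illustrative classifier for the relationship between two coding windows.

def window_relation(group_a: str, group_b: str, time_a: int, time_b: int) -> str:
    """Cooperative if the windows share a coding group; a timing context if
    one window is coded strictly before the other; otherwise unrelated."""
    if group_a == group_b:
        return "cooperative"
    if time_a != time_b:
        return "timing_context"
    return "unrelated"
```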
In a possible implementation manner, for step S130, in the process of determining, in the video interaction process, a third coding window corresponding to the first coding window and matching the video interaction process and a fourth coding window corresponding to the second coding window and matching the video interaction process, for example, a third coding window corresponding to the coding tag type of the first coding window and matching the video interaction process, and a fourth coding window corresponding to the coding tag type of the second coding window and matching the video interaction process, may be determined in the video interaction process.
In a possible implementation manner, still referring to step S140, in the process of updating the coding control process of the data coding control deep learning model according to the coding linkage window information in the video interaction process, the following exemplary sub-steps can be implemented, which are described in detail below.
And a substep S142, acquiring a coding linkage window object represented by the coding linkage window information in the video interaction process, and acquiring, for the coding control object of each coding control node in the coding control process of the data coding control deep learning model, first reconstruction coding speed information of that coding control object.
For example, the first reconstruction coding speed information is used for characterizing the coding mode configuration and the coding compression component configuration of the coding control subprocess of the coding control node.
And a substep S143, performing linkage updating on the first reconstructed coding speed information according to each coding complexity characteristic associated with the coding linkage window object, and obtaining first coding mode linkage information and coding compression component linkage information corresponding to the first coding mode linkage information.
And a substep S144, acquiring a first coding control scene feature and a scene variable feature of the coding control object of the coding control node, and extracting a scene feature vector of the first coding control scene feature, wherein the scene feature vector of the first coding control scene feature comprises a specific scene feature vector.
And a substep S145, obtaining a specific scene feature vector of a historical coding control object associated with the coding control process of the data coding control deep learning model, and updating the specific scene feature vector of the first coding control scene feature according to the specific scene feature vector, so that the association sequence between each specific scene feature vector in the first coding control scene feature is matched with the association sequence between each specific scene feature vector in the preset historical coding control object.
And a substep S146, after the updating is finished, obtaining a scene feature vector of the second coding control scene feature, and generating the second coding control scene feature according to the scene feature vector of the second coding control scene feature.
And a substep S147, searching and obtaining the coding compression component linkage information matched with the scene variable characteristic and the first coding mode linkage information corresponding to the coding compression component linkage information according to the scene characteristic vector of the scene variable characteristic and the second coding control scene characteristic, and updating the first coding mode linkage information corresponding to the coding compression component linkage information according to the scene characteristic vector of the second coding control scene characteristic to obtain the second coding mode linkage information.
And a substep S148, performing fusion processing on the second coding mode linkage information and the second coding control scene characteristics to obtain fusion coding reference characteristics matched with each coding complexity characteristic associated with the coding control object of the coding control node and the coding linkage window object.
And a substep S149, performing linkage update on the fusion coding reference characteristics of the coding control nodes to obtain linkage update results of the coding control nodes, wherein the linkage update results comprise linkage update content information of the coding control object of each coding control node matched with each coding complexity characteristic associated with the coding linkage window object.
And a substep S1491, updating the linkage updating result of each coding control node to the coding control process of the data coding control deep learning model.
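The substeps above can be condensed into a hedged Python sketch. The embodiment does not fix the concrete update rules, so the scaling, fusion, and write-back rules below are stand-ins chosen only to make the data flow of S142–S1491 visible:

```python
def linkage_update_speed(first_speed_info, complexity_features):
    """S142/S143 stand-in: scale each coding control node's reconstruction
    coding speed by the mean coding complexity of the linkage window object."""
    mean_c = sum(complexity_features) / len(complexity_features)
    return {node: speed * mean_c for node, speed in first_speed_info.items()}

def fuse_reference_features(mode_linkage, scene_feature):
    """S148 stand-in: fuse the coding mode linkage information with the second
    coding control scene feature into one fused coding reference feature per node."""
    return {node: value + scene_feature for node, value in mode_linkage.items()}

def write_back(model_state, fused_features):
    """S149/S1491 stand-in: update the model's coding control process with the
    per-node linkage update results."""
    model_state.update(fused_features)
    return model_state
```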
In one possible implementation manner, for step S110, in the process of performing data coding control on online interactive video information in a video interactive channel established between the online video service terminal 200 and the corresponding video service interactive terminal 300 through a data coding control deep learning model in the video coding plug-in of each interactive service element, the following exemplary sub-steps can be implemented, which are described in detail below.
And a substep S111, obtaining a video interaction service process, sent by the video service interaction terminal 300 and established for the target online video service terminal 200, and obtaining corresponding interactive service element information and interactive video stream information from the video interaction service process.
And a substep S112, extracting a preset video capture response control of each interactive service element in the interactive service element information relative to the target online video service terminal 200, determining a data coding control deep learning model corresponding to the interactive video stream information according to the preset video capture response control, and, after associating each data coding control deep learning model with the video coding plug-in of the corresponding interactive service element, issuing video coding configuration information of the video service interaction terminal 300 to the target online video service terminal 200 so that the target online video service terminal 200 records the video coding configuration information of the video service interaction terminal 300 into a coding cooperation terminal list corresponding to the video service interaction list.
And a substep S113, when receiving a video interaction service request, sent by the video service interaction terminal 300, for a target interactive service element corresponding to the target online video service terminal 200, requesting the target online video service terminal 200 to establish a video interaction channel between the video service interaction terminal 300 and a target video sharing terminal corresponding to the video interaction service request, and performing data coding control on the online interactive video information in the video interaction channel through the data coding control deep learning model in the video coding plug-in of the target interactive service element.
In this embodiment, the video service interactive terminal 300 may select a certain target online video service terminal 200 to request the big data platform 100 to perform video service interactive use, and during the process of requesting to perform video service interactive use, it needs to select related interactive service element information and interactive video stream information. The interactive service element information may be used to represent a specific situation of the interactive service element selected by the video service interactive terminal 300, and the interactive service element may refer to an item related to the video sharing terminal, such as an interactive character item, an interactive background item, and the like, which is not limited herein. The interactive video stream information may refer to a transmitted interactive video stream generated online, and is not particularly limited herein.
In this embodiment, the video service interaction list may include a plurality of video sharing terminals, such as virtual reality devices, augmented reality devices, and smart medical terminals, which are controlled by the video service interaction terminal 300.
In this embodiment, the video interaction channel may be used as a communication channel of an online interaction video service, and provides a transmission channel of an online data video stream.
Based on the above steps, in this embodiment, by extracting the preset video capture response control of each interactive service element in the interactive service element information relative to the target online video service terminal 200, a data coding control deep learning model corresponding to the interactive video stream information is determined and associated with the video coding plug-in of the corresponding interactive service element. When interaction is needed, the target online video service terminal 200 is requested to establish a video interaction channel between the video service interaction terminal 300 and the target video sharing terminal, and data coding control is performed on the online interactive video information in the video interaction channel through the corresponding data coding control deep learning model. In this way, data coding control for different interactive service elements in the video interaction channel can be performed on the online video service terminal 200 in a more targeted manner, targeted control that takes the interactive service elements as data coding control objects is realized, and the real-time interaction effect is improved.
In one possible implementation manner, for step S111, online video service terminals 200 in different areas may be associated in advance in the big data platform 100, so that the online video service terminal list in the target subscription service requested by the video service interactive terminal 300 may be sent to the video service interactive terminal 300. On this basis, the target online video service terminal 200 determined by the video service interactive terminal 300 from the online video service terminal list can be obtained, and the interactive service element selection list and the interactive video stream information selection list of the target online video service terminal 200 are sent to the video service interactive terminal 300. Then, a video interactive service process initiated after the video service interactive terminal 300 performs a selection operation from the interactive service element selection list and the interactive video stream information selection list is obtained, and corresponding interactive service element information and interactive video stream information are obtained from the video interactive service process.
For example, it is assumed that the list of online video service terminals in the target subscription service requested by the video service interaction terminal 300, which is transmitted to the video service interaction terminal 300, includes an online video service terminal A, an online video service terminal B, an online video service terminal C, and an online video service terminal D. If the target online video service terminal 200 selected by the video service interaction terminal 300 is the online video service terminal C, the interactive service element selection list and the interactive video stream information selection list of the online video service terminal C are transmitted to the video service interaction terminal 300.
In a possible implementation manner, with respect to step S112, in the data encoding control process, the present embodiment may specifically determine the data encoding control deep learning model based on the video rendering thread control. For example, a video rendering thread control may refer to a collection of related video rendering thread nodes in a rendering hardware architecture for video rendering control.
On this basis, in the embodiment, a preset video capture response control of each interactive service element in the interactive service element information relative to the target online video service terminal 200 can be extracted from a preset video capture response control library of the target online video service terminal 200, and then a macro block coding mode of the interactive video stream information in the video rendering thread control is determined according to the preset video capture response control, so that a data coding control deep learning model corresponding to the interactive video stream information can be determined according to the macro block coding mode and the intra-frame prediction direction of each video rendering thread node in the video rendering thread control.
For example, in a possible implementation manner, in the process of determining a data coding control deep learning model corresponding to interactive video stream information according to a macroblock coding mode and an intra-frame prediction direction of each video rendering thread node in a video rendering thread control, this embodiment may specifically obtain interactive service element information pre-associated with each video rendering thread node in the video rendering thread control, and determine whether the interactive service element information pre-associated with each video rendering thread node includes an interactive service entry component matched with a macroblock coding policy of the macroblock coding mode.
Illustratively, interactive service element information pre-associated with each video rendering thread node in the video rendering thread control can be obtained, video frame reference relation data having at least one set video frame reference relationship in the interactive service element information pre-associated with each video rendering thread node is obtained, and the video frame reference relation data having the set video frame reference relationships are then clustered according to the different set video frame reference relationships to obtain a plurality of first video frame reference relationship clusters. It should be understood that the first video frame reference relationship clusters are cluster lists containing video frame reference relation data of the same set video frame reference relationship, and each first video frame reference relationship cluster corresponds to a different set video frame reference relationship.
Then, the target video frame reference relation data characteristics existing in the video frame reference relation data of each first video frame reference relationship cluster can be determined according to the macroblock coding strategy of the macroblock coding mode, so as to obtain a plurality of second video frame reference relationship clusters, and it is judged whether the macroblock coding mode of each set video frame reference relationship in the plurality of second video frame reference relationship clusters covers the macroblock coding mode. If the macroblock coding mode of each set video frame reference relationship in the plurality of second video frame reference relationship clusters covers the macroblock coding mode, it is judged that the interactive service element information pre-associated with each video rendering thread node comprises an interactive service table entry component matched with the macroblock coding strategy of the macroblock coding mode. If the macroblock coding mode of any set video frame reference relationship in the plurality of second video frame reference relationship clusters does not cover the macroblock coding mode, it is judged that the interactive service element information pre-associated with each video rendering thread node does not comprise an interactive service table entry component matched with the macroblock coding strategy of the macroblock coding mode.
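Under the assumption that each piece of video frame reference relation data is a record carrying a set relationship label and a macroblock coding mode, the clustering and coverage check described above can be sketched as follows (record fields and function names are hypothetical):

```python
from collections import defaultdict

def cluster_by_relationship(records):
    """Build the first video frame reference relationship clusters:
    one cluster list per set video frame reference relationship."""
    clusters = defaultdict(list)
    for rec in records:
        clusters[rec["relationship"]].append(rec)
    return dict(clusters)

def covers_macroblock_mode(clusters, mb_mode):
    """A matching interactive service table entry component exists only if
    every cluster covers the target macroblock coding mode."""
    return all(
        mb_mode in {rec["mb_mode"] for rec in cluster}
        for cluster in clusters.values()
    )
```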
The macroblock coding strategy may refer to a component function that performs a protection function in a protocol response process.
In this way, when the interactive service element information pre-associated with each video rendering thread node does not include an interactive service table entry component matching the macroblock coding strategy of the macroblock coding mode, a plurality of pieces of interactive service element information that do include such a matching component may be determined as a plurality of pieces of target interactive service element information.
Then, the intra-frame prediction direction of each target interactive service element information in the target interactive service element information can be obtained, the target interactive service element information is sequentially spliced according to the direction arrangement condition of the intra-frame prediction direction of each target interactive service element information, an updated interactive service element list is determined, and therefore a target data coding control deep learning model of the interactive service element information in the video rendering thread control can be determined according to the updated interactive service element list and the macro block coding mode.
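If the intra-frame prediction direction is represented numerically (here, as an angle in degrees, which is an assumption), the sequential splicing above reduces to a sort, as in this hedged sketch:

```python
def splice_by_prediction_direction(target_elements):
    """Order the target interactive service element information by its
    intra-frame prediction direction and splice the results into the
    updated interactive service element list."""
    ordered = sorted(target_elements, key=lambda e: e["direction"])
    return [e["name"] for e in ordered]
```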
For example, in one possible example, the interactive service element information in the preset interactive tag interval corresponding to the target online video service terminal 200 may be determined according to the updated interactive service element list and the macroblock coding mode, and then the target data coding control deep learning model of the interactive service element information in the video rendering thread control may be determined according to the interactive service element information of the online video service terminal 200 in the preset interactive tag interval.
For example, in an alternative implementation manner, first, data coding control deep learning models of video rendering thread controls corresponding to a plurality of online service enabling options of the online video service terminal 200 in a preset interaction tag interval may be obtained, and for a data coding control deep learning model of a preset interaction tag interval of a video rendering thread control corresponding to each online service enabling option, a data coding control deep learning model meeting a preset condition to be processed is determined from data coding control deep learning models that are not currently activated in the preset interaction tag interval of the online service enabling option, and is used as an undetermined data coding control deep learning model to be activated.
Then, the rendering control parameter of each undetermined data coding control deep learning model on the rendering adaptation scene of the video rendering thread control is determined, and the above process is repeated until no data coding control deep learning model that is not yet activated remains in the preset interactive label interval.
It is worth to be noted that, the process of determining the rendering control parameter of each to-be-determined data coding control deep learning model on the rendering adaptation scene of the video rendering thread control is as follows:
firstly, the rendering control parameter of the undetermined data coding control deep learning model on the rendering adaptation scene of the video rendering thread control is determined based on the starting level of the undetermined data coding control deep learning model in the online service enabling option, the first rendering control parameter on the rendering adaptation scene of the video rendering thread control determined by the data coding control deep learning model in the rendering adaptation scene, and the second rendering control parameter on the rendering adaptation scene of the video rendering thread control determined by the data coding control deep learning model, in other online service enabling options, whose rendering adaptation scene is the same as this rendering adaptation scene. The other online service enabling options are the online service enabling options, among the plurality of online service enabling options, other than the one to which the undetermined data coding control deep learning model belongs. The rendering control parameter of the undetermined data coding control deep learning model on the rendering adaptation scene of the video rendering thread control is the rendering control parameter obtained by fusing the first rendering control parameter and the second rendering control parameter on the basis of the fusion parameter corresponding to the starting level.
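A minimal sketch of this fusion, assuming the fusion parameter is a weight looked up from the starting level (the lookup table and all names are hypothetical):

```python
def fuse_rendering_control(first_param, second_param, start_level, fusion_weights):
    """Fuse the first and second rendering control parameters using the
    fusion parameter that corresponds to the starting level."""
    w = fusion_weights[start_level]
    return w * first_param + (1.0 - w) * second_param
```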
In this embodiment, the rendering adaptation scene may refer to an operating system environment when running with the video rendering thread control, and the rendering control parameter may refer to an instance call function specifically executed in the protection process.
On this basis, the associated encoding configuration parameter of the current online service enabling option may be determined according to the streaming media configuration information and the rendering control parameter of the current online service enabling option in the preset interactive tag interval, where the preset interactive tag interval corresponds to a plurality of online service enabling options, and the current online service enabling option is any one of the plurality of online service enabling options.
And then, performing associated coding configuration on the interactive service element information according to the associated coding configuration parameters of the current online service enabling option to obtain associated coding configuration information, and counting service dimension information comprising the current interactive service element information, global associated coding configuration parameters of the current online service enabling option and the service dimension information of the current online service enabling option according to the associated coding configuration information.
Then, according to the code window layout of the current online service enabling option, the preset associated code configuration parameters and the service dimension information, the associated code configuration parameters of the next online service enabling option bound to the current online service enabling option in the preset interactive label interval are determined, and the video frame reference relation data of the next online service enabling option bound to the current online service enabling option is determined by calculating the associated code configuration parameters of the next online service enabling option bound to the current online service enabling option.
For example, in one possible example, the set start level of the online service enabling option list in the preset interactive tag interval may be obtained, the set start level of the online service enabling option list in the preset interactive tag interval is determined as the set start level of the current online service enabling option, and then the weight start level of the current online service enabling option is obtained by calculating according to the coding window layout of the current online service enabling option and the set start level of the current online service enabling option, where the weight start level may be obtained by multiplying a coefficient corresponding to the coding window layout by the set start level.
And then, acquiring the actual service dimension parameter of the online service enabling option list in the preset interactive label interval, and updating the actual service dimension parameter of the online service enabling option list in the preset interactive label interval by calculation according to the weight starting level of the current online service enabling option, the preset associated coding configuration parameter of the current online service enabling option and the actual service dimension parameter of the online service enabling option list in the preset interactive label interval. Then, the target service dimension parameter of the online service enabling option list in the preset interactive label interval can be calculated according to the updated actual service dimension parameter, the preset initial service dimension parameter, the global online service enabling option of the online service enabling option list in the preset interactive label interval and the unit online service enabling option of the online service enabling option list in the preset interactive label interval. Then, the encoding code rate configuration parameter can be calculated according to the actual service dimension parameter of the preset interactive label interval online service enabling option list, the preset initial service dimension parameter, the global online service enabling option of the preset interactive label interval online service enabling option list and the unit online service enabling option of the preset interactive label interval online service enabling option list.
Therefore, after the target service dimension parameter of the online service enabling option list of the next online service enabling option bound in the preset interactive label interval is obtained through calculation according to the target service dimension parameter of the online service enabling option list of the preset interactive label interval, the actual service dimension parameter of the online service enabling option list of the preset interactive label interval, and the coding code rate configuration parameter, a depth bit allocation parameter of the next online service enabling option bound in the preset interactive label interval is determined. The associated coding configuration parameter of the next online service enabling option bound to the current online service enabling option in the preset interactive label interval is then obtained through weighted calculation, with the respective corresponding weighting parameters, of the depth bit allocation parameter, the target service dimension parameter of the online service enabling option list of the next online service enabling option bound in the preset interactive label interval, and the coding code rate configuration parameter.
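The calculation chain of the preceding substeps might be sketched as follows. The weight starting level is stated in the text as a product of a layout coefficient and the set starting level; the final weighted combination is a stand-in, since the embodiment does not fix the weighting parameters:

```python
def weight_start_level(layout_coefficient, set_start_level):
    """Weight starting level: the coefficient corresponding to the coding
    window layout multiplied by the set starting level."""
    return layout_coefficient * set_start_level

def associated_coding_config(depth_bit_param, target_dim_param, rate_param, weights):
    """Associated coding configuration parameter of the next bound online
    service enabling option: a weighted combination of the depth bit
    allocation parameter, the target service dimension parameter, and the
    coding code rate configuration parameter (weights are stand-ins)."""
    wd, wt, wr = weights
    return wd * depth_bit_param + wt * target_dim_param + wr * rate_param
```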
In this way, a target data coding control deep learning model of the interactive service element information in the video rendering thread control can be determined by accumulating the video frame reference relation data of each determined online service enabling option into a coding control attribute list, the target data coding control deep learning model comprising this coding control attribute list.
In a possible implementation manner, for example, for step S113, the embodiment may identify online interactive video information in the video interactive channel, obtain video streaming information corresponding to the video streaming request when the online interactive video information is identified to be associated with the video streaming request, and perform data coding control on the online interactive video information in the video interactive channel through a data coding control deep learning model in the video coding plug-in of the target interactive service element after the video streaming information is verified.
In a possible implementation manner, for example, in the process of performing data coding control on online interactive video information in a video interactive channel through a data coding control deep learning model in a video coding plug-in of a target interactive service element, specifically, online interactive video feature recognition is performed on the online interactive video information in the video interactive channel at each data coding control node in each data coding control deep learning model to determine a coding strategy of a relevant video frame in each online interactive video, so as to perform corresponding coding control.
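As a hedged illustration of the per-node decision just described, a coding strategy could be chosen per frame from recognized features; the feature names, strategies, and threshold here are hypothetical:

```python
def coding_strategy_for_frame(frame_features, motion_threshold=0.5):
    """Pick a coding strategy for one video frame from its recognized
    features: high-motion frames favor inter prediction, static ones intra."""
    if frame_features["motion"] > motion_threshold:
        return "inter"
    return "intra"

def control_channel(frames, motion_threshold=0.5):
    """Run the per-frame strategy decision over an online video stream."""
    return [coding_strategy_for_frame(f, motion_threshold) for f in frames]
```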
Based on the same inventive concept, please refer to fig. 3, which is a schematic diagram illustrating functional modules of a media data processing apparatus 400 based on remote interaction and cloud computing according to an embodiment of the present application, and the embodiment can divide the functional modules of the media data processing apparatus 400 based on remote interaction and cloud computing according to the above method embodiment. For example, the functional blocks may be divided for the respective functions, or two or more functions may be integrated into one processing block. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. It should be noted that, in the embodiment of the present application, the division of the module is schematic, and is only one logic function division, and there may be another division manner in actual implementation. For example, in the case of dividing each functional module according to each function, the media data processing apparatus 400 based on remote interaction and cloud computing shown in fig. 3 is only a schematic apparatus. The remote interaction and cloud computing based media data processing device 400 may include an encoding control module 410, a first determination module 420, a second determination module 430, and an update module 440, and the functions of the functional modules of the remote interaction and cloud computing based media data processing device 400 are described in detail below.
The encoding control module 410 is configured to perform data encoding control on online interactive video information in a video interactive channel established between the online video service terminal 200 and the corresponding video service interactive terminal 300 through a data encoding control deep learning model in the video encoding plug-in of each interactive service element. The encoding control module 410 may be configured to execute the step S110, and the detailed implementation of the encoding control module 410 may refer to the detailed description of the step S110.
A first determining module 420, configured to determine a first encoding window and a second encoding window in an encoding video frame region where the online video service terminal 200 currently performs online video interaction in a data encoding control process, where the encoding video frame region is an encoding region corresponding to an interaction progress tag preset in a video interaction process. The first determining module 420 may be configured to perform the step S120, and the detailed implementation of the first determining module 420 may refer to the detailed description of the step S120.
A second determining module 430, configured to determine, in the video interaction process, a third encoding window corresponding to the first encoding window and matching with the video interaction process, and a fourth encoding window corresponding to the second encoding window and matching with the video interaction process. The second determining module 430 may be configured to perform the step S130, and the detailed implementation of the second determining module 430 may refer to the detailed description of the step S130.
An updating module 440, configured to determine coding linkage window information in the video interaction process according to the first coding window, the second coding window, the third coding window, the fourth coding window, and the interaction progress tag, and update the coding control process of the data coding control deep learning model according to the coding linkage window information in the video interaction process. The updating module 440 may be configured to perform the step S140, and the detailed implementation of the updating module 440 may refer to the detailed description of the step S140.
It should be noted that the above division of the modules of the apparatus is only a logical division; in an actual implementation, the modules may be wholly or partially integrated into one physical entity, or may be physically separated. The modules may all be implemented in the form of software called by a processing element, may all be implemented in hardware, or some modules may be implemented in the form of software called by a processing element while other modules are implemented in hardware. For example, the encoding control module 410 may be a separately arranged processing element, may be integrated into a chip of the apparatus, or may be stored in a memory of the apparatus in the form of program code, with the function of the encoding control module 410 called and executed by a processing element of the apparatus. The other modules may be implemented similarly. In addition, all or some of the modules may be integrated together or may be implemented independently. The processing element described herein may be an integrated circuit having signal processing capability. In implementation, each step of the above method, or each of the above modules, may be implemented by an integrated logic circuit of hardware in a processor element or by instructions in the form of software.
For example, the above modules may be one or more integrated circuits configured to implement the above method, such as one or more application-specific integrated circuits (ASICs), one or more digital signal processors (DSPs), or one or more field-programmable gate arrays (FPGAs). For another example, when one of the above modules is implemented in the form of program code scheduled by a processing element, the processing element may be a general-purpose processor, such as a central processing unit (CPU) or another processor capable of calling program code. As another example, these modules may be integrated together and implemented in the form of a system-on-a-chip (SoC).
Fig. 4 is a schematic diagram illustrating a hardware structure of a big data platform 100 for implementing the above-mentioned media data processing method based on remote interaction and cloud computing according to an embodiment of the present invention, and as shown in fig. 4, the big data platform 100 may include a processor 110, a machine-readable storage medium 120, a bus 130, and a transceiver 140.
In a specific implementation process, the at least one processor 110 executes computer-executable instructions stored in the machine-readable storage medium 120 (for example, the encoding control module 410, the first determining module 420, the second determining module 430, and the updating module 440 included in the remote interaction and cloud computing-based media data processing apparatus 400 shown in fig. 3), so that the processor 110 can execute the media data processing method based on remote interaction and cloud computing of the above method embodiment. The processor 110, the machine-readable storage medium 120, and the transceiver 140 are connected through the bus 130, and the processor 110 may be configured to control the transceiving actions of the transceiver 140 so as to transceive data with the online video service terminal 200.
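The relationship just described, in which the processor retrieves module instructions from the machine-readable storage medium and controls the transceiver's transmitting actions, can be sketched for illustration only as follows; all class and method names are hypothetical and not part of the claimed platform 100.

```python
# Hypothetical sketch of platform 100's execution flow; names are illustrative.
from typing import Callable, Dict

class MachineReadableStorageMedium:
    """Stands in for medium 120: stores executable module code by name."""
    def __init__(self):
        self._instructions: Dict[str, Callable] = {}

    def store(self, name: str, code: Callable) -> None:
        self._instructions[name] = code

    def load(self, name: str) -> Callable:
        return self._instructions[name]

class Transceiver:
    """Stands in for transceiver 140: records payloads transmitted
    on command of the processor."""
    def __init__(self):
        self.sent = []

    def send(self, payload) -> None:
        self.sent.append(payload)

class Processor:
    """Stands in for processor 110: executes instructions loaded from
    the storage medium and controls the transceiver's actions."""
    def __init__(self, medium, transceiver):
        # In the embodiment these components are connected via bus 130;
        # here the connection is modeled as plain object references.
        self.medium = medium
        self.transceiver = transceiver

    def execute(self, name: str, *args):
        result = self.medium.load(name)(*args)
        self.transceiver.send(result)  # processor controls transceiving
        return result
```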
For a specific implementation process of the processor 110, reference may be made to the above-mentioned various method embodiments executed by the big data platform 100, and implementation principles and technical effects thereof are similar, and details of this embodiment are not described herein again.
In the embodiment shown in fig. 4, it should be understood that the processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The steps of the method disclosed in connection with the present invention may be performed directly by a hardware processor, or by a combination of hardware and software modules within the processor.
The machine-readable storage medium 120 may include a high-speed RAM memory, and may further include a non-volatile memory (NVM), such as at least one magnetic disk memory.
The bus 130 may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus 130 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, the bus is represented by only a single line in the figures of the present application, but this does not mean that there is only one bus or only one type of bus.
In addition, an embodiment of the present invention further provides a readable storage medium, where the readable storage medium stores computer-executable instructions, and when a processor executes the computer-executable instructions, the media data processing method based on remote interaction and cloud computing described above is implemented.
Having thus described the basic concept, it will be apparent to those skilled in the art that the foregoing detailed disclosure is to be regarded as illustrative only and not as limiting the present specification. Various modifications, improvements, and adaptations may occur to those skilled in the art, though they are not expressly stated herein. Such modifications, improvements, and adaptations are suggested in this specification and still fall within the spirit and scope of the exemplary embodiments of this specification.
Also, this specification uses specific words to describe its embodiments. Terms such as "one possible implementation," "one possible example," and/or "exemplary" mean that a particular feature, structure, or characteristic described in connection with at least one embodiment of the specification is included. Therefore, it is emphasized and should be appreciated that two or more references to "one possible implementation," "one possible example," and/or "exemplary" in various places throughout this specification are not necessarily referring to the same embodiment. Furthermore, certain features, structures, or characteristics of one or more embodiments of the specification may be combined as appropriate.
Moreover, those skilled in the art will appreciate that aspects of the present specification may be illustrated and described in terms of several patentable classes or contexts, including any new and useful combination of processes, machines, manufacture, or materials, or any new and useful improvement thereof. Accordingly, aspects of this specification may be implemented entirely in hardware, entirely in software (including firmware, resident software, micro-code, and the like), or in a combination of hardware and software. The above hardware or software may be referred to as a "data block," "module," "engine," "unit," "component," or "system." Furthermore, aspects of the present specification may be embodied as a computer product, including computer-readable program code, embodied in one or more computer-readable media.
The computer storage medium may comprise a propagated data signal with the computer program code embodied therewith, for example, on baseband or as part of a carrier wave. The propagated signal may take any of a variety of forms, including electromagnetic, optical, etc., or any suitable combination. A computer storage medium may be any computer-readable medium that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code located on a computer storage medium may be propagated over any suitable medium, including radio, cable, fiber optic cable, RF, or the like, or any combination of the preceding.
Computer program code required for the operation of various portions of this specification may be written in any one or more programming languages, including object-oriented programming languages such as Java, Scala, Smalltalk, Eiffel, JADE, Emerald, C++, C#, VB.NET, and Python, conventional procedural programming languages such as C, Visual Basic, Fortran 2003, Perl, COBOL 2002, PHP, and ABAP, dynamic programming languages such as Python, Ruby, and Groovy, or other programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or on the big data platform. In the latter scenario, the remote computer may be connected to the user's computer through any form of network, such as a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet), in a cloud computing environment, or as a service, such as software as a service (SaaS).
Additionally, unless otherwise specified in the claims, the order of processing elements and sequences, the use of alphanumeric characters, or the use of other designations in this specification is not intended to limit the order of the processes and methods of this specification. While the foregoing disclosure discusses, by way of example, various presently contemplated embodiments of the invention, it is to be understood that such detail is solely for that purpose and that the appended claims are not limited to the disclosed embodiments but, on the contrary, are intended to cover all modifications and equivalent arrangements that are within the spirit and scope of the embodiments herein. For example, although the system components described above may be implemented through interactive services, they may also be implemented through software-only solutions, such as installing the described system on an existing big data platform or mobile device.
Similarly, it should be noted that in the preceding description of the embodiments of the present specification, various features are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the embodiments. This method of disclosure, however, is not to be interpreted as implying that the claimed subject matter requires more features than are expressly recited in each claim. Indeed, the claimed embodiments may have fewer than all of the features of a single embodiment disclosed above.
It is to be understood that, where the descriptions, definitions, and/or uses of terms in the materials accompanying this specification are inconsistent or in conflict with the descriptions, definitions, and/or uses of terms in this specification, the descriptions, definitions, and/or uses of terms in this specification shall control.
Finally, it should be understood that the embodiments described herein are merely illustrative of the principles of the embodiments of the present disclosure. Other variations are also possible within the scope of the present description. Thus, by way of example, and not limitation, alternative configurations of the embodiments of the specification can be considered consistent with the teachings of the specification. Accordingly, the embodiments of the present description are not limited to only those embodiments explicitly described and depicted herein.