CN115687696A - Streaming media video playing method and related device for client - Google Patents

Streaming media video playing method and related device for client


Publication number
CN115687696A
CN115687696A (application CN202211355718.3A)
Authority
CN
China
Prior art keywords
video
playing
preset
classification
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211355718.3A
Other languages
Chinese (zh)
Inventor
钱怡
杭云
黄莺
施唯佳
郭宁
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianyi Digital Life Technology Co Ltd
Original Assignee
Tianyi Digital Life Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianyi Digital Life Technology Co Ltd filed Critical Tianyi Digital Life Technology Co Ltd
Priority claimed from CN202211355718.3A
Publication of CN115687696A
Legal status: Pending

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a streaming media video playing method and a related device for a client. The method comprises: inputting a preset video frame image into a preset lightweight classification model to perform a video category classification operation, obtaining a video classification result; searching for the corresponding playing audiovisual parameters in a preset parameter matching table based on the video classification result, wherein the preset parameter matching table is a list associating video categories with playing audiovisual parameters, and the playing audiovisual parameters comprise display brightness, contrast, color saturation and sound parameters; and inputting the playing audiovisual parameters into a playing controller to carry out the video playing operation. The application can solve the technical problem that the existing manually assisted playing mode is time-consuming and prone to errors and omissions, which degrades the video playing experience at the client.

Description

Streaming media video playing method and related device for client
Technical Field
The present application relates to the field of data processing technologies, and in particular, to a streaming media video playing method and a related apparatus for a client.
Background
Grouping users and setting differentiated audiovisual effects according to television video content has become a main method by which streaming media operators and television equipment manufacturers improve customer stickiness. Referring to fig. 3, a common implementation is as follows: when video content is uploaded, the streaming media management background marks and classifies it through manual review into types such as military, emotion, children and drama, and stores the video content and video names in a streaming media video-on-demand program management database. When a user requests a program with a remote controller in front of a television or a set-top box, the playing device obtains the category and audiovisual parameters of the video source by reading the classification field of the platform program database, and then sets the corresponding audiovisual effect. For example, military content is given a stereo surround sound effect; children's content sets the screen to an eye-protection mode, i.e., soft brightness and moderate, non-dazzling contrast; and drama content increases the screen color saturation.
In this method, therefore, the category of a program is determined only after its video content has been watched manually, and is then entered into the platform database by hand. This is time-consuming, and manual entry carries the risk of errors and omissions. Moreover, if a previously stored video was never manually labeled with a category, the playing end cannot obtain the corresponding audiovisual parameters and cannot adjust the audiovisual playing controller, which degrades the user experience.
Disclosure of Invention
The application provides a streaming media video playing method and a related device for a client, which are used for solving the technical problem that the existing manually assisted playing mode is time-consuming and prone to errors and omissions, which degrades the video playing experience at the client.
In view of the above, a first aspect of the present application provides a streaming video playing method for a client, including:
inputting a preset video frame image into a preset lightweight class classification model to perform video class classification operation to obtain a video classification result;
searching for corresponding playing audiovisual parameters in a preset parameter matching table based on the video classification result, wherein the preset parameter matching table is an association list of video categories and the playing audiovisual parameters, and the playing audiovisual parameters comprise display brightness, contrast, color saturation and sound parameters;
and inputting the playing audio-visual parameters into a playing controller to carry out video playing operation.
Preferably, before the inputting of a preset video frame image into a preset lightweight classification model to perform a video category classification operation to obtain a video classification result, the method further comprises:
and acquiring a plurality of frames of video images in the video stream based on a preset sampling period to obtain a preset video frame image.
Preferably, the generation process of the preset lightweight classification model is as follows:
performing fine-tuning optimization on a preset initial classification network model based on a maximum pooling method to obtain an optimized classification network model;
and performing a structured pruning operation on the optimized classification network model, and performing model training according to historical video frame images to obtain the preset lightweight classification model.
Preferably, before the searching for the corresponding playing audiovisual parameters in a preset parameter matching table based on the video classification result, the method further comprises:
performing a video category division operation on an unknown video frame image based on a preset video category table to obtain an assigned video category, wherein the preset video category table is an association list of video names, video categories and category IDs;
and constructing an association relation between the video name corresponding to the unknown video frame image, the assigned video category and the playing audiovisual parameters, and storing the association relation as the preset parameter matching table.
A second aspect of the present application provides a streaming video playing apparatus for a client, including:
the classification module is used for inputting a preset video frame image into a preset lightweight classification model to perform video classification operation to obtain a video classification result;
the parameter matching module is used for searching for corresponding playing audiovisual parameters in a preset parameter matching table based on the video classification result, wherein the preset parameter matching table is an association list of video categories and the playing audiovisual parameters, and the playing audiovisual parameters comprise display brightness, contrast, color saturation and sound parameters;
and the video playing module is used for inputting the playing audio-visual parameters into a playing controller to carry out video playing operation.
Preferably, the method further comprises the following steps:
and the image sampling module is used for acquiring a plurality of frames of video images in the video stream based on a preset sampling period to obtain a preset video frame image.
Preferably, the generation process of the preset lightweight classification model is as follows:
performing fine-tuning optimization on a preset initial classification network model based on a maximum pooling method to obtain an optimized classification network model;
and performing a structured pruning operation on the optimized classification network model, and performing model training according to historical video frame images to obtain the preset lightweight classification model.
Preferably, the method further comprises the following steps:
the unknown classification module is used for performing a video category division operation on unknown video frame images based on a preset video category table to obtain an assigned video category, wherein the preset video category table is an association list of video names, video categories and category IDs;
and the association construction module is used for constructing an association relation between the video name corresponding to the unknown video frame image, the assigned video category and the playing audiovisual parameters, and storing the association relation as the preset parameter matching table.
A third aspect of the present application provides a streaming video playing device for a client, where the device includes a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the method for playing streaming video on the client according to the first aspect, according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium for storing program codes for executing the streaming video playing method for a client according to the first aspect.
According to the technical scheme, the embodiment of the application has the following advantages:
the application provides a streaming media video playing method for a client, which comprises the following steps: inputting a preset video frame image into a preset lightweight class classification model to perform video class classification operation to obtain a video classification result; searching corresponding playing audio-visual parameters in a preset parameter matching table based on the video classification result, wherein the preset parameter matching table is an associated corresponding list of video categories and the playing audio-visual parameters, and the playing audio-visual parameters comprise display brightness, contrast, color saturation and sound parameters; and inputting the playing audio-visual parameters into a playing controller to carry out video playing operation.
In the streaming media video playing method for a client provided by the application, video frame images are classified at the client, the playing audiovisual parameters for that type of video are found in a preset parameter matching table according to the video classification result, and video playing is completed directly based on those parameters. No repeated interaction with the video playing platform is needed to obtain the playing parameters, and the playing terminal does not need to be replaced; meanwhile, adopting a lightweight classification model improves classification accuracy while keeping the computation amount small and the processing speed high. This solves the technical problem that the existing manually assisted playing mode is time-consuming and prone to errors and omissions, which degrades the video playing experience at the client.
Drawings
Fig. 1 is a schematic flowchart of a streaming media video playing method for a client according to an embodiment of the present disclosure;
fig. 2 is a schematic structural diagram of a streaming video playing device for a client according to an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of the prior-art, manually assisted streaming media video playing process for a client described in the background of the present application;
fig. 4 is a schematic flowchart of pruning compression optimization of a preset lightweight class model according to an embodiment of the present application;
FIG. 5 is a schematic view of a video frame image acquisition process and a preset parameter matching table generation process provided in an embodiment of the present application;
fig. 6 is a schematic view of a video playing flow of a user client according to an application example of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments in the present application without making any creative effort belong to the protection scope of the present application.
For easy understanding, please refer to fig. 1, an embodiment of a streaming video playing method for a client provided by the present application includes:
step 101, inputting a preset video frame image into a preset lightweight class classification model to perform video class classification operation, so as to obtain a video classification result.
Further, the generation process of the preset lightweight classification model is as follows:
performing fine-tuning optimization on a preset initial classification network model based on a maximum pooling method to obtain an optimized classification network model;
and performing a structured pruning operation on the optimized classification network model, and performing model training according to the historical video frame images to obtain the preset lightweight classification model.
Further, step 101, before, further includes:
and acquiring a plurality of frames of video images in the video stream based on a preset sampling period to obtain a preset video frame image.
The preset video frame images are images collected from the video stream that currently needs to be analyzed and played. Multiple frames, rather than a single frame, need to be collected from the video stream so that the category of the corresponding video stream can be distinguished from the images. The video classification result generally refers to the subject category of the video, such as military/war, emotion, children or drama; other categories may also be included and are defined according to the actual situation, which is not repeated here.
The preset lightweight classification model is preferably a trained image classification model, so that the video classification result can be obtained directly. It is a lightweight model obtained through fine-tuning optimization and a structured pruning operation, which improves classification accuracy while keeping the model computation amount under control and guaranteeing processing speed.
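The per-frame classification in step 101 can be sketched as follows; this is a minimal illustration assuming the lightweight model has already produced one label per sampled frame, with the video-level result fused by a simple majority vote (the fusion rule is an assumption for illustration, not specified by the application):

```python
from collections import Counter

# Hypothetical category names; the application mentions military, emotion,
# children (kids) and drama as typical subject categories.
CATEGORIES = ["military", "emotion", "kids", "drama"]

def aggregate_frame_predictions(frame_labels):
    """Fuse per-frame classifier outputs into one video-level result
    by majority vote; ties resolve to the label seen first."""
    if not frame_labels:
        raise ValueError("need at least one classified frame")
    counts = Counter(frame_labels)
    return counts.most_common(1)[0][0]

# Ten sampled frames, i.e. one 2-second window at the rates described below.
preds = ["kids", "kids", "drama", "kids", "kids",
         "kids", "emotion", "kids", "kids", "drama"]
video_category = aggregate_frame_predictions(preds)
```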
Most existing video classification models are convolutional neural networks based on deep learning. As neural network structures have been continuously improved, their performance on video image recognition has become very good; however, most of these networks have complex layers and are trained on GPUs, which makes them ill-suited to running on devices with limited hardware resources. In this embodiment, the neural network model is cut and compressed to remove unnecessary redundant parameters and reduce computational complexity, so that intelligent video classification and recognition can be realized on lower-end hardware. A convolutional network generally processes data through convolution layers, pooling layers and fully connected layers, where the convolution layers mainly extract image features. A video image is 3-channel RGB data. Assuming the frame image data set obtained after sampling consists of 3-channel pictures of H1 × W1 × 3, convolution with a kernel of size k × k yields feature maps of L = (H1 - k + 1) × (W1 - k + 1) elements. For example, with H1 = 32, W1 = 32, k = 2 (a 2 × 2 kernel) and S = 128 convolution kernels, the number of convolution computations is L × S = 31 × 31 × 128 = 123008. The feature maps output after the convolution computation are passed through an activation function to obtain an effective feature data set.
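The feature-map arithmetic above can be checked with a short sketch (the helper name is illustrative):

```python
def conv_output_side(side, k, stride=1, padding=0):
    # "Valid" convolution output size: (side - k + 2*padding) / stride + 1
    return (side - k + 2 * padding) // stride + 1

H1, W1, k, S = 32, 32, 2, 128            # values from the worked example
L = conv_output_side(H1, k) * conv_output_side(W1, k)   # 31 * 31 = 961
total_computations = L * S                # 961 * 128 = 123008
```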
To cut redundant parameters and reduce the computation amount of the model, a pooling layer is added after the convolution layer. A maximum pooling method reduces the dimensionality of the matrix input parameters while retaining the data with the most prominent features recorded in the feature values; this is the model fine-tuning step. Pooling uses a filter of scale 2, selecting 2 × 2 regions with a stride of 2 as the max-pooling hyper-parameters, so H and W of the pooled model are each compressed to 1/2. The pooled model is then pruned by a structured pruning method: non-zero parameters in the pooled feature data set are placed at specified positions, and channel-level pruning is performed on the model. If a kernel K_i^l in the two-dimensional data (where l denotes the l-th layer and i the i-th neuron) is pruned, the l-th layer no longer outputs one (H1 - k + 1) × (W1 - k + 1) feature map, saving (H1 - k + 1) × (W1 - k + 1) × k^2 operations, and the (l + 1)-th layer saves the (H1 - k - k_{l+1} + 2) × (W1 - k - k_{l+1} + 2) × k_{l+1}^2 operations on that pruned channel. A pruned convolution kernel in layer l thus removes all subsequent convolution computations that depend on it, reducing the model size considerably. Referring to fig. 4, the key point of the model improvement is how to determine whether the neurons in the network meet the pruning condition: the continuously adjusted network parameters are evaluated through the activation function (ReLU) as the data set is propagated forward through the neural network, and for a given network node, if a large proportion of the values passing through the activation function are 0 or below a certain threshold, the node is discarded. Compression and pruning are achieved by repeating this operation. The models are trained repeatedly with historical video frame images, and after a preset number of training iterations the best-performing model is selected as the preset lightweight classification model.
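The pruning criterion described above (discard a node when most of its post-ReLU values are zero or below a threshold) can be sketched as follows; the specific threshold and dead-fraction values are assumptions for illustration, as the application does not fix them:

```python
def relu(x):
    # Rectified linear unit used to evaluate forward-propagated values.
    return x if x > 0.0 else 0.0

def should_prune(activations, threshold=1e-3, dead_fraction=0.9):
    """Mark a neuron/filter as prunable when at least `dead_fraction` of its
    post-ReLU activations over a calibration pass fall below `threshold`."""
    post = [relu(a) for a in activations]
    dead = sum(1 for v in post if v < threshold)
    return dead / len(post) >= dead_fraction

# A mostly inactive filter is pruned; a consistently active one is kept.
mostly_dead = [-0.5, 0.0, 0.0002, -1.2, 0.0, 0.0, -0.3, 0.0, 0.0, 0.0001]
active = [0.8, 1.2, 0.4, 0.9, 1.1, 0.7, 0.6, 1.3, 0.5, 0.9]
```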
In addition, the computation amount can also be reduced, and the processing speed improved, from the angle of data quantization. Image data after sampling and feature extraction is generally floating-point (32-bit); to reduce the computation amount, the data is quantized to 8 bits before being input into the model. This quantization of the data reduces the computation amount and the resource occupation of the local player.
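A minimal sketch of the 8-bit quantization step, assuming a simple affine (min/max) scheme, which the application does not specify in detail:

```python
def quantize_uint8(values):
    """Affine quantization of 32-bit float data to 8-bit integers [0, 255]."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = [round((v - lo) / scale) for v in values]
    return q, scale, lo

def dequantize_uint8(q, scale, zero_point):
    # Recover approximate float values, e.g. for inspecting the error.
    return [v * scale + zero_point for v in q]

q, scale, zero = quantize_uint8([0.0, 0.5, 1.0])
```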
To illustrate the video frame sampling process, this embodiment takes a video stream containing 25 frames per second as an example. Because the feature difference between two consecutive frames is small, it is generally accepted in the industry that feature recognition of video images needs at least 10 non-consecutive frames. Therefore, frames are extracted with a sampling period of 5 frames, and every 2 sampling-period units (here, 2 seconds, i.e., 10 frames) the sampled images are sent to the video image classification and recognition model in batches of 10 frames, until the video stream finishes playing. In total 10 × n video frame images are sent, where n depends on the frame rate (fps) of the video stream, the sampling period and the sampling duration; here n = sampling duration in minutes × 60 s / 2 s.
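The sampling scheme above (25 fps, a 5-frame sampling period, batches of 10 frames every 2 seconds) can be sketched as:

```python
def sample_frame_indices(fps=25, period_frames=5, window_s=2, duration_s=6):
    """Pick every `period_frames`-th frame from a `duration_s`-second stream
    and group the picks into `window_s`-second batches
    (10 frames per batch at 25 fps with a 5-frame period)."""
    picked = list(range(0, fps * duration_s, period_frames))
    per_batch = fps * window_s // period_frames   # 25 * 2 / 5 = 10
    return [picked[i:i + per_batch] for i in range(0, len(picked), per_batch)]

# A 6-second stream yields 3 batches of 10 frame indices each.
batches = sample_frame_indices()
```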
Considering that this embodiment performs image recognition at the local playing end, and the hardware processing capacity and video image storage throughput of the playing device (a set-top box or television) are much smaller than those of a cloud or middle platform, the existing large-scale video image classification and recognition model is pruned, and only 8-bit data is fed to its input. The improved image recognition model maintains a recognition rate above 95% while the CPU and memory it occupies are significantly reduced, matching computing power and recognition rate to the hardware capability of the playing end.
And 102, searching corresponding playing audio-visual parameters in a preset parameter matching table based on the video classification result, wherein the preset parameter matching table is a related corresponding list of video categories and the playing audio-visual parameters, and the playing audio-visual parameters comprise display brightness, contrast, color saturation and sound parameters.
Further, before step 102, the method further includes:
performing a video category division operation on an unknown video frame image based on a preset video category table to obtain an assigned video category, wherein the preset video category table is an association list of video names, video categories and category IDs;
and constructing an association relation between the video name corresponding to the unknown video frame image, the assigned video category and the playing audiovisual parameters, and storing the association relation as the preset parameter matching table.
For the content of the preset video category table, refer to table 1; for the content of the preset parameter matching table, refer to table 2. In general, the two tables are used to construct and store video information associations, and information is linked between them through the video category, which facilitates complete queries. The preset video category table is used to categorize unknown video frame images, and the preset parameter matching table is used to query the playing audiovisual parameters for a classified video.
TABLE 1 Preset video category table (reproduced as an image in the original publication; it associates video names, video categories and category IDs)

TABLE 2 Preset parameter matching table (reproduced as an image in the original publication; it associates video names, video categories and playing audiovisual parameters)
It can be understood that the video name is generally the title, the category is the category to which the video belongs, the category ID is a unique code for the video category, and the playing audiovisual parameters are the adjustment parameters used when the player plays the video. Referring to fig. 5, a specific formation process of the preset parameter matching table is shown, wherein the "category - audiovisual parameter setting table" corresponds to the preset video category table, and the "video title - category - play parameter table" corresponds to the preset parameter matching table.
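Step 102's table lookup can be sketched as a plain dictionary query; the concrete parameter values below are hypothetical, since the application names only the parameter kinds (brightness, contrast, saturation, sound), not specific numbers:

```python
# Hypothetical parameter values for illustration only.
PARAM_TABLE = {
    "military": {"brightness": 55, "contrast": 60, "saturation": 50,
                 "sound": "stereo-surround"},
    "kids":     {"brightness": 40, "contrast": 45, "saturation": 50,
                 "sound": "normal"},   # eye-protection mode: soft brightness
    "drama":    {"brightness": 50, "contrast": 50, "saturation": 65,
                 "sound": "normal"},   # boosted color saturation
}

def lookup_play_params(category, table=PARAM_TABLE):
    """Map a video classification result to its playing audiovisual
    parameters; None signals that the video must be classified first."""
    return table.get(category)
```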
And 103, inputting the playing audio-visual parameters into the playing controller to carry out video playing operation.
It can be understood that the playing controller comprises an audio controller and a video controller, and the playing effect of different types of videos is adjusted by controlling the playing audiovisual parameters of the playing controller, so that the audiovisual display effect of the playing terminal is improved.
For ease of understanding, an application example is provided, taking a user ordering a program with a remote controller as an example. With the method of this application, a video frame periodic sampler and the preset lightweight classification model are loaded at the playing client (a set-top box or television), and a "video category - playing parameter" correspondence table and a "program name, category and playing parameter" table are built in. When the user selects a program, i.e., a video name, the player first queries the local "program name, category and playing parameter" table for the corresponding playing parameters. If none exist, the player starts sampling the program with a sampling period of 5 frames, and every 2 sampling periods (2 s, 10 frames) continuously sends the sampled video frame images into the local preset lightweight classification model at the playing end until the program finishes playing. The model outputs a category, such as military, children or drama; the classification result is then combined with the video name and the "video category - playing parameter" correspondence table, and stored in the "program name, category and playing parameter" table to obtain the final audiovisual parameters. When the user requests the program again, the player queries the local "program name, category and playing parameter" table to obtain the corresponding playing parameters and transmits these audiovisual playing parameters to the audio and video controllers to complete the audiovisual effect setting. For the user, the local player seamlessly provides an audiovisual experience consistent with that delivered from the cloud. The specific process is shown in fig. 6.
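The client-side flow of this example can be sketched as follows; all function and table names are assumptions for illustration, not from the application:

```python
def play_program(name, local_table, classify_fn, category_params, apply_fn):
    """Consult the local name -> (category, params) table first; on a miss,
    classify sampled frames on-device and cache the result, then hand the
    parameters to the audio/video controller."""
    if name not in local_table:
        category = classify_fn(name)          # sampler + lightweight model
        local_table[name] = (category, category_params[category])
    category, params = local_table[name]
    apply_fn(params)                          # audio/video controller setting
    return category

applied = []
table = {}
params_by_cat = {"kids": {"brightness": 40, "sound": "normal"}}
first = play_program("Cartoon City", table, lambda _: "kids",
                     params_by_cat, applied.append)
# The second request hits the cache; the classifier result is not recomputed.
second = play_program("Cartoon City", table, lambda _: "unused",
                      params_by_cat, applied.append)
```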
In the streaming media video playing method for a client provided by this embodiment, video frame images are classified at the client, the playing audiovisual parameters for that type of video are found in the preset parameter matching table according to the video classification result, and video playing is completed directly based on those parameters. No repeated interaction with the video playing platform is needed to obtain the playing parameters, and the playing terminal does not need to be replaced; meanwhile, adopting a lightweight classification model improves classification accuracy while ensuring a small computation amount and high processing speed. This solves the technical problem that the existing manually assisted playing mode is time-consuming and prone to errors and omissions, which degrades the video playing experience at the client.
For easy understanding, please refer to fig. 2, the present application provides an embodiment of a streaming video playing apparatus for a client, including:
the category classification module 201 is configured to input a preset video frame image into a preset lightweight classification model to perform video category classification operation, so as to obtain a video classification result;
the parameter matching module 202 is configured to search a corresponding playing audiovisual parameter in a preset parameter matching table based on the video classification result, where the preset parameter matching table is an associated corresponding list of video categories and playing audiovisual parameters, and the playing audiovisual parameter includes display brightness, contrast, color saturation, and sound parameter;
the video playing module 203 is configured to input the playing audiovisual parameters into the playing controller to perform video playing operation.
Further, still include:
the image sampling module 204 is configured to collect multiple frames of video images in a video stream based on a preset sampling period to obtain a preset video frame image.
Further, the generation process of the preset lightweight classification model is as follows:
performing fine-tuning optimization on a preset initial classification network model based on a maximum pooling method to obtain an optimized classification network model;
and performing a structured pruning operation on the optimized classification network model, and performing model training according to the historical video frame images to obtain the preset lightweight classification model.
Further, still include:
the unknown classification module 205 is configured to perform a video classification operation on an unknown video frame image based on a preset video classification table to obtain a video drawn classification, where the preset video classification table is an association list of video names, video classifications, and classification IDs;
and the key construction module 206 is configured to construct an association relationship between a video name corresponding to an unknown video frame image and a video drawing category and a playing audiovisual parameter, and store the association relationship as a preset parameter matching table.
The application also provides streaming media video playing equipment for the client, wherein the equipment comprises a processor and a memory;
the memory is used for storing the program codes and transmitting the program codes to the processor;
the processor is used for executing the streaming media video playing method for the client in the above method embodiment according to the instructions in the program code.
The present application also provides a computer-readable storage medium for storing program codes, where the program codes are used to execute the streaming video playing method for the client in the foregoing method embodiments.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative: the division into units is only one kind of logical functional division, and other divisions are possible in practice; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices, or units, and may be electrical, mechanical, or of another form.
The units described as separate parts may or may not be physically separate, and components shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of this application — in essence, or the part that contributes over the prior art, or all or part of the technical solution — may be embodied as a software product. The software product is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods described in the embodiments of this application. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disc.
The above embodiments are only intended to illustrate the technical solutions of this application, not to limit them. Although this application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of this application.

Claims (10)

1. A streaming media video playing method for a client, comprising:
inputting a preset video frame image into a preset lightweight classification model to perform a video category classification operation, obtaining a video classification result;
searching a preset parameter matching table for the corresponding playing audio-visual parameters based on the video classification result, wherein the preset parameter matching table is an association list mapping video categories to playing audio-visual parameters, and the playing audio-visual parameters comprise display brightness, contrast, color saturation and sound parameters;
and inputting the playing audio-visual parameters into a play controller to perform the video playing operation.
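The classify-then-look-up pipeline of claim 1 can be sketched as follows. This is an illustrative sketch only, not the patented implementation: the model stub, table contents, and function names are all hypothetical stand-ins.

```python
# Preset parameter matching table (hypothetical values): video category ->
# playing audio-visual parameters (display brightness, contrast, color
# saturation, and a sound parameter).
PARAM_TABLE = {
    "sports": {"brightness": 0.8, "contrast": 0.7, "saturation": 0.9, "volume": 0.8},
    "movie":  {"brightness": 0.5, "contrast": 0.6, "saturation": 0.6, "volume": 0.6},
    "news":   {"brightness": 0.6, "contrast": 0.5, "saturation": 0.5, "volume": 0.7},
}

def classify_frame(frame):
    """Stand-in for the preset lightweight classification model.

    A real model would run inference on the frame pixels; here we fake a
    deterministic decision so the pipeline shape is visible.
    """
    return "sports" if sum(frame) % 2 == 0 else "movie"

def play(frame, table=PARAM_TABLE, default_category="movie"):
    """Classify the frame, look up its playing parameters, and return both."""
    category = classify_frame(frame)
    # Fall back to a default category if the table has no entry.
    params = table.get(category, table[default_category])
    # A real client would now hand `params` to the play controller.
    return category, params
```

A lookup in a small dictionary keeps the per-frame cost negligible next to model inference, which is presumably why the patent separates classification from parameter selection.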
2. The streaming media video playing method for a client according to claim 1, wherein before inputting the preset video frame image into the preset lightweight classification model to perform the video category classification operation, the method further comprises:
acquiring multiple frames of video images from the video stream at a preset sampling period to obtain the preset video frame image.
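Periodic sampling as in claim 2 can be sketched in a few lines; the function name and the use of a plain list as a stand-in for a decoded stream are illustrative assumptions.

```python
def sample_frames(video_stream, period):
    """Keep one frame out of every `period` frames (the preset sampling period)."""
    return [frame for i, frame in enumerate(video_stream) if i % period == 0]

# Stand-in for decoded frames 0..9; a real stream would yield image arrays.
stream = list(range(10))
sampled = sample_frames(stream, 3)
```

Sampling at a fixed period rather than classifying every frame bounds the classification workload regardless of frame rate.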
3. The streaming media video playing method for a client according to claim 1, wherein the preset lightweight classification model is generated by:
performing fine-tuning optimization on a preset initial classification network model based on max pooling to obtain an optimized classification network model;
and performing a structured pruning operation on the optimized classification network model, then training the pruned model on historical video frame images to obtain the preset lightweight classification model.
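One common form of the structured pruning named in claim 3 removes whole convolution filters ranked by L1 norm. The patent does not specify its pruning criterion, so the sketch below — dropping the filters with the smallest L1 norms — is an assumed, simplified illustration using plain lists in place of tensors.

```python
def l1_norm(filt):
    """L1 norm of one filter, given as a flat list of weights."""
    return sum(abs(w) for w in filt)

def prune_filters(filters, keep_ratio):
    """Structured pruning: keep only the `keep_ratio` fraction of filters
    with the largest L1 norms, discarding the rest whole."""
    keep = max(1, int(len(filters) * keep_ratio))
    return sorted(filters, key=l1_norm, reverse=True)[:keep]

# Four toy filters; norms are 1.7, 0.15, 0.9, 0.03.
layer = [[0.9, -0.8], [0.1, 0.05], [0.5, -0.4], [0.01, 0.02]]
pruned = prune_filters(layer, 0.5)  # keep the 2 strongest filters
```

Because entire filters are removed, the pruned layer keeps a dense, regular shape — which is what makes structured (as opposed to unstructured) pruning attractive for a lightweight client-side model.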
4. The streaming media video playing method for a client according to claim 1, wherein before searching the preset parameter matching table for the corresponding playing audio-visual parameters based on the video classification result, the method further comprises:
performing a video category division operation on an unknown video frame image based on a preset video category table to obtain a video division category, wherein the preset video category table is an association list of video names, video categories and category IDs;
and constructing an association between the video name corresponding to the unknown video frame image and the video division category and the playing audio-visual parameters, and storing the association as the preset parameter matching table.
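The table construction of claim 4 — dividing an unknown video into a category via the video category table, then storing a name-to-category-and-parameters association — can be sketched as below. All names, IDs, and parameter values are hypothetical.

```python
# Preset video category table (hypothetical): video name -> (category, category ID).
VIDEO_CATEGORY_TABLE = {
    "match_highlights": ("sports", 1),
    "evening_news": ("news", 2),
}

# Hypothetical per-category playing audio-visual parameters.
CATEGORY_PARAMS = {
    "sports": {"brightness": 0.8, "contrast": 0.7, "saturation": 0.9, "volume": 0.8},
    "news":   {"brightness": 0.6, "contrast": 0.5, "saturation": 0.5, "volume": 0.7},
}

def register_unknown(video_name, param_table):
    """Divide an unknown video into a category and store the association
    video name -> (division category, playing audio-visual parameters)."""
    category, _cat_id = VIDEO_CATEGORY_TABLE.get(video_name, ("news", 2))
    param_table[video_name] = (category, CATEGORY_PARAMS[category])
    return param_table

table = register_unknown("match_highlights", {})
```

Once registered this way, later lookups for the same video name hit the parameter matching table directly and skip re-classification.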
5. A streaming media video playing apparatus for a client, comprising:
a classification module, configured to input a preset video frame image into a preset lightweight classification model to perform a video category classification operation, obtaining a video classification result;
a parameter matching module, configured to search a preset parameter matching table for the corresponding playing audio-visual parameters based on the video classification result, wherein the preset parameter matching table is an association list mapping video categories to playing audio-visual parameters, and the playing audio-visual parameters comprise display brightness, contrast, color saturation and sound parameters;
and a video playing module, configured to input the playing audio-visual parameters into a play controller to perform the video playing operation.
6. The streaming media video playing apparatus for a client according to claim 5, further comprising:
an image sampling module, configured to acquire multiple frames of video images from the video stream at a preset sampling period to obtain the preset video frame image.
7. The streaming media video playing apparatus for a client according to claim 5, wherein the preset lightweight classification model is generated by:
performing fine-tuning optimization on a preset initial classification network model based on max pooling to obtain an optimized classification network model;
and performing a structured pruning operation on the optimized classification network model, then training the pruned model on historical video frame images to obtain the preset lightweight classification model.
8. The streaming media video playing apparatus for a client according to claim 5, further comprising:
an unknown classification module, configured to perform a video category division operation on an unknown video frame image based on a preset video category table to obtain a video division category, wherein the preset video category table is an association list of video names, video categories and category IDs;
and an association construction module, configured to construct an association between the video name corresponding to the unknown video frame image and the video division category and the playing audio-visual parameters, and store the association as the preset parameter matching table.
9. A streaming media video playing device for a client, the device comprising a processor and a memory;
the memory is configured to store program code and transmit the program code to the processor;
and the processor is configured to execute, according to instructions in the program code, the streaming media video playing method for a client according to any one of claims 1 to 4.
10. A computer-readable storage medium, configured to store program code for executing the streaming media video playing method for a client according to any one of claims 1 to 4.
CN202211355718.3A 2022-11-01 2022-11-01 Streaming media video playing method and related device for client Pending CN115687696A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211355718.3A CN115687696A (en) 2022-11-01 2022-11-01 Streaming media video playing method and related device for client


Publications (1)

Publication Number Publication Date
CN115687696A true CN115687696A (en) 2023-02-03

Family

ID=85048245



Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117219003A (en) * 2023-11-09 2023-12-12 深圳市东陆科技有限公司 Content display method and device of LED display module
CN117219003B (en) * 2023-11-09 2024-03-12 深圳市东陆科技有限公司 Content display method and device of LED display module


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination