CN117459764A - Multichannel intelligent screen display method, device, equipment and storage medium - Google Patents


Info

Publication number
CN117459764A
CN117459764A
Authority
CN
China
Prior art keywords
signal, screen display, video playing, data, target video
Prior art date
Legal status (assumed; Google has not performed a legal analysis)
Granted
Application number
CN202311777139.2A
Other languages
Chinese (zh)
Other versions
CN117459764B (en)
Inventor
吴军
王博江
潘会湘
Current Assignee
Shenzhen Kontech Electronics Co ltd
Original Assignee
Shenzhen Kontech Electronics Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Kontech Electronics Co ltd
Priority to CN202311777139.2A
Publication of CN117459764A
Application granted
Publication of CN117459764B
Legal status: Active


Classifications

    • H — Electricity
    • H04 — Electric communication technique
    • H04N — Pictorial communication, e.g. television
    • H04N21/00 — Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/26216 — Content or additional data distribution scheduling performed under constraints involving the channel capacity, e.g. network bandwidth
    • H04N21/23439 — Reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements, for generating different versions
    • H04N21/2402 — Monitoring of the downstream path of the transmission network, e.g. bandwidth available
    • H04N21/2662 — Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
    • H04N21/44029 — Reformatting operations of video signals for household redistribution, storage or real-time display, for generating different versions
    • H04N21/44209 — Monitoring of the downstream path of the transmission network originating from a server, e.g. bandwidth variations of a wireless network
    • H04N21/44227 — Monitoring of the local network, e.g. connection or bandwidth variations; detecting new devices in the local network
    • H04N21/4621 — Controlling the complexity of the content stream or additional data, e.g. lowering the resolution or bit-rate of the video stream for a mobile client with a small screen
    • H04N21/6405 — Multicasting
    • H04N21/64738 — Monitoring network characteristics, e.g. bandwidth, congestion level

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application relates to the technical field of screen display and discloses a multichannel intelligent screen display method, device, equipment and storage medium. The method comprises the following steps: performing video display signal channel quality analysis on a plurality of user intelligent display terminals to obtain signal channel quality data; performing group division on the multichannel screen display combination to obtain a plurality of first screen display combinations; calculating the data receiving rate of each combination to obtain target rate evaluation indexes, and performing video signal processing on each target video playing signal to obtain a corresponding coded video playing signal set; performing resource allocation on each target video playing signal to obtain a corresponding second screen display combination; and decoding the video display signals of the target video playing signals and transmitting them to the intelligent screen corresponding to each user intelligent display terminal for video playing, thereby improving the accuracy of multichannel intelligent screen display.

Description

Multichannel intelligent screen display method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of screen display technologies, and in particular, to a method, an apparatus, a device, and a storage medium for displaying a multichannel intelligent screen.
Background
With the continuous development of the internet and communication technology, the transmission and distribution of multimedia content has become one of the main applications of today's networks. Video playing is one of its most important application fields, with a huge user base and continuously growing demand for high-quality, low-delay video transmission. Conventional video transmission methods struggle to meet these demands, especially under network congestion and simultaneous access by many users.
Researchers and engineers have adopted various approaches to improve the efficiency and user experience of multimedia transmission. Multicast technology is widely used because it can effectively reduce network congestion and bandwidth requirements, but conventional multicast suffers from the "bottleneck user" problem: within a multicast group, a user with poor channel quality limits the data rate of the entire group, degrading the experience for all of its members.
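The bottleneck-user effect can be shown with a short sketch (the per-user rates are illustrative values, not taken from the patent): in a single multicast group the data rate is capped by the worst channel, while splitting users into quality-based groups lifts that cap for everyone else.

```python
# Illustrative sketch of the "bottleneck user" problem in multicast.
# Rates are hypothetical per-user achievable rates in Mbit/s.
user_rates = {"A": 20.0, "B": 18.0, "C": 19.0, "D": 2.0}  # D has a poor channel

# One multicast group: everyone is limited by the slowest receiver.
single_group_rate = min(user_rates.values())

# Quality-based grouping (as in the method above): D is isolated,
# so A, B and C are no longer capped at D's rate.
good_group = {u: r for u, r in user_rates.items() if r >= 10.0}
good_group_rate = min(good_group.values())

print(single_group_rate)  # 2.0  -> all users stuck at 2 Mbit/s
print(good_group_rate)    # 18.0 -> the good group runs 9x faster
```

The 10 Mbit/s split point is arbitrary here; the method described below derives the grouping from measured signal channel quality data instead.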
Disclosure of Invention
The application provides a multichannel intelligent screen display method, device, equipment and storage medium, which improve the accuracy of multichannel intelligent screen display.
The first aspect of the present application provides a multi-channel intelligent screen display method, which includes:
Acquiring a plurality of user intelligent display terminals in a target area, and simultaneously, carrying out video display signal channel quality analysis on the plurality of user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal;
constructing a multi-channel screen display combination according to a plurality of the user intelligent display terminals, and carrying out group division on the multi-channel screen display combination through signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations;
respectively carrying out data receiving rate calculation on each first screen display combination to obtain a target rate evaluation index corresponding to each first screen display combination, and collecting video playing signals from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals;
video signal processing is carried out on each target video playing signal respectively, and a coded video playing signal set corresponding to each target video playing signal is obtained;
performing resource allocation on each target video playing signal based on the coded video playing signal set corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain a second screen display combination corresponding to each target video playing signal;
And decoding video display signals of a plurality of target video playing signals through second screen display combinations corresponding to the target video playing signals, and transmitting the video display signals to intelligent screens corresponding to the intelligent display terminals of the users to play the video.
A second aspect of the present application provides a multi-channel smart screen display device, comprising:
the acquisition module is used for acquiring a plurality of user intelligent display terminals in a target area, and simultaneously, carrying out video display signal channel quality analysis on the plurality of user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal;
the construction module is used for constructing multi-channel screen display combinations according to a plurality of the user intelligent display terminals, and carrying out group division on the multi-channel screen display combinations through signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations;
the computing module is used for respectively carrying out data receiving rate computation on each first screen display combination to obtain a target rate evaluation index corresponding to each first screen display combination, and collecting video playing signals from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals;
The processing module is used for respectively carrying out video signal processing on each target video playing signal to obtain a coded video playing signal set corresponding to each target video playing signal;
the distribution module is used for distributing resources to each target video playing signal based on the coded video playing signal set corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain a second screen display combination corresponding to each target video playing signal;
and the transmission module is used for decoding the video display signals of the plurality of target video playing signals through the second screen display combination corresponding to each target video playing signal and transmitting the video display signals to the intelligent screen corresponding to each user intelligent display terminal for video playing.
A third aspect of the present application provides a multi-channel smart screen display device, comprising: a memory and at least one processor, the memory having instructions stored therein; the at least one processor invokes the instructions in the memory to cause the multi-channel smart screen display device to perform the multi-channel smart screen display method described above.
A fourth aspect of the present application provides a computer readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the above-described multi-channel smart screen display method.
According to the technical scheme, first screen display combinations of different quality levels are built for users according to the signal channel quality data, and multicast resources are allocated accordingly, so the system can ensure that each user receives video quality suited to that user's equipment and network conditions. By considering the signal channel quality data and the target rate evaluation index, the system automatically adapts to the transmission requirements under different network conditions; this means stable video transmission can be provided even under network congestion or poor channel quality. Scalable video coding allows the system to create a set of coded video playback signals at different quality levels for each target video playback signal. The multicast resource allocation algorithm allocates resources intelligently by comprehensively considering several factors, such as channel quality, data transmission rate and resource fairness. By performing channel signal detection, group division and target rate evaluation index calculation for the user intelligent display terminals, resources can be allocated effectively according to the requirements and channel quality of different users, ensuring efficient use of resources. This helps to minimise resource waste and improve the efficiency of multicast transmission.
Drawings
FIG. 1 is a schematic diagram of one embodiment of a multi-channel smart screen display method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an embodiment of a multi-channel smart screen display device according to an embodiment of the present application.
Detailed Description
The embodiment of the application provides a multichannel intelligent screen display method, device, equipment and storage medium, which improve the accuracy of multichannel intelligent screen display.
The terms "first," "second," "third," "fourth" and the like in the description, claims and drawings of this application, if any, are used to distinguish between similar objects and do not necessarily describe a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate, such that the embodiments described herein may be implemented in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements, but may include other steps or elements not expressly listed or inherent to such process, method, article or apparatus.
For ease of understanding, the following describes a specific flow of an embodiment of the present application, referring to fig. 1, and one embodiment of a multi-channel smart screen display method in an embodiment of the present application includes:
s101, acquiring a plurality of user intelligent display terminals in a target area, and simultaneously, carrying out video display signal channel quality analysis on the plurality of user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal;
it can be understood that the execution body of the application may be a multi-channel smart screen display device, and may also be a terminal or a server, which is not limited herein. The embodiment of the present application will be described by taking a server as an execution body.
Specifically, the server acquires a plurality of user intelligent display terminals in the target area. This may be achieved through network monitoring or through devices connected to the server. Video transmission protocol analysis is then performed: the server determines the video transmission protocol that each user intelligent display terminal is using, since different protocols (for example HTTP, RTSP or UDP) affect the signal channel quality data differently. This analysis may be implemented with network packet analysis tools or protocol identification algorithms. The server then sends a detection video signal to each user according to that user's video transmission protocol, for example by requesting a video stream or a specific test video. Meanwhile, the server collects the video presentation data corresponding to each user, either by capturing the video content on the user's screen or by monitoring the network data stream.

The server then performs signal-to-noise ratio calculation, analysing each user's video presentation data: a high signal-to-noise ratio generally indicates good video quality, while a low one means there is noise or distortion in the video. It also performs packet loss rate calculation to determine the extent to which data packets are lost during transmission; a high packet loss rate leads to buffering and reduced picture quality during video playback. At the same time, signal strength calculation detects the user's network or transmission signal strength: a strong signal generally represents a more stable connection, while a weak signal may result in video interruption or low quality. Finally, the server performs video channel signal analysis, a comprehensive step covering the signal-to-noise ratio, packet loss rate and signal strength data.
By integrating these data, the server assigns a signal channel quality data score to each user's intelligent display terminal, thereby identifying which users require more bandwidth or signal enhancement.
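A minimal sketch of how such a composite channel-quality score might be computed from the three measurements; the weights and normalisation ranges are illustrative assumptions, since the patent does not specify them:

```python
def channel_quality_score(snr_db: float, packet_loss: float, signal_dbm: float) -> float:
    """Combine SNR, packet-loss rate and signal strength into one 0-100 score.

    Assumed normalisation ranges (illustrative): SNR 0-40 dB,
    packet loss 0-1, signal strength -100 to -30 dBm.
    """
    snr_norm = max(0.0, min(snr_db / 40.0, 1.0))
    loss_norm = 1.0 - max(0.0, min(packet_loss, 1.0))
    sig_norm = max(0.0, min((signal_dbm + 100.0) / 70.0, 1.0))
    # Assumed weighting: SNR and packet loss matter most for video quality.
    return 100.0 * (0.4 * snr_norm + 0.4 * loss_norm + 0.2 * sig_norm)

# A terminal with a clean, strong channel scores high ...
good = channel_quality_score(snr_db=32.0, packet_loss=0.01, signal_dbm=-45.0)
# ... a noisy, lossy one scores low.
poor = channel_quality_score(snr_db=8.0, packet_loss=0.25, signal_dbm=-92.0)
print(round(good, 1), round(poor, 1))
```

A real system would calibrate the weights against observed playback quality rather than fix them by hand.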
S102, constructing a multi-channel screen display combination according to a plurality of user intelligent display terminals, and carrying out group division on the multi-channel screen display combination through signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations;
specifically, the server acquires a plurality of user intelligent display terminals in the target area. This may be achieved by network monitoring, device detection, or other means. When these users are determined, the server proceeds to the next step, i.e. video display signal channel quality analysis. In the video display signal channel quality analysis stage, the server performs video transmission protocol analysis on each user intelligent display terminal to determine the transmission protocol they are using. This helps to understand the way the user is transmitting data, e.g. HTTP, RTSP, UDP, etc. Based on the video transmission protocol of each user's smart display terminal, the server transmits a detection video signal to them and simultaneously collects their video presentation data. These data will be used in subsequent steps to evaluate the quality of the video signal. And the server calculates the signal-to-noise ratio of the video presentation data of each user intelligent display terminal. This process involves analyzing the image sharpness and noise level to determine signal quality. Meanwhile, the server calculates the data packet loss rate so as to know the stability of data transmission. High packet loss rates can lead to lost frames or picture stuck in the video, affecting the user experience. In addition, the server calculates the signal strength of the video presentation data of each user intelligent display terminal so as to evaluate the network connection quality of the video presentation data. This is one of the important factors to ensure smooth playback. The server performs video channel signal analysis by comprehensively analyzing the signal-to-noise ratio data, the data packet loss rate data and the signal intensity data of each user. This analysis takes into account various factors including signal quality, stability and network connectivity to determine the signal path quality data for each user. 
And the server constructs a multi-channel screen display combination for the user according to the signal channel quality data. This involves grouping users to ensure that users within the same group have similar signal quality. In this process, the server may also calculate location range data for each user and use the data for group division. This helps ensure that users with similar geographic locations are assigned to the same multi-channel screen display combination, thereby reducing latency. Based on the initial multi-channel screen display combination, the server performs playback parameter analysis for each group to take into account bandwidth, resolution, encoding, etc. This ensures that users within each group are able to obtain an optimal playback experience depending on their needs and network conditions. Finally, the server ensures that each user can enjoy high quality video playback by continually optimizing the distribution of playback parameters and multi-channel screen display combinations. This helps to provide a better user experience, especially in the case of multi-user, multi-channel.
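One plausible way to realise the group division described above is simple threshold bucketing on the channel-quality score, so that terminals with similar quality land in the same first screen display combination. The thresholds, group names and example terminals are illustrative assumptions:

```python
from collections import defaultdict

def divide_into_groups(terminal_scores: dict[str, float],
                       thresholds=(75.0, 50.0, 25.0)) -> dict[str, list[str]]:
    """Bucket terminals into 'first screen display combinations' by score.

    Terminals with similar channel quality land in the same group, so a
    poor channel no longer caps the data rate of the whole multicast.
    """
    groups: dict[str, list[str]] = defaultdict(list)
    for terminal, score in sorted(terminal_scores.items()):
        for i, t in enumerate(thresholds):
            if score >= t:
                groups[f"group_{i}"].append(terminal)
                break
        else:  # below every threshold: lowest-quality group
            groups[f"group_{len(thresholds)}"].append(terminal)
    return dict(groups)

scores = {"tv_livingroom": 88.0, "tablet": 81.0, "phone_garden": 42.0, "tv_attic": 18.0}
print(divide_into_groups(scores))
```

The description also mentions using location range data in the division; that could be added as a secondary key (e.g. bucketing first by region, then by score) without changing the structure above.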
S103, respectively carrying out data receiving rate calculation on each first screen display combination to obtain target rate evaluation indexes corresponding to each first screen display combination, and collecting video playing signals from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals;
specifically, the viewer is divided into a plurality of first screen display combinations according to a certain standard. These criteria may include geographic location, network bandwidth, device type, etc. The purpose of the grouping is to ensure that the users within each group have similar viewing conditions and requirements. For each first screen display combination, a target rate evaluation index thereof is calculated. This calculation needs to take into account the network bandwidth, the number of members of the first screen display combination, and the bandwidth requirements of each member. The target rate rating index determines at what rate each first screen display combination should receive data to ensure high quality video playback. And collecting video playing signal data from a plurality of preset video databases. These video databases may be different cameras or online streaming services. The target video playback signal is typically in a digital format and may be a video file, a real-time stream, or other media data. The collected target video playing signals need to be transmitted to the user equipment of each first screen display combination through a network. This may be achieved by multicast, unicast or other data transmission methods. The bandwidth allocation and data transfer should be adjusted according to the target rate assessment indicator to ensure that each first screen display combination is able to receive data on demand. At each first screen display combined user side, the received target video play signal needs to be decoded and played for presentation to the user. This requires a corresponding decoder and player application. For example, assume that a server is designing an online educational platform that aims to provide students with high-quality educational content. Students are divided into a plurality of first screen display combinations according to their geographical locations. 
Students may be grouped by city or region to ensure similar network delay and bandwidth conditions. For each first screen display combination, a target rate evaluation index thereof is calculated. This calculation needs to take into account the network bandwidth, the number of members of the first screen display combination, and the bandwidth requirements of each member. The target rate evaluation index ensures that each group can receive educational video with high quality. An educational target video play signal is obtained from an educational content provider. This includes educational video courses, lectures, presentations, and the like. The content may be recorded and stored in advance on a server, or may be provided by real-time streaming media. The educational target video playback signals are transmitted to the student devices of each first screen display combination using a Content Delivery Network (CDN) or other delivery technology, ensuring that the data transmission rate meets the target rate evaluation index for each subgroup. On each student's device, the received educational target video playback signal is decoded and fed into a player application for viewing by the student. Students can choose to watch different educational content according to their own progress.
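The data receiving rate calculation in S103 can be sketched as follows; the fair-share rule and all names are illustrative assumptions rather than the claimed computation:

```python
def target_rate_index(group_bandwidth_mbps, member_demands_mbps):
    """Hypothetical target rate evaluation index for one first screen display
    combination: each member gets at most its demanded rate, capped by an
    equal share of the group's available bandwidth."""
    n = len(member_demands_mbps)
    if n == 0:
        return 0.0
    fair_share = group_bandwidth_mbps / n
    # The group's target rate is the per-member rate every member can sustain.
    return min(fair_share, *member_demands_mbps)

# Three students sharing a 12 Mbps link, demanding 5, 3 and 2 Mbps: the
# limiting member (2 Mbps) sets the common multicast rate for the group.
print(target_rate_index(12.0, [5.0, 3.0, 2.0]))  # -> 2.0
```

A scheduler would then transmit to this combination at the returned rate, so that no member is pushed beyond its own bandwidth requirement.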
S104, respectively carrying out video signal processing on each target video playing signal to obtain a coded video playing signal set corresponding to each target video playing signal;
In particular, the multi-channel target video playback signals are obtained from various video databases; these sources may include different cameras, sensors, online video streams, or other video databases. Key information is extracted from each target video playing signal. The purpose is to identify and extract key elements or objects in the video, such as people, objects, scenes, etc. The key information of each target video playing signal is subjected to first-level coding to generate base layer data. The base layer data contains the base frames, positions, shapes, and other key features of the video. This helps to ensure the basic quality and playability of the video. Video frames are compressed to reduce the amount of data using standard video coding techniques, such as H.264. After the first-level encoding, the base layer data is second-level encoded to generate enhancement layer data. The enhancement layer data contains more detail, color information, and other subtle features to enhance the sharpness and visual effects of the video. Advanced coding techniques, such as H.265 (HEVC), are used to further compress and optimize the video data. Finally, the base layer data and the enhancement layer data of each video signal are combined into an encoded video playing signal set. This set contains multi-layer encoded data of the original video signal for decoding and playback at the receiving end depending on the user's equipment and network conditions. For example, assuming that the server is developing an online video platform, the user may view various types of video content, including movies, television shows, and user-generated content, on a smart screen. The server is connected to a plurality of video databases including a movie library, a television program provider, and video uploaded by users. These video databases provide multi-channel target video playback signals. 
For each target video playback signal, the server uses computer vision and speech processing algorithms to extract key information such as the main characters, scenes, dialog, and audio quality of the video. This helps the user obtain important information while watching the video. For the identified key information, the server performs first-level encoding to generate base layer data. The base layer data contains the base frames, position and color information of the video. This level of encoding ensures the basic playability of the video. Second-level coding is performed on the base layer data to generate enhancement layer data. The enhancement layer data contains more detail, color depth, and visual effect enhancement information to provide a higher quality viewing experience. Finally, the base layer data and the enhancement layer data of each video signal are combined into an encoded video playing signal set. This set contains multiple layers of encoded data for each video to be decoded and presented on the user's smart screen to provide a high quality video playback experience.
A target video playback signal is obtained from each video source and parsed into a series of video frames. Each video frame contains image information about the video. Each video frame is analyzed using a motion detection algorithm to capture motion and position information of the object. This helps detect dynamic elements in the video, such as people, vehicles, or other objects. And combining the image data of each video frame with the corresponding motion information data to form a first data set. This data set contains a still image of each video signal and motion information associated therewith. If the video signal contains audio tracks, audio data may be extracted. Sound track analysis may identify audio features such as speech, music, or ambient sound. Time code data is extracted from the video signal to determine a time stamp of the video frame. At the same time, color information data is extracted to describe color features of the image. The sound track data, the time code data and the color information data are combined to form a second data set. This data set contains audio, time information and color information of the video signal. And finally, merging the first data set and the second data set to obtain a final key data set. This set contains key information for each target video playback signal, including image, motion, sound, time, and color data. For example, assume that a server is developing a multi-picture video playback application that allows a user to view multiple video databases simultaneously. The server obtains the multi-channel target video play signal from different video databases and parses it into a series of video frames. Using motion detection algorithms, the server analyzes each video frame to capture motion therein. For example, in a sports game scenario, an algorithm may detect movement of a ball and activity of an athlete. 
The image data of each video frame is combined with the corresponding motion information data to form a first data set. This data set contains a still image of each video signal and motion information associated therewith. If the video signal contains an audio track, the server extracts the audio data. For example, in news videos, the sound of a news anchor may be analyzed and extracted. At the same time, the application extracts the time code data to ensure time synchronization of the video frames. In addition, it extracts color information of the image to maintain accurate display of the image. The sound track data, the time code data and the color information data are combined to form a second data set. This data set contains audio, time information and color information of the video signal. And finally, merging the first data set and the second data set to obtain a final key data set. This set contains key information for each target video playback signal, including image, motion, sound, time, and color data.
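The two-stage merge above (a first data set of images plus motion information, a second data set of audio, time code and color information) can be sketched as a simple dictionary union; all field names are illustrative, not from the patent:

```python
def build_key_data_set(frames, motion, audio, timecodes, colors):
    """Merge per-frame and per-track information into one key data set."""
    # First data set: still images combined with their motion information.
    first_set = {"frames": frames, "motion": motion}
    # Second data set: sound track, time code and color information.
    second_set = {"audio": audio, "timecodes": timecodes, "colors": colors}
    # Final key data set: the union of both data sets.
    return {**first_set, **second_set}

key = build_key_data_set(["f0", "f1"], [(0, 0), (2, 1)],
                         b"pcm-bytes", [0.0, 0.04], ["bt709", "bt709"])
print(sorted(key))  # -> ['audio', 'colors', 'frames', 'motion', 'timecodes']
```

The resulting structure carries image, motion, sound, time and color data per signal, ready for the layered encoding in S104.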
S105, performing resource allocation on each target video playing signal based on the coded video playing signal set corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain a second screen display combination corresponding to each target video playing signal;
Specifically, a rate threshold range is determined for each first screen display combination according to its target rate evaluation index. This range defines the minimum and maximum transmission rates required for the first screen display combination. For each set of encoded video playback signals, the video transmission rate thereof is calculated. This can be estimated from parameters such as compression rate, frame rate and resolution of the video coding. Initial resource allocation is performed on each target video playing signal based on the threshold range data of the first screen display combination to obtain candidate first screen display combinations. This process takes into account the transmission rate of each video signal and the requirements of the first screen display combination. A set of resource allocation nodes is extracted for each candidate first screen display combination. These nodes are responsible for managing the transmission and reception of video data. For each candidate first screen display combination's resource allocation node set, the maximum transmission efficiency is calculated. This may be determined based on bandwidth, delay and congestion conditions between nodes. The resource allocation of each target video playing signal is then corrected based on the maximum transmission efficiency so as to obtain a second screen display combination. This procedure ensures that the best transmission efficiency is achieved given the resource constraints. For example, suppose a server is developing a multi-user online video platform that allows users to view different video content while maintaining high quality video transmissions. For each first screen display combination, for example a group of users watching a high definition movie and a group watching ordinary video, a range of target rate evaluation indexes thereof is determined. High definition movies require higher transmission rates. 
For different video contents, the video transmission rate is calculated according to the coding parameters and the quality requirements. The transmission rate of high definition movies is higher, while the rate of normal video is lower. And performing initial resource allocation according to the rate threshold range of the combination of the video content selected by the user and the first screen display. For example, a higher bandwidth resource is allocated to a high definition movie user. A resource allocation node is determined that is appropriate for each first screen display combination using a multicast resource allocation algorithm. These nodes are servers or caches located in different geographical locations. For each first screen display combined resource allocation node set, calculating the maximum transmission efficiency thereof. This takes into account network congestion, bandwidth utilization, and transmission delay. Based on the maximum transmission efficiency, the resource allocation is adjusted to ensure that each first screen display combination achieves the best transmission efficiency while meeting its rate requirements. This may include dynamically adjusting bandwidth allocation or switching to a more optimal transmission node.
It should be noted that the multicast resource allocation algorithm works as follows. Since different first screen display combinations require data transmission at different rates, the target rate evaluation index of each first screen display combination is calculated first. For example, a higher rate may be required for a high definition movie user group, while a lower rate may be acceptable for a standard definition movie user group. The resource allocation nodes are typically servers or caches located in different geographic locations in the network, and the algorithm determines the set of resource allocation nodes appropriate for each first screen display combination. For each resource allocation node set, the maximum transmission efficiency is then calculated. The maximum transmission efficiency means that data can be transmitted at the highest rate without exceeding the capacity limit of the resource node. Based on the calculated maximum transmission efficiency, the multicast resource allocation algorithm adjusts the resource allocation to ensure that each first screen display combination achieves the best transmission efficiency while meeting its rate requirements, such as dynamically allocating more bandwidth to first screen display combinations with high rate requirements, or switching to a more optimal resource allocation node in case of network congestion.
For example, in an online live platform, multiple real-time live channels are provided, including High Definition (HD) and Standard Definition (SD) resolutions. At a particular point in time, hundreds of viewers watch these channels simultaneously. Different viewers have different bandwidths and devices, some of which can smoothly watch high-definition content, while others can only watch standard-definition content. A target rate rating index for each first screen display combination is first calculated. For high definition channels, the target rate requirement is 5 Mbps, while for standard definition channels, the target rate requirement is 2 Mbps.
The resource allocation nodes are servers or caches located in different geographical locations in the network. The algorithm determines a set of resource allocation nodes that fit each first screen display combination. For example, for a high definition channel, the algorithm selects 3 resource allocation nodes, while for a standard definition channel, 2 resource allocation nodes are selected. And calculating the maximum transmission efficiency of each resource allocation node set. This step takes into account factors such as network congestion, bandwidth utilization, and transmission delay. Maximum transmission efficiency means that data can be transmitted at the highest rate without exceeding the capacity limit of the resource node. Based on the calculated maximum transmission efficiency, the algorithm dynamically adjusts the resource allocation. For high definition channels, if a set of resource allocation nodes encounters congestion, the algorithm may dynamically allocate more bandwidth to the set to ensure that high definition content is transmitted at a rate of 5 Mbps. For standard definition channels, corresponding resource allocation adjustment is also performed.
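A minimal greedy sketch of the multicast resource allocation adjustment described above follows; the node names, capacities and the highest-rate-first rule are assumptions, not the claimed algorithm:

```python
def allocate(channels, nodes):
    """Place each channel on the node with the most spare capacity that can
    still carry its target rate; serve the highest-rate channels first so
    HD groups are not starved. (Illustrative greedy rule only.)"""
    assignment = {}
    spare = dict(nodes)  # node -> remaining capacity in Mbps
    for name, rate in sorted(channels.items(), key=lambda kv: -kv[1]):
        best = max(spare, key=spare.get)
        if spare[best] < rate:
            assignment[name] = None  # no node can meet the target rate
            continue
        spare[best] -= rate
        assignment[name] = best
    return assignment

# HD channel (5 Mbps target) and SD channel (2 Mbps target) across two
# hypothetical edge nodes with 6 and 3 Mbps of capacity.
print(allocate({"hd": 5.0, "sd": 2.0}, {"edge-1": 6.0, "edge-2": 3.0}))
# -> {'hd': 'edge-1', 'sd': 'edge-2'}
```

A production algorithm would additionally weigh congestion, delay and fairness, and would re-run the adjustment as conditions change.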
The server builds a network topology model for the resource allocation node set of each candidate first screen display combination. This model describes the connection relationships between nodes, as well as the bandwidth capacity of each connection. This may help the server understand the structure of the entire network. For each network topology model, a network transmission path traversal is performed. All paths from the data source to the receiving node are found, involving a plurality of intermediate nodes. The selection of these paths may be based on different strategies, such as shortest path or optimal path. For each network transmission path, the server extracts the transmission bandwidth information for each node on the path. This may be obtained by querying the network devices or monitoring the data. The server thus knows the bandwidth capacity of each node and its current load situation. Finally, the maximum transmission efficiency of each candidate first screen display combination is calculated based on the bandwidth data set corresponding to each network transmission path set. Maximum transmission efficiency may be defined as the ability to achieve the highest data transmission rate while bandwidth constraints are met. This may help the server determine the best transmission path and resource allocation to optimize the transmission of the video signal. For example, assume that one user is located in city A using a high-speed broadband connection, while another user is located in remote area B, having access to only the low-bandwidth internet. The server provides the same video content to both users, but needs to transmit according to their different needs and limitations. For users in city A, the maximum transmission efficiency calculation will choose to transmit video over the high speed channel to ensure a high definition and unbuffered viewing experience. 
For users in remote area B, the computation may choose to employ low bit rate and adaptive streaming to accommodate limited bandwidth and reduce video buffering latency. Through maximum transmission efficiency calculation, the server provides personalized video transmission for different users, thereby improving user satisfaction and effectively utilizing available network resources.
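Under the path model described above, maximum transmission efficiency reduces to the best bottleneck bandwidth over the enumerated paths; this is an illustrative simplification that ignores congestion and delay:

```python
def max_transmission_efficiency(paths):
    """Each path is a list of per-hop bandwidths (Mbps). A path can carry no
    more than its slowest link, and the node set achieves the best bottleneck
    over all enumerated paths from source to receiver."""
    return max(min(hops) for hops in paths)

# Two candidate paths from data source to receiving node: the second path's
# weakest link (8 Mbps) beats the first path's weakest link (4 Mbps).
print(max_transmission_efficiency([[10, 4, 20], [8, 9, 12]]))  # -> 8
```

The city A user would see a high bottleneck value and receive the high definition stream; the remote area B user would see a low value and be served the adaptive low bit rate stream.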
S106, decoding video display signals of the plurality of target video playing signals through the second screen display combination corresponding to each target video playing signal, and transmitting the video display signals to the intelligent screen corresponding to each user intelligent display terminal for video playing.
It should be noted that the video signal is generally transmitted in compression-encoded form, using coding standards such as H.264 and H.265. At the receiving end, the compressed video data need to be decoded to be restored into playable video frames. In a multi-channel video transmission system, each target video playback signal is assigned to one or more first screen display combinations in order to optimize resource utilization and to meet the needs of different users. The determination of the second screen display combination is made based on the multicast resource allocation algorithm and the user requirements. This typically involves taking into account the performance of the user equipment, the network bandwidth and the required video quality. Once the second screen display combinations are determined, the target video playback signal is transmitted based on the identifiers of these first screen display combinations. This may be achieved by a network transport protocol (e.g., HTTP, UDP, etc.). The goal of the transmission is to efficiently transfer the target video playback signal to the target user's device. After receiving the transmitted target video playing signal, the receiving device (such as a smart screen, a television, a mobile phone, etc.) first receives and decapsulates the received video data. This includes receiving, checking and reassembling the data packets to ensure the integrity and correctness of the data. The device then decodes the received video data using a video decoder. The decoding process restores the compressed video data to the original sequence of video frames. If an audio stream is contained in the video signal, the device also needs to decode the audio data to restore the audio signal. The decoded video frames are rendered on a screen for display to the user. At the same time, the audio signal is delivered to an audio output device (e.g., speaker or headphones) for playback. 
For example, assume that the server has two users, user A and user B, both watching the same movie. According to the multicast resource allocation algorithm, user a and user B are allocated to different first screen display combinations because of the different devices and network bandwidths they use. User a is using a high performance smart tv, connected to the high speed internet. According to the multicast resource allocation algorithm, he is allocated to a first screen display combination to obtain high quality 1080p video. User B is using a smart phone to connect to a slower mobile network. He is assigned to another first screen display combination to obtain adaptive streaming to accommodate changes in network bandwidth. When the movie starts to play, the target video play signal is transmitted to the two first screen display combinations by multicast. The smart tv of user a receives the video data, decodes and renders it to display a high definition movie on a large screen. At the same time, user B's smartphone receives the same movie data, but due to the lower bandwidth, the phone will automatically reduce resolution to ensure unbuffered viewing. By transmitting video signals to different first screen display combinations according to user demand and resource allocation, the server provides a high quality video experience on different devices while efficiently utilizing network resources. This ensures that the user can enjoy a smooth video playback experience regardless of the device and network connection used.
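The adaptive behaviour in the user A / user B example can be sketched as a rendition ladder lookup; the ladder values and names are hypothetical:

```python
def pick_rendition(measured_mbps, renditions):
    """Pick the highest-quality rendition the receiver's bandwidth can
    sustain. Renditions are (label, required Mbps) pairs, highest first."""
    for label, need in renditions:
        if measured_mbps >= need:
            return label
    return renditions[-1][0]  # fall back to the lowest rendition

ladder = [("1080p", 5.0), ("720p", 3.0), ("480p", 1.5)]
print(pick_rendition(20.0, ladder))  # -> 1080p (user A, smart TV, broadband)
print(pick_rendition(2.0, ladder))   # -> 480p  (user B, mobile network)
```

Re-running the lookup as the measured bandwidth changes gives the automatic resolution reduction described for user B's phone.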
In the embodiment of the application, a plurality of user intelligent display terminals in a target area are acquired, and meanwhile, video display signal channel quality analysis is carried out on the plurality of user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal; constructing a multi-channel screen display combination according to a plurality of user intelligent display terminals, and carrying out group division on the multi-channel screen display combination according to signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations; respectively carrying out data receiving rate calculation on each first screen display combination to obtain a target rate evaluation index corresponding to each first screen display combination, and collecting video playing signals from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals; video signal processing is carried out on each target video playing signal respectively, and a coded video playing signal set corresponding to each target video playing signal is obtained; performing resource allocation on each target video playing signal based on the coded video playing signal set corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain a second screen display combination corresponding to each target video playing signal; and decoding video display signals of the plurality of target video playing signals through the second screen display combination corresponding to each target video playing signal, and transmitting the video display signals to the intelligent screen corresponding to each user intelligent display terminal for video playing. 
In the scheme, the system can ensure that the user receives the video quality suitable for the equipment and network conditions of the user by constructing first screen display combinations with different qualities for the user according to the signal channel quality data and performing multicast resource allocation. The system can automatically adapt to the transmission requirements under different network conditions by considering the signal channel quality data and the target rate evaluation index. This means that stable video transmission can be provided even in case of network congestion or poor channel quality. Scalable video coding allows the system to create a set of coded video playback signals of different quality levels for each target video playback signal. The multicast resource allocation algorithm intelligently allocates resources by comprehensively considering a plurality of factors, such as channel quality, data transmission rate, and resource fairness. By carrying out channel signal detection, group division and target rate evaluation index calculation on the intelligent user display terminal, resources can be effectively allocated according to the requirements of different users and channel quality, and efficient utilization of the resources is ensured. This helps to minimize resource waste and improve efficiency of multicast transmission.
In a specific embodiment, the process of executing step S101 may specifically include the following steps:
(1) Acquiring a plurality of user intelligent display terminals in a target area, and carrying out video transmission protocol analysis on each user intelligent display terminal to obtain a video transmission protocol corresponding to each user intelligent display terminal;
(2) Based on a video transmission protocol corresponding to each user intelligent display terminal, sending detection video signals to each user intelligent display terminal respectively, and collecting video display data corresponding to each user intelligent display terminal;
(3) The method comprises the steps of respectively carrying out signal-to-noise ratio calculation on video display data corresponding to each user intelligent display terminal to obtain signal-to-noise ratio data corresponding to each user intelligent display terminal;
(4) Respectively carrying out data packet loss rate calculation on video display data corresponding to each user intelligent display terminal to obtain data packet loss rate data corresponding to each user intelligent display terminal;
(5) Respectively carrying out signal intensity calculation on video display data corresponding to each user intelligent display terminal to obtain signal intensity data corresponding to each user intelligent display terminal;
(6) And carrying out video channel signal analysis on the plurality of user intelligent display terminals through the signal-to-noise ratio data corresponding to each user intelligent display terminal, the data packet loss rate data corresponding to each user intelligent display terminal and the signal intensity data corresponding to each user intelligent display terminal to obtain the signal channel quality data corresponding to each user intelligent display terminal.
Specifically, the server monitors and identifies the user smart display terminals within the target area, which may be implemented by a network monitoring system or the user management module of an application program. For each identified user smart display terminal, the video transmission protocol it is using needs to be analyzed. This determines which protocol, such as HTTP, RTSP or UDP, the user is using for video transmission. The system transmits a specific detection video signal to each user based on the video transmission protocol used by each user. At the same time, the system collects the user's video presentation data, including frame rate, resolution, encoding format, etc. For each user intelligent display terminal, the system analyzes the video signal it receives and calculates the signal-to-noise ratio. The signal-to-noise ratio is an important indicator of signal quality, representing the ratio of useful information to noise in the signal. A high signal-to-noise ratio indicates better signal quality and a low signal-to-noise ratio indicates worse signal quality. The system then analyzes whether data packets are lost during video transmission. This involves comparing the transmitted data packets with the received data packets to calculate the data packet loss rate. A lower packet loss rate indicates better data transmission quality. In addition to the signal-to-noise ratio and the packet loss rate, the system also measures the signal strength of the user equipment. This can be achieved by measuring the power of the received signal; higher signal strength generally represents better signal reception. Finally, the server analyzes the signal channel quality data of each user intelligent display terminal by combining the signal-to-noise ratio, data packet loss rate and signal strength data. 
This analysis process involves the use of some algorithm or model to calculate a composite quality score for the channel. For example, assume that the server operates an online video platform with two users A and B. User A is watching a high definition movie using a high speed broadband connection, while user B is watching the same content using a mobile network. The system analysis finds that user A uses the HTTP protocol for video transmission, while user B uses the RTSP protocol. For user A, the system detects a high signal-to-noise ratio because his broadband connection is of good quality. For user B, the signal-to-noise ratio is low due to mobile network limitations. User A has little packet loss, while user B occasionally loses some packets during transmission. The device signal strength of user A is high and that of user B is low. Based on these data, the server concludes that the signal channel quality data of user A is very good, while that of user B is poor, requiring adaptive adjustment to avoid buffering.
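One possible way to fold the three measurements into a single composite quality score is sketched below; the weights, normalization ranges and function name are illustrative assumptions, not part of the claimed analysis:

```python
def channel_quality_score(snr_db, packet_loss, signal_dbm):
    """Composite channel quality in [0, 1] from signal-to-noise ratio,
    packet loss rate and received signal strength (thresholds assumed)."""
    snr_term = min(snr_db / 40.0, 1.0)                        # ~40 dB ideal
    loss_term = max(1.0 - packet_loss / 0.05, 0.0)            # 5% loss -> 0
    rssi_term = min(max((signal_dbm + 90) / 40.0, 0.0), 1.0)  # -90..-50 dBm
    return round(0.4 * snr_term + 0.4 * loss_term + 0.2 * rssi_term, 3)

# User A (broadband) versus user B (mobile network) from the example above:
print(channel_quality_score(35, 0.001, -55))  # -> 0.917
print(channel_quality_score(12, 0.03, -80))   # -> 0.33
```

A score near 1 would mark the terminal for a high-quality first screen display combination, while a low score would trigger adaptive adjustment.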
In a specific embodiment, the process of executing step S102 may specifically include the following steps:
(1) Constructing playing nodes of a plurality of intelligent user display terminals to obtain a plurality of video playing nodes;
(2) Constructing a multi-channel screen display combination through a plurality of video playing nodes to obtain the multi-channel screen display combination;
(3) Performing quality evaluation score calculation on the signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of quality evaluation scores;
(4) Extracting the position range of each user intelligent display terminal to obtain position range data corresponding to each user intelligent display terminal;
(5) Based on the position range data corresponding to each user intelligent display terminal, performing first group division on the multi-channel screen display combination through a plurality of quality evaluation scores to obtain a plurality of initial first screen display combinations;
(6) Respectively carrying out playing parameter analysis on each initial first screen display combination to obtain playing parameter data corresponding to each initial first screen display combination;
(7) And carrying out group adjustment on the plurality of initial first screen display combinations through play parameter data corresponding to each initial first screen display combination to obtain a plurality of first screen display combinations.
It should be noted that, first, a plurality of user intelligent display terminals are obtained, and a playing node is built for each user. These playback nodes may be servers, devices or applications for receiving and processing the target video playback signals. Consider a server for an online video platform where many users view different video content at the same time. Each user's device (e.g., smart phone, computer or television) may be considered a playback node. For the platform, these devices constitute playback nodes for receiving and playing back video. A multi-channel screen display combination is constructed using the playback nodes. A multi-channel screen display combination is a collection of playback nodes that are used to multicast a target video playback signal to save bandwidth and resources. For example, the server may divide 100 spectators watching the same sports game into one multi-channel screen display combination. This multi-channel screen display combination includes 100 playback nodes that can share the same video stream, rather than transmitting one copy individually for each viewer. The signal channel quality data of each user intelligent display terminal is then evaluated. This may include calculation of metrics such as signal-to-noise ratio, packet loss rate, and signal strength. For a user watching video, the platform may monitor the quality of the video signal the user receives. If the user's signal quality is poor, measures need to be taken to improve the viewing experience, such as reducing video quality to reduce bandwidth requirements. Next, the position range data of each user intelligent display terminal is determined. This helps to understand where the user is watching the video in order to better manage the resource allocation. 
The multi-channel screen display combination is then divided for the first time based on the signal channel quality data and the position range data to obtain a plurality of initial first screen display combinations. This division helps group users to better meet their needs. If a group of users are located in the same geographical area and have good signal quality, they can be placed in the same first screen display combination in order to share a high quality video stream. Playing parameter analysis is performed on each initial first screen display combination to understand the requirements and characteristics of each first screen display combination. This may include information such as bandwidth requirements and video quality preferences. Within a first screen display combination, some users may want high definition video, while others may accept standard definition video. The playback parameter analysis may help the platform determine how to allocate resources to meet these various needs. Finally, group adjustment is performed on the plurality of initial first screen display combinations according to the play parameter data so as to optimize resource allocation and meet the requirements of users. For example, if a user in a certain first screen display combination needs more bandwidth to watch high quality video, the platform can adjust the resource allocation to ensure that they get the required bandwidth.
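The first group division described above, keyed on position range and quality evaluation score, can be sketched as follows; the region labels, the 0.6 threshold and the two-tier split are illustrative assumptions:

```python
from collections import defaultdict

def first_group_division(terminals):
    """Bucket terminals by region, then split each region into a 'high' and
    'low' quality tier (0.6 score threshold is an assumption). Each terminal
    is a (terminal_id, region, quality_score) triple."""
    groups = defaultdict(list)
    for term_id, region, score in terminals:
        tier = "high" if score >= 0.6 else "low"
        groups[(region, tier)].append(term_id)
    return dict(groups)

terminals = [("t1", "cityA", 0.9), ("t2", "cityA", 0.4),
             ("t3", "cityB", 0.7), ("t4", "cityA", 0.8)]
print(first_group_division(terminals))
# -> {('cityA', 'high'): ['t1', 't4'], ('cityA', 'low'): ['t2'],
#     ('cityB', 'high'): ['t3']}
```

The subsequent group adjustment step would then move terminals between these initial combinations based on play parameter data such as bandwidth requirements.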
In a specific embodiment, the process of executing step S104 may specifically include the following steps:
(1) Extracting key information of each target video playing signal respectively to obtain a key data set corresponding to each target video playing signal;
(2) Performing first-level coding on the key data set corresponding to each target video playing signal to obtain a plurality of basic layer data corresponding to each key data set;
(3) Performing second-level coding on each key data set to obtain a plurality of enhancement layer data corresponding to each key data set;
(4) And combining the plurality of basic layer data corresponding to each key data set and the plurality of enhancement layer data corresponding to each key data set into an encoded video playing signal set corresponding to each target video playing signal.
It should be noted that, for each target video playing signal, key information extraction is performed first to identify and extract the key elements of the video, including video frames, audio tracks, time codes, color information, and the like. For a movie, for example, this means extracting every frame image, the audio data, and the time stamp and color information of each frame. The extracted key data set then undergoes first-level coding. This step typically applies basic coding methods, such as JPEG compression of video frames and MP3 encoding of audio, to generate a plurality of base layer data; for video frames, the first-level coding may employ JPEG coding to compress each frame into base layer data, and the audio data may be MP3 encoded. On the basis of the first-level encoded base layer data, second-level coding is performed to generate a plurality of enhancement layer data. This step may employ more advanced coding methods, such as H.264 or H.265, to further compress the data and improve coding efficiency; for example, the server may encode the video frames with H.264 to produce enhancement layer data, which helps to reduce the bandwidth required for transmission. Finally, the first-level encoded base layer data and the second-level encoded enhancement layer data are merged into the encoded video playing signal set corresponding to each target video playing signal. This data set contains multiple levels of data that can be used for subsequent transmission and decoding; for example, the server combines the first-level JPEG images with the second-level H.264 data to obtain a set of encoded video playback signals comprising both base layer and enhancement layer data.
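The base-layer/enhancement-layer idea can be illustrated with a toy scalable coder: a coarsely quantized base layer plus a residual enhancement layer that restores full precision. This is a didactic stand-in for real JPEG/H.264 layering, not the patent's actual codec:

```python
def encode_layers(samples, base_step=16):
    """Toy scalable coding: the base layer is a coarse quantization of the
    input samples; the enhancement layer stores the residual needed to
    reconstruct the original values exactly."""
    base = [(s // base_step) * base_step for s in samples]
    enhancement = [s - b for s, b in zip(samples, base)]
    return base, enhancement

def decode(base, enhancement=None):
    """Decode from the base layer alone (lower quality) or from
    base + enhancement (full quality)."""
    if enhancement is None:
        return list(base)
    return [b + e for b, e in zip(base, enhancement)]

frame = [17, 130, 255, 64]           # hypothetical pixel values
base, enh = encode_layers(frame)
low_quality = decode(base)           # base layer only
full_quality = decode(base, enh)     # both layers
```

A receiver on a poor channel would decode only the base layer, while a receiver with sufficient bandwidth would also apply the enhancement layer.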
In a specific embodiment, the process of executing step S301 may specifically include the following steps:
(1) Extracting video frame data of each target video playing signal to obtain video frame data corresponding to each target video playing signal;
(2) Performing motion information analysis based on the video frame data corresponding to each target video playing signal to obtain motion information data corresponding to each target video playing signal;
(3) Carrying out first data combination on video frame data corresponding to each target video playing signal and motion information data corresponding to each target video playing signal to obtain a first data set;
(4) Performing sound track analysis on each target video playing signal to obtain sound track data corresponding to each target video playing signal;
(5) Performing time code extraction on each target video playing signal to obtain time code data corresponding to each target video playing signal, and simultaneously performing color information extraction on each target video playing signal to obtain color information data corresponding to each target video playing signal;
(6) Carrying out second data combination on sound track data corresponding to each target video playing signal, time code data corresponding to each target video playing signal and color information data corresponding to each target video playing signal to obtain a second data set;
(7) And carrying out third data combination on the first data set and the second data set to obtain a key data set.
Specifically, the server decodes each target video playback signal to extract the video frame data therein. This may be achieved by a standard video decoder, such as an H.264 decoder. The extracted video frame data is then analyzed using computer vision techniques to obtain the motion information data in the video, which includes detecting object movement, tracking object trajectories, and the like. The video frame data and the motion information data are combined into a first data set for subsequent processing; this may be accomplished by associating the motion information with each video frame. The sound track of each target video playing signal is analyzed to extract the audio information, which may be achieved by an audio decoder such as an MP3 decoder. Time code data is extracted from the target video playback signal to determine the time stamp of each frame; the time code is typically embedded in the video data. Color analysis is performed on the video frame data to extract information about color space and color depth, which helps to preserve the color quality of the video. The sound track data, the time code data, and the color information data are combined into a second data set for later use. Finally, the first data set and the second data set are merged into a key data set containing video frame data, motion information, sound tracks, time codes, and color information, ready for subsequent processing and transmission. For example, assume the server has a movie to process. The server decodes the target video playing signal and extracts the video frames; it then analyzes the frames with computer vision algorithms, detecting and tracking the main character of the movie, and decodes the sound track to obtain the movie's audio. At the same time, the server extracts the time code from the video to determine the time stamp of each frame and analyzes the color information to maintain the color quality of the video. Finally, all of these data are consolidated into a key data set for further processing and transmission.
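The three merge steps above (first data set, second data set, key data set) can be sketched as a simple data-structure assembly. All field names are hypothetical; the patent does not specify a concrete schema:

```python
def build_key_data_set(frames, motion, audio, timecodes, colors):
    """Merge per-frame and per-signal data into one key data set.

    First data set:  video frames paired with their motion information.
    Second data set: sound track, time codes, and color information.
    Field names are illustrative assumptions, not from the patent text.
    """
    first = [{"frame": f, "motion": m} for f, m in zip(frames, motion)]
    second = {"audio": audio, "timecodes": timecodes, "colors": colors}
    return {"first_set": first, "second_set": second}

key = build_key_data_set(
    frames=["f0", "f1"],
    motion=[{"dx": 0, "dy": 0}, {"dx": 2, "dy": 1}],
    audio=b"\x00\x01",
    timecodes=[0.0, 0.04],
    colors=["bt709", "bt709"],
)
```

The resulting structure is what the two-level coding stage would then consume.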
In a specific embodiment, the process of executing step S105 may specifically include the following steps:
(1) Respectively calibrating a speed threshold range of each first screen display combination through a target speed evaluation index corresponding to each first screen display combination to obtain threshold range data corresponding to each first screen display combination;
(2) Respectively carrying out data transmission rate calculation on the coded video playing signal sets corresponding to each target video playing signal to obtain video transmission rates corresponding to each coded video playing signal set;
(3) Based on the threshold range data corresponding to each first screen display combination, performing initial resource allocation on each target video playing signal through the video transmission rate corresponding to each coded video playing signal set to obtain candidate first screen display combinations corresponding to each target video playing signal;
(4) Extracting resource allocation nodes from candidate first screen display combinations corresponding to each target video playing signal to obtain an allocation node set corresponding to each candidate first screen display combination;
(5) Respectively carrying out maximum transmission efficiency calculation on the distribution node set corresponding to each candidate first screen display combination to obtain the maximum transmission efficiency corresponding to each distribution node set;
(6) And carrying out resource allocation correction on each target video playing signal through the maximum transmission efficiency corresponding to each allocation node set to obtain a second screen display combination corresponding to each target video playing signal.
Specifically, for each first screen display combination, a rate threshold range is defined according to its target rate evaluation index. This range determines which target video playback signals are suitable for allocation to a particular first screen display combination; for example, a first screen display combination requiring high quality has a higher rate threshold, while one requiring lower quality has a lower rate threshold. For each set of encoded video playback signals, the data transmission rate is calculated, which can be achieved by measuring packet size and transmission time, so that each set of encoded video playback signals has an associated rate. An initial resource allocation is then performed based on the rate threshold range of each first screen display combination and the data transmission rate of each set of encoded video playback signals. This means each target video playback signal is assigned to a candidate first screen display combination whose rate requirement matches the signal's rate and falls within the rate threshold; this process may be automated using algorithms. For each candidate first screen display combination, a multicast resource allocation algorithm is used to determine an appropriate set of resource allocation nodes; these nodes are responsible for transmitting the target video playback signal to the members of the corresponding first screen display combination. For each set of allocation nodes, its maximum transmission efficiency is calculated, which may involve constructing a network topology model and traversing network transmission paths to determine the optimal transmission path and bandwidth. Finally, the initial resource allocation is corrected based on the maximum transmission efficiency of each set of allocation nodes. This ensures that each target video playback signal is distributed to the nodes with the greatest transmission efficiency, meeting the target rate evaluation index of each first screen display combination. For example, consider an online video platform with multiple first screen display combinations, each with different requirements: a high-definition first screen display combination requires a higher rate, while a standard-definition one requires a lower rate. For the encoded playback signal of a movie, the data transmission rate is calculated first; then, according to the rate threshold range of each first screen display combination, different portions of the movie are assigned to different candidate first screen display combinations. A multicast resource allocation algorithm determines the set of resource allocation nodes for each candidate first screen display combination, together with the transmission paths and bandwidths between them. Finally, the resource allocation is corrected by calculating the maximum transmission efficiency of each allocation node set, ensuring that the movie is transmitted to each first screen display combination in an optimal manner while satisfying its target rate evaluation index. In this way, each first screen display combination achieves the video quality it needs while network resources are used effectively.
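The initial resource allocation step can be sketched as matching each encoded signal's measured rate against each combination's threshold range. The data layout (`rate_range_mbps`, `rate_mbps`) is an assumed simplification of the threshold range data and video transmission rates described above:

```python
def initial_allocation(signals, combinations):
    """Assign each encoded signal set to the first combination whose
    rate threshold range contains the signal's transmission rate.

    Structures and field names are illustrative assumptions; a real
    allocator would also weigh fairness and node load.
    """
    result = {}
    for sig in signals:
        for combo in combinations:
            low, high = combo["rate_range_mbps"]
            if low <= sig["rate_mbps"] <= high:
                result[sig["signal_id"]] = combo["combo_id"]
                break
    return result

combos = [
    {"combo_id": "HD", "rate_range_mbps": (5.0, 50.0)},
    {"combo_id": "SD", "rate_range_mbps": (0.5, 5.0)},
]
signals = [
    {"signal_id": "movie_hd", "rate_mbps": 12.0},
    {"signal_id": "movie_sd", "rate_mbps": 2.0},
]
assignment = initial_allocation(signals, combos)
```

The later correction step would revisit this assignment using the computed maximum transmission efficiency of each allocation node set.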
In a specific embodiment, the process of performing the step of calculating the maximum transmission efficiency for the allocation node set corresponding to each candidate first screen display combination to obtain the maximum transmission efficiency corresponding to each allocation node set may specifically include the following steps:
(1) Respectively constructing a network topology model for the distribution node set corresponding to each candidate first screen display combination to obtain a network topology model corresponding to each distribution node set;
(2) Traversing network transmission paths of the network topology models corresponding to each distribution node set to obtain network transmission path sets corresponding to each network topology model;
(3) Respectively extracting transmission bandwidths of network transmission path sets corresponding to each network topology model to obtain bandwidth data sets corresponding to each network transmission path set;
(4) And based on the bandwidth data set corresponding to each network transmission path set, respectively carrying out maximum transmission efficiency calculation on the distribution node set corresponding to each candidate first screen display combination to obtain the maximum transmission efficiency corresponding to each distribution node set.
Specifically, for each candidate first screen display combination, a network topology model is first constructed. This model includes the location, connection, and topology information of all distribution nodes, and may be built with a network topology mapping tool or network analysis software. For each set of distribution nodes, network transmission path traversal is then performed; this involves determining the best transmission path from the video database to each distribution node, taking topology and network connections into account. A path may be point-to-point or multi-hop, depending on the network topology and transmission requirements. For each set of network transmission paths, its bandwidth data set is extracted. This includes the bandwidth capacity along each path, which determines whether the path is sufficient to carry the target video playback signal; bandwidth data may be acquired through network measurement and monitoring tools. Based on the bandwidth data of each network transmission path set, the maximum transmission efficiency of the distribution node set corresponding to each candidate first screen display combination is calculated; maximum transmission efficiency refers to the highest data transmission rate achievable under the given bandwidth. For example, consider an online video platform with multiple candidate first screen display combinations, each requiring different transmission paths and bandwidths. A network topology model is built for each first screen display combination, including the location and connection information of its distribution nodes. Then, network transmission path traversal is performed for each combination to determine the optimal transmission path; for a high-definition first screen display combination, a high-bandwidth transmission path must be selected to ensure high-quality video transmission. The bandwidth of each path is extracted to understand its capacity. Finally, by calculating the maximum transmission efficiency of each set of distribution nodes, an optimal transmission scheme can be determined to meet the requirements of each first screen display combination. In this way, the target video playback signal is transmitted in the most efficient manner, ensuring a high-quality video playback experience.
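One concrete way to realize the path traversal and bandwidth extraction above is a maximum-bottleneck ("widest") path search over the topology model: the achievable rate of a path is the smallest edge bandwidth along it, and the best path maximizes that bottleneck. The graph structure and node names are hypothetical; the patent does not prescribe a specific algorithm:

```python
import heapq

def widest_path_bandwidth(graph, src, dst):
    """Return the largest bandwidth sustainable from src to dst, where a
    path's bandwidth is the minimum bandwidth of its edges (a Dijkstra
    variant that maximizes the bottleneck instead of minimizing cost).

    graph: {node: [(neighbor, bandwidth_mbps), ...]} -- an assumed layout.
    """
    best = {src: float("inf")}
    heap = [(-best[src], src)]          # max-heap via negated bandwidths
    while heap:
        neg_bw, node = heapq.heappop(heap)
        bw = -neg_bw
        if node == dst:
            return bw                   # first pop of dst is optimal
        if bw < best.get(node, 0):
            continue                    # stale heap entry
        for nxt, edge_bw in graph.get(node, []):
            cand = min(bw, edge_bw)     # bottleneck through this edge
            if cand > best.get(nxt, 0):
                best[nxt] = cand
                heapq.heappush(heap, (-cand, nxt))
    return 0.0

net = {
    "db": [("r1", 100), ("r2", 40)],    # video database to two routers
    "r1": [("node", 30)],
    "r2": [("node", 35)],
}
max_rate = widest_path_bandwidth(net, "db", "node")
```

Here the direct high-capacity router `r1` is not optimal because its last hop bottlenecks at 30 Mbps; the traversal correctly prefers the 35 Mbps route through `r2`.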
The method for displaying a multi-channel smart screen in the embodiment of the present application is described above, and the multi-channel smart screen display device in the embodiment of the present application is described below, referring to fig. 2, one embodiment of the multi-channel smart screen display device in the embodiment of the present application includes:
the acquisition module 201 is configured to acquire a plurality of user intelligent display terminals in a target area, and perform video display signal channel quality analysis on a plurality of the user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal;
the construction module 202 is configured to construct a multi-channel screen display combination according to a plurality of the user intelligent display terminals, and divide groups of the multi-channel screen display combination according to signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations;
the computing module 203 is configured to perform data receiving rate computation on each first screen display combination to obtain a target rate evaluation index corresponding to each first screen display combination, and collect video playing signals from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals;
the processing module 204 is configured to perform video signal processing on each target video playing signal to obtain a set of encoded video playing signals corresponding to each target video playing signal;
the allocation module 205 is configured to allocate resources for each target video playing signal based on the set of encoded video playing signals corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination, so as to obtain a second screen display combination corresponding to each target video playing signal;
and the transmission module 206 is configured to decode video display signals of a plurality of target video playing signals through the second screen display combination corresponding to each target video playing signal, and transmit the decoded video display signals to the intelligent screen corresponding to each user intelligent display terminal for video playing. Through the cooperative operation of the above components, a plurality of user intelligent display terminals in a target area are acquired, and video display signal channel quality analysis is performed on the plurality of user intelligent display terminals to obtain the signal channel quality data corresponding to each user intelligent display terminal; a multi-channel screen display combination is constructed from the plurality of user intelligent display terminals, and the multi-channel screen display combination is divided into groups according to the signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations; data receiving rate calculation is performed on each first screen display combination to obtain the target rate evaluation index corresponding to each first screen display combination, and video playing signals are collected from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals; video signal processing is performed on each target video playing signal to obtain the encoded video playing signal set corresponding to each target video playing signal; resource allocation is performed on each target video playing signal based on the encoded video playing signal set corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain the second screen display combination corresponding to each target video playing signal; and the video display signals of the plurality of target video playing signals are decoded through the second screen display combination corresponding to each target video playing signal and transmitted to the intelligent screen corresponding to each user intelligent display terminal for video playing. In this scheme, by constructing first screen display combinations of different qualities for users according to the signal channel quality data and performing multicast resource allocation, the system can ensure that each user receives video quality suited to their device and network conditions. By considering the signal channel quality data and the target rate evaluation index, the system can automatically adapt to transmission requirements under different network conditions; this means that stable video transmission can be provided even under network congestion or poor channel quality. Scalable video coding allows the system to create a set of encoded video playback signals of different quality levels for each target video playback signal. The multicast resource allocation algorithm intelligently allocates resources by comprehensively considering multiple factors, such as channel quality, data transmission rate, and resource fairness. By performing channel signal detection, group division, and target rate evaluation index calculation for the user intelligent display terminals, resources can be allocated effectively according to different users' needs and channel quality, ensuring efficient utilization of resources. This helps to minimize resource waste and improve the efficiency of multicast transmission.
The application also provides a multi-channel intelligent screen display device, which comprises a memory and a processor, wherein the memory stores computer readable instructions, and the computer readable instructions, when executed by the processor, cause the processor to execute the steps of the multi-channel intelligent screen display method in the above embodiments.
The present application also provides a computer readable storage medium, which may be a non-volatile computer readable storage medium or a volatile computer readable storage medium. The computer readable storage medium stores instructions which, when run on a computer, cause the computer to perform the steps of the multi-channel smart screen display method.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
The integrated units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are merely for illustrating the technical solution of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the corresponding technical solutions.

Claims (10)

1. The multi-channel intelligent screen display method is characterized by comprising the following steps of:
acquiring a plurality of user intelligent display terminals in a target area, and simultaneously, carrying out video display signal channel quality analysis on the plurality of user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal;
constructing a multi-channel screen display combination according to a plurality of the user intelligent display terminals, and carrying out group division on the multi-channel screen display combination through signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations;
respectively carrying out data receiving rate calculation on each first screen display combination to obtain a target rate evaluation index corresponding to each first screen display combination, and collecting video playing signals from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals;
video signal processing is carried out on each target video playing signal respectively, and a coded video playing signal set corresponding to each target video playing signal is obtained;
performing resource allocation on each target video playing signal based on the coded video playing signal set corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain a second screen display combination corresponding to each target video playing signal;
and decoding video display signals of a plurality of target video playing signals through second screen display combinations corresponding to the target video playing signals, and transmitting the video display signals to intelligent screens corresponding to the intelligent display terminals of the users to play the video.
2. The method for displaying a multi-channel intelligent screen according to claim 1, wherein the obtaining a plurality of user intelligent display terminals in a target area, and simultaneously, performing video display signal channel quality analysis on the plurality of user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal, includes:
acquiring a plurality of user intelligent display terminals in a target area, and carrying out video transmission protocol analysis on each user intelligent display terminal to obtain a video transmission protocol corresponding to each user intelligent display terminal;
based on a video transmission protocol corresponding to each user intelligent display terminal, sending detection video signals to each user intelligent display terminal respectively, and collecting video display data corresponding to each user intelligent display terminal;
respectively carrying out signal-to-noise ratio calculation on video display data corresponding to each user intelligent display terminal to obtain signal-to-noise ratio data corresponding to each user intelligent display terminal;
respectively carrying out data packet loss rate calculation on video display data corresponding to each user intelligent display terminal to obtain data packet loss rate data corresponding to each user intelligent display terminal;
respectively carrying out signal intensity calculation on video display data corresponding to each user intelligent display terminal to obtain signal intensity data corresponding to each user intelligent display terminal;
and carrying out video channel signal analysis on a plurality of the user intelligent display terminals through the signal-to-noise ratio data corresponding to each user intelligent display terminal, the data packet loss rate data corresponding to each user intelligent display terminal and the signal intensity data corresponding to each user intelligent display terminal to obtain the signal channel quality data corresponding to each user intelligent display terminal.
3. The method for displaying a multi-channel smart screen according to claim 1, wherein the constructing a multi-channel screen display combination according to a plurality of the user smart display terminals, and grouping the multi-channel screen display combination according to signal channel quality data corresponding to each user smart display terminal to obtain a plurality of first screen display combinations, includes:
constructing playing nodes of a plurality of intelligent user display terminals to obtain a plurality of video playing nodes;
constructing a multi-channel screen display combination through a plurality of video playing nodes to obtain the multi-channel screen display combination;
performing quality evaluation score calculation on the signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of quality evaluation scores;
extracting the position range of each user intelligent display terminal to obtain position range data corresponding to each user intelligent display terminal;
based on the position range data corresponding to each user intelligent display terminal, performing first group division on the multi-channel screen display combination through a plurality of quality evaluation scores to obtain a plurality of initial first screen display combinations;
respectively carrying out playing parameter analysis on each initial first screen display combination to obtain playing parameter data corresponding to each initial first screen display combination;
and carrying out group adjustment on a plurality of initial first screen display combinations through play parameter data corresponding to each initial first screen display combination to obtain a plurality of first screen display combinations.
4. The method for displaying a multi-channel intelligent screen according to claim 1, wherein the video signal processing is performed on each target video playing signal to obtain a set of encoded video playing signals corresponding to each target video playing signal, respectively, including:
extracting key information of each target video playing signal respectively to obtain a key data set corresponding to each target video playing signal;
performing first-level coding on the key data set corresponding to each target video playing signal to obtain a plurality of basic layer data corresponding to each key data set;
performing second-level coding on each key data set to obtain a plurality of enhancement layer data corresponding to each key data set;
and combining the plurality of basic layer data corresponding to each key data set and the plurality of enhancement layer data corresponding to each key data set into an encoded video playing signal set corresponding to each target video playing signal.
5. The method for displaying a multi-channel intelligent screen according to claim 4, wherein the extracting key information from each target video playing signal to obtain the key data set corresponding to each target video playing signal includes:
extracting video frame data of each target video playing signal to obtain video frame data corresponding to each target video playing signal;
performing motion information analysis based on the video frame data corresponding to each target video playing signal to obtain motion information data corresponding to each target video playing signal;
carrying out first data combination on video frame data corresponding to each target video playing signal and motion information data corresponding to each target video playing signal to obtain a first data set;
performing sound track analysis on each target video playing signal to obtain sound track data corresponding to each target video playing signal;
performing time code extraction on each target video playing signal to obtain time code data corresponding to each target video playing signal, and simultaneously performing color information extraction on each target video playing signal to obtain color information data corresponding to each target video playing signal;
carrying out second data combination on sound track data corresponding to each target video playing signal, time code data corresponding to each target video playing signal and color information data corresponding to each target video playing signal to obtain a second data set;
and carrying out third data combination on the first data set and the second data set to obtain the key data set.
6. The method of claim 1, wherein the performing resource allocation on each target video playing signal based on the set of encoded video playing signals corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain the second screen display combination corresponding to each target video playing signal includes:
respectively calibrating a rate threshold range for each first screen display combination through the target rate evaluation index corresponding to each first screen display combination to obtain threshold range data corresponding to each first screen display combination;
respectively carrying out data transmission rate calculation on the coded video playing signal sets corresponding to each target video playing signal to obtain video transmission rates corresponding to each coded video playing signal set;
based on the threshold range data corresponding to each first screen display combination, performing initial resource allocation on each target video playing signal through the video transmission rate corresponding to each coded video playing signal set to obtain candidate first screen display combinations corresponding to each target video playing signal;
extracting resource allocation nodes from candidate first screen display combinations corresponding to each target video playing signal to obtain an allocation node set corresponding to each candidate first screen display combination;
respectively carrying out maximum transmission efficiency calculation on the allocation node set corresponding to each candidate first screen display combination to obtain the maximum transmission efficiency corresponding to each allocation node set;
and carrying out resource allocation correction on each target video playing signal through the maximum transmission efficiency corresponding to each allocation node set to obtain a second screen display combination corresponding to each target video playing signal.
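The initial allocation step of claim 6 amounts to matching each signal's measured transmission rate against each combination's calibrated threshold range. A sketch under assumed data shapes (the dictionaries and the first-fit rule are illustrative, not the patent's allocation policy):

```python
def allocate(signals, combinations):
    """Assign each signal to the first display combination whose rate
    threshold range covers the signal's transmission rate (sketch)."""
    result = {}
    for name, rate in signals.items():
        for combo, (low, high) in combinations.items():
            if low <= rate <= high:   # rate falls inside the threshold range
                result[name] = combo
                break
    return result

combos = {"A": (0, 10), "B": (10, 50)}   # rate threshold ranges (Mbps, assumed)
alloc = allocate({"s1": 4, "s2": 25}, combos)
print(alloc)  # {'s1': 'A', 's2': 'B'}
```

The claim then refines this first-fit result via the maximum-transmission-efficiency correction of claim 7.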
7. The method of claim 6, wherein the calculating the maximum transmission efficiency of the allocation node set corresponding to each candidate first screen display combination to obtain the maximum transmission efficiency corresponding to each allocation node set includes:
respectively constructing a network topology model for the allocation node set corresponding to each candidate first screen display combination to obtain a network topology model corresponding to each allocation node set;
traversing network transmission paths of the network topology model corresponding to each allocation node set to obtain a network transmission path set corresponding to each network topology model;
respectively extracting transmission bandwidths of the network transmission path set corresponding to each network topology model to obtain a bandwidth data set corresponding to each network transmission path set;
and based on the bandwidth data set corresponding to each network transmission path set, respectively carrying out maximum transmission efficiency calculation on the allocation node set corresponding to each candidate first screen display combination to obtain the maximum transmission efficiency corresponding to each allocation node set.
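One natural reading of claim 7 is a bottleneck-bandwidth computation: enumerate the transmission paths of the topology model, take each path's minimum link bandwidth, and report the best such value. A brute-force sketch, assuming a hypothetical edge-to-bandwidth map as the topology model:

```python
import itertools

def max_transmission_efficiency(topology, src, dst):
    """Enumerate simple paths from src to dst and return the best bottleneck
    bandwidth. `topology` maps directed (u, v) edges to bandwidth (sketch)."""
    nodes = {n for edge in topology for n in edge}
    best = 0
    for k in range(2, len(nodes) + 1):
        for path in itertools.permutations(nodes, k):
            if path[0] != src or path[-1] != dst:
                continue
            bandwidths = [topology.get(e) for e in zip(path, path[1:])]
            if all(b is not None for b in bandwidths):   # every hop must exist
                best = max(best, min(bandwidths))        # path bottleneck
    return best

links = {("a", "b"): 100, ("b", "c"): 40, ("a", "c"): 30}
result = max_transmission_efficiency(links, "a", "c")
print(result)  # 40: the a->b->c path beats the direct 30 Mbps link
```

Permutation enumeration is exponential; a production system would use a widest-path algorithm instead, but the bottleneck criterion is the same.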
8. A multi-channel smart screen display device, the multi-channel smart screen display device comprising:
the acquisition module is used for acquiring a plurality of user intelligent display terminals in a target area, and simultaneously, carrying out video display signal channel quality analysis on the plurality of user intelligent display terminals to obtain signal channel quality data corresponding to each user intelligent display terminal;
the construction module is used for constructing multi-channel screen display combinations according to the plurality of user intelligent display terminals, and carrying out group division on the multi-channel screen display combinations through the signal channel quality data corresponding to each user intelligent display terminal to obtain a plurality of first screen display combinations;
the computing module is used for respectively carrying out data receiving rate computation on each first screen display combination to obtain a target rate evaluation index corresponding to each first screen display combination, and collecting video playing signals from a plurality of preset video databases to obtain a plurality of corresponding target video playing signals;
the processing module is used for respectively carrying out video signal processing on each target video playing signal to obtain a coded video playing signal set corresponding to each target video playing signal;
the distribution module is used for distributing resources to each target video playing signal based on the coded video playing signal set corresponding to each target video playing signal and the target rate evaluation index corresponding to each first screen display combination to obtain a second screen display combination corresponding to each target video playing signal;
and the transmission module is used for decoding the video display signals of the plurality of target video playing signals through the second screen display combination corresponding to each target video playing signal and transmitting the video display signals to the intelligent screen corresponding to each user intelligent display terminal for video playing.
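The six modules of claim 8 map one-to-one onto the method steps of claim 1. A stub class showing that layout; the method names and signatures are illustrative assumptions:

```python
class MultiChannelSmartScreenDisplayDevice:
    """Stub mirroring the six modules of claim 8 (names are illustrative)."""

    def acquire_terminals(self, target_area): ...         # acquisition module
    def build_display_combinations(self, terminals): ...  # construction module
    def compute_rate_indexes(self, combinations): ...     # computing module
    def encode_signals(self, signals): ...                # processing module
    def allocate_resources(self, encoded, indexes): ...   # distribution module
    def transmit(self, allocations): ...                  # transmission module

device = MultiChannelSmartScreenDisplayDevice()
```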
9. A multi-channel smart screen display device, the multi-channel smart screen display device comprising: a memory and at least one processor, the memory having instructions stored therein;
the at least one processor invokes the instructions in the memory to cause the multi-channel smart screen display device to perform the multi-channel smart screen display method of any one of claims 1-7.
10. A computer readable storage medium having instructions stored thereon which, when executed by a processor, implement the multi-channel smart screen display method of any one of claims 1-7.
CN202311777139.2A 2023-12-22 2023-12-22 Multichannel intelligent screen display method, device, equipment and storage medium Active CN117459764B (en)

Publications (2)

Publication Number Publication Date
CN117459764A true CN117459764A (en) 2024-01-26
CN117459764B CN117459764B (en) 2024-03-01

Family

ID=89584069

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070008884A1 (en) * 2003-10-08 2007-01-11 Bob Tang Immediate ready implementation of virtually congestion free guarantedd service capable network
CN112565766A (en) * 2020-11-11 2021-03-26 航天银山电气有限公司 Video transmission method, device and storage medium
CN116126580A (en) * 2023-04-04 2023-05-16 深圳市康凌源科技有限公司 Touch display control method, device, equipment and storage medium
CN117219003A (en) * 2023-11-09 2023-12-12 深圳市东陆科技有限公司 Content display method and device of LED display module

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant