CN117176984A - Content self-adaptive video playing method, server, system and medium - Google Patents

Content self-adaptive video playing method, server, system and medium

Info

Publication number
CN117176984A
Authority
CN
China
Prior art keywords
video
monitoring
playing
frame rate
client
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311052183.7A
Other languages
Chinese (zh)
Inventor
曹芝勇
陈炳枝
李树果
黄兆文
胡乐
巢云
肖俊强
周健龙
Current Assignee
Shenzhen Xinghai IoT Technology Co Ltd
Original Assignee
Shenzhen Xinghai IoT Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Xinghai IoT Technology Co Ltd
Priority to CN202311052183.7A
Publication of CN117176984A

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a content self-adaptive video playing method, a server, a system and a medium. The method comprises the following steps: receiving an original video stream reported by monitoring equipment, wherein the original video stream adopts a real-time streaming protocol; generating a video frame sequence from the original video stream; analyzing the difference between moving objects in adjacent frames of the video frame sequence to obtain a differentiation index; setting the playing frame rate of the video frame sequence to a first playing frame rate according to the differentiation index; and transmitting the video frame sequence to the monitoring client at the first playing frame rate for playing by the monitoring client. The method adjusts the playing frame rate of the video stream according to the dynamic change of the monitoring video and reduces bandwidth consumption, so that one client can play more video paths concurrently in the same network environment and monitoring personnel can view and analyze the videos of the same monitoring equipment more efficiently, thereby improving monitoring efficiency.

Description

Content self-adaptive video playing method, server, system and medium
Technical Field
The present application relates to the field of intelligent security, and in particular, to a content adaptive video playing method, server, system, and medium.
Background
The intelligent video monitoring system is a basic service system in the field of intelligent security. It comprises a monitoring center and a plurality of monitoring devices; the cameras at the entrances and exits of a parking lot are one example of such monitoring devices. As a source, a camera provides a video stream over a streaming media control protocol such as RTSP; after the monitoring center receives the video stream, the monitoring client displays and plays it through a browser, an APP, or another application program, so that property managers can monitor and manage the premises.
The inventor finds that in daily monitoring, on one hand, the pictures of most monitoring devices change little, particularly in specific time periods and scenes: in an office building late at night, for example, there is almost no flow of people and the pictures are almost static. Monitoring personnel only need to pay attention to the few frames that change, while the rest of the video stream is of no use, so the efficiency of analyzing the videos is low. On the other hand, the transmission resolution and frame rate of a monitoring device's video stream are generally fixed, so a large amount of uninformative video occupies bandwidth, which greatly limits the number of video paths that can be accessed simultaneously on the monitoring client and further lowers monitoring efficiency.
Disclosure of Invention
The application provides a content self-adaptive video playing method, server, system and medium, and aims to solve the problems that, in the intelligent security field, the resolution and frame rate are fixed during video transmission and a large number of invalid video streams occupy bandwidth, resulting in low monitoring efficiency.
The technical solution is as follows:
on the one hand, a video playing method with self-adaptive content is provided, the video playing method is applied to a monitoring center server, and the monitoring center server is in communication connection with monitoring equipment and a monitoring client; the content self-adaptive video playing method comprises the following steps:
receiving an original video stream reported by monitoring equipment, wherein the original video stream adopts a real-time streaming protocol;
calculating and generating a video frame sequence according to the original video stream;
analyzing the difference change of the moving object of the adjacent frames in the video frame sequence to obtain a differentiation index;
setting the playing frame rate of the video frame sequence as a first playing frame rate according to the differentiation index;
and transmitting the video frame sequence to the monitoring client according to the first playing frame rate so as to be played by the monitoring client.
The algorithm for obtaining the differentiation index is an inter-frame difference method.
The inter-frame difference method specifically comprises the following steps:
carrying out gray processing on each frame image in the video frame sequence to obtain gray images;
performing binarization processing on each pixel point in the gray level image to obtain a binarization matrix;
and carrying out a differential operation on the binarization matrixes of two adjacent frames to obtain the differentiation index.
The gray scale processing adopts a weighted average method, specifically: Y = 0.299R + 0.587G + 0.114B, wherein R, G and B are the values of the red, green and blue channels, and Y is the brightness.
The binarization process specifically comprises: adding the values of all pixels of the gray image and taking the average to obtain a first threshold value; the value of each pixel of the gray image is then set to 0 or 1 according to the first threshold value.
The content self-adaptive video playing method further comprises the following steps: according to the differentiation index, setting the resolution of the video frame sequence as a first resolution; and transmitting the video frame sequence to the monitoring client according to the first resolution for playing by the monitoring client.
The content self-adaptive video playing method further comprises the following steps:
receiving a limiting instruction of the monitoring client, wherein the limiting instruction comprises a second playing frame rate;
setting the playing frame rate of the video frame sequence as a second playing frame rate;
and transmitting the video frame sequence to the monitoring client according to the second playing frame rate so as to be played by the monitoring client.
In another aspect, there is provided a monitoring center server comprising:
the device comprises a processor, a memory and a communication circuit, wherein the processor is respectively connected with the memory and the communication circuit;
wherein the communication circuit is for communication connection and the memory is for storing a computer program and the processor is for executing the computer program to implement the method of any of the above.
In still another aspect, an intelligent security system is provided, including the monitoring center server described above.
In yet another aspect, a computer-readable storage medium is provided, storing a computer program executable by a processor to implement the method of any of the above.
The beneficial effects of the application are as follows: with the scheme of the application, a video frame sequence is generated from the original video stream, the difference between moving objects in adjacent frames of the sequence is analyzed, and the playing frame rate of the sequence is controlled according to that difference. The playing frame rate of the video stream is thus adjusted according to the dynamic change of the monitoring video and bandwidth consumption is reduced, so that one client can play more video paths concurrently in the same network environment, and monitoring personnel can view and analyze the videos of the same monitoring equipment more efficiently, improving monitoring efficiency.
Drawings
FIG. 1 is a schematic diagram of a video scheduling system for intelligent security in accordance with an embodiment of the present application;
FIG. 2 is a flowchart of a content adaptive video playback method according to an embodiment of the present application;
FIG. 3 is a schematic diagram illustrating a full flow of a content adaptive video playback method according to an embodiment of the present application;
FIG. 4 is a flowchart of a client-side adaptive video playback method according to an embodiment of the present application;
FIG. 5 is a flowchart of a client-side adaptive video request method according to an embodiment of the present application;
FIG. 6 is a flowchart illustrating a client-side adaptive video playback method according to an embodiment of the present application;
fig. 7 is a full flowchart of a client adaptive video playing method according to an embodiment of the present application;
fig. 8 is a schematic diagram of a hardware structure according to an embodiment of the application.
Detailed Description
In order that the application may be readily understood, a more particular description thereof will be rendered by reference to specific embodiments that are illustrated in the appended drawings. Preferred embodiments of the present application are shown in the drawings. This application may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used in the description of the application herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the application.
Example 1
As shown in fig. 1, the present application provides a video scheduling system for intelligent security. The system comprises a three-layer structure: a monitoring client for viewing monitoring video, a video monitoring management platform, and monitoring equipment arranged at each property site.
The video monitoring management platform is generally arranged in the machine room of a monitoring center, where a large number of servers perform data processing; in the embodiments of the application, the video monitoring management platform may also be called the monitoring center server.
The monitoring equipment is generally a camera with a networking function, and can automatically transmit the shot video back to the video monitoring management platform in real time.
The client can be located in a monitoring center or other places so as to be convenient for property management personnel to use. The client modality includes, but is not limited to: PC clients, large screen clients, mobile clients, etc.
The video monitoring management platform runs the computer program implementing the content self-adaptive video playing method, which adjusts the playing frame rate of the video stream according to the dynamic change of the monitoring video and reduces bandwidth consumption, so that one client can play more video paths simultaneously in the same network environment.
As shown in fig. 2-3, the content adaptive video playing method of the present application specifically includes the following steps:
s10: receiving an original video stream reported by monitoring equipment, wherein the original video stream adopts a real-time streaming protocol;
s20: calculating and generating a video frame sequence according to the original video stream;
s30: analyzing the difference change of the moving object of the adjacent frames in the video frame sequence to obtain a differentiation index;
s40: setting the playing frame rate of the video frame sequence as a first playing frame rate according to the differentiation index;
s50: and transmitting the video frame sequence to the monitoring client according to the first playing frame rate so as to be played by the monitoring client.
Specifically, referring to FIG. 3, an embodiment of a full flow is provided.
The monitoring center server transmits the stream to the monitoring client frame by frame; video frames are divided into I-frames, P-frames and B-frames.
An I-frame is a key frame; typically, a complete image can be obtained by decoding the I-frame data alone.
A P-frame is a predicted frame; when decoding, the decoded picture of the current frame must be superimposed on the buffered picture of the previous I-frame or P-frame to generate a complete picture.
A B-frame references both the preceding I-frame or P-frame and the following P-frame to generate a complete picture.
P-frames improve compression efficiency and image quality, and B-frames greatly increase the compression ratio.
Therefore, the differentiation index is obtained by analyzing the difference between consecutive video frames, and its range is divided into several level sections, for example a first level, a second level and a third level, where the first level represents the largest difference and the third level the smallest. When the differentiation index is judged to be in the first level, the current frame rate is set to the highest value; when it is judged to be in the third level, the current frame rate is set to the lowest value. This achieves the following effect: when the difference between consecutive frames is small, that is, when the picture of the current monitoring video is essentially static, the frame rate is reduced to the minimum and the current bandwidth consumption is reduced; when the difference is large, that is, when the picture of the current monitoring video changes, for example a person or a bird passes by, the frame rate is raised, so that more useful video is presented to the user and the user experience is improved.
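The level-to-frame-rate mapping described above can be sketched as follows; the level thresholds and the frame-rate values per level are illustrative assumptions, not values specified by the application:

```python
def classify_difference(diff_index, high=0.10, low=0.01):
    """Map a differentiation index (fraction of changed pixels) to a level.

    Level 1 is the largest difference, level 3 the smallest. The `high` and
    `low` thresholds are illustrative assumptions.
    """
    if diff_index >= high:
        return 1
    if diff_index >= low:
        return 2
    return 3

# Illustrative playing frame rates per level: the first level gets the
# highest value and the third level the lowest, as the method describes.
LEVEL_TO_FPS = {1: 25, 2: 12, 3: 2}

def play_frame_rate(diff_index):
    """Return the playing frame rate chosen for a given differentiation index."""
    return LEVEL_TO_FPS[classify_difference(diff_index)]
```

A nearly static picture (index near 0) thus streams at the minimum rate, while a rapidly changing picture is streamed at the full rate.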
The algorithm for obtaining the differentiation index is an inter-frame difference method, and specifically, the inter-frame difference method comprises:
carrying out gray processing on each frame image in the video frame sequence to obtain gray images;
performing binarization processing on each pixel point in the gray level image to obtain a binarization matrix;
and carrying out a differential operation on the binarization matrixes of two adjacent frames to obtain the differentiation index.
The gray scale processing adopts a weighted average method, specifically: Y = 0.299R + 0.587G + 0.114B, wherein R, G and B are the values of the red, green and blue channels, and Y is the brightness.
The binarization process specifically comprises: adding the values of all pixels of the gray image and taking the average to obtain a first threshold value; the value of each pixel of the gray image is then set to 0 or 1 according to the first threshold value. Compared with other binarization algorithms, this method is simpler and consumes less computing resources, and it has been verified to basically satisfy existing monitoring-picture application scenarios, thereby improving service efficiency.
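The three steps of the inter-frame difference method (grayscale conversion, mean-threshold binarization, and the differential operation) can be sketched in plain Python as follows; the list-of-tuples pixel layout is for illustration only, and a real implementation would operate on decoded frame buffers:

```python
def to_gray(pixels):
    """Weighted-average grayscale conversion: Y = 0.299R + 0.587G + 0.114B."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in pixels]

def binarize(gray):
    """Set each pixel to 0 or 1 against the first threshold (the mean gray value)."""
    flat = [v for row in gray for v in row]
    threshold = sum(flat) / len(flat)
    return [[1 if v >= threshold else 0 for v in row] for row in gray]

def diff_index(binary_a, binary_b):
    """Differential operation: fraction of pixels that differ between two frames."""
    changed = 0
    total = 0
    for row_a, row_b in zip(binary_a, binary_b):
        for a, b in zip(row_a, row_b):
            changed += (a != b)
            total += 1
    return changed / total
```

An index of 0 means the two binarized pictures are identical (a static scene); an index close to 1 means nearly every pixel changed.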
The content self-adaptive video playing method further comprises the following steps: setting the resolution of the video frame sequence to a first resolution according to the differentiation index; and transmitting the video frame sequence to the monitoring client at the first resolution for playing by the monitoring client. That is, besides adjusting the playing frame rate (the number of images transmitted per unit time), the resolution of each frame can also be adjusted, providing a finer adjustment dimension: bandwidth consumption can be reduced further, or the user experience further improved.
Referring to fig. 3, the content adaptive video playing method further includes:
receiving a limiting instruction of the monitoring client, wherein the limiting instruction comprises a second playing frame rate;
setting the playing frame rate of the video frame sequence as a second playing frame rate;
and transmitting the video frame sequence to the monitoring client according to the second playing frame rate so as to be played by the monitoring client.
In particular, in some special application scenarios the monitoring client needs to specify the highest level of video quality (e.g. a high playing frame rate); that is, the monitoring client needs video quality to be guaranteed with the highest priority regardless of any changes in the video picture or the network environment.
In this case the method may receive a limiting instruction from the client, where the limiting instruction includes a second play frame rate, which may be the specified highest-level play frame rate. After receiving the instruction, the monitoring center server always sends the video to the monitoring client at the second play frame rate and no longer adjusts it according to the dynamics of the video picture.
Further, the second play frame rate may be any other play frame rate specified, and is not limited herein.
Further, the qualifying instruction may also include a second resolution.
Furthermore, this embodiment of the content self-adaptive video playing method can run on the video monitoring management platform; the computer software in this embodiment serves as the execution subject that carries out the content self-adaptive video playing method of the present application.
In another aspect, the present application provides a monitoring center server, which is a computer device, specifically including:
the device comprises a processor, a memory and a communication circuit, wherein the processor is respectively connected with the memory and the communication circuit;
wherein the communication circuit is for communication connection and the memory is for storing a computer program and the processor is for executing the computer program to implement the method of any of the above.
In yet another aspect, the present application provides a readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the methods of the above embodiments.
Referring specifically to fig. 8, in practical application, fig. 8 is a schematic structural diagram of a hardware running environment related to each method of the present application.
As shown in fig. 8, the hardware runtime environment may include: a processor 1001, such as a CPU, a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. Wherein the communication bus 1002 is used to enable connected communication between these components. The user interface 1003 may include a Display, an input unit such as a Keyboard (Keyboard), and the optional user interface 1003 may further include a standard wired interface, a wireless interface. The network interface 1004 may optionally include a standard wired interface, a wireless interface (e.g., WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable memory (non-volatile memory), such as a disk memory. The memory 1005 may also optionally be a storage device separate from the processor 1001 described above.
Those skilled in the art will appreciate that the hardware architecture shown in fig. 8 does not limit the devices running each method; such a device may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 8, an operating system, a network communication module, a user interface module, and a computer program may be included in the memory 1005, which is a readable storage medium. The operating system is a management and control program and supports the operation of a network communication module, a user interface module, a computer program and other programs or software; the network communication module is used to manage and control the network interface 1004; the user interface module is used to manage and control the user interface 1003.
In the hardware configuration shown in fig. 8, the network interface 1004 is mainly used for connecting to a background server, and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; the processor 1001 may call a computer program stored in the memory 1005 and perform the method described above.
Example two
Referring to fig. 4-6, the application further provides a client self-adaptive video playing method, the video playing method is applied to a monitoring center server, the monitoring center server is in communication connection with monitoring equipment and a monitoring client, and the monitoring center server stores an original video stream reported by the monitoring equipment and a plurality of subcode streams with different code rates converted from the original video stream.
Specifically, referring to fig. 4, the video playing method includes:
s100: receiving a connection request message from a monitoring client and, in response, generating and sending a Session ID to the monitoring client, wherein the Session ID identifies the same session between the monitoring client and the monitoring center server in subsequent communication, and all messages in the same session use the same Session ID;
s200: receiving an initial request message of the same session of the monitoring client, and transmitting an original video stream to the monitoring client in response to the initial request message;
s300: receiving a report message of the same session from the monitoring client, wherein the report message comprises a first mean value A indicating the current network condition of the monitoring client, the first mean value A being calculated periodically from the entity data packets of the original video stream received per unit time;
and selecting an original video stream/sub-code stream according to the reported information, and sending the original video stream/sub-code stream to the monitoring client, wherein the code rate of the original video stream/sub-code stream is matched with the current network condition of the monitoring client.
With this method, a same-session communication connection is established between the monitoring center server and a given monitoring client. After an original video stream is sent to the monitoring client, the monitoring center server continuously and periodically detects the client's network condition through the report messages of the same session, and switches to video streams of different code rates as the network condition changes, so that the video code stream matches the client's actual bandwidth and smooth video playback is ensured.
The first mean value A is calculated as follows:
when each complete data packet is received, the unit entity data packet M_n received per unit time is calculated, n being a positive integer;
the N unit entity data packets M_n are added and averaged to obtain the first mean value A:
A = (M_1 + M_2 + ... + M_N) / N
The unit entity data packet M_n is calculated as follows:
M_n = L_n / T_n
wherein L_n is the size of each complete data packet and T_n is the time taken to receive each complete data packet.
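A minimal sketch of this calculation, assuming each complete data packet is summarized as a (size, receive-time) pair:

```python
def unit_packet_rates(packets):
    """Compute M_n = L_n / T_n for each complete packet.

    `packets` is a list of (L_n, T_n) pairs: packet size in bytes and the
    time taken to receive that packet, in seconds.
    """
    return [size / seconds for size, seconds in packets]

def first_mean(packets):
    """First mean value A: the average of the N unit values M_n."""
    rates = unit_packet_rates(packets)
    return sum(rates) / len(rates)
```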
In step S300, an original video stream/subcode stream is selected according to the report message and sent to the monitoring client, which specifically includes:
selecting the corresponding original video stream/subcode stream for transmission according to the first mean value A in the report message;
and storing the correspondence among the Session ID in the report message, the parameter information of the original video stream/subcode stream, and the first mean value A.
The correspondence can serve as empirical data: reference data for selecting among streams of different code rates when the network conditions of the same client change later. When a first mean value A is obtained, the corresponding code stream is determined directly from the empirical data, without comparing the first mean value A against the code rate of every original video stream/subcode stream.
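A hypothetical sketch of such an empirical lookup; the record layout and the similarity tolerance are illustrative choices, not part of the application:

```python
class StreamHistory:
    """Stores (Session ID, stream, first mean value A) correspondences."""

    def __init__(self):
        self._records = []  # list of (session_id, stream_name, mean_a)

    def record(self, session_id, stream_name, mean_a):
        self._records.append((session_id, stream_name, mean_a))

    def lookup(self, mean_a, tolerance=0.1):
        """Return the stream previously chosen for a similar mean value A.

        The newest matching record wins. None means no empirical match, in
        which case the server would fall back to comparing A against the
        code rate of each stream. Assumes recorded A values are positive.
        """
        for _sid, stream, a in reversed(self._records):
            if abs(a - mean_a) <= tolerance * a:
                return stream
        return None
```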
Wherein, after step S300, the method further comprises the steps of:
s400: and buffering the original video stream/subcode stream which is successfully transmitted for other monitoring clients to use.
The cache can be used not only by the requesting client but also by other subsequent clients, which improves response speed.
The step S300 specifically includes:
when the monitoring client performs multipath playing, the corresponding original video stream/subcode stream is selected for transmission according to the number of playing paths and the first mean value A.
For example, when a monitoring client switches from 4-way to 8-way playing, the increased number of paths inevitably requires more bandwidth; the first mean value A and the increased path count must be considered together, and a subcode stream with a lower code rate is selected for transmission so that all 8 paths can play smoothly at the same time.
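A sketch of this path-count-aware selection; the per-path bitrates are assumptions roughly matching the example subcode streams described in this embodiment:

```python
# Illustrative bitrate ladder: (stream name, per-path bitrate in KB/s).
# The numbers are assumptions, not values fixed by the application.
STREAMS = [("first", 200), ("second", 150), ("third", 90)]

def select_stream(mean_a_kbps, num_paths):
    """Pick the highest-bitrate stream whose total demand fits the bandwidth.

    Total demand is the per-path bitrate times the number of concurrently
    played paths; if nothing fits, fall back to the lowest-bitrate stream.
    """
    for name, per_path in STREAMS:
        if per_path * num_paths <= mean_a_kbps:
            return name
    return STREAMS[-1][0]
```

With 1000 KB/s measured, 4-way playing can use the first stream (4 x 200 = 800), while 8-way playing must drop to the third (8 x 90 = 720).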
On the other hand, referring to fig. 5, there is further provided a client-side adaptive video request method, where the video request method is applied to a monitoring client-side, and the monitoring client-side is communicatively connected with a monitoring center server; the monitoring center server is also in communication connection with the monitoring equipment, and converts each original video stream reported by the monitoring equipment into a plurality of subcode streams with different code rates; the video request method comprises the following steps:
s110: sending a connection request message to a monitoring center server, and receiving a Session ID generated by the monitoring center server aiming at the connection request message, wherein the Session ID is used for identifying the same Session between a subsequent monitoring client and the monitoring center server, and all messages in the same Session use the same Session ID;
s210: sending an initial request message of the same session to a monitoring center server, and receiving an original video stream/subcode stream sent by the monitoring center server aiming at the initial request message;
s310: and calculating a first average value A indicating the current network condition of the monitoring client according to the received original video stream/subcode stream, and sending a report message of the same session containing the first average value to the monitoring center server so that the monitoring center server can select the original video stream/subcode stream to send according to the first average value A.
Referring to fig. 6, this is an example of an end-to-end full process, specifically including the following:
1. the monitoring equipment reports the original video stream to the monitoring center server.
2. The monitoring center server transcodes the original video stream to generate a first subcode stream, a second subcode stream and a third subcode stream, where different subcode streams correspond to different code rates, i.e. different resolutions, playing frame rates, and so on.
For example:
the first subcode stream: 1920 x 1080 resolution, (120-200) KB/s per path;
the second subcode stream: 1280 x 720 resolution, (80-150) KB/s per path;
the third subcode stream: 800 x 600 resolution, (50-90) KB/s per path.
when the network state of the monitoring client is detected to be bad, the third subcode stream with the lowest corresponding resolution and playing frame rate can be selected for playing, so that the occupation of network bandwidth is reduced, and the monitoring client can still smoothly play the monitoring video under the condition of network deterioration. When the network state of the monitoring client is detected to be good, the first subcode stream with the highest corresponding resolution and playing frame rate can be selected for playing, so that the user experience is improved.
3. The connection request information of the monitoring client is received.
4. A session ID is assigned to indicate the same session; subsequent report messages use the same session ID.
5. The initial request information of the monitoring client is received.
6. One video stream is selected and sent to the monitoring client. A default stream is generally chosen for this first transmission: since no network detection has taken place yet, the network condition of the monitoring client is unknown, so only a default can be sent. The default is configurable, for example the first subcode stream, the third subcode stream, or the original video stream, and is not limited here.
7. The periodic report information of the monitoring client is received.
8. The current network state is judged from key information in the report, such as the first mean value A, and the corresponding video stream is selected for transmission: the one that best matches the current network state of the monitoring client.
Specifically, the first mean value A is calculated as follows: when the monitoring client starts to receive a stream Entity data packet, it records the start time (StartTime). After it has received a complete entity data packet, it records the end time (EndTime) and the Content-Length from the server's response header, where Content-Length is the size of the entity data packet. The monitoring client divides Content-Length by (EndTime - StartTime) to obtain the amount of entity data it can receive per unit time. After repeating this for several entity data packets, the monitoring client takes the average of these per-unit-time amounts and reports the average to the server in the request header of a heartbeat packet.
The format of the message reported by the monitoring client is as follows:
Real Time Streaming Protocol
Request: HEARTBEAT rtsp://172.0.0.1:554/ RTSP/1.0\r\n
Session: SessionID\r\n
Average:average\r\n
\r\n
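The measurement and reporting steps above can be sketched as follows. This is a minimal sketch, not the patent's implementation: `measure_average` and `build_heartbeat` are hypothetical helper names, and HEARTBEAT is this scheme's custom method layered on RTSP request syntax rather than a standard RTSP method.

```python
def measure_average(packets):
    """Compute the first mean value A: the average amount of entity data
    the client can receive per unit time. `packets` holds
    (content_length, start_time, end_time) tuples, where content_length
    comes from the server's Content-Length response header and the
    timestamps are recorded around each complete entity packet."""
    rates = [
        length / (end - start)          # packet size / transfer time
        for length, start, end in packets
        if end > start
    ]
    return sum(rates) / len(rates) if rates else 0.0


def build_heartbeat(session_id, average, server="172.0.0.1:554"):
    """Assemble the HEARTBEAT report in the format shown above."""
    return (
        f"HEARTBEAT rtsp://{server}/ RTSP/1.0\r\n"
        f"Session: {session_id}\r\n"
        f"Average: {average}\r\n"
        f"\r\n"
    )
```

The client would call `measure_average` over the most recent packets and send the resulting string to the server periodically over the existing session.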
Further, each time the transmitted video stream is switched according to a changed first mean value A, the correspondence among the currently transmitted video stream information, the SessionID, and the first mean value A is stored for subsequent analysis and retrieval. It may be stored in a database or in another file form. This data can serve as statistics that inform later decisions as empirical values; for example, the first mean value A may be divided into intervals according to the statistics, with each interval corresponding to one code stream. The statistics may be analyzed periodically, or feedback analysis may be performed.
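The interval-based selection described above can be sketched as follows. The byte-rate thresholds and stream labels here are purely illustrative assumptions: the patent derives the interval boundaries from accumulated statistics rather than fixed constants.

```python
def select_stream(mean_a, thresholds=(250_000, 1_000_000)):
    """Map the reported first mean value A (bytes per unit time) to one
    of the cached streams. Boundaries and names are illustrative only."""
    low, high = thresholds
    if mean_a < low:
        return "sub_stream_1"   # lowest code rate for poor networks
    if mean_a < high:
        return "sub_stream_2"   # intermediate code rate
    return "original"           # best network: full-quality stream
```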
Further, referring to fig. 7, an embodiment of an end-to-end connection framework and method flow is provided. As the figure shows, after the three sub-streams of different code rates are buffered in the transcoding buffer, the server can respond more quickly to requests from different clients. In addition, because one server must interface with multiple clients at the same time, this scheme improves the overall response performance of the server while serving clients with different network conditions. For example, during the idle time of some clients, the playing code stream of other clients can be raised further, so that clients with video playing requests obtain higher playing definition and a better experience.
In yet another aspect, a monitoring client is provided, including:
the device comprises a processor, a memory and a communication circuit, wherein the processor is respectively connected with the memory and the communication circuit;
the communication circuit is used for communication connection, the memory is used for storing a computer program, and the processor is used for executing the computer program to realize the client self-adaptive video request method.
In another aspect, there is provided a monitoring center server comprising:
the device comprises a processor, a memory and a communication circuit, wherein the processor is respectively connected with the memory and the communication circuit;
the communication circuit is used for communication connection, the memory is used for storing a computer program, and the processor is used for executing the computer program to realize the client self-adaptive video playing method.
In still another aspect, an intelligent security system is provided, including the monitoring client and the monitoring center server described above.
In yet another aspect, a computer-readable storage medium is provided, storing a computer program executable by a processor to implement the client self-adaptive video request method and the client self-adaptive video playing method described above.
The specific design of the hardware environment is described with reference to the foregoing embodiments and is not repeated here.
In summary, the application has the following beneficial effects:
1. The content self-adaptive video playing method adjusts the playing frame rate of the video stream according to the dynamic changes of the monitoring video and reduces bandwidth consumption, so that one client can concurrently play more video channels in the same network environment and monitoring personnel can view and analyze videos from the same monitoring equipment more efficiently at the same time, thereby improving monitoring efficiency.
2. The client self-adaptive video playing and requesting method selects different code streams for playback according to the client request and the dynamic changes of the network, reducing bandwidth consumption, so that a server can connect more clients in the same network environment and one client can simultaneously play more video channels.
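As context for the claims that follow, the inter-frame difference computation (weighted-average grayscale, mean-threshold binarization, and a difference operation on adjacent binary matrices) can be sketched in pure Python. This is a minimal sketch: normalizing the index to the fraction of changed pixels is an assumption, as the text does not fix the exact form of the difference operation.

```python
def gray(frame):
    """Weighted-average grayscale: Y = 0.299R + 0.587G + 0.114B.
    `frame` is a 2-D list of (r, g, b) tuples."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row]
            for row in frame]


def binarize(gray_img):
    """Threshold each pixel at the image's mean luminance (the first
    threshold), producing a matrix of 0s and 1s."""
    pixels = [p for row in gray_img for p in row]
    threshold = sum(pixels) / len(pixels)
    return [[1 if p >= threshold else 0 for p in row] for row in gray_img]


def diff_index(frame_a, frame_b):
    """Differentiation index for two adjacent frames, taken here as the
    fraction of pixels whose binary value changed (an assumed normalization)."""
    a, b = binarize(gray(frame_a)), binarize(gray(frame_b))
    changed = sum(pa != pb
                  for row_a, row_b in zip(a, b)
                  for pa, pb in zip(row_a, row_b))
    return changed / (len(a) * len(a[0]))
```

A server could then map a low index (static scene) to a reduced first playing frame rate and a high index (moving objects) to a higher one.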

Claims (10)

1. A content self-adaptive video playing method, characterized in that the content self-adaptive video playing method is applied to a monitoring center server, and the monitoring center server is in communication connection with monitoring equipment and a monitoring client; the content self-adaptive video playing method comprises the following steps:
receiving an original video stream reported by the monitoring equipment, wherein the original video stream adopts a real-time streaming protocol;
calculating and generating a video frame sequence according to the original video stream;
analyzing the difference change of the moving object of the adjacent frames in the video frame sequence to obtain a differentiation index;
setting the playing frame rate of the video frame sequence as a first playing frame rate according to the differentiation index;
and transmitting the video frame sequence to the monitoring client according to the first playing frame rate so as to be played by the monitoring client.
2. The content self-adaptive video playing method according to claim 1, wherein the differentiation index is obtained by an inter-frame difference method.
3. The content self-adaptive video playing method according to claim 2, wherein the inter-frame difference method specifically comprises:
carrying out gray processing on each frame image in the video frame sequence to obtain gray images;
performing binarization processing on each pixel point in the gray level image to obtain a binarization matrix;
and performing a difference operation on the binarization matrices of two adjacent frames to obtain the differentiation index.
4. The content self-adaptive video playing method according to claim 3, wherein the gray scale processing adopts a weighted average method, specifically: Y = 0.299R + 0.587G + 0.114B, wherein R, G, and B are the values of the red, green, and blue channels, and Y is the luminance.
5. The content self-adaptive video playing method according to claim 4, wherein the binarization processing specifically comprises: summing the values of all pixels of the gray image and taking the average to obtain a first threshold, and setting the value of each pixel of the gray image to 0 or 1 according to the first threshold.
6. The content adaptive video playback method as set forth in claim 5, further comprising: according to the differentiation index, setting the resolution of the video frame sequence as a first resolution; and transmitting the video frame sequence to the monitoring client according to the first resolution so as to be played by the monitoring client.
7. The content adaptive video playback method as recited in claim 6, further comprising:
receiving a limiting instruction of the monitoring client, wherein the limiting instruction comprises a second playing frame rate;
setting the playing frame rate of the video frame sequence as a second playing frame rate;
and transmitting the video frame sequence to the monitoring client according to the second playing frame rate so as to be played by the monitoring client.
8. A monitoring center server, comprising:
the device comprises a processor, a memory and a communication circuit, wherein the processor is respectively connected with the memory and the communication circuit;
wherein the communication circuit is for communication connection and the memory is for storing a computer program, and the processor is for executing the computer program to implement the method of any of claims 1-7.
9. An intelligent security system comprising the monitoring center server of claim 8.
10. A computer readable storage medium, characterized in that a computer program is stored, which computer program is executable by a processor to implement the method of any one of claims 1-7.
CN202311052183.7A 2023-08-18 2023-08-18 Content self-adaptive video playing method, server, system and medium Pending CN117176984A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311052183.7A CN117176984A (en) 2023-08-18 2023-08-18 Content self-adaptive video playing method, server, system and medium


Publications (1)

Publication Number Publication Date
CN117176984A true CN117176984A (en) 2023-12-05

Family

ID=88934717




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination