CN114025171A - Video processing method and device, terminal equipment and storage medium

Video processing method and device, terminal equipment and storage medium

Info

Publication number
CN114025171A
CN114025171A (application CN202111288773.0A)
Authority
CN
China
Prior art keywords
video
target
coding information
determining
resolution
Prior art date
Legal status
Pending
Application number
CN202111288773.0A
Other languages
Chinese (zh)
Inventor
卢富士
杨伟中
安君超
王艳辉
Current Assignee
Visionvera Information Technology Co Ltd
Original Assignee
Visionvera Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Visionvera Information Technology Co Ltd
Priority to CN202111288773.0A
Publication of CN114025171A

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 Embedding additional information in the video signal during the compression process
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/169 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
    • H04N19/177 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding, the unit being a group of pictures [GOP]


Abstract

The embodiments of the present invention provide a video processing method and apparatus, a terminal device and a storage medium. The video processing method includes the following steps: receiving first video data sent by acquisition equipment, and acquiring first video coding information of the first video data; receiving second video coding information of a current video networking link sent by video quality evaluation terminal equipment; determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information; encoding the first video data according to the target video coding information to obtain encoded second video data; and sending the second video data to a video networking terminal through a video networking protocol. By acquiring the current video networking quality, the coding parameters of the video data are dynamically adjusted so that they adapt to different network states; the encoded video data are then sent to the video networking terminal, which avoids stuttering or screen corruption when the video networking terminal plays the video data.

Description

Video processing method and device, terminal equipment and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a video processing method, an apparatus, a terminal device, and a storage medium.
Background
When video data is exchanged between video networking devices and internet devices, the video data collected by an internet device is played on a video networking device. In the actual transmission process, however, the network state may be unstable; when the network quality is poor, data packets of the video data are lost in transit, and the video networking terminal experiences stuttering, screen corruption and similar problems when viewing the video data from the internet device.
Disclosure of Invention
In view of the above problems, embodiments of the present invention are proposed to provide a video processing method, apparatus, terminal device and storage medium that overcome or at least partially solve the above problems.
In a first aspect, an embodiment of the present invention provides a video processing method, where the method includes:
receiving first video data sent by acquisition equipment, and acquiring first video coding information of the first video data;
receiving second video coding information of a current video networking link sent by video quality evaluation terminal equipment;
determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information;
coding the first video data according to the target video coding information to obtain coded second video data;
and sending the second video data to a video networking terminal through a video networking protocol.
Optionally, the first video coding information includes at least a first definition and a first fluency, the second video coding information includes at least a second definition and a second fluency, and the determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information includes:
Determining a target definition corresponding to the current video networking link according to the first definition and the second definition;
and/or
Determining a target fluency corresponding to the current video networking link according to the first fluency and the second fluency.
Optionally, the first definition includes at least one of a first code rate and a first resolution, the first fluency includes a first frame rate, the second definition includes at least one of a second code rate and a second resolution, and the second fluency includes a second frame rate;
The determining a target definition corresponding to the current video networking link according to the first definition and the second definition comprises:
determining a target code rate corresponding to the current video networking link according to the first code rate and the second code rate;
and/or
Determining a target resolution corresponding to the current video networking link according to the first resolution and the second resolution;
the determining a target fluency corresponding to the current view networking link according to the first fluency and the second fluency comprises:
and determining a target frame rate corresponding to the current video networking link according to the first frame rate and the second frame rate.
Optionally, the determining, according to the first code rate and the second code rate, a target code rate corresponding to the current video networking link includes:
determining the lower code rate of the first code rate and the second code rate as the target code rate;
the determining a target resolution corresponding to the current video networking link according to the first resolution and the second resolution includes:
determining the lower resolution of the first resolution and the second resolution as the target resolution;
determining a target frame rate corresponding to the current video networking link according to the first frame rate and the second frame rate includes:
and determining the lower frame rate of the first frame rate and the second frame rate as the target frame rate.
Optionally, the encoding the first video data according to the target video encoding information to obtain encoded second video data includes:
fragmenting the first video data according to a preset splitting rule to obtain fragmented video data, wherein the preset splitting rule is determined according to a first resolution of the first video data;
and respectively carrying out H264 coding on each piece of video data according to the target video coding information to obtain coded second video data.
In a second aspect, an embodiment of the present invention provides a video processing apparatus, where the apparatus includes:
the first receiving module is used for receiving first video data sent by acquisition equipment and acquiring first video coding information of the first video data;
the second receiving module is used for receiving second video coding information of the current video networking link, which is sent by the video quality evaluation terminal equipment;
the determining module is used for determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information;
the encoding module is used for encoding the first video data according to the target video encoding information to obtain encoded second video data;
and the sending module is used for sending the second video data to a video networking terminal through a video networking protocol.
Optionally, the first video coding information at least includes a first definition and a first fluency, the second video coding information at least includes a second definition and a second fluency, and the determining module is configured to:
Determine a target definition corresponding to the current video networking link according to the first definition and the second definition;
and/or
Determine a target fluency corresponding to the current video networking link according to the first fluency and the second fluency.
Optionally, the first definition includes at least one of a first code rate and a first resolution, the first fluency includes a first frame rate, the second definition includes at least one of a second code rate and a second resolution, and the second fluency includes a second frame rate;
the determining module is specifically configured to:
determining a target code rate corresponding to the current video networking link according to the first code rate and the second code rate;
and/or
Determining a target resolution corresponding to the current video networking link according to the first resolution and the second resolution;
the determining module is specifically configured to:
and determining a target frame rate corresponding to the current video networking link according to the first frame rate and the second frame rate.
Optionally, the determining module is specifically configured to:
determining the lower code rate of the first code rate and the second code rate as the target code rate;
determining the lower resolution of the first resolution and the second resolution as the target resolution;
and determining the lower frame rate of the first frame rate and the second frame rate as the target frame rate.
Optionally, the encoding the first video data according to the target video encoding information to obtain encoded second video data includes:
fragmenting the first video data according to a preset splitting rule to obtain fragmented video data, wherein the preset splitting rule is determined according to a first resolution of the first video data;
and respectively carrying out H264 coding on each piece of video data according to the target video coding information to obtain coded second video data.
In a third aspect, an embodiment of the present invention provides a terminal device, including: at least one processor and memory;
the memory stores a computer program; the at least one processor executes the computer program stored by the memory to implement the video processing method provided by the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium, where a computer program is stored, and when the computer program is executed, the computer program implements the video processing method provided in the first aspect.
The embodiment of the invention has the following advantages:
according to the video processing method, the video processing device, the terminal equipment and the storage medium, the first video data sent by the acquisition equipment are received, and the first video coding information of the first video data is acquired; receiving second video coding information of a current video networking link sent by video quality evaluation terminal equipment; determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information; coding the first video data according to the target video coding information to obtain coded second video data; the second video data are sent to the video network terminal through the video network protocol, the coding parameters of the video data are dynamically adjusted by acquiring the current video network quality, so that the video network terminal can adapt to different network states, then the coded video data are sent to the video network terminal, and the phenomenon of blocking or screen splash is avoided when the video network terminal plays the video data.
Drawings
FIG. 1 is a flow chart of the steps of one embodiment of a video processing method of the present invention;
FIG. 2 is a flow chart of the steps of another video processing method embodiment of the present invention;
FIG. 3 is a schematic diagram of a video processing system according to an embodiment of the present invention;
FIG. 4 is a schematic block diagram of another embodiment of a video processing system according to the present invention;
FIG. 5 is a block diagram of a video processing apparatus according to an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a terminal device of the present invention.
Detailed Description
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Explanation of terms:
1) Video networking: an underlying communication protocol that is different from the internet.
2) Internet: a huge network formed by interconnecting networks, which are linked by a common set of protocols into a single logical international network.
3) Tanggula server: the video networking monitoring and networking management scheduling server, on which a video networking monitoring and networking management scheduling system is deployed.
4) Jitter: the variation of delay. Jitter in an IP network depends on the dynamic routing of the network and on the delay introduced by network devices due to congestion and other factors. A typical audio/video decoder is designed for a stable code stream; if the stream arriving over the transmission line has large jitter that exceeds what the decoder can tolerate, the decoder discards data (or buffers heavily), which ultimately manifests as packet loss (or delay) and degrades the final audio/video quality.
5) Packet loss: the percentage of packets lost during network transmission. In a real IP network environment, packets are lost because of congestion at network devices. When packet loss occurs, audio and video quality is affected; for example, images show screen corruption or mosaic artifacts and the sound becomes intermittent, and in severe cases the conference is interrupted.
6) Bandwidth: the average rate of application-specific traffic flow between two nodes of the network. Generally speaking, the higher the bandwidth, the more data can be transmitted, and therefore the better the audio/video QoS that can be provided.
7) Code stream/code rate: the code stream (data rate) is the amount of data a video file uses per unit time, also called the code rate or bit rate; informally it is the sampling rate, and it is the most important factor in picture-quality control in video coding. Its units are kb/s or Mb/s. In general, at the same resolution, the larger the code stream of a video file, the smaller the compression ratio and the higher the picture quality. A larger code stream means a higher sampling rate per unit time and a more precise data stream, so the processed file is closer to the original file, the image quality is better and clearer, and a more capable decoder is required on the playback device. Of course, a larger code stream also means a larger file; the file size is calculated as time × code rate / 8. For example, a 90-minute 720p RMVB file with a 1 Mbps code stream, common on the network, has a size of 5400 s × 1 Mb/s / 8 = 675 MB.
8) Resolution: the frame size; each frame is an image. For 640 x 480 video, the suggested video code rate is 700 kbps or above, with an audio sampling rate of 44100 Hz. A file with an audio coding rate of 128 Kbps and a video coding rate of 800 Kbps has a total coding rate of 928 Kbps, meaning that 928 Kb are needed to represent each second of coded data. The output file size is calculated as: (audio code rate (Kbit) / 8 + video code rate (Kbit) / 8) × total duration (seconds) = file size (KB).
9) Frame rate: the frame rate, abbreviated fps (frames per second), is the number of frames of the picture refreshed per second, and can also be understood as the number of times the graphics processor refreshes per second. A higher frame rate produces smoother, more realistic motion; the greater the frames per second (FPS), the more fluid the displayed motion.
An embodiment of the present invention provides a video processing method for adjusting the encoding parameters of video data. The method is executed by a video processing apparatus, which is deployed on a video server.
Referring to fig. 1, a flowchart illustrating steps of an embodiment of a video processing method according to the present invention is shown, where the method may specifically include the following steps:
s101, receiving first video data sent by acquisition equipment, and acquiring first video coding information of the first video data;
the embodiment of the invention is particularly applied to a video processing system which comprises a collecting device, a video server, a video networking monitoring and networking management scheduling server, a video quality evaluation terminal and a video networking terminal, wherein the video server is respectively connected with the collecting device, the video quality evaluation terminal and the video networking monitoring and networking management scheduling server through the internet, and the video networking monitoring and networking management scheduling server is connected with the video networking terminal through a video networking protocol. Wherein, the collection equipment is a camera and the like.
When remote acquisition equipment needs to be monitored through a video networking terminal, the video networking monitoring and networking management scheduling server sends a monitoring task instruction to the video networking terminal and sends the identifier of the acquisition equipment to be invoked to the video server, so that the data collected by that acquisition equipment can be sent to the video networking terminal for display.
In the embodiment of the invention, the video coding information includes definition and fluency, where the definition includes one or more of the code rate and the resolution, and the fluency includes the frame rate. Specifically, the acquisition equipment collects first video data and sends it to the video server; after receiving the first video data, the video server analyzes it to obtain first video coding information of the first video data, where the first video coding information includes a first definition and a first fluency.
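For illustration, the following C sketch shows one way the video server might read the first code rate, frame rate and resolution from the camera's stream (later described as an RTSP stream); the use of FFmpeg's libavformat here is an assumption, not something specified by the patent:

```c
#include <libavformat/avformat.h>

/* Probe the camera's RTSP stream and report bitrate, frame rate and resolution.
 * Minimal sketch: error paths and stream selection are simplified. */
static int probe_first_video_info(const char *rtsp_url, int64_t *bitrate_bps,
                                  double *fps, int *width, int *height)
{
    AVFormatContext *fmt = NULL;

    avformat_network_init();                       /* needed for network inputs */
    if (avformat_open_input(&fmt, rtsp_url, NULL, NULL) < 0)
        return -1;
    if (avformat_find_stream_info(fmt, NULL) < 0) {
        avformat_close_input(&fmt);
        return -1;
    }

    int idx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (idx < 0) {
        avformat_close_input(&fmt);
        return -1;
    }

    AVStream *st = fmt->streams[idx];
    *bitrate_bps = st->codecpar->bit_rate ? st->codecpar->bit_rate : fmt->bit_rate;
    *width       = st->codecpar->width;            /* first resolution */
    *height      = st->codecpar->height;
    *fps         = av_q2d(st->avg_frame_rate);     /* first frame rate */

    avformat_close_input(&fmt);
    return 0;
}
```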
S102, receiving second video coding information of a current video networking link sent by video quality evaluation terminal equipment;
Specifically, when the collected first video data needs to be sent to the video networking terminal for display, in order to prevent screen corruption when the video data is played on the video networking terminal, the video quality evaluation terminal equipment acquires second video coding information of the current video networking link in real time and sends it to the video server, where the second video coding information includes a second definition and a second fluency.
S103, determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information;
Specifically, the video server determines the target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information; by comparing the first video coding information with the second video coding information, it determines the video coding parameters that the current video networking link can carry, so as to guarantee fluency and definition when the video networking terminal plays the video data.
S104, coding the first video data according to the target video coding information to obtain coded second video data;
Specifically, the video server performs H264 or H265 encoding on the first video data according to the obtained target video coding information to obtain the encoded second video data.
And S105, sending the second video data to the video networking terminal through the video networking protocol.
Specifically, the video server sends the second video data to the video networking monitoring and networking management scheduling server, and that server sends the second video data to the video networking terminal through the video networking protocol, so that stuttering or screen corruption is avoided when the video networking terminal plays the video data.
According to the video processing method provided by the embodiment of the present invention, first video data sent by acquisition equipment is received and first video coding information of the first video data is acquired; second video coding information of a current video networking link sent by video quality evaluation terminal equipment is received; target video coding information corresponding to the current video networking link is determined according to the first video coding information and the second video coding information; the first video data is encoded according to the target video coding information to obtain encoded second video data; and the second video data is sent to the video networking terminal through the video networking protocol. By acquiring the current video networking quality, the coding parameters of the video data are dynamically adjusted so that they adapt to different network states; the encoded video data are then sent to the video networking terminal, which avoids stuttering or screen corruption when the video networking terminal plays the video data.
The present invention further provides a supplementary description of the video processing method provided in the above embodiment.
As shown in fig. 2, a flow chart of steps of another video processing method embodiment of the present invention is shown, the video processing method comprising:
S201, receiving first video data sent by acquisition equipment, and acquiring first video coding information of the first video data;
Specifically, the first video coding information at least includes a first definition and a first fluency, the first definition includes at least one of a first code rate and a first resolution, and the first fluency includes a first frame rate.
The first video data is YUV format data.
S202, fragmenting the first video data according to a preset splitting rule to obtain fragmented video data, wherein the preset splitting rule is determined according to a first resolution of the first video data;
specifically, after receiving first video data, the video server performs fragmentation on the first video data according to a preset splitting rule to obtain fragmented video data.
The preset splitting rule is determined according to the first resolution of the first video data;
Illustratively, the first resolution includes a length (height) resolution and a width resolution, and the preset splitting rule is expressed in terms of the length resolution, the width resolution and a first preset value/second preset value; the first preset value and the second preset value may be set as needed, which is not specifically limited in the embodiments of the present invention.
S203, receiving second video coding information of the current video networking link sent by the video quality evaluation terminal equipment;
the second video coding information at least comprises a second definition and a second flow smoothness, the second definition at least comprises one or more of a second code rate and a second resolution, and the second flow smoothness comprises a second frame rate;
S204, determining a target definition corresponding to the current video networking link according to the first definition and the second definition;
Specifically, as an optional implementation, a target code rate corresponding to the current video networking link is determined according to the first code rate and the second code rate;
the video server compares the first code rate with the second code rate, and determines the smaller code rate of the first code rate and the second code rate as a target code rate;
that is, if the first code rate is greater than the second code rate, the second code rate is determined as the target code rate, and if the first code rate is less than or equal to the second code rate, the first code rate is determined as the target code rate.
As another optional implementation, determining a target resolution corresponding to the current video networking link according to the first resolution and the second resolution;
specifically, the lower resolution of the first resolution and the second resolution is determined as a target resolution;
the video server compares the first resolution with the second resolution, determines the second resolution as a target resolution if the first resolution is greater than the second resolution, and determines the first resolution as the target resolution if the first resolution is less than or equal to the second resolution.
In a specific implementation, either or both of the above two modes can be selected according to actual conditions.
And S205, determining the target fluency corresponding to the current video networking link according to the first fluency and the second fluency.
Specifically, a target frame rate corresponding to the current video networking link is determined according to the first frame rate and the second frame rate.
The video server compares the first frame rate with the second frame rate, and determines the lower frame rate of the first frame rate and the second frame rate as the target frame rate. That is, if the first frame rate is greater than the second frame rate, the second frame rate is determined as the target frame rate, and if the first frame rate is less than or equal to the second frame rate, the first frame rate is determined as the target frame rate.
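As a concrete illustration of the selection rule in S204 and S205 (taking the lower of the source value and the link value for each parameter), the following minimal C sketch is provided; the struct layout and names are assumptions for illustration, not part of the patent:

```c
#include <stdint.h>

/* Hypothetical container for the coding parameters discussed above. */
typedef struct {
    uint32_t bitrate_kbps;   /* code rate   */
    uint32_t width, height;  /* resolution  */
    uint32_t fps;            /* frame rate  */
} coding_info_t;

static uint32_t min_u32(uint32_t a, uint32_t b) { return a < b ? a : b; }

/* Target = the lower of the source (first) and link (second) values, so the
 * encoded stream never exceeds what the current video networking link carries. */
static coding_info_t select_target(coding_info_t first, coding_info_t second)
{
    coding_info_t target;
    target.bitrate_kbps = min_u32(first.bitrate_kbps, second.bitrate_kbps);
    target.width        = min_u32(first.width,  second.width);
    target.height       = min_u32(first.height, second.height);
    target.fps          = min_u32(first.fps,    second.fps);
    return target;
}
```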
And S206, respectively carrying out H264 coding on each piece of video data according to the target video coding information to obtain coded second video data.
Specifically, after acquiring the target video coding information, the video server performs H264 encoding on each fragment of video data, obtaining the encoded second video data for that fragment.
And S207, sending the second video data to the video network terminal through the video network protocol.
Specifically, the video server sends the encoded second video data of the fragments to the video networking terminal through the video networking protocol; after receiving the second video data, the video networking terminal reassembles the second video data of each fragment according to the packet sequence, decodes it, and plays the decoded video data.
Fig. 3 is a schematic structural diagram of an embodiment of a video processing system according to the present invention. As shown in Fig. 3, the video processing system includes acquisition equipment, a video server, a video networking monitoring and networking management scheduling server, a video quality evaluation terminal and video networking terminals, the video networking terminals including video networking terminal A, video networking terminal B and video networking terminal C. The video server is connected to the acquisition equipment, the video quality evaluation terminal and the video networking monitoring and networking management scheduling server through the internet, and the video networking monitoring and networking management scheduling server is connected to the video networking terminals through a video networking protocol. The acquisition equipment comprises cameras, namely camera A, camera B, camera C and the like.
Fig. 4 is a schematic structural diagram of another embodiment of a video processing system according to the present invention. As shown in Fig. 4, the camera in the video processing system is used for collecting video data;
the video quality evaluation terminal is used for detecting the network quality of the video networking link and the video quality acquired by the camera;
the Tanggula server is used for monitoring and managing the video networking terminal and pulling or pushing the video stream;
the video server is used for acquiring video, adjusting the encoding, packetizing and distributing, and pushing video streams for playback;
and the video networking terminal is used for reassembling the packets of the received second video data, decoding the video and playing the decoded video.
In the embodiment of the invention, transcoding is performed by the video server using software alone: no dedicated encoding/decoding chip or board is needed, and the processing of the data on the video server is entirely controlled and completed by software. The video server first places the video data file whose coding mode needs to be converted into an external storage or local storage device. The video file (the first video data) is then split into data segments suitable for processing by the video server and placed into a cache; the transcoding algorithm is provided by software, and the processing capacity of the video server is used to transcode the data segments. After the conversion is finished, the data segments are sent to the local storage device for storage while the cache area fetches new data segments. This cycle repeats until all of the split data segments have been transcoded; the transcoded video data file segments are then merged and output as the required second video data.
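The segment-by-segment software transcoding loop described above can be sketched as follows; the segment size, file layout and the transcode_segment()/merge_segments() helpers are hypothetical placeholders, shown only to illustrate the described flow:

```c
#include <stdio.h>
#include <stdlib.h>

#define SEGMENT_BYTES (4 * 1024 * 1024)   /* assumed cache-friendly segment size */

/* Hypothetical stand-ins for the real software transcoder and the merge step. */
static int transcode_segment(const unsigned char *in, size_t n, const char *out_path)
{
    (void)in; (void)n; (void)out_path;     /* real x264-based transcoding would go here */
    return 0;
}
static int merge_segments(const char *out_dir, const char *final_path)
{
    (void)out_dir; (void)final_path;       /* concatenate the transcoded segments here */
    return 0;
}

int transcode_file(const char *src_path, const char *out_dir, const char *final_path)
{
    FILE *fp = fopen(src_path, "rb");      /* first video data in local/external storage */
    if (!fp) return -1;

    unsigned char *buf = malloc(SEGMENT_BYTES);  /* cache area for one segment */
    size_t n;
    int idx = 0;
    char seg_path[256];

    /* Split into segments, transcode each in software, store the result,
     * then fetch the next segment into the cache, until the file is done. */
    while ((n = fread(buf, 1, SEGMENT_BYTES, fp)) > 0) {
        snprintf(seg_path, sizeof seg_path, "%s/seg_%04d.h264", out_dir, idx++);
        if (transcode_segment(buf, n, seg_path) != 0) { free(buf); fclose(fp); return -1; }
    }
    free(buf);
    fclose(fp);

    /* Merge the transcoded segments into the second video data. */
    return merge_segments(out_dir, final_path);
}
```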
In the process of transcoding, the video server can select appropriate coding parameters, such as the code rate, resolution and frame rate, according to the network condition of the current video networking link.
1) Implementation of dynamic code rates
The target code rate of the current frame is calculated according to the frame rate set on a fixed time axis, and the application programming interface (API) of the x264 encoding library is called to encode a YUV data frame, i.e., a data frame in the fragmented video data obtained by splitting the first video data; each encoded data frame is then stamped with a timestamp at a fixed interval. This keeps the frame rate constant while the code rate is adjusted dynamically: during encoding, samples are taken at a preset time interval and then encoded, so the frame rate (fluency) stays unchanged while the code rate (definition) varies, i.e., a fluency-first effect.
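As a hedged illustration of this fluency-first mode, the C sketch below encodes YUV frames with the x264 library, stamping each frame with a fixed-interval timestamp and applying a newly calculated target code rate between frames; the ABR rate control, the preset names and the use of x264_encoder_reconfig (whose accepted rate-control changes vary by x264 version) are assumptions rather than the patent's exact implementation:

```c
#include <stdint.h>
#include <x264.h>

/* Open an ABR encoder at the current target code rate (kbps). */
static x264_t *open_encoder(x264_param_t *p, int w, int h, int fps, int kbps)
{
    x264_param_default_preset(p, "veryfast", "zerolatency");
    p->i_width   = w;
    p->i_height  = h;
    p->i_csp     = X264_CSP_I420;
    p->i_fps_num = fps;
    p->i_fps_den = 1;
    p->rc.i_rc_method = X264_RC_ABR;
    p->rc.i_bitrate   = kbps;                 /* target code rate for the link */
    return x264_encoder_open(p);
}

/* Encode one YUV frame with a fixed PTS step so the frame rate stays constant
 * while the code rate may be adjusted between frames. */
static int encode_frame(x264_t *enc, x264_picture_t *pic, int64_t frame_idx,
                        int64_t pts_step, x264_nal_t **nal, int *n_nal)
{
    x264_picture_t out;
    pic->i_pts = frame_idx * pts_step;        /* fixed-interval timestamp */
    return x264_encoder_encode(enc, nal, n_nal, pic, &out);
}

/* Apply a newly calculated target code rate; x264_encoder_reconfig only accepts
 * certain rate-control changes, so this step is an assumption about the deployment. */
static int set_bitrate(x264_t *enc, x264_param_t *p, int kbps)
{
    p->rc.i_bitrate = kbps;
    return x264_encoder_reconfig(enc, p);
}
```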
2) Implementation of dynamic frame rates
The target frame rate of the current frame is calculated, then the x264 API is called dynamically along the time axis to encode the YUV data frame, and the encoded frame is stamped with a non-fixed, dynamic timestamp, so that the code rate stays unchanged while the frame rate is adjusted dynamically. In other words, the video server samples at different intervals according to the frame rate at each moment and then encodes: more samples are taken at moments when the frame rate is higher and fewer when it is lower, so that the code rate (definition) remains unchanged while the fluency varies, i.e., a definition-first setting.
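For this definition-first mode, a minimal C sketch of one way to take fewer samples when the momentary target frame rate is lower is given below; the accumulator approach and naming are assumptions for illustration, not the patent's exact algorithm:

```c
#include <stdint.h>

/* Accumulator that decides which source frames to encode so the output
 * approximates the momentary target frame rate (illustrative only). */
typedef struct {
    double owed;   /* fraction of an output frame accumulated so far */
} fps_gate_t;

static int should_encode(fps_gate_t *g, double src_fps, double target_fps)
{
    g->owed += target_fps / src_fps;   /* e.g. 15/30 adds 0.5 per source frame */
    if (g->owed >= 1.0) {
        g->owed -= 1.0;
        return 1;                      /* encode this frame */
    }
    return 0;                          /* skip it: fewer samples at low frame rates */
}

/* Each encoded frame would then carry its real capture time as a non-fixed,
 * dynamic timestamp (e.g. pic.i_pts = capture_time), keeping the code rate
 * target unchanged while the fluency varies. */
```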
3) Implementation of dynamic resolution
The target resolution of the current video is calculated; for example, a 720p source video is scaled down to 480p. In the original implementation, the scaling calculation was performed and then another read/write pass over the frame data was needed, which is inefficient. To optimize this part, the resolution is modified at the same time that the color space conversion copies the source data to the target data. With the amount of computation almost unchanged (the scaling adds a small amount of computation, but no extra per-frame read/write pass is needed), the resolution-changing function is realized.
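One common way to fold the scaling into the same pass as the color-space conversion is a single sws_scale() call from FFmpeg's libswscale; the patent does not name a library, so the sketch below is only an assumed illustration of the idea:

```c
#include <stdint.h>
#include <libswscale/swscale.h>

/* Scale (e.g. 1280x720 -> 854x480) while the frame data is being copied to the
 * target buffer; if the source and target pixel formats differ, the same call
 * also performs the color-space conversion, so no extra read/write pass is needed.
 * Pixel formats and scaler flags here are assumptions for illustration. */
static int scale_frame(const uint8_t *const src_data[4], const int src_linesize[4],
                       int src_w, int src_h,
                       uint8_t *const dst_data[4], const int dst_linesize[4],
                       int dst_w, int dst_h)
{
    struct SwsContext *ctx = sws_getContext(src_w, src_h, AV_PIX_FMT_YUV420P,
                                            dst_w, dst_h, AV_PIX_FMT_YUV420P,
                                            SWS_BILINEAR, NULL, NULL, NULL);
    if (!ctx)
        return -1;

    int out_h = sws_scale(ctx, src_data, src_linesize, 0, src_h,
                          dst_data, dst_linesize);

    sws_freeContext(ctx);
    return out_h == dst_h ? 0 : -1;
}
```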
The video processing method provided by the embodiment of the invention comprises the following steps:
<1> The video server acquires an RTSP video stream, namely the first video data, from the camera, extracts the YUV data, and obtains the first code rate, the first frame rate and the first resolution.
<2> According to the video networking network condition sent by the video quality evaluation terminal equipment, a target code rate (second code rate) can be set and compared with the source code rate (first code rate): if the target code rate is greater than or equal to the source code rate, the source code rate is kept unchanged; if the target code rate is smaller than the source code rate, the target code rate is adopted. The YUV data frames are then H264-encoded with the frame rate of the source data kept unchanged on a fixed time axis.
<3> According to the network condition, a target frame rate (second frame rate) is set and compared with the source frame rate (first frame rate): if the target frame rate is greater than or equal to the source frame rate, the source frame rate is kept unchanged; if the target frame rate is smaller than the source frame rate, the target frame rate is adopted. For the source data, the code rate is kept unchanged and the YUV data frames are H264-encoded with non-fixed, dynamic timestamps.
<4> According to the network condition, a target resolution (second resolution) is set and compared with the source resolution (first resolution): if the target resolution is greater than or equal to the source resolution, the source resolution is kept unchanged; if the target resolution is smaller than the source resolution, the target resolution is adopted. For the source data, the code rate and frame rate are kept unchanged, and the data calculation is optimized so that the scaling calculation and the color conversion are performed in the same pass; completing both functions in a single computation increases the transcoding rate and improves performance.
<5> The video server packetizes the transcoded H264 second video data according to the video networking protocol and transmits it to the video networking terminal.
And <6> the video network terminal receives the second video data of the H264, performs packet packing and decoding, and then plays the second video data.
The embodiments of the invention solve the problem of transmission under poor video networking quality: even when the video networking quality is poor, the real-time video from remote monitoring can be transmitted stably over the video networking, and the terminal can play the monitoring video smoothly.
It should be noted that, for simplicity of description, the method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the illustrated order of acts, as some steps may occur in other orders or concurrently in accordance with the embodiments of the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred and that no particular act is required to implement the invention.
According to the video processing method provided by the embodiment of the present invention, first video data sent by acquisition equipment is received and first video coding information of the first video data is acquired; second video coding information of a current video networking link sent by video quality evaluation terminal equipment is received; target video coding information corresponding to the current video networking link is determined according to the first video coding information and the second video coding information; the first video data is encoded according to the target video coding information to obtain encoded second video data; and the second video data is sent to the video networking terminal through the video networking protocol. By acquiring the current video networking quality, the coding parameters of the video data are dynamically adjusted so that they adapt to different network states; the encoded video data are then sent to the video networking terminal, which avoids stuttering or screen corruption when the video networking terminal plays the video data.
Another embodiment of the present invention provides a video processing apparatus, configured to execute the video processing method provided in the foregoing embodiment.
Referring to fig. 5, a block diagram of a video processing apparatus according to an embodiment of the present invention is shown, where the apparatus may be applied in a video network, and specifically may include the following modules: a first receiving module 501, a second receiving module 502, a determining module 503, an encoding module 504, and a transmitting module 505, wherein:
the first receiving module 501 is configured to receive first video data sent by a collection device, and acquire first video coding information of the first video data;
the second receiving module 502 is configured to receive second video coding information of the current video networking link sent by the video quality assessment terminal device;
the determining module 503 is configured to determine, according to the first video coding information and the second video coding information, target video coding information corresponding to the current video networking link;
the encoding module 504 is configured to encode the first video data according to the target video encoding information to obtain encoded second video data;
the sending module 505 is configured to send the second video data to the video networking terminal through the video networking protocol.
The video processing apparatus provided by the embodiment of the present invention receives first video data sent by acquisition equipment and acquires first video coding information of the first video data; receives second video coding information of a current video networking link sent by video quality evaluation terminal equipment; determines target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information; encodes the first video data according to the target video coding information to obtain encoded second video data; and sends the second video data to the video networking terminal through the video networking protocol. By acquiring the current video networking quality, the coding parameters of the video data are dynamically adjusted so that they adapt to different network states; the encoded video data are then sent to the video networking terminal, which avoids stuttering or screen corruption when the video networking terminal plays the video data.
The present invention further provides a supplementary description of the video processing apparatus provided in the above embodiments.
Optionally, the first video coding information includes at least a first definition and a first fluency, the second video coding information includes at least a second definition and a second fluency, and the determining module is configured to:
determining the target definition corresponding to the current video networking link according to the first definition and the second definition;
and/or
And determining the target fluency corresponding to the current video networking link according to the first fluency and the second fluency.
Optionally, the first definition includes at least one of a first code rate and a first resolution, the first fluency includes a first frame rate, the second definition includes at least one of a second code rate and a second resolution, and the second fluency includes a second frame rate;
a determination module specifically configured to:
determining a target code rate corresponding to the current video networking link according to the first code rate and the second code rate;
and/or
Determining a target resolution corresponding to the current video networking link according to the first resolution and the second resolution;
a determination module specifically configured to:
and determining a target frame rate corresponding to the current video networking link according to the first frame rate and the second frame rate.
Optionally, the determining module is specifically configured to:
determining the smaller code rate of the first code rate and the second code rate as a target code rate;
determining the lower resolution of the first resolution and the second resolution as a target resolution;
and determining the lower frame rate of the first frame rate and the second frame rate as the target frame rate.
Optionally, encoding the first video data according to the target video encoding information to obtain encoded second video data, including:
fragmenting the first video data according to a preset splitting rule to obtain fragmented video data, wherein the preset splitting rule is determined according to a first resolution of the first video data;
and respectively carrying out H264 coding on each piece of video data according to the target video coding information to obtain coded second video data.
It should be noted that the implementable modes in this embodiment may be implemented individually or combined in any manner without conflict, which is not limited in the present application.
For the device embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, refer to the partial description of the method embodiment.
The video processing apparatus provided by the embodiment of the present invention receives first video data sent by acquisition equipment and acquires first video coding information of the first video data; receives second video coding information of a current video networking link sent by video quality evaluation terminal equipment; determines target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information; encodes the first video data according to the target video coding information to obtain encoded second video data; and sends the second video data to the video networking terminal through the video networking protocol. By acquiring the current video networking quality, the coding parameters of the video data are dynamically adjusted so that they adapt to different network states; the encoded video data are then sent to the video networking terminal, which avoids stuttering or screen corruption when the video networking terminal plays the video data.
Still another embodiment of the present invention provides a terminal device, configured to execute the video processing method provided in the foregoing embodiment.
Fig. 6 is a schematic structural diagram of a terminal device of the present invention, and as shown in fig. 6, the terminal device includes: at least one processor 601 and memory 602;
the memory stores a computer program; at least one processor executes the computer program stored in the memory to implement the video processing method provided by the above-described embodiments.
The terminal device provided by this embodiment receives first video data sent by acquisition equipment and acquires first video coding information of the first video data; receives second video coding information of a current video networking link sent by video quality evaluation terminal equipment; determines target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information; encodes the first video data according to the target video coding information to obtain encoded second video data; and sends the second video data to the video networking terminal through the video networking protocol. By acquiring the current video networking quality, the coding parameters of the video data are dynamically adjusted so that they adapt to different network states; the encoded video data are then sent to the video networking terminal, which avoids stuttering or screen corruption when the video networking terminal plays the video data.
Yet another embodiment of the present application provides a computer-readable storage medium, in which a computer program is stored, and when the computer program is executed, the video processing method provided in any of the above embodiments is implemented.
With the computer-readable storage medium of this embodiment, first video data sent by acquisition equipment is received and first video coding information of the first video data is acquired; second video coding information of a current video networking link sent by video quality evaluation terminal equipment is received; target video coding information corresponding to the current video networking link is determined according to the first video coding information and the second video coding information; the first video data is encoded according to the target video coding information to obtain encoded second video data; and the second video data is sent to the video networking terminal through the video networking protocol. By acquiring the current video networking quality, the coding parameters of the video data are dynamically adjusted so that they adapt to different network states; the encoded video data are then sent to the video networking terminal, which avoids stuttering or screen corruption when the video networking terminal plays the video data.
The embodiments in the present specification are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, apparatus, or computer program product. Accordingly, embodiments of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, embodiments of the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
Embodiments of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, electronic devices (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor or other programmable data processing electronic device to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing electronic device, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing electronic device to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing electronic device to cause a series of operational steps to be performed on the computer or other programmable electronic device to produce a computer implemented process, such that the instructions which execute on the computer or other programmable electronic device provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
While preferred embodiments of the present invention have been described, additional variations and modifications of these embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the embodiments of the invention.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or electronic device that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or electronic device. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or electronic device that comprises the element.
The foregoing detailed description of a video processing method and a video processing apparatus according to the present invention has been presented, and the principles and embodiments of the present invention are explained herein by using specific examples, which are only used to help understand the method and the core idea of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present invention.

Claims (10)

1. A method of video processing, the method comprising:
receiving first video data sent by acquisition equipment, and acquiring first video coding information of the first video data;
receiving second video coding information of a current video networking link sent by video quality evaluation terminal equipment;
determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information;
coding the first video data according to the target video coding information to obtain coded second video data;
and sending the second video data to a video networking terminal through a video networking protocol.
2. The method of claim 1, wherein the first video coding information includes at least a first definition and a first fluency, the second video coding information includes at least a second definition and a second fluency, and the determining target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information comprises:
Determining a target definition corresponding to the current video networking link according to the first definition and the second definition;
and/or
Determining a target fluency corresponding to the current video networking link according to the first fluency and the second fluency.
3. The video processing method of claim 2, wherein the first definition includes at least one of a first code rate and a first resolution, the first fluency includes a first frame rate, the second definition includes at least one of a second code rate and a second resolution, and the second fluency includes a second frame rate;
The determining a target definition corresponding to the current video networking link according to the first definition and the second definition comprises:
determining a target code rate corresponding to the current video networking link according to the first code rate and the second code rate;
and/or
Determining a target resolution corresponding to the current video networking link according to the first resolution and the second resolution;
the determining a target fluency corresponding to the current view networking link according to the first fluency and the second fluency comprises:
and determining a target frame rate corresponding to the current video networking link according to the first frame rate and the second frame rate.
4. The video processing method of claim 3, wherein the determining a target bit rate corresponding to the current video networking link according to the first bit rate and the second bit rate comprises:
determining the lower of the first bit rate and the second bit rate as the target bit rate;
the determining a target resolution corresponding to the current video networking link according to the first resolution and the second resolution comprises:
determining the lower of the first resolution and the second resolution as the target resolution;
and the determining a target frame rate corresponding to the current video networking link according to the first frame rate and the second frame rate comprises:
determining the lower of the first frame rate and the second frame rate as the target frame rate.
5. The video processing method according to any one of claims 1 to 4, wherein the encoding the first video data according to the target video coding information to obtain encoded second video data comprises:
fragmenting the first video data according to a preset splitting rule to obtain fragmented video data, wherein the preset splitting rule is determined according to the first resolution of the first video data;
and performing H.264 encoding on each piece of the fragmented video data respectively according to the target video coding information to obtain the encoded second video data.
6. A video processing apparatus, characterized in that the apparatus comprises:
a first receiving module, configured to receive first video data sent by acquisition equipment and acquire first video coding information of the first video data;
a second receiving module, configured to receive second video coding information of a current video networking link sent by video quality evaluation terminal equipment;
a determining module, configured to determine target video coding information corresponding to the current video networking link according to the first video coding information and the second video coding information;
an encoding module, configured to encode the first video data according to the target video coding information to obtain encoded second video data;
and a sending module, configured to send the second video data to a video networking terminal through a video networking protocol.
7. The video processing apparatus of claim 6, wherein the first video coding information comprises at least a first definition and a first fluency, the second video coding information comprises at least a second definition and a second fluency, and the determining module is configured to:
determine a target definition corresponding to the current video networking link according to the first definition and the second definition;
and/or
determine a target fluency corresponding to the current video networking link according to the first fluency and the second fluency.
8. The video processing apparatus of claim 7, wherein the first definition comprises at least one or more of a first bit rate and a first resolution, the first fluency comprises a first frame rate, the second definition comprises at least one or more of a second bit rate and a second resolution, and the second fluency comprises a second frame rate;
the determining module is specifically configured to:
determine a target bit rate corresponding to the current video networking link according to the first bit rate and the second bit rate;
and/or
determine a target resolution corresponding to the current video networking link according to the first resolution and the second resolution;
and the determining module is further configured to:
determine a target frame rate corresponding to the current video networking link according to the first frame rate and the second frame rate.
9. A terminal device, comprising: at least one processor and memory;
wherein the memory stores a computer program, and the at least one processor executes the computer program stored in the memory to implement the video processing method of any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, and the computer program, when executed, implements the video processing method of any one of claims 1 to 5.
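
For illustration only, and not as part of the claims, the following Python sketch shows one way the end-to-end flow of claim 1 could be organized. The CodingInfo structure and all function names are hypothetical; the encoder and the video networking transport are passed in as opaque callables rather than implemented.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

@dataclass
class CodingInfo:
    """Video coding information: definition (bit rate, resolution) and fluency (frame rate)."""
    bit_rate_kbps: int
    resolution: Tuple[int, int]  # (width, height)
    frame_rate: int

def process_video(first_video_data: bytes,
                  first_info: CodingInfo,
                  link_info: CodingInfo,
                  determine_target: Callable[[CodingInfo, CodingInfo], CodingInfo],
                  encode: Callable[[bytes, CodingInfo], bytes],
                  send_over_video_network: Callable[[bytes], None]) -> None:
    """Run the steps of claim 1 in order.

    first_info describes the first video data received from the acquisition
    device; link_info is the second video coding information reported by the
    video quality evaluation terminal for the current video networking link.
    """
    target_info = determine_target(first_info, link_info)      # determine target coding information
    second_video_data = encode(first_video_data, target_info)  # re-encode the first video data
    send_over_video_network(second_video_data)                  # send via the video networking protocol
```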
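The parameter selection in claim 4 amounts to taking the lower value of each pair. Below is a minimal sketch of that rule, reusing the hypothetical CodingInfo structure from the previous sketch; reading "lower resolution" as "fewer total pixels" is an assumption, since the claim does not say how resolutions are compared.

```python
def determine_target(first: CodingInfo, second: CodingInfo) -> CodingInfo:
    """Pick the lower of each parameter pair as the target (per claim 4)."""
    return CodingInfo(
        bit_rate_kbps=min(first.bit_rate_kbps, second.bit_rate_kbps),
        # "lower resolution" interpreted as fewer total pixels (an assumption)
        resolution=min(first.resolution, second.resolution, key=lambda wh: wh[0] * wh[1]),
        frame_rate=min(first.frame_rate, second.frame_rate),
    )

# Example: the source is 1080p / 30 fps at 4 Mbit/s, but the current link only
# sustains 720p / 25 fps at 2 Mbit/s, so the target is 720p / 25 fps at 2 Mbit/s.
source = CodingInfo(bit_rate_kbps=4000, resolution=(1920, 1080), frame_rate=30)
link = CodingInfo(bit_rate_kbps=2000, resolution=(1280, 720), frame_rate=25)
print(determine_target(source, link))
```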
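Claim 5 first fragments the video data and then encodes each fragment with the negotiated parameters. The sketch below treats the first video data as a sequence of raw frames, stands in for the undisclosed preset splitting rule with a simple resolution threshold (an assumption), and leaves h264_encode as a hypothetical placeholder for a real H.264 encoder (for example one driven through FFmpeg).

```python
from typing import List, Sequence, Tuple

def split_into_fragments(frames: Sequence, first_resolution: Tuple[int, int]) -> List[list]:
    """Apply a preset splitting rule derived from the first resolution.

    The patent does not disclose the rule; as a stand-in, higher source
    resolutions are split into smaller fragments here.
    """
    width, height = first_resolution
    frames_per_fragment = 15 if width * height > 1280 * 720 else 30  # assumed rule
    return [list(frames[i:i + frames_per_fragment])
            for i in range(0, len(frames), frames_per_fragment)]

def encode_fragments(frames: Sequence, first_resolution: Tuple[int, int],
                     target: CodingInfo, h264_encode) -> List[bytes]:
    """Fragment the first video data, then H.264-encode every fragment (per claim 5).

    h264_encode(fragment, target) is a hypothetical placeholder; it is expected
    to return the encoded bitstream for one fragment.
    """
    fragments = split_into_fragments(frames, first_resolution)
    return [h264_encode(fragment, target) for fragment in fragments]
```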
CN202111288773.0A 2021-11-02 2021-11-02 Video processing method and device, terminal equipment and storage medium Pending CN114025171A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111288773.0A CN114025171A (en) 2021-11-02 2021-11-02 Video processing method and device, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111288773.0A CN114025171A (en) 2021-11-02 2021-11-02 Video processing method and device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114025171A true CN114025171A (en) 2022-02-08

Family

ID=80060330

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111288773.0A Pending CN114025171A (en) 2021-11-02 2021-11-02 Video processing method and device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114025171A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115103211A (en) * 2022-07-27 2022-09-23 广州迈聆信息科技有限公司 Data transmission method, electronic device, equipment and computer readable storage device
CN115103211B (en) * 2022-07-27 2023-01-10 广州迈聆信息科技有限公司 Data transmission method, electronic device, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
CN108810636B (en) Video playing method, virtual reality equipment, server, system and storage medium
JP4690280B2 (en) Method, system and client device for streaming media data
JP4965059B2 (en) Switching video streams
CN110662100A (en) Information processing method, device and system and computer readable storage medium
JP4620041B2 (en) Stochastic adaptive content streaming
US8760490B2 (en) Techniques for a rate-adaptive video conference bridge
CN111669619B (en) Video stream data switching method, device, terminal and readable storage medium
KR100678891B1 (en) Method and apparatus for contents&#39; attribute adaptive buffer control in audio-video data receiving
CN104394484A (en) Wireless live streaming media transmission method
KR20130005873A (en) Method and apparatus for receiving contents in broadcast system
EP1679895A1 (en) Medium signal transmission method, reception method, transmission/reception method, and device
EP1187460A2 (en) Image transmitting method and apparatus and image receiving method and apparatus
KR101472032B1 (en) Method of treating representation switching in HTTP streaming
CN106791860A (en) A kind of adaptive video coding control system and method
KR100511034B1 (en) Mpeg video bit stream transmission apparatus and method
US6412013B1 (en) System for controlling data output to a network
US10924786B2 (en) Method for shaping video streams and set-up box using the method
CN114025171A (en) Video processing method and device, terminal equipment and storage medium
CN108810468B (en) Video transmission device and method for optimizing display effect
US20130007206A1 (en) Transmission apparatus, control method for transmission apparatus, and storage medium
CN113596112A (en) Transmission method for video monitoring
CN115665485B (en) Video picture optimization method and device, storage medium and video terminal
US20120269259A1 (en) System and Method for Encoding VBR MPEG Transport Streams in a Bounded Constant Bit Rate IP Network
CN103475906B (en) Measuring method and measurement apparatus for media stream
KR101795958B1 (en) Adaptive control method, apparatus and user device for providing video in real time network cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination