CN109168032B - Video data processing method, terminal, server and storage medium

Info

Publication number
CN109168032B
CN109168032B (application CN201811337105.0A)
Authority
CN
China
Prior art keywords
target area
video data
video image
area information
data stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811337105.0A
Other languages
Chinese (zh)
Other versions
CN109168032A (en)
Inventor
黄书敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Kugou Computer Technology Co Ltd
Original Assignee
Guangzhou Kugou Computer Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Kugou Computer Technology Co Ltd
Priority to CN201811337105.0A
Publication of CN109168032A (application publication)
Application granted
Publication of CN109168032B (granted publication)

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20: Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23: Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234: Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/2343: Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44: Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/4402: Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The invention discloses a video data processing method, a terminal, a server and a storage medium, and belongs to the technical field of data processing. A first device acquires target area information of at least one frame of original video image and, while generating a video data stream, makes the generated stream carry the corresponding target area information. After receiving the video data stream, a second device can therefore extract the required target area information directly from the stream, avoiding the complex process of deriving it again from the decoded video images, which greatly saves data processing time and reduces the system load.

Description

Video data processing method, terminal, server and storage medium
Technical Field
The present invention relates to the field of data processing technologies, and in particular, to a method, a terminal, a server, and a storage medium for processing video data.
Background
With the continuous development of data processing technology, video data is processed in more and more ways. For example, video data may need transcoding to adapt to different network bandwidths or to the processing capabilities of different terminals, and it may need mixing (combining several streams into one) to meet different user requirements. During such processing, a target region of the corresponding video images can be identified as required, for example a region of interest, so that during video encoding more bit rate can be allocated to that region, improving the encoding quality.
At present, video data is commonly processed as follows. According to a set identification rule, target area identification is performed on at least one frame of original video image, and the images are encoded based on the identified target areas so that more bit rate is allocated to those areas during encoding, producing a corresponding video data stream. When the video data stream is transcoded, it is first decoded into the corresponding video images, target area identification is then performed on those images again according to the identification rule, and the images are re-encoded at a different target bit rate based on the newly identified target areas, finally producing the target video data stream corresponding to the target bit rate.
With this processing method, target area identification has to be performed on the video images many times while encoding and re-encoding the at least one frame of original video image. Since identifying a target area is complex and time-consuming, performing the identification repeatedly greatly increases the system burden.
Disclosure of Invention
The embodiments of the present invention provide a video data processing method, a terminal, a server and a storage medium, which can solve the problem that target area identification has to be performed on video images many times. The technical scheme is as follows:
in one aspect, a method for processing video data is provided, and the method includes:
acquiring at least one frame of original video image;
acquiring target area information of the at least one frame of original video image based on the at least one frame of original video image;
coding the at least one frame of original video image based on the target area information of the at least one frame of original video image to generate a video data stream, wherein the video data stream carries the target area information of the at least one frame of original video image;
and sending the video data stream to a second device.
In one possible implementation manner, encoding the at least one frame of original video image based on the target area information of the at least one frame of original video image to generate a video data stream carrying that target area information includes:
encoding the target area information of the at least one frame of original video image together with the at least one frame of original video image to generate at least one first data packet carrying at least one target area identifier, where the at least one target area identifier is obtained by encoding the target area information of the at least one frame of original video image;
and generating the video data stream based on the at least one first data packet carrying at least one target area identifier.
In one possible implementation manner, encoding the at least one frame of original video image based on the target area information of the at least one frame of original video image to generate a video data stream carrying that target area information includes:
encoding target area information of the at least one frame of original video image to generate at least one second data packet;
encoding the at least one original video image to generate at least one first data packet;
and inserting one second data packet after every preset number of first data packets to generate the video data stream.
In one aspect, a method for processing video data is provided, and the method includes:
receiving a video data stream, wherein the video data stream carries target area information of at least one frame of original video image;
extracting target area information of the at least one frame of original video image from the video data stream;
decoding the video data stream to generate a video image corresponding to the video data stream;
and re-encoding the video image corresponding to the video data stream based on the target area information corresponding to the video data stream, to generate a target video data stream.
In one possible implementation manner, the extracting, based on the video data stream, the target area information of the at least one original video image includes:
extracting at least one target area identification based on at least one field of at least one first data packet in the video data stream;
and decoding the at least one target area identifier to generate target area information of the at least one frame of original video image.
In one possible implementation manner, the extracting, based on the video data stream, the target area information of the at least one original video image includes:
and decoding, based on at least one first data packet and at least one second data packet in the video data stream, the second data packet that follows every preset number of first data packets, to generate the target area information of the at least one frame of original video image.
In one aspect, a method for processing video data is provided, and the method includes:
receiving at least two video data streams, wherein each video data stream carries target area information of at least one frame of original video image;
extracting, from the at least two video data streams, target area information of at least one frame of original video image corresponding to each video data stream;
decoding each video data stream to generate the video images corresponding to the at least two video data streams;
merging the video images corresponding to the at least two video data streams to generate a target video image;
and re-encoding the target video image based on the target area information corresponding to the at least two video data streams to generate a target video data stream.
In a possible implementation manner, the extracting, based on the at least two video data streams, target area information of at least one frame of original video image corresponding to each video data stream includes:
extracting at least one target area identifier corresponding to the at least two video data streams based on at least one field of at least one first data packet in each video data stream;
and decoding the at least one target area identifier corresponding to each video data stream to generate target area information of at least one frame of original video image corresponding to the at least two video data streams.
In a possible implementation manner, the extracting, based on the at least two video data streams, target area information of at least one frame of original video image corresponding to each video data stream includes:
and decoding, based on at least one first data packet and at least one second data packet in each video data stream, the second data packet that follows every preset number of first data packets, to generate target area information of at least one frame of original video image in the at least two video data streams.
In one aspect, an apparatus for processing video data is provided, the apparatus comprising:
the acquisition module is used for acquiring at least one frame of original video image;
the acquisition module is further configured to acquire target area information of the at least one frame of original video image based on the at least one frame of original video image;
the generating module is used for encoding the at least one frame of original video image based on the target area information of the at least one frame of original video image to generate a video data stream, and the video data stream carries the target area information of the at least one frame of original video image;
and the sending module is used for sending the video data stream to the second equipment.
In one possible implementation, the generating module is configured to:
encoding the target area information of the at least one frame of original video image together with the at least one frame of original video image to generate at least one first data packet carrying at least one target area identifier, where the at least one target area identifier is obtained by encoding the target area information of the at least one frame of original video image;
and generating the video data stream based on the at least one first data packet carrying at least one target area identifier.
In one possible implementation, the generating module is configured to:
encoding target area information of the at least one frame of original video image to generate at least one second data packet;
encoding the at least one frame of original video image to generate at least one first data packet;
and inserting one second data packet after every preset number of first data packets to generate the video data stream.
In one aspect, an apparatus for processing video data is provided, the apparatus comprising:
the receiving module is used for receiving a video data stream, and the video data stream carries target area information of at least one frame of original video image;
the extraction module is used for extracting target area information of the at least one frame of original video image based on the video data stream;
the decoding module is used for decoding the video data stream to generate a video image corresponding to the video data stream;
and the re-encoding module is used for re-encoding the video image corresponding to the video data stream based on the target area information of the at least one frame of original video image to generate a target video data stream.
In one possible implementation, the extraction module is configured to:
extracting at least one target area identification based on at least one field of at least one first data packet in the video data stream;
and decoding the at least one target area identifier to generate target area information of the at least one frame of original video image.
In one possible implementation, the extraction module is configured to:
and decoding, based on at least one first data packet and at least one second data packet in the video data stream, the second data packet that follows every preset number of first data packets, to generate the target area information of the at least one frame of original video image.
In one aspect, an apparatus for processing video data is provided, the apparatus comprising:
the receiving module is used for receiving at least two video data streams, each video data stream carrying target area information of at least one frame of original video image;
the extraction module is used for extracting, based on the at least two video data streams, target area information of at least one frame of original video image corresponding to each video data stream;
the decoding module is used for decoding each video data stream to generate the video images corresponding to the at least two video data streams;
the merging module is used for merging the video images corresponding to the at least two video data streams to generate a target video image;
and the re-encoding module is used for re-encoding the target video image based on the target area information corresponding to the at least two video data streams to generate the target video data stream.
In one possible implementation, the extraction module is configured to:
extracting at least one target area identifier corresponding to the at least two video data streams based on at least one field of at least one first data packet in each video data stream;
and decoding the at least one target area identifier corresponding to each video data stream to generate target area information of at least one frame of original video image corresponding to the at least two video data streams.
In one possible implementation, the extraction module is configured to:
and decoding, based on at least one first data packet and at least one second data packet in each video data stream, the second data packet that follows every preset number of first data packets, to generate target area information of at least one frame of original video image in the at least two video data streams.
In one aspect, a terminal is provided. The terminal includes a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the above video data processing method.
In one aspect, a server is provided. The server includes a processor and a memory, the memory storing at least one instruction that is loaded and executed by the processor to implement the operations performed by the above video data processing method.
In one aspect, a computer-readable storage medium is provided. The storage medium stores at least one instruction that is loaded and executed by a processor to implement the operations performed by the above video data processing method.
In the embodiments of the present invention, the first device acquires the target area information of the at least one frame of original video image and, while generating the video data stream, makes the generated stream carry the corresponding target area information. After receiving the video data stream, the second device can therefore extract the required target area information directly from the stream, avoiding the complex process of deriving it again from the related video images, which greatly saves data processing time and reduces the system load.
Drawings
To describe the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 2 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 3 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 4 is a flowchart of a method for processing video data according to an embodiment of the present invention;
fig. 5 is a flowchart of encoding and transcoding a video image according to an embodiment of the present invention;
fig. 6 is a flowchart of a method for processing video data according to an embodiment of the present invention;
FIG. 7 is a flow chart of encoding and blending video images according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
fig. 9 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
fig. 10 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention;
fig. 11 is a block diagram of a terminal according to an embodiment of the present invention;
fig. 12 is a schematic structural diagram of a server according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart of a method for processing video data according to an embodiment of the present invention, where the method for processing video data can be applied to a first device. Referring to fig. 1, the embodiment includes:
101. At least one frame of original video image is acquired.
102. Target area information of the at least one frame of original video image is acquired based on the at least one frame of original video image.
103. The at least one frame of original video image is encoded based on its target area information to generate a video data stream, and the video data stream carries the target area information of the at least one frame of original video image.
104. The video data stream is sent to a second device.
In some embodiments, encoding the at least one frame of original video image based on the target area information of the at least one frame of original video image to generate a video data stream carrying that target area information includes:
encoding the target area information of the at least one frame of original video image together with the at least one frame of original video image to generate at least one first data packet carrying at least one target area identifier, where the at least one target area identifier is obtained by encoding the target area information of the at least one frame of original video image;
and generating the video data stream based on the at least one first data packet carrying at least one target area identifier.
In some embodiments, encoding the at least one frame of original video image based on the target area information of the at least one frame of original video image to generate a video data stream carrying that target area information includes:
encoding target area information of the at least one frame of original video image to generate at least one second data packet;
encoding the at least one frame of original video image to generate at least one first data packet;
and inserting one second data packet after every preset number of first data packets to generate the video data stream.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 2 is a flowchart of a method for processing video data according to an embodiment of the present invention, where the method for processing video data can be applied to a second device. Referring to fig. 2, the embodiment includes:
201. A video data stream is received, the video data stream carrying target area information of at least one frame of original video image.
202. The target area information of the at least one frame of original video image is extracted from the video data stream.
203. The video data stream is decoded to generate the video image corresponding to the video data stream.
204. The video image corresponding to the video data stream is re-encoded based on the target area information carried in the video data stream to generate the target video data stream.
In some embodiments, the extracting the target area information of the at least one original video image based on the video data stream includes:
extracting at least one target area identification based on at least one field of at least one first data packet in the video data stream;
and decoding the at least one target area identifier to obtain the target area information of the at least one frame of original video image.
In some embodiments, the extracting the target area information of the at least one original video image based on the video data stream includes:
and decoding, based on at least one first data packet and at least one second data packet in the video data stream, the second data packet that follows every preset number of first data packets, to generate the target area information of the at least one frame of original video image.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 3 is a flowchart of a method for processing video data according to an embodiment of the present invention, where the method for processing video data can be applied to a second device. Referring to fig. 3, the embodiment includes:
301. At least two video data streams are received, each video data stream carrying target area information of at least one frame of original video image.
302. The target area information of the at least one frame of original video image corresponding to each video data stream is extracted from the at least two video data streams.
303. Each video data stream is decoded to generate the video images corresponding to the at least two video data streams.
304. The video images corresponding to the at least two video data streams are merged to generate a target video image.
305. The target video image is re-encoded based on the target area information corresponding to the at least two video data streams to generate the target video data stream.
In some embodiments, the extracting, based on the at least two video data streams, the target area information of the at least one original video image corresponding to each video data stream includes:
extracting at least one target area identifier corresponding to the at least two video data streams based on at least one field of at least one first data packet in each video data stream;
and decoding the at least one target area identifier corresponding to each video data stream to obtain target area information of at least one frame of original video image corresponding to the at least two video data streams.
In some embodiments, the extracting, based on the at least two video data streams, the target area information of the at least one original video image corresponding to each video data stream includes:
and decoding, based on at least one first data packet and at least one second data packet in each video data stream, the second data packet that follows every preset number of first data packets, to generate target area information of at least one frame of original video image in the at least two video data streams.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
Fig. 4 is a flowchart of a method for processing video data according to an embodiment of the present invention, which is described by taking an example of interaction between a first device and a second device, where the first device has an encoding function and the second device has a transcoding function. Referring to fig. 4, the embodiment includes:
401. The first device acquires at least one frame of original video image.
In the embodiment of the present invention, the first device has a video image acquisition function and an encoding function, and the first device can acquire at least one frame of original video image through the video image acquisition function. The at least one frame of original video image is a video image that is initially acquired by the first device without being encoded or the like.
Taking the first device as a terminal as an example, a multimedia client, such as a live client, may be installed on the terminal, and the multimedia client may acquire at least one frame of original video image in real time through a camera on the terminal. The terminal may first acquire the at least one frame of original video image, and then encode the at least one frame of original video image. Of course, the terminal may also encode one original video image through the corresponding encoding function every time the terminal acquires the original video image.
Of course, the first device may also be a server, and the server may receive at least one original video image sent by any terminal, and encode the received at least one original video image in real time based on an encoding function on the server. Of course, the server may also acquire at least one original video image first and then encode the at least one original video image. The embodiment of the present invention does not limit the specific form of the first device and the specific process of acquiring at least one frame of original video image.
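As a purely illustrative reading of step 401 on a terminal, the following minimal sketch grabs raw frames with OpenCV; the camera index and frame count are assumptions for illustration, not values from the patent.

    # Minimal sketch of step 401 on a terminal: acquire unencoded frames
    # from a camera. Requires opencv-python; device index 0 and num_frames
    # are illustrative assumptions.
    import cv2

    def capture_original_frames(num_frames=30):
        cap = cv2.VideoCapture(0)      # default camera on the terminal
        frames = []
        while len(frames) < num_frames:
            ok, frame = cap.read()     # one raw (unencoded) BGR frame
            if not ok:
                break
            frames.append(frame)
        cap.release()
        return frames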
402. The first device acquires target area information of the at least one frame of original video image based on the at least one frame of original video image.
In the embodiment of the present invention, the target region is an image region of each original video image that needs emphasis during processing. Based on the target region, when the first device encodes each original video image, it can analyze the target region more carefully and allocate more bit rate to it, increasing the encoding accuracy of the target region and improving the overall encoding quality. The target area information is information about the target region: it may indicate whether a corresponding macroblock in each original video image belongs to the target region, or indicate the importance of the corresponding macroblock or its offset value. Of course, the target area information may also be other information related to the target region; its specific content is not limited in this embodiment of the present invention.
Specifically, the target region may be a region of interest, which may be a region the user particularly cares about or the main subject of the corresponding image; for example, the region of interest may be a human face. Of course, the target region may also be another set region, which is not limited here in the embodiment of the present invention. As shown in Fig. 5, during image processing the first device may run a corresponding target area identification algorithm on the at least one frame of original video image to identify the target region in each original video image. The first device may outline each identified target region with a square, a circle, an irregular polygon, or the like, and may further extract the target area information corresponding to each identified target region.
Taking the Selective Search algorithm as an example, the target area information is extracted as follows. The first device may run the Selective Search algorithm on the at least one frame of original video image to perform an initial segmentation of each original video image into at least one smaller candidate region, and then screen and merge the candidate regions of each original video image, deleting the candidate regions that do not meet the target region requirements and merging those that do.
For example, the similarity between candidate regions may be computed from parameters such as their color, texture, size, and spatial overlap, or the similarity between each candidate region and a target region in a database may be computed; for instance, the similarity between each candidate region and a face region stored in the database may be used to decide whether the candidate region is a desired face region. The target region of each original video image is finally obtained from the candidate regions with higher similarity. Further, the target area information of each target region may be derived from the region itself; for example, if a target region is highly similar to the target regions stored in the database, it may be judged a more important target region, and its target area information may indicate that it is an important region.
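As a minimal sketch of this Selective Search step, the following uses the selective-search implementation from OpenCV's contrib module; the screening rule here (keeping the largest proposals) is only a stand-in for the similarity-based screening and merging described above, which the patent does not specify in code.

    # Sketch of step 402's region-proposal stage (requires
    # opencv-contrib-python). Sorting by area is a placeholder for the
    # colour/texture/size/overlap similarity screening described in the text.
    import cv2

    def propose_target_regions(frame, max_regions=5):
        ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
        ss.setBaseImage(frame)
        ss.switchToSelectiveSearchFast()   # coarse, fast segmentation mode
        rects = ss.process()               # candidate (x, y, w, h) boxes
        rects = sorted(rects, key=lambda r: r[2] * r[3], reverse=True)
        return rects[:max_regions]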
Of course, in other embodiments, the first device may also identify the target area in the at least one frame of original video image by using another target area identification algorithm and obtain corresponding target area information, where the target area information may also be other information.
It should be noted that, each time the first device acquires an original video image, the first device acquires target area information of the original video image through a corresponding target area recognition algorithm. Of course, the first device may also obtain part or all of the original video image to be processed first, and then obtain the target area information of the part or all of the original video image to be processed through a corresponding target area recognition algorithm, which is not limited herein in the embodiment of the present invention.
403. The first device encodes the target area information of the at least one frame of original video image to generate at least one target area identifier.
In the embodiment of the present invention, based on the target area information of the at least one frame of original video image obtained in step 402, after the first device encodes the at least one frame of original video image, the video data stream generated by the encoding carries the target area information of each original video image. In subsequent processing of the related video data stream, whenever a related device needs the corresponding target area information, it can extract it directly from the stream, avoiding the complex process of running a target area identification algorithm on the related video images again, greatly reducing the processing time of the video data, and lowering the processing load of the system.
In an embodiment, the first device may encode the target area information of the at least one frame of original video image and write the generated at least one target area identifier into the finally generated video data stream, so that the video data stream carries the target area information of the at least one frame of original video image.
Specifically, the first device may compress the target area information of each original video image into a corresponding binary number, and that binary number is the target area identifier corresponding to the target area information of the original video image. The target area identifier may indicate the importance of the corresponding target region; for example, when the target area information indicates that the corresponding target region is the most important region, the identifier generated by encoding that information may be the number "1", and when the information indicates a normal region, the identifier may be the number "0".
Of course, in other embodiments, the target area identifier may also be used to represent other target area information of the corresponding target area, and the corresponding target area information may also be identified in other manners.
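A minimal sketch of step 403's compression of target area information into binary identifiers follows; the dictionary layout of the region information (the "importance" key) is an illustrative assumption, not the patent's format.

    # Sketch of step 403: compress per-region target area information into
    # a one-bit identifier (1 = most important region, 0 = normal region).
    # The "importance" key is a hypothetical layout, not the patent's format.
    def encode_target_area_id(region_info):
        return 1 if region_info.get("importance") == "high" else 0

    def encode_all_ids(regions):
        return [encode_target_area_id(r) for r in regions]

    # Example: encode_all_ids([{"importance": "high"}, {}]) -> [1, 0]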
404. The first device encodes the at least one frame of original video image to generate at least one first data packet.
In this embodiment of the present invention, based on the at least one frame of original video image obtained in step 401, the first device may encode each original video image, so as to compress the at least one frame of original video image with a large data volume into a video data stream with a small data volume, which is convenient for a transmission system to transmit and saves transmission time.
Specifically, the first device may remove redundant information from the at least one frame of original video image through its encoding function; for example, it may remove the spatial, temporal, and visual redundancy of each original video image so as to compress the at least one frame of original video image. The compression may specifically include prediction, transformation, quantization, entropy coding, and similar processes, through which the first device obtains at least one code corresponding to each original video image.
Based on the obtained at least one code, the first device may arrange a set number of codes together according to a corresponding rule and packetize them, for example into NAL (Network Abstraction Layer) packets, to form the first data packets of the at least one frame of original video image; at least one corresponding first data packet is thereby obtained from the generated codes. Each first data packet may include at least one code, and the number of codes in each first data packet is not limited in this embodiment of the present invention.
405. The first device correspondingly inserts the at least one target area identifier into the at least one first data packet to generate the video data stream.
In this embodiment of the present invention, based on the at least one target area identifier obtained in step 403 and the at least one first data packet obtained in step 404, the first device may correspondingly fill the at least one target area identifier in the corresponding first data packet, so that the corresponding first data packet carries the corresponding target area identifier, and generate a video data stream based on the at least one target area identifier and the at least one first data packet, thereby achieving an object of carrying target area information of at least one frame of original video image in the video data stream.
Specifically, the at least one first data packet generated by the first device includes first data packets generated from the target region and first data packets generated from non-target regions, and the first device may correspondingly insert the target area identifiers into the first data packets generated from the target region. For example, the first device may write each target area identifier into its corresponding first data packet, or insert each target area identifier at the last position of its corresponding first data packet, so that each first data packet generated from the target region carries one corresponding target area identifier.
Based on the above process, after the first device sequentially inserts the generated at least one target area identifier into the corresponding position of the corresponding first data packet, the first device may perform a series of processes such as splicing and packaging based on the at least one target area identifier and the at least one first data packet, and finally generate a corresponding video data stream, where the video data stream carries the corresponding target area identifier. During the encoding process, the first device may allocate more code rates to a target region in at least one frame of original video image, so that the first device has higher encoding quality for the target region.
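To make the carrying mechanism of step 405 concrete, here is a toy sketch in which each first data packet is given a one-byte identifier field in its header; the length-prefixed framing (4-byte size, 1-byte identifier, payload) is an illustrative assumption, not the patent's wire format.

    # Sketch of step 405: attach each target area identifier to its first
    # data packet and splice the packets into a stream. The ">IB" framing
    # (4-byte big-endian length + 1-byte identifier) is assumed for
    # illustration only.
    import struct

    def attach_identifier(payload: bytes, area_id: int) -> bytes:
        header = struct.pack(">IB", len(payload), area_id)
        return header + payload

    def build_stream(first_packets, area_ids):
        # first_packets: list[bytes]; area_ids: list[int], one per packet
        return b"".join(attach_identifier(p, i)
                        for p, i in zip(first_packets, area_ids))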
The above-mentioned steps 403 to 405 are a process in which the first device generates at least one corresponding target area identifier based on the target area information of at least one frame of original video image, generates at least one corresponding first data packet based on the at least one frame of original video image, and inserts the at least one target area identifier into the at least one first data packet correspondingly, so that the generated video data stream carries the corresponding target area information.
In addition to the processes involved in steps 403 to 405, another process that can make the generated video data stream carry the corresponding target area information is described as follows:
(1) The first device encodes the target area information of the at least one frame of original video image obtained in step 402 to generate at least one second data packet. Each second data packet consists of at least one corresponding code, a code being the data obtained by compression through the encoding function of the first device. Specifically, the first device may perform prediction, transformation, quantization, entropy coding and similar processes on the target area information of the at least one frame of original video image to remove its redundant information, obtaining at least one code for each piece of target area information; the first device may then arrange the codes of each piece of target area information together according to a set rule and pack them, obtaining at least one second data packet corresponding to the at least one piece of target area information. The invention does not limit the specific arrangement rule of the codes;
(2) The first device encodes the at least one frame of original video image obtained in step 401 to generate at least one first data packet. The specific process is the same as step 404 above and is not repeated here;
(3) The first device inserts one second data packet after every preset number of first data packets, based on the at least one first data packet, to generate the video data stream. Specifically, based on the at least one second data packet obtained in step (1) and the at least one first data packet obtained in step (2), the first device may insert one second data packet at the last position of each group of the preset number of first data packets, so that each such group carries one second data packet, where the preset number may be any positive integer set by the first device. Of course, some first data packets may be set to carry no second data packet. The embodiment of the present invention limits neither the specific preset number nor which first data packets carry second data packets.
Based on the above process, after the first device sequentially inserts the generated at least one second data packet into the corresponding position of each preset number of first data packets, the first device may perform a series of processes such as splicing and packaging based on the at least one second data packet and the at least one first data packet, and finally generate a corresponding video data stream, where the video data stream carries the corresponding second data packet. During the encoding process, the first device may allocate more code rates to a target region in at least one frame of original video image, so that the first device has higher encoding quality for the target region.
The above steps (1) to (3) are a process in which the first device generates at least one corresponding second data packet based on the target area information of at least one frame of original video image, and inserts the at least one second data packet into at least one first data packet generated based on at least one frame of original video image, so that the generated video data stream finally carries the corresponding target area information.
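A minimal sketch of the interleaving in step (3) follows, assuming both packet lists are already encoded; the preset number 4 is arbitrary.

    # Sketch of step (3): after every `preset` first data packets, insert
    # one second data packet (encoded target area information). preset=4
    # is an arbitrary illustrative value.
    def interleave(first_packets, second_packets, preset=4):
        stream, seconds = [], iter(second_packets)
        for i, pkt in enumerate(first_packets, start=1):
            stream.append(pkt)
            if i % preset == 0:        # last position of each group
                nxt = next(seconds, None)
                if nxt is not None:    # some groups may carry no second packet
                    stream.append(nxt)
        return stream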
In other embodiments, besides the two methods above, the generated video data stream may also be made to carry the corresponding target area information in other manners.
It should be noted that the processes involved in steps 403 to 405 are processes of encoding the target area information of at least one frame of original video image and the at least one frame of original video image, and generating at least one first data packet carrying at least one target area identifier. In this process, the first device may generate at least one first data packet based on the at least one original video image and the corresponding target area information thereof, that is, the first device may insert at least one target area identifier in at least one code corresponding to the at least one original video image, so that the first device may package the at least one code and the at least one target area identifier at the same time to generate at least one first data packet. The embodiment of the present invention does not limit a specific generation manner of the at least one first data packet.
406. The first device transmits the video data stream to the second device.
In the embodiment of the present invention, as shown in Fig. 5, based on the video data stream obtained in step 405, the first device may transmit the video data stream to any second device. The first device may transmit the stream over a corresponding transmission system, which may be the Internet, terrestrial broadcasting, satellite, or the like. Transmitting the data as a video data stream makes it faster and more convenient to store during transmission and reduces the burden on the transmission system.
It should be noted that the second device may have storage, decoding, and re-encoding functions. The second device may be a terminal on which an application with decoding and re-encoding functions decodes and re-encodes the video data stream, or it may be a server that obtains the corresponding video data stream in real time and processes it in real time through its decoding and re-encoding pipeline. The embodiment of the present invention does not limit the specific form of the second device.
407. The second device receives a video data stream carrying target area information for at least one frame of the original video image.
In the embodiment of the present invention, based on steps 401 to 405, in the process of encoding based on at least one frame of original video image, the first device also encodes the target area information extracted from the at least one frame of original video image into the corresponding video data stream, so that the generated video data stream carries the target area information of the at least one frame of original video image, and therefore, the second device receives the video data stream from the first device and also receives the target area information of the at least one frame of original video image encoded into the video data stream.
It should be noted that the second device may receive the video data stream in real time, that is, the second device may receive the video data stream while synchronously processing the received video data stream. Of course, the second device may also receive all the video data streams sent by the first device first, and then perform corresponding processing on the received video data streams, which is not limited herein in the embodiment of the present invention.
408. The second device decodes at least one target area identifier in the video data stream to obtain target area information of the at least one frame of original video image.
In the embodiment of the present invention, as shown in Fig. 5, the second device may transcode the received video data stream based on its decoding and re-encoding functions, where transcoding means converting the video data stream generated by the first device into another video data stream so as to adapt to different network bandwidths, different terminal processing capabilities, different user requirements, and so on. For example, the second device may transcode the stream into a different video format, such as converting an MPEG-2 (Moving Picture Experts Group) stream into an H.264 stream; it may change the bit rate of the stream received from the first device to meet the playback requirements of different devices; or it may transcode the stream so that the resolution of the corresponding video images changes, for example converting high-definition video into standard-definition video. The embodiment of the present invention does not limit the specific use of the transcoding process.
The essence of the transcoding process is to first decode the received video data stream and then re-encode the decoded data. As can be seen from steps 403 to 405 above, the video data stream received by the second device contains both the data obtained by encoding the at least one frame of original video image and the data obtained by encoding the corresponding target area information. The second device may therefore extract the corresponding target area information from the video data stream; this extraction is a decoding of the video data stream, and decoding is the decompression of the related data in the stream.
In one embodiment, corresponding to step 405, the video data stream received by the second device may include at least one target area identifier, and the at least one target area identifier is compressed based on target area information in the corresponding at least one original video image. Therefore, when the second device needs corresponding target area information, decoding can be performed based on at least one target area identifier in the video data stream to extract the needed target area information.
Specifically, each first data packet in the video data stream includes at least one field, comprising a data header and a data body, where the data header may hold the corresponding target area identifier. When extracting the target area information of the at least one frame of original video image from the video data stream, the second device may first extract the target area identifier from the data header field and then decode that identifier to recover the target area information of each target region.
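The receiver-side counterpart of the framing sketched for step 405 would read each first data packet's header field; again, the 4-byte-length plus 1-byte-identifier layout is an assumption made only for illustration.

    # Sketch of identifier extraction: walk the stream, read each packet's
    # data header (assumed ">IB" framing: 4-byte length + 1-byte identifier),
    # collect the identifiers and skip over the payloads.
    import struct

    def extract_identifiers(stream: bytes):
        ids, offset = [], 0
        while offset + 5 <= len(stream):
            size, area_id = struct.unpack_from(">IB", stream, offset)
            ids.append(area_id)    # target area identifier from the header
            offset += 5 + size     # jump past header and data body
        return ids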
The above process is described by taking an example of decoding at least one target area identifier in the video data stream and extracting target area information of at least one corresponding frame of original video image, and another method for extracting target area information of at least one frame of original video image from the video data stream is described as follows:
corresponding to steps (1) to (3) in step 405, in an embodiment, the second device may decode, every preset number of first data packets, a second data packet following the preset number of first data packets based on at least one first data packet and at least one second data packet in the video data stream, and generate the target area information of the at least one frame of original video image. Specifically, the video data stream may include at least one first data packet and at least one second data packet, where the at least one second data packet is obtained by encoding target area information based on at least one frame of original video image, and the second device may decode the at least one second data packet to obtain the required target area information.
The second device may scan the video data stream and detect one second data packet after every preset number of first data packets; specifically, after every N first data packets, the (N+1)-th packet is a second data packet, where N may be any positive integer. Of course, the second data packets may also sit at other positions within each group, and the number of first data packets between two second data packets may be any other number, which is not limited here in the embodiment of the present invention. The second device may decompress each second data packet through its decoding function to restore it to the corresponding target area information, thereby extracting the target area information of the at least one frame of original video image.
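A sketch of this second extraction scheme follows; decode_area_info is a hypothetical stand-in for the real decompression routine, and N=4 is arbitrary.

    # Sketch: in a packet sequence where every (N+1)-th packet is a second
    # data packet, pick those packets out and decode them into target area
    # information.
    def decode_area_info(pkt: bytes):
        # Hypothetical placeholder: a real decoder would entropy-decode the
        # packet; here the body is assumed to hold identifier bytes directly.
        return list(pkt)

    def extract_second_packets(packets, n=4):
        infos = []
        for i, pkt in enumerate(packets, start=1):
            if i % (n + 1) == 0:   # the (N+1)-th packet of each group
                infos.append(decode_area_info(pkt))
        return infos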
Based on this process, the second device can extract the target area information carried in the video data stream quickly, avoiding having to run a target area identification algorithm on the video images again in subsequent processing, which greatly reduces the data processing time and the computational burden on the second device.
It should be noted that, in addition to the two methods described above for the second device to extract the corresponding target area information based on the received video data stream, the second device may also extract the corresponding target area information by using other methods.
409. The second device decodes at least one first data packet in the video data stream to obtain the video image corresponding to the at least one first data packet.
In an embodiment of the present invention, the at least one first data packet is obtained by encoding, by the first device, the acquired at least one frame of original video image. The second device needs to decode based on the video data stream in the process of transcoding the received video data stream to restore at least one first data packet in the video data stream to a corresponding video image, and then processes the corresponding video image based on parameters such as resolution or format set by the second device to obtain a video image meeting the requirement.
Specifically, corresponding to step 404, the second device may decode the at least one first data packet in the video data stream through a corresponding decoding algorithm, for example an H.264 decoding algorithm. The second device may call the relevant functions of the decoding algorithm, obtain the encapsulation information of the video data stream, read and parse the at least one first data packet, find the header identifier of each first data packet, decode the data between every two header identifiers, and finally obtain the video image corresponding to each piece of data. Based on this process, the second device can sequentially restore the at least one first data packet in the video data stream to the corresponding at least one frame of video image, thereby decoding the video data stream.
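For the "find the header identifier of each packet" step, a dependency-free sketch of Annex-B start-code scanning (the 0x000001 delimiters used by H.264 byte streams) is shown below; actual decoding of the payload bytes is left to the codec.

    # Sketch of locating H.264 Annex-B NAL units: units are delimited by
    # 0x000001 start codes (a 4-byte 0x00000001 start code leaves a stray
    # zero byte that a production parser would strip), and the bytes between
    # two start codes are handed to the decoder.
    def split_nal_units(stream: bytes):
        units, i = [], 0
        while True:
            start = stream.find(b"\x00\x00\x01", i)
            if start < 0:
                break
            nxt = stream.find(b"\x00\x00\x01", start + 3)
            end = nxt if nxt >= 0 else len(stream)
            units.append(stream[start + 3:end])   # one NAL unit payload
            if nxt < 0:
                break
            i = nxt
        return units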
Steps 408 to 409 above are the process by which the second device decodes the received video data stream to generate the video images corresponding to it; this includes decoding the stream to obtain the corresponding target area information and decoding the data packets in the stream to obtain the corresponding video images. Of course, in other embodiments the second device may also decode the video data stream with other decoding algorithms; the specific decoding process is not limited in this embodiment of the present invention.
It should be noted that, while decoding the at least one first data packet in steps 408 to 409, the second device may obtain the video image corresponding to the at least one first data packet and the corresponding target area information at the same time; the order in which the second device obtains the video image and the corresponding target area information is not limited in the embodiments of the present invention.
410. The second device re-encodes the video image corresponding to the video data stream based on the target area information of the at least one frame of original video image to generate a target video data stream.
In this embodiment of the present invention, based on the target area information of the at least one frame of original video image obtained in step 408 and the corresponding video images obtained by decoding the video data stream in step 409, the second device may re-encode the video images. This re-encoding is ROI (Region of Interest) encoding: guided by the target area information, the encoder allocates a larger share of the bit rate to the target area of each video image during re-encoding, thereby generating a higher-quality target video data stream.
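As a hedged illustration of ROI encoding, the sketch below builds a per-macroblock quantizer-offset map that favors the target area; the map layout and the encoder that would consume it (for example, an x264-style quantizer-offset map) are assumptions, not the patent's implementation.

```python
# A sketch of ROI-weighted bit allocation: the target area gets a lower
# QP (more bits, higher quality) than the background.
import numpy as np

def build_qp_offset_map(height_mb, width_mb, roi, roi_delta=-6, bg_delta=3):
    """roi = (x, y, w, h) in 16x16-macroblock units; negative delta favors the ROI."""
    qp_map = np.full((height_mb, width_mb), bg_delta, dtype=np.int8)
    x, y, w, h = roi
    qp_map[y:y + h, x:x + w] = roi_delta
    return qp_map

# Example: a 1280x720 frame is 80x45 macroblocks; favor a central region.
qp_map = build_qp_offset_map(45, 80, roi=(30, 10, 20, 20))
```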
Specifically, similar to the encoding process in step 404, the second device may perform prediction, transformation, quantization, entropy encoding, and similar processes on the video images corresponding to the video data stream according to the set target format or target resolution, so as to remove redundant information from the video images; finally, the second device compresses the video images into at least one target code matching the set target format or target resolution.
Based on the obtained at least one target code, the second device may arrange the at least one target code according to the corresponding rule, encapsulate it, and finally generate a target video data stream matching the set parameters such as target format or target resolution, thereby completing the transcoding of the video data stream corresponding to the at least one frame of original video image.
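The transform-and-quantize stage mentioned above can be illustrated with the following toy sketch on a single 8x8 block; a real encoder additionally performs prediction, uses standard quantization matrices, and entropy-codes the result.

```python
# A toy illustration of transform and quantization, not an actual encoder.
import numpy as np
from scipy.fft import dctn, idctn

def transform_quantize(block: np.ndarray, qstep: float = 16.0) -> np.ndarray:
    coeffs = dctn(block.astype(np.float64), norm="ortho")  # 2-D DCT
    return np.round(coeffs / qstep)                        # lossy quantization

def dequantize_inverse(qcoeffs: np.ndarray, qstep: float = 16.0) -> np.ndarray:
    return idctn(qcoeffs * qstep, norm="ortho")            # reconstruction
```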
It should be noted that the second device may also re-encode the video images based on other parameters; the re-encoding parameters and the specific re-encoding process are not limited in the embodiment of the present invention.
Steps 407 to 410 above are the process by which the second device transcodes the received video data stream, as shown in fig. 5. During transcoding, the second device can extract the corresponding target area information directly from the video data stream, avoiding re-running the target area recognition algorithm and greatly improving system performance. Of course, the second device may also implement transcoding by other methods, as long as it can extract the corresponding target area information directly from the video data stream; the embodiment of the present invention is not limited in this regard.
In the embodiment of the present invention, the first device acquires the target area information of the at least one frame of original video image and carries that information in the video data stream it generates, so that after receiving the stream the second device can extract the required target area information directly from it. This avoids the complex process of the second device deriving the target area information from the related video images itself, greatly saving data processing time and reducing the system load.
This embodiment can be applied to a live video scene. Specifically, during a live broadcast, the live-streaming client may capture at least one frame of original video image in real time through the camera of the terminal; the terminal may perform target area recognition on the at least one frame of original video image and encode it based on the obtained target area information. The terminal may send the video data stream generated by the encoding to the server; the server may decode the stream based on the target area information it carries to obtain the corresponding video images and their target area information, and re-encode the video images to transcode the stream, so that the video resolution, video format, or other properties of the generated target video data stream are changed to suit different user requirements. The server may also send the transcoded target video data stream to other terminals to match their video playback and processing capabilities. Besides live video, the transcoding process may also be applied to other scenes; its specific use is not limited in the embodiment of the present invention.
All the optional technical solutions above may be combined in any manner to form optional embodiments of the present invention, which are not described in detail herein.
Fig. 6 is a flowchart of a video data processing method according to an embodiment of the present invention; the method is described through the interaction of a first device and a second device, where the first device has an encoding function and the second device has a mixed flow function. Referring to fig. 6, the embodiment includes:
601. the first device acquires at least one frame of original video image.
602. The first device acquires target area information of the at least one frame of original video image based on the at least one frame of original video image.
603. The first device encodes the target area information of the at least one frame of original video image to generate at least one target area identifier.
604. The first device encodes the at least one frame of original video image to generate at least one first data packet.
605. The first device correspondingly inserts the at least one target area identifier into the at least one first data packet to generate the video data stream.
606. The first device transmits the video data stream to the second device.
In the embodiment of the present invention, as shown in fig. 7, steps 601 to 606 are similar to steps 401 to 406 and are not repeated herein.
607. The second device receives at least two video data streams, each video data stream carrying target area information of at least one frame of original video image.
In this embodiment of the present invention, as shown in fig. 7, the second device may have a storage function, a decoding function, a merging function, and a re-encoding function. The second device may receive at least two video data streams from at least one first device and perform mixed flow processing on them: mixed flow processing merges the video images of at least two video data streams from different sources into a single video data stream to meet the user's requirement. In essence, mixed flow processing is a process of decoding, merging, and re-encoding the at least two video data streams.
The second device may be a server with a mixed flow function: the server may receive at least two video data streams from different multimedia clients and decode, merge, and re-encode them so that they are mixed into one target video data stream. Of course, the second device may also be a terminal, which may receive at least two video data streams sent by any other devices and merge them into the same target video data stream. The specific form of the second device is not limited in the embodiment of the present invention.
The second device may receive the at least two video data streams from different first devices in real time and perform mixed flow processing on them synchronously; that is, it may mix the streams from different sources as they arrive. Alternatively, the second device may first receive all the video data streams from the different sources and then perform mixed flow processing on all of them.
It should be noted that the same second device may have both the mixed flow function and the transcoding function; for example, it may host both a mixed flow system, which mixes the received at least two video data streams, and a transcoding system, which transcodes each received video data stream. Of course, the mixed flow system and the transcoding system may also reside on different second devices: the second device with the mixed flow system performs mixed flow processing on the at least two received video data streams, and the second device with the transcoding system transcodes each received video data stream. A high-level sketch of the mixed flow pipeline follows.
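In the sketch below, decode_stream, merge_frames, and roi_encode are placeholders standing in for the decoding, merging, and re-encoding steps of this embodiment; none of these names come from the patent itself.

```python
# A high-level sketch of mixed flow processing: decode each stream,
# merge the frames, then re-encode with the collected target area info.
def mix_streams(streams, decode_stream, merge_frames, roi_encode):
    decoded = [decode_stream(s) for s in streams]    # -> (frames, roi_infos) per stream
    target_frames = merge_frames([frames for frames, _ in decoded])
    all_rois = [roi for _, rois in decoded for roi in rois]
    return roi_encode(target_frames, all_rois)       # the target video data stream
```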
608. The second device decodes at least one target area identifier in each video data stream to obtain the target area information of the at least one frame of original video image in the at least two video data streams.
609. The second device decodes at least one first data packet in each video data stream to obtain the video images corresponding to the at least one first data packet in the at least two video data streams.
In steps 608 to 609, the second device performs the corresponding processing on every video data stream it receives, as shown in fig. 7; the processing of each video data stream is the same as in steps 408 to 409 and is not repeated herein.
610. The second device merges the video images corresponding to the at least two video data streams to generate a target video image.
In the embodiment of the present invention, as shown in fig. 7, based on the video images corresponding to each of the at least two video data streams obtained in step 609, the second device may merge those video images through the corresponding merging function, combining the video images of the at least two video data streams into a whole; that is, the corresponding target video image is generated from at least one frame of video image.
Specifically, starting from the first video image of each video data stream, the second device may merge the video images at the same position in the at least two video data streams. Alternatively, the second device may merge every N video images of the at least two video data streams, where N is a positive integer. Furthermore, the second device may combine the at least one frame of video image of the at least two video data streams side by side (left-right) or top-bottom, or compose a large frame containing a small picture so that the generated target video image takes a picture-in-picture form. Of course, the second device may also merge the video images corresponding to the at least two video data streams in other manners; the specific manner in which the second device generates the target video image is not limited in this embodiment.
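For illustration only, the sketch below implements two of the merge layouts mentioned above on decoded frames represented as numpy arrays; the frame sizes and the inset position are arbitrary assumptions.

```python
# Two simple merge layouts for the mixed flow step.
import numpy as np

def merge_side_by_side(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Left-right combination; frames must share height and channel count."""
    return np.concatenate([a, b], axis=1)

def merge_picture_in_picture(big: np.ndarray, small: np.ndarray,
                             top: int = 16, left: int = 16) -> np.ndarray:
    """Overlay a small picture onto a large frame (picture-in-picture)."""
    out = big.copy()
    h, w = small.shape[:2]
    out[top:top + h, left:left + w] = small
    return out

# Example: merge two 720p frames side by side, then inset a quarter-size view.
a = np.zeros((720, 1280, 3), dtype=np.uint8)
b = np.full((720, 1280, 3), 255, dtype=np.uint8)
wide = merge_side_by_side(a, b)                 # 720 x 2560 target image
pip = merge_picture_in_picture(a, b[::4, ::4])  # 180 x 320 inset in a
```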
611. The second device re-encodes the target video image based on the target area information corresponding to the at least two video data streams to generate a target video data stream.
In the embodiment of the present invention, as shown in fig. 7, step 611 is similar to step 410 and is not repeated herein.
In the embodiment of the present invention, the first device acquires the target area information of the at least one frame of original video image and carries that information in the video data stream it generates, so that after receiving the stream the second device can extract the required target area information directly from it. This avoids the complex process of the second device deriving the target area information from the related video images itself, greatly saving data processing time and reducing the system load.
This embodiment can be applied to a live video scene. Specifically, during a live broadcast, mixed flow processing can be applied to video interaction between the broadcaster and other users: the server may receive video data streams sent by different multimedia clients and perform mixed flow processing on these streams from different sources, merging them into a single target video data stream. Besides live video, the mixed flow process may also be applied to other scenes; its specific application is not limited in the embodiment of the present invention.
All the optional technical solutions above may be combined in any manner to form optional embodiments of the present invention, which are not described in detail herein.
Fig. 8 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention. Referring to fig. 8, the apparatus includes: an acquisition module 801, a generation module 802, and a transmission module 803.
An obtaining module 801, configured to obtain at least one frame of original video image;
the obtaining module 801 is further configured to obtain target area information of the at least one frame of original video image based on the at least one frame of original video image;
a generating module 802, configured to encode the at least one frame of original video image based on the target area information of the at least one frame of original video image, and generate a video data stream, where the video data stream carries the target area information of the at least one frame of original video image;
a sending module 803, configured to send the video data stream to the second device.
In some embodiments, the generation module 802 is configured to:
encoding target area information of the at least one frame of original video image and the at least one frame of original video image to generate at least one first data packet carrying at least one target area identifier, wherein the at least one target area identifier is obtained by encoding the target area information of the at least one frame of original video image;
and generating the video data stream based on the at least one first data packet carrying at least one target area identifier.
In some embodiments, the generation module 802 is configured to:
encoding target area information of the at least one frame of original video image to generate at least one second data packet;
encoding the at least one frame of original video image to generate at least one first data packet;
and inserting one second data packet after every preset number of first data packets to generate the video data stream.
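As an illustration of this insertion scheme, the following sketch interleaves one second (metadata) data packet after every preset number of first (video) data packets; the interval and the JSON encoding of the target area information are assumptions for illustration.

```python
# A sketch of the generation side: one metadata packet per n video packets.
import json

def interleave(first_packets, target_area_infos, n=5):
    out = []
    for i, pkt in enumerate(first_packets):
        out.append(pkt)
        if (i + 1) % n == 0 and i // n < len(target_area_infos):
            out.append(json.dumps(target_area_infos[i // n]).encode())
    return out
```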
In the embodiment of the present invention, the first device acquires the target area information of the at least one frame of original video image and carries that information in the video data stream it generates, so that after receiving the stream the second device can extract the required target area information directly from it. This avoids the complex process of the second device deriving the target area information from the related video images itself, greatly saving data processing time and reducing the system load.
It should be noted that the division into the functional modules above is only an example used to illustrate how the apparatus processes video data; in practical applications, these functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video data processing apparatus provided in the above embodiment and the embodiments of the video data processing method belong to the same concept; the specific implementation process is described in the method embodiments and is not repeated herein.
Fig. 9 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention. Referring to fig. 9, the apparatus includes: a receiving module 901, an extracting module 902, a decoding module 903 and a recoding module 904.
A receiving module 901, configured to receive a video data stream, where the video data stream carries target area information of at least one frame of original video image;
an extracting module 902, configured to extract target area information of the at least one frame of original video image based on the video data stream;
a decoding module 903, configured to decode the video data stream to generate a video image corresponding to the video data stream;
the re-encoding module 904 is configured to re-encode the video image corresponding to the video data stream based on the target area information and the target code rate of the at least one frame of original video image, so as to generate a target video data stream.
In some embodiments, the extraction module 902 is configured to:
extracting at least one target area identifier based on at least one field of at least one first data packet in the video data stream;
and decoding the at least one target area identifier to obtain the target area information of the at least one frame of original video image.
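A minimal sketch of this field-based variant follows; the 16-byte binary layout of the target area identifier (x, y, width, height as big-endian 32-bit integers) is an assumption for illustration, not the patent's format.

```python
# A sketch of packing/unpacking a compact target area identifier carried
# in a field of each first data packet.
import struct

def encode_target_area_identifier(x: int, y: int, w: int, h: int) -> bytes:
    return struct.pack(">IIII", x, y, w, h)

def decode_target_area_identifier(field: bytes) -> dict:
    x, y, w, h = struct.unpack(">IIII", field)
    return {"x": x, "y": y, "w": w, "h": h}
```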
In some embodiments, the extraction module 902 is configured to:
decoding, after every preset number of first data packets, the second data packet that follows them, based on at least one first data packet and at least one second data packet in the video data stream, to generate the target area information of the at least one frame of original video image.
In the embodiment of the present invention, the first device acquires the target area information of the at least one frame of original video image and carries that information in the video data stream it generates, so that after receiving the stream the second device can extract the required target area information directly from it. This avoids the complex process of the second device deriving the target area information from the related video images itself, greatly saving data processing time and reducing the system load.
It should be noted that the division into the functional modules above is only an example used to illustrate how the apparatus processes video data; in practical applications, these functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video data processing apparatus provided in the above embodiment and the embodiments of the video data processing method belong to the same concept; the specific implementation process is described in the method embodiments and is not repeated herein.
Fig. 10 is a schematic structural diagram of a video data processing apparatus according to an embodiment of the present invention. Referring to fig. 10, the apparatus includes: a receiving module 1001, an extracting module 1002, a decoding module 1003, a merging module 1004, and a recoding module 1005.
A receiving module 1001, configured to receive at least two video data streams, where each video data stream carries target area information of at least one frame of original video image;
an extracting module 1002, configured to extract, based on the at least two video data streams, target area information of at least one frame of original video image corresponding to each video data stream;
a decoding module 1003, configured to decode each video data stream to generate video images corresponding to the at least two video data streams;
a merging module 1004, configured to merge video images corresponding to the at least two video data streams to generate a target video image;
a re-encoding module 1005, configured to re-encode the target video image based on the target area information corresponding to the at least two video data streams, so as to generate a target video data stream.
In some embodiments, the extraction module 1002 is to:
extracting at least one target area identifier corresponding to the at least two video data streams based on at least one field of at least one first data packet in each video data stream;
and decoding the at least one target area identifier corresponding to each video data stream to obtain the target area information of the at least one frame of original video image corresponding to the at least two video data streams.
In some embodiments, the extraction module 1002 is to:
decoding, after every preset number of first data packets in each video data stream, the second data packet that follows them, based on at least one first data packet and at least one second data packet in each video data stream, to generate the target area information of the at least one frame of original video image in the at least two video data streams.
In the embodiment of the present invention, the first device acquires the target area information of the at least one frame of original video image and carries that information in the video data stream it generates, so that after receiving the stream the second device can extract the required target area information directly from it. This avoids the complex process of the second device deriving the target area information from the related video images itself, greatly saving data processing time and reducing the system load.
It should be noted that the division into the functional modules above is only an example used to illustrate how the apparatus processes video data; in practical applications, these functions may be assigned to different functional modules as needed, that is, the internal structure of the device may be divided into different functional modules to complete all or part of the functions described above. In addition, the video data processing apparatus provided in the above embodiment and the embodiments of the video data processing method belong to the same concept; the specific implementation process is described in the method embodiments and is not repeated herein.
Fig. 11 is a block diagram of a terminal 1100 according to an embodiment of the present invention. The terminal 1100 may be a smart phone, a tablet computer, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, a notebook computer, or a desktop computer. Terminal 1100 may also be called user equipment, a portable terminal, a laptop terminal, a desktop terminal, or other names.
In general, terminal 1100 includes: a processor 1101 and a memory 1102.
Processor 1101 may include one or more processing cores, for example a 4-core or an 8-core processor. The processor 1101 may be implemented in at least one hardware form of a DSP (Digital Signal Processor), an FPGA (Field-Programmable Gate Array), or a PLA (Programmable Logic Array). The processor 1101 may also include a main processor and a coprocessor: the main processor processes data in the awake state and is also called a CPU (Central Processing Unit); the coprocessor is a low-power processor that processes data in the standby state. In some embodiments, the processor 1101 may be integrated with a GPU (Graphics Processing Unit), which is responsible for rendering and drawing the content that the display screen needs to display. In some embodiments, the processor 1101 may further include an AI (Artificial Intelligence) processor for processing computing operations related to machine learning.
Memory 1102 may include one or more computer-readable storage media, which may be non-transitory. Memory 1102 can also include high-speed random access memory, as well as non-volatile memory, such as one or more magnetic disk storage devices, flash memory storage devices. In some embodiments, a non-transitory computer readable storage medium in memory 1102 is used to store at least one instruction for execution by processor 1101 to implement a method of processing video data as provided by a method embodiment of the present invention.
In some embodiments, the terminal 1100 may further include: a peripheral interface 1103 and at least one peripheral. The processor 1101, memory 1102 and peripheral interface 1103 may be connected by a bus or signal lines. Various peripheral devices may be connected to the peripheral interface 1103 by buses, signal lines, or circuit boards. Specifically, the peripheral device includes: at least one of radio frequency circuitry 1104, touch display screen 1105, camera 1106, audio circuitry 1107, positioning component 1108, and power supply 1109.
The peripheral interface 1103 may be used to connect at least one peripheral associated with I/O (Input/Output) to the processor 1101 and the memory 1102. In some embodiments, the processor 1101, memory 1102, and peripheral interface 1103 are integrated on the same chip or circuit board; in some other embodiments, any one or two of the processor 1101, the memory 1102 and the peripheral device interface 1103 may be implemented on separate chips or circuit boards, which is not limited by this embodiment.
The radio frequency circuit 1104 is used to receive and transmit RF (Radio Frequency) signals, also called electromagnetic signals. The radio frequency circuit 1104 communicates with communication networks and other communication devices via electromagnetic signals, converting an electric signal into an electromagnetic signal for transmission, or converting a received electromagnetic signal into an electric signal. Optionally, the radio frequency circuit 1104 includes an antenna system, an RF transceiver, one or more amplifiers, a tuner, an oscillator, a digital signal processor, a codec chipset, a subscriber identity module card, and so forth. The radio frequency circuit 1104 may communicate with other terminals via at least one wireless communication protocol, including but not limited to metropolitan area networks, mobile communication networks of various generations (2G, 3G, 4G, and 5G), wireless local area networks, and/or WiFi (Wireless Fidelity) networks. In some embodiments, the radio frequency circuit 1104 may further include NFC (Near Field Communication) related circuits, which is not limited in the present invention.
The display screen 1105 is used to display a UI (User Interface). The UI may include graphics, text, icons, video, and any combination thereof. When the display screen 1105 is a touch display screen, it can also capture touch signals on or above its surface; a touch signal may be input to the processor 1101 as a control signal for processing. The display screen 1105 may then also provide virtual buttons and/or a virtual keyboard, also called soft buttons and/or a soft keyboard. In some embodiments, there may be one display screen 1105, forming the front panel of terminal 1100; in other embodiments, there may be at least two display screens 1105, respectively disposed on different surfaces of the terminal 1100 or in a folded design; in still other embodiments, the display screen 1105 may be a flexible display disposed on a curved or folded surface of terminal 1100. The display screen 1105 may even be arranged in a non-rectangular, irregular pattern, that is, an irregularly-shaped screen. The display screen 1105 may be made of materials such as an LCD (Liquid Crystal Display) or an OLED (Organic Light-Emitting Diode).
Camera assembly 1106 is used to capture images or video. Optionally, camera assembly 1106 includes a front camera and a rear camera. Generally, a front camera is disposed at a front panel of the terminal, and a rear camera is disposed at a rear surface of the terminal. In some embodiments, the number of the rear cameras is at least two, and each rear camera is any one of a main camera, a depth-of-field camera, a wide-angle camera and a telephoto camera, so that the main camera and the depth-of-field camera are fused to realize a background blurring function, and the main camera and the wide-angle camera are fused to realize panoramic shooting and VR (Virtual Reality) shooting functions or other fusion shooting functions. In some embodiments, camera assembly 1106 may also include a flash. The flash lamp can be a monochrome temperature flash lamp or a bicolor temperature flash lamp. The double-color-temperature flash lamp is a combination of a warm-light flash lamp and a cold-light flash lamp, and can be used for light compensation at different color temperatures.
The audio circuitry 1107 may include a microphone and a speaker. The microphone collects sound waves from the user and the environment, converts them into electric signals, and inputs them to the processor 1101 for processing or to the radio frequency circuit 1104 for voice communication. For stereo capture or noise reduction, multiple microphones may be provided, each at a different location of terminal 1100; the microphone may also be an array microphone or an omnidirectional pickup microphone. The speaker converts electric signals from the processor 1101 or the radio frequency circuit 1104 into sound waves. The speaker may be a traditional membrane speaker or a piezoelectric ceramic speaker; a piezoelectric ceramic speaker can convert an electric signal into sound waves audible to humans, or into sound waves inaudible to humans for purposes such as distance measurement. In some embodiments, the audio circuitry 1107 may also include a headphone jack.
Positioning component 1108 is used to locate the current geographic position of terminal 1100 for navigation or LBS (Location Based Service). The positioning component 1108 may be based on the United States' GPS (Global Positioning System), China's BeiDou system, Russia's GLONASS system, or the European Union's Galileo system.
Power supply 1109 is used to supply power to the various components in terminal 1100. The power supply 1109 may be an alternating current supply, a direct current supply, a disposable battery, or a rechargeable battery. When the power supply 1109 includes a rechargeable battery, the battery may support wired or wireless charging, and may also support fast-charging technology.
In some embodiments, terminal 1100 can also include one or more sensors 1110. The one or more sensors 1110 include, but are not limited to: acceleration sensor 1111, gyro sensor 1112, pressure sensor 1113, fingerprint sensor 1114, optical sensor 1115, and proximity sensor 1116.
Acceleration sensor 1111 may detect acceleration levels in three coordinate axes of a coordinate system established with terminal 1100. For example, the acceleration sensor 1111 may be configured to detect components of the gravitational acceleration in three coordinate axes. The processor 1101 may control the touch display screen 1105 to display a user interface in a landscape view or a portrait view according to the gravitational acceleration signal collected by the acceleration sensor 1111. The acceleration sensor 1111 may also be used for acquisition of motion data of a game or a user.
The gyro sensor 1112 may detect a body direction and a rotation angle of the terminal 1100, and the gyro sensor 1112 may cooperate with the acceleration sensor 1111 to acquire a 3D motion of the user with respect to the terminal 1100. From the data collected by gyroscope sensor 1112, processor 1101 may implement the following functions: motion sensing (such as changing the UI according to a user's tilting operation), image stabilization at the time of photographing, game control, and inertial navigation.
Pressure sensor 1113 may be disposed on a side bezel of terminal 1100 and/or beneath the touch display screen 1105. When disposed on the side bezel, the pressure sensor 1113 can detect the user's grip on the terminal 1100, and the processor 1101 performs left/right-hand recognition or shortcut operations according to the grip signal it collects. When disposed beneath the touch display screen 1105, the processor 1101 controls the operability controls on the UI according to the pressure the user applies to the touch display screen 1105. The operability controls include at least one of a button control, a scroll-bar control, an icon control, and a menu control.
The fingerprint sensor 1114 is used to collect the user's fingerprint, and the processor 1101 identifies the user from the fingerprint collected by the fingerprint sensor 1114, or the fingerprint sensor 1114 itself identifies the user from the collected fingerprint. When the user's identity is recognized as trusted, the processor 1101 authorizes the user to perform relevant sensitive operations, including unlocking the screen, viewing encrypted information, downloading software, making payments, and changing settings. The fingerprint sensor 1114 may be disposed on the front, back, or side of terminal 1100; when a physical button or vendor logo is provided on the terminal 1100, the fingerprint sensor 1114 may be integrated with it.
Optical sensor 1115 is used to collect ambient light intensity. In one embodiment, the processor 1101 may control the display brightness of the touch display screen 1105 based on the ambient light intensity collected by the optical sensor 1115. Specifically, when the ambient light intensity is high, the display brightness of the touch display screen 1105 is increased; when the ambient light intensity is low, the display brightness of the touch display screen 1105 is turned down. In another embodiment, processor 1101 may also dynamically adjust the shooting parameters of camera assembly 1106 based on the ambient light intensity collected by optical sensor 1115.
Proximity sensor 1116, also called a distance sensor, is typically disposed on the front panel of terminal 1100 and is used to measure the distance between the user and the front face of the terminal. In one embodiment, when the proximity sensor 1116 detects that this distance is gradually decreasing, the processor 1101 controls the touch display screen 1105 to switch from the screen-on state to the screen-off state; when the proximity sensor 1116 detects that the distance is gradually increasing, the processor 1101 controls the touch display screen 1105 to switch from the screen-off state to the screen-on state.
Those skilled in the art will appreciate that the configuration shown in fig. 11 does not constitute a limitation of terminal 1100, and may include more or fewer components than those shown, or may combine certain components, or may employ a different arrangement of components.
Fig. 12 is a schematic structural diagram of a server 1200 according to an embodiment of the present invention. The server 1200 may vary considerably in configuration or performance, and may include one or more processors (CPUs) 1201 and one or more memories 1202, where the memory 1202 stores at least one instruction that is loaded and executed by the processor 1201 to implement the video data processing methods provided by the method embodiments above. Of course, the server may also have components such as a wired or wireless network interface, a keyboard, and an input/output interface for performing input and output, and may include other components for implementing device functions, which are not described herein.
In an exemplary embodiment, there is also provided a computer-readable storage medium, such as a memory, including instructions executable by a processor in a terminal to perform the method of processing video data in the above-described embodiments. For example, the computer-readable storage medium may be a Read-Only Memory (ROM), a Random Access Memory (RAM), a Compact Disc Read-Only Memory (CD-ROM), a magnetic tape, a floppy disk, an optical data storage device, and the like.
It will be understood by those skilled in the art that all or part of the steps of the above embodiments may be implemented by hardware, or by a program instructing the relevant hardware; the program may be stored in a computer-readable storage medium, such as a read-only memory, a magnetic disk, or an optical disk.
The above description is only exemplary of the present invention and should not be taken as limiting the invention, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (11)

1. A method for processing video data, applied to a first device, the method comprising:
acquiring at least one frame of original video image;
acquiring target area information of the at least one frame of original video image based on the at least one frame of original video image;
based on the target area information of the at least one frame of original video image, encoding the at least one frame of original video image to generate a video data stream, where the video data stream carries the target area information of the at least one frame of original video image, and the method includes: encoding target area information of the at least one frame of original video image to generate at least one target area identifier; encoding the at least one frame of original video image to generate at least one first data packet, wherein the at least one first data packet comprises at least one first data packet generated based on a target area and at least one first data packet generated based on a non-target area; correspondingly inserting the at least one target area identifier into at least one first data packet generated based on the target area to generate the video data stream;
sending the video data stream to a second device;
wherein the encoding the target area information of the at least one frame of original video image to generate at least one target area identifier includes: compressing the target area information corresponding to each original video image, converting the target area information into corresponding binary digits, and determining the corresponding binary digits as the target area identification corresponding to the target area information of each original video image.
2. The method according to claim 1, wherein said encoding the at least one original video image based on the target area information of the at least one original video image generates a video data stream, and the video data stream carries the target area information of the at least one original video image, further comprising:
encoding target area information of the at least one frame of original video image to generate at least one second data packet;
encoding the at least one frame of original video image to generate at least one first data packet;
and inserting a second data packet every other preset number of first data packets to generate the video data stream.
3. A method for processing video data, applied to a second device, the method comprising:
receiving a video data stream, wherein the video data stream carries target area information of at least one frame of original video image, and the generation process of the video data stream comprises the following steps: the first equipment encodes the target area information of the at least one frame of original video image to generate at least one target area identifier; encoding the at least one frame of original video image to generate at least one first data packet, wherein the at least one first data packet comprises at least one first data packet generated based on a target area and at least one first data packet generated based on a non-target area; correspondingly inserting the at least one target area identifier into at least one first data packet generated based on the target area to generate the video data stream, wherein the encoding of the target area information of the at least one frame of original video image to generate at least one target area identifier includes: compressing target area information corresponding to each original video image, converting the target area information into corresponding binary digits, and determining the corresponding binary digits as target area identifiers corresponding to the target area information of each original video image;
extracting target area information of the at least one frame of original video image based on the video data stream;
decoding the video data stream to generate a video image corresponding to the video data stream;
and recoding the video image corresponding to the video data stream based on the target area information of the at least one frame of original video image to generate a target video data stream.
4. The method of claim 3, wherein the extracting the target area information of the at least one original video image based on the video data stream comprises:
extracting at least one target area identification based on at least one field of at least one first data packet in the video data stream;
and decoding the at least one target area identifier to obtain the target area information of the at least one frame of original video image.
5. The method of claim 3, wherein the extracting the target area information of the at least one original video image based on the video data stream comprises:
and decoding second data packets after the preset number of first data packets every preset number of first data packets based on at least one first data packet and at least one second data packet in the video data stream to generate target area information of the at least one frame of original video image.
6. A method for processing video data, applied to a second device, the method comprising:
receiving at least two video data streams, wherein each video data stream carries target area information of at least one frame of original video image, and the generation process of the video data streams comprises the following steps: the first equipment encodes the target area information of the at least one frame of original video image to generate at least one target area identifier; encoding the at least one frame of original video image to generate at least one first data packet, wherein the at least one first data packet comprises at least one first data packet generated based on a target area and at least one first data packet generated based on a non-target area; correspondingly inserting the at least one target area identifier into at least one first data packet generated based on the target area to generate the video data stream, wherein the encoding of the target area information of the at least one frame of original video image to generate at least one target area identifier includes: compressing target area information corresponding to each original video image, converting the target area information into corresponding binary digits, and determining the corresponding binary digits as target area identifiers corresponding to the target area information of each original video image;
extracting target area information of at least one frame of original video image corresponding to each path of video data stream based on the at least two paths of video data streams;
decoding each path of video data stream to obtain video images corresponding to the at least two paths of video data streams;
merging the video images corresponding to the at least two video data streams to generate a target video image;
and recoding the target video image based on the target area information corresponding to the at least two video data streams to generate a target video data stream.
7. The method of claim 6, wherein the extracting target area information of at least one frame of original video image corresponding to each video data stream based on the at least two video data streams comprises:
extracting at least one target area identifier corresponding to the at least two paths of video data streams based on at least one field of at least one first data packet in each path of video data stream;
and decoding at least one target area identifier corresponding to each path of video data stream to obtain target area information of at least one frame of original video image corresponding to the at least two paths of video data streams.
8. The method of claim 6, wherein the extracting target area information of at least one frame of original video image corresponding to each video data stream based on the at least two video data streams comprises:
and decoding the second data packets after the preset number of first data packets every other preset number of first data packets based on at least one first data packet and at least one second data packet in each path of video data stream, so as to generate target area information of at least one frame of original video image in at least two paths of video data streams.
9. A terminal, characterized in that the terminal comprises a processor and a memory, in which at least one instruction is stored, the instruction being loaded and executed by the processor to implement the operations performed by the method for processing video data according to any one of claims 1 to 8.
10. A server, characterized in that the server comprises a processor and a memory, wherein at least one instruction is stored in the memory, and the instruction is loaded and executed by the processor to implement the operation performed by the method for processing video data according to any one of claims 1 to 8.
11. A computer-readable storage medium having stored therein at least one instruction which is loaded and executed by a processor to perform operations performed by a method of processing video data according to any one of claims 1 to 8.
CN201811337105.0A 2018-11-12 2018-11-12 Video data processing method, terminal, server and storage medium Active CN109168032B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811337105.0A CN109168032B (en) 2018-11-12 2018-11-12 Video data processing method, terminal, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811337105.0A CN109168032B (en) 2018-11-12 2018-11-12 Video data processing method, terminal, server and storage medium

Publications (2)

Publication Number Publication Date
CN109168032A CN109168032A (en) 2019-01-08
CN109168032B true CN109168032B (en) 2021-08-27

Family

ID=64877084

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811337105.0A Active CN109168032B (en) 2018-11-12 2018-11-12 Video data processing method, terminal, server and storage medium

Country Status (1)

Country Link
CN (1) CN109168032B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110019953B (en) * 2019-04-16 2021-03-30 中国科学院国家空间科学中心 Real-time quick-look system for effective load image data
CN110602398A (en) * 2019-09-17 2019-12-20 北京拙河科技有限公司 Ultrahigh-definition video display method and device
CN112468845A (en) * 2020-11-16 2021-03-09 维沃移动通信有限公司 Processing method and processing device for screen projection picture
CN113096201B (en) * 2021-03-30 2023-04-18 上海西井信息科技有限公司 Embedded video image deep learning method, equipment and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1291314A (en) * 1998-03-20 2001-04-11 马里兰大学 Method and apparatus for compressing and decompressing image
CN103024445A (en) * 2012-12-13 2013-04-03 北京百度网讯科技有限公司 Cloud video transcode method and cloud server
CN104365095A (en) * 2012-03-30 2015-02-18 阿尔卡特朗讯公司 Method and apparatus for encoding a selected spatial portion of a video stream
WO2015041652A1 (en) * 2013-09-19 2015-03-26 Entropic Communications, Inc. A progressive jpeg bitstream transcoder and decoder
CN105493509A (en) * 2013-08-12 2016-04-13 索尼公司 Transmission apparatus, transmission method, reception apparatus, and reception method
CN105917649A (en) * 2014-02-18 2016-08-31 英特尔公司 Techniques for inclusion of region of interest indications in compressed video data
CN107210041A (en) * 2015-02-10 2017-09-26 索尼公司 Dispensing device, sending method, reception device and method of reseptance
CN108429921A (en) * 2017-02-14 2018-08-21 北京金山云网络技术有限公司 A kind of video coding-decoding method and device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101742289B (en) * 2008-11-14 2013-10-16 北京中星微电子有限公司 Method, system and device for compressing video code stream
GB2509954B (en) * 2013-01-18 2016-03-23 Canon Kk Method of displaying a region of interest in a video stream
CN104185078A (en) * 2013-05-20 2014-12-03 华为技术有限公司 Video monitoring processing method, device and system thereof
CN104427337B (en) * 2013-08-21 2018-03-27 杭州海康威视数字技术股份有限公司 Interested area video coding method and its device based on target detection
CN105898313A (en) * 2014-12-15 2016-08-24 江南大学 Novel video synopsis-based monitoring video scalable video coding technology

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1291314A (en) * 1998-03-20 2001-04-11 马里兰大学 Method and apparatus for compressing and decompressing image
CN104365095A (en) * 2012-03-30 2015-02-18 阿尔卡特朗讯公司 Method and apparatus for encoding a selected spatial portion of a video stream
CN103024445A (en) * 2012-12-13 2013-04-03 北京百度网讯科技有限公司 Cloud video transcode method and cloud server
CN105493509A (en) * 2013-08-12 2016-04-13 索尼公司 Transmission apparatus, transmission method, reception apparatus, and reception method
WO2015041652A1 (en) * 2013-09-19 2015-03-26 Entropic Communications, Inc. A progressive jpeg bitstream transcoder and decoder
CN105917649A (en) * 2014-02-18 2016-08-31 英特尔公司 Techniques for inclusion of region of interest indications in compressed video data
CN107210041A (en) * 2015-02-10 2017-09-26 索尼公司 Dispensing device, sending method, reception device and method of reseptance
CN108429921A (en) * 2017-02-14 2018-08-21 北京金山云网络技术有限公司 A kind of video coding-decoding method and device

Also Published As

Publication number Publication date
CN109168032A (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN109168032B (en) Video data processing method, terminal, server and storage medium
JP7085014B2 (en) Video coding methods and their devices, storage media, equipment, and computer programs
CN108966008B (en) Live video playback method and device
CN109874043B (en) Video stream sending method, video stream playing method and video stream playing device
CN108769738B (en) Video processing method, video processing device, computer equipment and storage medium
CN108616776B (en) Live broadcast analysis data acquisition method and device
CN110121084B (en) Method, device and system for switching ports
CN111093108A (en) Sound and picture synchronization judgment method and device, terminal and computer readable storage medium
CN110750734A (en) Weather display method and device, computer equipment and computer-readable storage medium
CN110996117B (en) Video transcoding method and device, electronic equipment and storage medium
CN111586413B (en) Video adjusting method and device, computer equipment and storage medium
CN110049326B (en) Video coding method and device and storage medium
CN111083554A (en) Method and device for displaying live gift
CN111010588B (en) Live broadcast processing method and device, storage medium and equipment
CN108965711B (en) Video processing method and device
CN111478915B (en) Live broadcast data stream pushing method and device, terminal and storage medium
CN110177275B (en) Video encoding method and apparatus, and storage medium
CN110572679B (en) Method, device and equipment for coding intra-frame prediction and readable storage medium
CN109714628B (en) Method, device, equipment, storage medium and system for playing audio and video
CN115205164B (en) Training method of image processing model, video processing method, device and equipment
CN111770339B (en) Video encoding method, device, equipment and storage medium
CN111478914B (en) Timestamp processing method, device, terminal and storage medium
CN112492331B (en) Live broadcast method, device, system and storage medium
CN112153404B (en) Code rate adjusting method, code rate detecting method, code rate adjusting device, code rate detecting device, code rate adjusting equipment and storage medium
CN111698262B (en) Bandwidth determination method, device, terminal and storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant