CN110366004B - Secondary imaging mixed frame information analysis method and system - Google Patents
- Publication number: CN110366004B
- Application number: CN201910660860.0A
- Authority: CN (China)
- Prior art keywords: frame, video, video frame, received video, frames
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
- H04N21/4402—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
Abstract
The application provides a method and a system for analyzing secondary imaging mixed frame information. A sending end embeds transmission information into original video frames to obtain the video frames to be sent, and sends the video frames to be sent to a receiving end. The receiving end receives the video frames to be sent as received video frames, where the relationship between the received video frames and the video frames to be sent satisfies a preset mixed frame model. The receiving end determines the received video frame serving as the frame header from among the received video frames and, starting from that frame, divides the received video frames into groups of (A + 2) frames. The receiving end determines the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame, and performs a decoding decision on the optimal difference frame to obtain a decoding decision result. The decoding decision result is the information embedded in the video frames sent by the sending end, which ensures that the receiving end can parse the information transmitted by the sending end.
Description
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and a system for analyzing secondary imaging mixed frame information.
Background
In recent years, with the popularization of high-resolution cameras in handheld electronic terminals, cameras have begun to be used to interact with the environment and acquire data, and visible light imaging communication applications are emerging. Such systems operate in the visible light spectrum, use a light-emitting diode or a display screen as the sending end and a camera as the receiving end, and transmit information over an optical line-of-sight channel; this is gradually becoming a new trend in visible light communication. Visible light imaging communication currently comprises an explicit mode and an implicit mode: the most common application of explicit imaging communication is the two-dimensional code, while implicit imaging communication aims to transmit information imperceptibly to the human eye and is attracting more and more researchers.
At present, research on visible light implicit imaging communication has made some progress, but the reliability of information transmission is still difficult to guarantee. One of the main reasons is the frame synchronization problem of the display-screen-to-camera link, in particular the mixed frame problem: a video frame played on the display screen is captured by the receiving-end camera through secondary imaging, and the captured video frame is a linear mixture of adjacent original video frames. When mixed frames occur, the receiving end (i.e., the camera end) must be able to parse the information transmitted by the sending end (i.e., the display screen) from the mixed frames; how the receiving end does so is therefore the problem to be solved.
Disclosure of Invention
In order to solve the foregoing technical problems, embodiments of the present application provide a method and a system for analyzing secondary imaging mixed frame information, so as to achieve the purpose of ensuring that a receiving end can analyze information transmitted by a transmitting end, and the technical solution is as follows:
a secondary imaging mixed frame information analysis method comprises the following steps:
a sending end determines an embedded information video frame group, wherein the embedded information video frame group comprises A original video frames, a first embedded information video frame and a second embedded information video frame, the embedded information included in the first embedded information video frame is the positive and negative complement of the embedded information included in the second embedded information video frame, and A is an integer greater than 0;
the sending end takes each frame video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to a receiving end;
the receiving end receives the video frame to be sent, the received video frame is used as a received video frame, and the received video frame and the video frame to be sent meet a mixed frame model;
the receiving end determines the received video frame serving as the frame header from among the received video frames and, starting from the received video frame serving as the frame header, divides the received video frames into groups of (A + 2) frames;
the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
Preferably, said A is equal to 3;
the method for determining the embedded information video frame group by the sending end comprises the following steps:
the sending end determines the embedded information video frame group by using the embedded information model

s_k^m(i, j) = v_k^m(i, j),  m = 1, 2, 3
s_k^4(i, j) = v_k^4(i, j) + Δ·d_k(p, q)
s_k^5(i, j) = v_k^5(i, j) − Δ·d_k(p, q)

where v_k^m(i, j) represents the mth frame video frame of the kth group in the original video; d_k(p, q) represents the embedded data; s_k^m(i, j) is the video frame obtained by superposing the original video frame and the embedded data; i represents the horizontal coordinate and j the vertical coordinate of the image pixel in the mth frame video frame of the kth group, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixel in that frame; d_k(p, q) takes the values 1 and −1 with equal probability, with block indices p = ⌈i·B1/M⌉ and q = ⌈j·B2/N⌉; B1 represents the number of blocks into which the image in the frame is divided in the vertical direction and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 means that the 1st, 2nd and 3rd frame video frames are the original video frames; m = 4 and m = 5 mean that the 4th and 5th frame video frames are the complementary video frames.
The mixed frame model is

r_k(i, j) = λ_k(i, j)·x_k(i, j) + (1 − λ_k(i, j))·x_{k+1}(i, j) + n_k(i, j)

where r_k(i, j) represents the received video frame; λ_k(i, j) represents the mixing factor, with 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the kth frame video frame sent by the sending end; and x_{k+1}(i, j) represents the (k + 1)th frame video frame sent by the sending end.
Preferably, the determining, by the receiving end, of the received video frame serving as the frame header from among the received video frames includes:
using relation 1,

k̂_1 = argmin_{2 ≤ k ≤ 6} M(k, k − 1),

calculating the frame header estimate, where M(k, k − 1) represents the spatial mean of the absolute value of the difference frame between the kth frame received video frame and the (k − 1)th frame received video frame; argmin{ } represents the values of the variables k and k − 1 at which M(k, k − 1) takes its minimum under the condition 2 ≤ k ≤ 6; k̂_i represents the ith frame header estimate, with i an integer greater than 1; and k̂_1 represents the minimum frame header estimate under the condition 2 ≤ k ≤ 6;
using relation 2,

σ̂² = var{ M(k̂_1, k̂_1 − 1), M(k̂_2, k̂_2 − 1), …, M(k̂_i, k̂_i − 1) },

calculating the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimates, where k̂_i represents the ith frame header estimate and var{ } represents the variance function;
judging whether the variance is smaller than a preset threshold value or not;
if so, determining that the k̂_i-th frame received video frame is the received video frame serving as the frame header;
if not, using relation 3,

k̂_i = argmin_{k̂_1 + 5(i − 1) − 1 ≤ k ≤ k̂_1 + 5(i − 1) + 1} M(k, k − 1),

recalculating the ith frame header estimate, where M(k, k − 1) represents the spatial mean of the absolute values of the difference frame between the kth frame received video frame and the (k − 1)th frame received video frame; argmin{ } represents the values of the variables k and k − 1 at which M(k, k − 1) takes its minimum under the condition k̂_1 + 5(i − 1) − 1 ≤ k ≤ k̂_1 + 5(i − 1) + 1; k̂_i represents the ith frame header estimate, with i an integer greater than 1, and is the minimum frame header estimate under that condition; and k̂_1 represents the first frame header estimate calculated according to relation 1;
Preferably, after the receiving end determines the received video frame serving as the frame header from among the received video frames and divides the received video frames into groups of (A + 2) frames starting from that frame, the method further includes:
the receiving end respectively carries out spatial synchronization on each group of received video frames;
the receiving end determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames as the optimal difference frame, and the method comprises the following steps:
and the receiving end determines the difference frame with the maximum spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
A secondary imaging hybrid frame information parsing system, comprising: a sending end and a receiving end;
the sending end is used for determining an embedded information video frame group, where the embedded information video frame group comprises A original video frames, a first embedded information video frame and a second embedded information video frame, the embedded information included in the first embedded information video frame is the positive and negative complement of the embedded information included in the second embedded information video frame, each video frame in the embedded information video frame group is taken as a video frame to be sent, and the video frames to be sent are sent to a receiving end, where A is an integer greater than 0;
the receiving end is configured to:
receiving the video frame to be sent, taking the received video frame as a received video frame, wherein the received video frame and the video frame to be sent meet a mixed frame model;
determining the received video frame serving as the frame header from among the received video frames and, starting from the received video frame serving as the frame header, dividing the received video frames into groups of (A + 2) frames;
determining a difference frame with the maximum spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and carrying out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
Preferably, said A is equal to 3;
the sending end is specifically configured to determine the embedded information video frame group by using the embedded information model

s_k^m(i, j) = v_k^m(i, j),  m = 1, 2, 3
s_k^4(i, j) = v_k^4(i, j) + Δ·d_k(p, q)
s_k^5(i, j) = v_k^5(i, j) − Δ·d_k(p, q)

where v_k^m(i, j) represents the mth frame video frame of the kth group in the original video; d_k(p, q) represents the embedded data; s_k^m(i, j) is the video frame obtained by superposing the original video frame and the embedded data; i represents the horizontal coordinate and j the vertical coordinate of the image pixel in the mth frame video frame of the kth group, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixel in that frame; d_k(p, q) takes the values 1 and −1 with equal probability, with block indices p = ⌈i·B1/M⌉ and q = ⌈j·B2/N⌉; B1 represents the number of blocks into which the image in the frame is divided in the vertical direction and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 means that the 1st, 2nd and 3rd frame video frames are the original video frames; m = 4 and m = 5 mean that the 4th and 5th frame video frames are the complementary video frames.
The mixed frame model is

r_k(i, j) = λ_k(i, j)·x_k(i, j) + (1 − λ_k(i, j))·x_{k+1}(i, j) + n_k(i, j)

where r_k(i, j) represents the received video frame; λ_k(i, j) represents the mixing factor, with 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the kth frame video frame sent by the sending end; and x_{k+1}(i, j) represents the (k + 1)th frame video frame sent by the sending end.
Preferably, the process by which the receiving end determines the received video frame serving as the frame header from among the received video frames specifically includes:
using relation 1,

k̂_1 = argmin_{2 ≤ k ≤ 6} M(k, k − 1),

calculating the frame header estimate, where M(k, k − 1) represents the spatial mean of the absolute value of the difference frame between the kth frame received video frame and the (k − 1)th frame received video frame; argmin{ } represents the values of the variables k and k − 1 at which M(k, k − 1) takes its minimum under the condition 2 ≤ k ≤ 6; k̂_i represents the ith frame header estimate, with i an integer greater than 1; and k̂_1 represents the minimum frame header estimate under the condition 2 ≤ k ≤ 6;
using relation 2,

σ̂² = var{ M(k̂_1, k̂_1 − 1), M(k̂_2, k̂_2 − 1), …, M(k̂_i, k̂_i − 1) },

calculating the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimates, where k̂_i represents the ith frame header estimate and var{ } represents the variance function;
judging whether the variance is smaller than a preset threshold value or not;
if so, determining that the k̂_i-th frame received video frame is the received video frame serving as the frame header;
if not, using relation 3,

k̂_i = argmin_{k̂_1 + 5(i − 1) − 1 ≤ k ≤ k̂_1 + 5(i − 1) + 1} M(k, k − 1),

recalculating the ith frame header estimate, where M(k, k − 1) represents the spatial mean of the absolute values of the difference frame between the kth frame received video frame and the (k − 1)th frame received video frame; argmin{ } represents the values of the variables k and k − 1 at which M(k, k − 1) takes its minimum under the condition k̂_1 + 5(i − 1) − 1 ≤ k ≤ k̂_1 + 5(i − 1) + 1; k̂_i represents the ith frame header estimate, with i an integer greater than 1, and is the minimum frame header estimate under that condition; and k̂_1 represents the first frame header estimate calculated according to relation 1;
Preferably, the receiving end is further configured to:
respectively carrying out spatial synchronization on each group of received video frames;
the process that the receiving end determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames as the optimal difference frame specifically comprises the following steps:
and the receiving end determines the difference frame with the maximum spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
Compared with the prior art, the beneficial effects of the present application are as follows:
in the method, an embedded information video frame group is determined through a sending end, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, and embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame; the sending end takes each frame video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to a receiving end; the receiving end receives the video frame to be sent and takes the received video frame as a received video frame; the receiving end determines a received video frame serving as a frame header from the received video frames of each frame, and divides the (A +2) frame received video frames into a group from the received video frame serving as the frame header; the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame; and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result, wherein the decoding judgment result is embedded information in the video frame sent by the sending end, and the receiving end can be ensured to analyze the information transmitted by the sending end.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of a method for analyzing secondary imaging mixed frame information provided by the present application;
FIG. 2 is another flow chart of a method for analyzing secondary imaging mixed frame information provided by the present application;
fig. 3 is a schematic logical structure diagram of a secondary imaging hybrid frame information analysis apparatus according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses a secondary imaging mixed frame information analysis method, which comprises the following steps: a sending end determines an embedded information video frame group, where the group comprises A original video frames, a first embedded information video frame and a second embedded information video frame, the embedded information included in the first embedded information video frame is the positive and negative complement of the embedded information included in the second embedded information video frame, and A is an integer greater than 0; the sending end takes each video frame in the embedded information video frame group as a video frame to be sent and sends the video frames to be sent to a receiving end; the receiving end receives the video frames to be sent and takes them as received video frames; the receiving end determines the received video frame serving as the frame header from among the received video frames and, starting from that frame, divides the received video frames into groups of (A + 2) frames; the receiving end determines the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame; and the receiving end performs a decoding decision on the optimal difference frame to obtain a decoding decision result. The method and the system can ensure that the receiving end can parse the information transmitted by the transmitting end.
The embodiment of the present application discloses a method for analyzing secondary imaging mixed frame information, please refer to fig. 1, which may include:
step S11, the sending end determines an embedded information video frame group, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, and embedded information included in the first embedded information video frame and embedded information included in the second embedded information video frame are complementary in positive and negative directions.
And A is an integer greater than 0.
In this embodiment, the first and second embedded information video frames are described by example: the first embedded information video frame is V + D and the second embedded information video frame is V − D, where V can be understood as an original video frame and D as the embedded information.
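As an illustrative sketch (not the patent's code; the function name, the block-indexing scheme, and the choice of carrier frames are assumptions based on the embedding model described later), the construction of one embedded information frame group with A = 3 could look like:

```python
import numpy as np

def embed_group(originals, data_bits, delta=4.0):
    """Sketch of building one embedded-information frame group (A = 3).

    originals : five grayscale frames (2-D float arrays) of one group;
                the first three are sent unchanged, the last two carry
                the complementary data frames V + D and V - D.
    data_bits : (B1, B2) array of +1/-1 block values (the embedded data d).
    delta     : embedding strength (the Delta of the model).
    """
    M, N = originals[0].shape
    B1, B2 = data_bits.shape
    # Expand the block-level data to a full-resolution pattern D:
    # pixel (i, j) takes the value of the block it falls in.
    rows = np.minimum(np.arange(M) * B1 // M, B1 - 1)
    cols = np.minimum(np.arange(N) * B2 // N, B2 - 1)
    D = delta * data_bits[np.ix_(rows, cols)]
    return list(originals[:3]) + [originals[3] + D, originals[4] - D]
```

Because the two data-bearing frames carry D with opposite signs, their difference is 2D regardless of the slowly varying video content, which is what the receiving end later exploits.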
Step S12, the sending end takes each video frame in the embedded information video frame group as a video frame to be sent and sends the video frames to be sent to a receiving end.
Step S13, the receiving end receives the video frame to be sent, and uses the received video frame as a received video frame, and the received video frame and the video frame to be sent satisfy a mixed frame model.
Step S14, the receiving end determines the received video frame serving as the frame header from among the received video frames and, starting from the received video frame serving as the frame header, divides the received video frames into groups of (A + 2) frames.
Dividing the received video frames into groups of (A + 2) frames ensures that the number of frames in each group at the receiving end is consistent with the number of frames in each group of video frames sent by the sending end.
Step S15, the receiving end determines the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame.
The difference frame can be understood as the difference between two different video frames.
The difference frame may be a matrix, the absolute value of each element in the matrix is calculated, and the mean value of the absolute values of the elements is calculated as the spatial mean value of the absolute values.
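A minimal sketch of this selection step (the function name is assumed, not from the patent):

```python
import numpy as np

def best_difference_frame(group):
    """Among consecutive difference frames within one received group,
    return the one whose absolute value has the largest spatial mean."""
    diffs = [group[k].astype(float) - group[k - 1].astype(float)
             for k in range(1, len(group))]
    means = [np.mean(np.abs(d)) for d in diffs]  # spatial mean of |diff|
    return diffs[int(np.argmax(means))]
```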
Step S16, the receiving end performs a decoding decision on the optimal difference frame to obtain a decoding decision result.
It should be noted that the decoding decision result is embedded information in the video frames (the first embedded information video frame and the second embedded information video frame) sent by the sending end.
The receiving end performs the decoding decision on the optimal difference frame, which ensures that the signal-to-noise ratio at the receiving end is maximized and thereby ensures the accuracy of the decoding decision result.
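One plausible hard-decision rule, assuming the block layout of the embedding model (this sketch is illustrative, and the sign convention depends on the order in which the two complementary frames are differenced):

```python
import numpy as np

def decode_blocks(diff_frame, B1, B2):
    """Recover the +/-1 block data from the optimal difference frame
    by the sign of each block's spatial mean (hard decision)."""
    M, N = diff_frame.shape
    bits = np.empty((B1, B2))
    for p in range(B1):
        for q in range(B2):
            block = diff_frame[p * M // B1:(p + 1) * M // B1,
                               q * N // B2:(q + 1) * N // B2]
            bits[p, q] = 1.0 if block.mean() >= 0 else -1.0
    return bits
```

Averaging over a whole block before deciding is what maximizes the effective signal-to-noise ratio of each embedded bit.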
In the method, a sending end determines an embedded information video frame group comprising A original video frames, a first embedded information video frame and a second embedded information video frame, where the embedded information included in the first embedded information video frame is the positive and negative complement of the embedded information included in the second embedded information video frame; the sending end takes each video frame in the group as a video frame to be sent and sends the video frames to be sent to a receiving end; the receiving end receives the video frames to be sent and takes them as received video frames, where the received video frames and the video frames to be sent satisfy a mixed frame model; the receiving end determines the received video frame serving as the frame header from among the received video frames and, starting from that frame, divides the received video frames into groups of (A + 2) frames; the receiving end determines the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame; and the receiving end performs a decoding decision on the optimal difference frame to obtain a decoding decision result. The decoding decision result is the information embedded in the video frames sent by the sending end, which ensures that the receiving end can parse the information transmitted by the sending end.
In another embodiment of the present application, the value of A may specifically be as follows:
Preferably, A may be equal to 3. Of course, the size of A can be adjusted according to the sending frame model to achieve the best possible transmission performance.
Corresponding to the embodiment where A is equal to 3, the determining, by the transmitting end, of the embedded information video frame group may include:
the sending end utilizes an embedded information model
Wherein the content of the first and second substances,representing the kth group of mth frame video frames in the original video;representing the embedded data;the video frame is a video frame obtained by overlapping an original video frame and embedded data; i represents the horizontal coordinate of the image pixel in the mth frame video frame of the kth group in the original video, j represents the vertical coordinate of the image pixel in the mth frame video frame of the kth group in the original video, i is more than or equal to 1 and less than or equal to M, and j is more than or equal to 1 and less than or equal to N; m represents the maximum horizontal direction coordinate of the image pixel in the kth group of mth frame video frames in the original video, and N represents the maximum vertical direction coordinate of the image pixel in the kth group of mth frame video frames in the original video;equal probabilityThe values of 1 and-1 are taken,B1representing the number of blocks in the original video into which the image in the kth group of mth frame video frames is divided in the vertical direction, B2Representing the number of the image in the kth group of mth frame video frames in the original video divided into blocks in the horizontal direction; Δ represents the strength of the embedded data; m is 1,2,3, which means that the 1 st, 2 nd and 3 rd frame video frames are the original video frames; m-4 and m-5 indicate that the 4 th frame video frame and the 5 th frame video frame are the complementary video frames.
Based on the introduction of the preset sending frame model in the foregoing embodiment, in another embodiment of the present application the mixed frame model may specifically be introduced as follows:
The mixed frame model is

r_k(i, j) = λ_k(i, j)·x_k(i, j) + (1 − λ_k(i, j))·x_{k+1}(i, j) + n_k(i, j)

where r_k(i, j) represents the received video frame; λ_k(i, j) represents the mixing factor, with 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the kth frame video frame sent by the sending end; and x_{k+1}(i, j) represents the (k + 1)th frame video frame sent by the sending end.
The blending factor can represent the blending ratio of two adjacent video frames at the transmitting end in the received video frame at the receiving end.
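The model above can be simulated directly. A sketch follows (the function name is an assumption; λ is taken as a scalar for simplicity, though the model allows it to vary per pixel):

```python
import numpy as np

def mix_frames(x_k, x_k1, lam, sigma=0.0, rng=None):
    """Received frame under the mixed-frame model:
    r_k = lam * x_k + (1 - lam) * x_{k+1} + n_k,
    where n_k is additive white Gaussian noise with standard deviation
    sigma. lam may be a scalar or a per-pixel array in [0, 1]."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noise = rng.normal(0.0, sigma, x_k.shape) if sigma > 0 else 0.0
    return lam * x_k + (1.0 - lam) * x_k1 + noise
```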
Based on the introduction of the mixed frame model in the foregoing embodiment, in another embodiment of the present application the process by which the receiving end determines the received video frame serving as the frame header from among the received video frames may specifically include:
A11, using relation 1,

k̂_1 = argmin_{2 ≤ k ≤ 6} M(k, k − 1),

calculating the frame header estimate, where M(k, k − 1) represents the spatial mean of the absolute value of the difference frame between the kth frame received video frame and the (k − 1)th frame received video frame; argmin{ } represents the values of the variables k and k − 1 at which M(k, k − 1) takes its minimum under the condition 2 ≤ k ≤ 6; k̂_i represents the ith frame header estimate, with i an integer greater than 1; and k̂_1 represents the minimum frame header estimate under the condition 2 ≤ k ≤ 6.
Relation one calculates the frame-header estimate by exploiting the property that the spatial mean of the absolute value of the starting difference frame is minimal.
In this embodiment, the determining process of M (k, k-1) may specifically include:
From the mixed frame model, the spatial mean of the absolute value of the difference frame between the k-th and (k-1)-th received video frames, denoted M(k, k-1), is M(k, k-1) = (1/(M·N)) Σ_{i=1..M} Σ_{j=1..N} |y_k(i, j) - y_{k-1}(i, j)|, where i and j denote the pixel coordinates in the two-dimensional image matrix.
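The statistic M(k, k-1) defined above is just the average of the absolute pixel-wise differences:

```python
import numpy as np

def spatial_mean_abs_diff(y_k, y_km1):
    """M(k, k-1): spatial mean of |y_k(i, j) - y_{k-1}(i, j)| over all pixels."""
    return np.mean(np.abs(y_k - y_km1))

a = np.array([[1.0, 2.0], [3.0, 4.0]])
b = np.array([[2.0, 2.0], [1.0, 4.0]])
# Absolute differences are 1, 0, 2, 0, so the spatial mean is 0.75:
assert spatial_mean_abs_diff(a, b) == 0.75
```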
Assume a random variable X obeys a normal distribution, written X ~ N(μ, σ²). Then E{|X|} = σ·√(2/π)·e^{-μ²/(2σ²)} + μ·(1 - 2Q(μ/σ)). When X ~ N(-μ, σ²), E{|X|} takes the same value by symmetry. That is, when X ~ N(±μ, σ²), E{|X|} = σ·√(2/π)·e^{-μ²/(2σ²)} + μ·(1 - 2Q(μ/σ)), where μ denotes the mean, N( ) denotes the normal distribution, and Q( ) denotes the Q-function used in communications, i.e., Q(x) = (1/√(2π)) ∫_x^∞ e^{-t²/2} dt.
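A Monte Carlo check of the folded-normal mean used above (the closed form is the standard E{|X|} for a normal variable; the Q-function implementation via erf is ours):

```python
import numpy as np
from math import erf, sqrt, pi, exp

def Q(x):
    """Gaussian tail probability Q(x) = P(Z > x) for standard normal Z."""
    return 0.5 * (1.0 - erf(x / sqrt(2.0)))

def folded_mean(mu, sigma):
    """E|X| for X ~ N(+-mu, sigma^2):
    sigma*sqrt(2/pi)*exp(-mu^2/(2 sigma^2)) + mu*(1 - 2 Q(mu/sigma))."""
    return sigma * sqrt(2.0 / pi) * exp(-mu**2 / (2 * sigma**2)) \
        + mu * (1.0 - 2.0 * Q(mu / sigma))

rng = np.random.default_rng(2)
mu, sigma = 1.5, 2.0
for m in (mu, -mu):                      # same E|X| for +mu and -mu, by symmetry
    samples = rng.normal(m, sigma, size=200_000)
    assert abs(np.mean(np.abs(samples)) - folded_mean(mu, sigma)) < 0.05
```

This symmetry is what makes the spatial mean of the absolute difference frame insensitive to the sign of the embedded data, so it stays approximately constant across frame headers.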
A12. Using relation two, var{M(k̂_i, k̂_i - 1)}, to calculate the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame-header estimates, where k̂_i denotes the i-th frame-header estimate and var{ } denotes the variance function.
It can be understood that: the variance is calculated using the property that the spatial mean of the absolute value of the starting difference frame remains constant.
A13. Judge whether the variance is smaller than a preset threshold.
If yes, go to step A14; if not, go to step A15.
A14. If so, determine that the k̂_i-th received video frame is the received video frame serving as the frame header.
A15. If not, use relation three to calculate the i-th frame-header estimate. M(k, k-1) denotes the spatial mean of the absolute value of the difference frame between the k-th and (k-1)-th received video frames; argmin{ } denotes the value of k at which M(k, k-1) attains its minimum under the search condition determined by k̂_1; k̂_i denotes the i-th frame-header estimate, with i an integer greater than 1, and is the frame-header estimate at that minimum; and k̂_1 denotes the first frame-header estimate calculated according to relation one.
This embodiment determines the received video frame serving as the frame header by exploiting both the property that the spatial mean of the absolute value of the starting difference frame is minimal and the property that it remains constant across groups, which improves the reliability of identifying the frame header.
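A sketch of the frame-header search. Relation one picks the k in [2, 6] minimizing M(k, k-1); later headers are assumed here to be searched in a window around k̂_1 + 5·(i-1), since the group length is A + 2 = 5 (the exact window of the patent's relation three is not reproduced in this export):

```python
import numpy as np

def M(frames, k):
    """Spatial mean of |frame k - frame k-1| (k is a 1-based frame index)."""
    return np.mean(np.abs(frames[k - 1] - frames[k - 2]))

def first_header(frames):
    """Relation one: argmin of M(k, k-1) over 2 <= k <= 6."""
    return min(range(2, 7), key=lambda k: M(frames, k))

def next_header(frames, k1, i, halfwin=1):
    """Assumed relation-three style search: argmin of M(k, k-1) in a small
    window around the group-periodic prediction k1 + 5*(i-1)."""
    center = k1 + 5 * (i - 1)
    return min(range(center - halfwin, center + halfwin + 1),
               key=lambda k: M(frames, k))

# Toy stream of 20 frames: content changes a lot between frames, except across
# the group boundaries at 3->4, 8->9, 13->14, 18->19 (headers at k = 4, 9, 14, 19).
rng = np.random.default_rng(3)
frames = []
for k in range(1, 21):
    if k in (4, 9, 14, 19) and frames:
        frames.append(frames[-1] + rng.normal(0, 0.01, (4, 4)))  # near-duplicate
    else:
        frames.append(rng.normal(0, 1.0, (4, 4)))
k1 = first_header(frames)
assert k1 == 4
assert next_header(frames, k1, i=2) == 9
```

The variance test of relation two would then be layered on top: if var{M(k̂_i, k̂_i - 1)} over the collected estimates exceeds a threshold, the estimate is recomputed in the k̂_1-anchored window instead of trusted directly.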
Based on the foregoing embodiments, another embodiment of the present application introduces how the receiving end determines the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame, which specifically includes:
Here, y_{k3}(i, j) is the k3-th received video frame; y_{k4}(i, j) is the k4-th received video frame; λ_{k4}(i, j) is the mixing factor of the k4-th received video frame; n_{k4}(i, j) and n_{k3}(i, j) are the additive white Gaussian noise of the k4-th and k3-th received video frames, respectively; and w_{k4}(i, j) is the embedded data in the k4-th received video frame.
Here, y_{k5}(i, j) is the k5-th received video frame; y_{k4}(i, j) is the k4-th received video frame; λ_{k5}(i, j) is the mixing factor of the k5-th received video frame; n_{k4}(i, j) and n_{k5}(i, j) are the additive white Gaussian noise of the k4-th and k5-th received video frames, respectively; and w_{k4}(i, j) is the embedded data in the k4-th received video frame.
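The selection itself can be sketched as follows: within one group of received frames, compute the consecutive difference frames and keep the one whose spatial mean of absolute values is largest, since the sign-complementary embedded frames make the data term dominate that difference.

```python
import numpy as np

def best_difference_frame(group):
    """Return the consecutive difference frame with the largest spatial mean
    of absolute values (the 'optimal difference frame')."""
    diffs = [group[k + 1] - group[k] for k in range(len(group) - 1)]
    means = [np.mean(np.abs(d)) for d in diffs]
    return diffs[int(np.argmax(means))]

# Idealized group: three unchanged frames, then the +data and -data frames.
group = [np.zeros((2, 2)),
         np.zeros((2, 2)),
         np.zeros((2, 2)),
         np.full((2, 2), 3.0),     # frame carrying +delta*data
         np.full((2, 2), -3.0)]    # frame carrying -delta*data
best = best_difference_frame(group)
# The 4->5 difference has the largest magnitude (-3 - 3 = -6 per pixel):
assert np.allclose(best, -6.0)
```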
Based on the foregoing embodiments, another embodiment of the present application introduces how the receiving end performs a decoding decision on the optimal difference frame to obtain a decoding decision result, which includes:
The receiving end spatially divides the optimal difference frame into B1 × B2 blocks and calculates the mean of each block. If a block's mean is greater than 0, the block is decided as 1; if the block's mean is less than 0, it is decided as 0. The decoding decision result can then be understood as the matrix formed by the per-block decisions.
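The decision step above reduces to a per-block sign test:

```python
import numpy as np

def decode(diff, B1, B2):
    """Split the optimal difference frame into B1 x B2 blocks; decide 1 for a
    block whose mean is positive, 0 for a block whose mean is negative."""
    M, N = diff.shape
    h, w = M // B1, N // B2
    out = np.zeros((B1, B2), dtype=int)
    for r in range(B1):
        for c in range(B2):
            block = diff[r * h:(r + 1) * h, c * w:(c + 1) * w]
            out[r, c] = 1 if block.mean() > 0 else 0
    return out

# A 4x4 difference frame built from a 2x2 block pattern of means 2, -1, -3, 4:
diff = np.kron(np.array([[2.0, -1.0], [-3.0, 4.0]]), np.ones((2, 2)))
assert (decode(diff, 2, 2) == np.array([[1, 0], [0, 1]])).all()
```

Averaging over a whole block before deciding suppresses the per-pixel Gaussian noise, which is why the data is embedded per block rather than per pixel.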
In another embodiment of the present application, another method for parsing secondary imaging mixed frame information is introduced, and referring to fig. 2, the method may include:
Step S21: the sending end determines an embedded-information video frame group, which comprises A frames of original video, a first embedded-information video frame and a second embedded-information video frame, where the embedded information in the first embedded-information video frame and the embedded information in the second embedded-information video frame are positive/negative complementary.
And A is an integer greater than 0.
Step S22: the sending end takes each video frame in the embedded-information video frame group as a video frame to be sent and sends it to the receiving end.
Step S23: the receiving end receives the video frames to be sent and takes each received frame as a received video frame; the received video frame and the sent video frame satisfy the mixed frame model.
Step S24: the receiving end determines, from the received video frames, the received video frame serving as the frame header, and divides the received video frames into groups of (A + 2) frames starting from that frame header.
Steps S21-S24 are the same as steps S11-S14 in the previous embodiment, and the detailed procedures of steps S21-S24 can be referred to the related descriptions of steps S11-S14, and are not described herein again.
Step S25, the receiving end performs spatial synchronization on each group of received video frames respectively.
By performing spatial synchronization on each group of received video frames, the receiving end can correct each group and improve the reliability of the received video frame content.
Step S26, the receiving end determines the difference frame with the largest spatial mean of the absolute values in each group of received video frames after spatial synchronization, as the optimal difference frame.
Step S26 is a specific implementation manner of step S15 in the previous embodiment.
And step S27, the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
Step S27 is the same as step S16 in the previous embodiment, and the detailed process of step S27 can be referred to the related description of step S16, which is not repeated herein.
Next, the secondary imaging mixed frame information analysis system provided in the present application is described; the system described below and the method described above may be referred to in correspondence with each other.
Referring to fig. 3, a schematic diagram of a logical structure of a secondary imaging hybrid frame information parsing system provided in the present application is shown, where the secondary imaging hybrid frame information parsing system includes: a transmitting end 11 and a receiving end 12.
The sending end 11 is configured to determine an embedded-information video frame group, which includes A frames of original video, a first embedded-information video frame and a second embedded-information video frame, where the embedded information in the first embedded-information video frame is positive/negative complementary to the embedded information in the second embedded-information video frame; each video frame in the group is used as a video frame to be sent, and the video frames to be sent are sent to the receiving end 12, where A is an integer greater than 0.
The receiving end 12 is configured to:
receiving the video frame to be sent, and taking the received video frame as a received video frame;
determining a received video frame as a frame header from among the received video frames of the frames, and dividing the (A +2) frame received video frames into a group from the received video frame as the frame header;
determining a difference frame with the maximum spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and performing decoding judgment on the optimal difference frame to obtain a decoding judgment result, and determining embedded data in the video frame sent by the sending end 11 according to the decoding judgment result.
In this embodiment, A may be equal to 3. Accordingly, the sending end is specifically configured to determine the embedded-information video frame group by using an embedded information model;
wherein x_k^m(i, j) represents the m-th video frame of the k-th group in the original video; c_k(i, j) represents the embedded data; s_k^m(i, j) is the video frame obtained by superposing the original video frame and the embedded data; i represents the horizontal coordinate and j the vertical coordinate of an image pixel in the k-th group's m-th frame, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixels in that frame; c_k(i, j) takes the values 1 and -1 with equal probability; B1 represents the number of blocks into which the image in the frame is divided in the vertical direction, and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 means that the 1st, 2nd and 3rd video frames are the original video frames; m = 4 and m = 5 mean that the 4th and 5th video frames are the complementary video frames.
wherein y_k(i, j) = λ_k(i, j)·x_k(i, j) + (1 - λ_k(i, j))·x_{k+1}(i, j) + n_k(i, j); y_k(i, j) represents the received video frame; λ_k(i, j) represents the mixing factor, with 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the k-th video frame sent by the sending end 11; and x_{k+1}(i, j) represents the (k+1)-th video frame sent by the sending end 11.
In this embodiment, the process of determining, by the receiving end 12, a received video frame serving as a frame header from among video frames received by each frame may specifically include:
using relation one, k̂_1 = argmin{M(k, k-1)} for 2 ≤ k ≤ 6, to calculate a frame-header estimate, where M(k, k-1) denotes the spatial mean of the absolute value of the difference frame between the k-th and (k-1)-th received video frames; argmin{ } denotes the value of k at which M(k, k-1) attains its minimum under the condition 2 ≤ k ≤ 6; k̂_i denotes the i-th frame-header estimate, with i an integer greater than 1; and k̂_1 denotes the frame-header estimate at the minimum under the condition 2 ≤ k ≤ 6;
using relation two, var{M(k̂_i, k̂_i - 1)}, to calculate the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame-header estimates, where k̂_i denotes the i-th frame-header estimate and var{ } denotes the variance function;
judging whether the variance is smaller than a preset threshold value or not;
if so, determining that the k̂_i-th received video frame is the received video frame serving as the frame header;
if not, using relation three to calculate the i-th frame-header estimate, where M(k, k-1) denotes the spatial mean of the absolute value of the difference frame between the k-th and (k-1)-th received video frames; argmin{ } denotes the value of k at which M(k, k-1) attains its minimum under the search condition determined by k̂_1; k̂_i denotes the i-th frame-header estimate, with i an integer greater than 1, and is the frame-header estimate at that minimum; and k̂_1 denotes the first frame-header estimate calculated according to relation one; and determining that the k̂_i-th received video frame is the received video frame serving as the frame header;
In this embodiment, the receiving end 12 may further be configured to:
and respectively carrying out spatial synchronization on each group of received video frames.
Correspondingly, the process of determining, by the receiving end 12, the difference frame with the largest spatial mean of the absolute values in each group of received video frames as the optimal difference frame may specifically include:
the receiving end 12 determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The method and the system for analyzing the secondary imaging mixed frame information provided by the application are described in detail above, specific examples are applied in the description to explain the principle and the implementation of the application, and the description of the above embodiments is only used to help understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.
Claims (6)
1. A secondary imaging mixed frame information analysis method is characterized by comprising the following steps:
a sending end determines an embedded information video frame group, wherein the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame, and A is an integer greater than 0;
the sending end takes each frame of video frames in the embedded information video frame group as video frames to be sent and sends the video frames to be sent to a receiving end;
the receiving end receives the video frame to be sent and takes the received video frame as a received video frame, and the relation between the received video frame and the video frame to be sent satisfies a mixed frame model, wherein the mixed frame model is: y_k(i, j) = λ_k(i, j)·x_k(i, j) + (1 - λ_k(i, j))·x_{k+1}(i, j) + n_k(i, j), wherein y_k(i, j) represents the received video frame; λ_k(i, j) represents a mixing factor, and 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the k-th video frame sent by the sending end; and x_{k+1}(i, j) represents the (k+1)-th video frame sent by the sending end;
the receiving end determines a received video frame serving as a frame header from among the received video frames, and divides the received video frames into groups of (A + 2) frames starting from the received video frame serving as the frame header, wherein determining the received video frame serving as the frame header comprises: using relation one, k̂_1 = argmin{M(k, k-1)} for 2 ≤ k ≤ 6, to calculate a frame-header estimate, wherein M(k, k-1) represents the spatial mean of the absolute value of the difference frame between the k-th and (k-1)-th received video frames, argmin{ } represents the value of k at which M(k, k-1) attains its minimum under the condition 2 ≤ k ≤ 6, k̂_i represents the i-th frame-header estimate, i being an integer greater than 1, and k̂_1 represents the frame-header estimate at the minimum under the condition 2 ≤ k ≤ 6; using relation two, var{M(k̂_i, k̂_i - 1)}, to calculate the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame-header estimates, wherein var{ } represents the variance function; judging whether the variance is smaller than a preset threshold; if so, determining that the k̂_i-th received video frame is the received video frame serving as the frame header; if not, using relation three to calculate the i-th frame-header estimate, wherein argmin{ } represents the value of k at which M(k, k-1) attains its minimum under the search condition determined by k̂_1, and k̂_1 represents the first frame-header estimate calculated according to relation one; and determining that the k̂_i-th received video frame is the received video frame serving as the frame header;
the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
2. The method of claim 1, wherein a is equal to 3;
the method for determining the embedded information video frame group by the sending end comprises the following steps:
the sending end determines the embedded information video frame group by using an embedded information model;
wherein x_k^m(i, j) represents the m-th video frame of the k-th group in the original video; c_k(i, j) represents the embedded data; s_k^m(i, j) is the video frame obtained by superposing the original video frame and the embedded data; i represents the horizontal coordinate and j the vertical coordinate of an image pixel in the k-th group's m-th frame, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixels in that frame; c_k(i, j) takes the values 1 and -1 with equal probability; B1 represents the number of blocks into which the image in the frame is divided in the vertical direction, and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 means that the 1st, 2nd and 3rd video frames are the original video frames; m = 4 and m = 5 mean that the 4th and 5th video frames are the complementary video frames.
3. The method according to claim 1, wherein the receiving end determines a received video frame as a frame header from among the received video frames of the frames, and after dividing the received video frames of (a +2) frames into a group starting from the received video frame as the frame header, further comprises:
the receiving end respectively carries out spatial synchronization on each group of received video frames;
the receiving end determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames as the optimal difference frame, and the method comprises the following steps:
and the receiving end determines the difference frame with the maximum spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
4. A secondary imaging hybrid frame information parsing system, comprising: a sending end and a receiving end;
the sending end is used for determining an embedded information video frame group, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame, each frame video frame in the embedded information video frame group is used as a video frame to be sent, and the video frame to be sent is sent to a receiving end, wherein A is an integer larger than 0;
the receiving end is configured to:
receiving the video frame to be sent and taking the received video frame as a received video frame, wherein the received video frame and the video frame to be sent satisfy a mixed frame model, and the mixed frame model is: y_k(i, j) = λ_k(i, j)·x_k(i, j) + (1 - λ_k(i, j))·x_{k+1}(i, j) + n_k(i, j), wherein y_k(i, j) represents the received video frame; λ_k(i, j) represents a mixing factor, and 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the k-th video frame sent by the sending end; and x_{k+1}(i, j) represents the (k+1)-th video frame sent by the sending end;
determining a received video frame serving as a frame header from among the received video frames, and dividing the received video frames into groups of (A + 2) frames starting from the received video frame serving as the frame header; wherein the process of determining the received video frame serving as the frame header specifically includes: using relation one, k̂_1 = argmin{M(k, k-1)} for 2 ≤ k ≤ 6, to calculate a frame-header estimate, wherein M(k, k-1) represents the spatial mean of the absolute value of the difference frame between the k-th and (k-1)-th received video frames, argmin{ } represents the value of k at which M(k, k-1) attains its minimum under the condition 2 ≤ k ≤ 6, k̂_i represents the i-th frame-header estimate, i being an integer greater than 1, and k̂_1 represents the frame-header estimate at the minimum under the condition 2 ≤ k ≤ 6; using relation two, var{M(k̂_i, k̂_i - 1)}, to calculate the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame-header estimates, wherein var{ } represents the variance function;
judging whether the variance is smaller than a preset threshold; if so, determining that the k̂_i-th received video frame is the received video frame serving as the frame header; if not, using relation three to calculate the i-th frame-header estimate, wherein argmin{ } represents the value of k at which M(k, k-1) attains its minimum under the search condition determined by k̂_1, and k̂_1 represents the first frame-header estimate calculated according to relation one; and determining that the k̂_i-th received video frame is the received video frame serving as the frame header;
determining a difference frame with the maximum spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and carrying out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
5. The system of claim 4, wherein A is equal to 3;
the sender is particularly adapted to utilize an embedded information model Determining the set of embedded information video frames;
wherein x_k^m(i, j) represents the m-th video frame of the k-th group in the original video; c_k(i, j) represents the embedded data; s_k^m(i, j) is the video frame obtained by superposing the original video frame and the embedded data; i represents the horizontal coordinate and j the vertical coordinate of an image pixel in the k-th group's m-th frame, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixels in that frame; c_k(i, j) takes the values 1 and -1 with equal probability; B1 represents the number of blocks into which the image in the frame is divided in the vertical direction, and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 means that the 1st, 2nd and 3rd video frames are the original video frames; m = 4 and m = 5 mean that the 4th and 5th video frames are the complementary video frames.
6. The system of claim 4, wherein the receiving end is further configured to:
respectively carrying out spatial synchronization on each group of received video frames;
the process that the receiving end determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames as the optimal difference frame specifically comprises the following steps:
and the receiving end determines the difference frame with the maximum spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910660860.0A CN110366004B (en) | 2019-07-22 | 2019-07-22 | Secondary imaging mixed frame information analysis method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110366004A CN110366004A (en) | 2019-10-22 |
CN110366004B true CN110366004B (en) | 2021-08-13 |
Family
ID=68220430
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910660860.0A Active CN110366004B (en) | 2019-07-22 | 2019-07-22 | Secondary imaging mixed frame information analysis method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110366004B (en) |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8184164B2 (en) * | 2005-06-25 | 2012-05-22 | Huawei Technologies Co., Ltd. | Method for measuring multimedia video communication quality |
CN105846898A (en) * | 2016-05-20 | 2016-08-10 | 中国人民解放军信息工程大学 | Visible light communication method, sending device, receiving device and system |
CN106570816A (en) * | 2016-10-31 | 2017-04-19 | 努比亚技术有限公司 | Method and device for sending and receiving information |
CN107911167A (en) * | 2017-11-29 | 2018-04-13 | 中国人民解放军信息工程大学 | A kind of visual light imaging communication means and system |
CN108391028A (en) * | 2018-02-09 | 2018-08-10 | 东莞信大融合创新研究院 | A kind of implicit imaging communication method of the visible light of adaptive shooting direction |
CN109104243A (en) * | 2018-08-01 | 2018-12-28 | 北京邮电大学 | A kind of pixel communication means, information send terminal and information receiving terminal |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016133434A1 (en) * | 2015-02-17 | 2016-08-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Operation of a communication unit in a wireless local area network, wlan, environment |
- 2019-07-22: Application CN201910660860.0A filed in China; patent CN110366004B, status Active
Non-Patent Citations (2)
Title |
---|
"Design of a visible light implicit communication system for secondary imaging mixing"; Li Mingchao; Journal of Information Engineering University; Feb. 2019; full text * |
"A frame synchronization compensation algorithm for visible light implicit imaging communication"; Li Mingchao; Acta Optica Sinica; Jan. 2018; vol. 38, no. 1; full text * |
Also Published As
Publication number | Publication date |
---|---|
CN110366004A (en) | 2019-10-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10796685B2 (en) | Method and device for image recognition | |
CN110572622B (en) | Video decoding method and device | |
US20200162725A1 (en) | Video Quality Assessment Method and Apparatus | |
CN110418177B (en) | Video encoding method, apparatus, device and storage medium | |
EP2786342B1 (en) | Texture masking for video quality measurement | |
CN105611291B (en) | The method and apparatus that mark information is added in the video frame and detects frame losing | |
EP3472957A1 (en) | Methods, systems, and media for transmitting data in a video signal | |
CN106550240A (en) | A kind of bandwidth conservation method and system | |
CN111182303A (en) | Encoding method and device for shared screen, computer readable medium and electronic equipment | |
EP3598386A1 (en) | Method and apparatus for processing image | |
CN111182300B (en) | Method, device and equipment for determining coding parameters and storage medium | |
CN114245209A (en) | Video resolution determination method, video resolution determination device, video model training method, video coding device and video coding device | |
CN113469869B (en) | Image management method and device | |
CN110366004B (en) | Secondary imaging mixed frame information analysis method and system | |
CN111954034B (en) | Video coding method and system based on terminal equipment parameters | |
CN110365985A (en) | Image processing method and device | |
Choi et al. | Video QoE models for the compute continuum | |
US8503822B2 (en) | Image quality evaluation system, method, and program utilizing increased difference weighting of an area of focus | |
CN106921840B (en) | Face beautifying method, device and system in instant video | |
CN114339252B (en) | Data compression method and device | |
CN114466224B (en) | Video data encoding and decoding method and device, storage medium and electronic equipment | |
CN112804469B (en) | Video call processing method, device, equipment and storage medium | |
US10986337B2 (en) | Systems and methods for selective transmission of media content | |
CN113038179A (en) | Video encoding method, video decoding method, video encoding device, video decoding device and electronic equipment | |
US20190306500A1 (en) | Bit rate optimization system and method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||