CN110366004B - Secondary imaging mixed frame information analysis method and system - Google Patents

Secondary imaging mixed frame information analysis method and system

Publication number
CN110366004B
CN110366004B (application CN201910660860.0A)
Authority
CN
China
Prior art keywords
frame
video
video frame
received video
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910660860.0A
Other languages
Chinese (zh)
Other versions
CN110366004A (en)
Inventor
胡赟鹏
唐燕群
沈智翔
李明超
张效义
朱义君
Current Assignee
Information Engineering University of PLA Strategic Support Force
Original Assignee
Information Engineering University of PLA Strategic Support Force
Priority date
Filing date
Publication date
Application filed by Information Engineering University of PLA Strategic Support Force
Priority to CN201910660860.0A
Publication of CN110366004A
Application granted
Publication of CN110366004B
Legal status: Active

Classifications

    • H04N21/2343 — Selective content distribution; processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/4402 — Selective content distribution; processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The application provides a method and a system for analyzing secondary imaging mixed frame information. A sending end determines an information embedding mode and, according to that mode, embeds the transmission information into original video frames to obtain the video frames to be sent; the video frames to be sent are then transmitted to a receiving end. The receiving end receives these frames as received video frames, where the relationship between each received video frame and the corresponding sent video frame satisfies a preset mixed frame model. The receiving end determines a received video frame serving as a frame header from the received video frames and, starting from that frame header, divides the received video frames into groups of (A+2) frames. Within each group, the receiving end determines the difference frame with the largest spatial mean of absolute values as the best difference frame, and performs a decoding decision on the best difference frame to obtain a decoding decision result. This result is the information embedded in the video frames sent by the sending end, which ensures that the receiving end can parse the information transmitted by the sending end.

Description

Secondary imaging mixed frame information analysis method and system
Technical Field
The present application relates to the field of communications technologies, and in particular, to a method and a system for analyzing secondary imaging mixed frame information.
Background
In recent years, with the spread of high-resolution cameras in handheld electronic terminals, cameras have begun to be used to interact with the environment and acquire data, and visible light imaging communication applications have emerged. Such a system operates in the visible light spectrum, uses a light-emitting diode or a display screen as the sending end and a camera as the receiving end, and transmits information over an optical line-of-sight channel; this is gradually becoming a new trend in visible light communication. At present, visible light imaging communication mainly comprises an explicit mode and an implicit mode. The most common application of explicit imaging communication is the two-dimensional code, while implicit imaging communication aims to transmit information covertly, imperceptibly to the human eye, and is attracting more and more researchers.
At present, research on visible light implicit imaging communication has made some progress, but the reliability of information transmission is still difficult to guarantee. One of the main reasons is the frame synchronization problem of the display screen-camera link, for example the mixed frame problem: a video frame played on the display screen is captured by the receiving-end camera through secondary imaging, and the captured video frame is a linear mixture of adjacent original video frames. When the mixed frame problem occurs, the receiving end (i.e., the camera end) must be able to parse the information transmitted by the sending end (i.e., the display screen) from the mixed frames; how the receiving end parses the transmitted information from the mixed frames then becomes the problem to solve.
Disclosure of Invention
In order to solve the foregoing technical problems, embodiments of the present application provide a method and a system for analyzing secondary imaging mixed frame information, so as to achieve the purpose of ensuring that a receiving end can analyze information transmitted by a transmitting end, and the technical solution is as follows:
a secondary imaging mixed frame information analysis method comprises the following steps:
a sending end determines an embedded information video frame group, wherein the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame, and A is an integer greater than 0;
the sending end takes each frame video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to a receiving end;
the receiving end receives the video frame to be sent, the received video frame is used as a received video frame, and the received video frame and the video frame to be sent meet a mixed frame model;
the receiving end determines a received video frame serving as a frame header from the received video frames of each frame, and divides the (A +2) frame received video frames into a group from the received video frame serving as the frame header;
the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
Preferably, said A is equal to 3;
the method for determining the embedded information video frame group by the sending end comprises the following steps:
the sending end utilizes the embedded information model

x_k^m(i, j) = v_k^m(i, j),  m = 1, 2, 3

x_k^m(i, j) = v_k^m(i, j) + (−1)^m · Δ · d_k(i, j),  m = 4, 5

to determine the set of embedded information video frames;
where v_k^m(i, j) represents the m-th frame video frame of the k-th group in the original video; d_k(i, j) represents the embedded data; x_k^m(i, j) is the video frame obtained by superposing the original video frame and the embedded data; i and j represent the horizontal and vertical coordinates of the image pixel in the m-th frame video frame of the k-th group in the original video, with 1 ≤ i ≤ M and 1 ≤ j ≤ N, where M and N represent the maximum horizontal and vertical coordinates of the image pixels in that frame; d_k(i, j) takes the values 1 and −1 with equal probability and is constant within each block, the image in the m-th frame video frame of the k-th group being divided into B_1 blocks in the vertical direction and B_2 blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 indicates that the 1st, 2nd and 3rd frame video frames are the original video frames; m = 4 and m = 5 indicate that the 4th and 5th frame video frames are the complementary embedded information video frames.
Preferably, the hybrid frame model is:
y_k(i, j) = λ_k(i, j) · x_k(i, j) + (1 − λ_k(i, j)) · x_{k+1}(i, j) + n_k(i, j)

where y_k(i, j) represents the received video frame; λ_k(i, j) represents the mixing factor, with 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the k-th frame video frame sent by the sending end; and x_{k+1}(i, j) represents the (k+1)-th frame video frame sent by the sending end.
Preferably, the determining, by the receiving end, the received video frame as the frame header from the received video frames of each frame includes:
using relation one

k̂_i = argmin_{ 5(i−1)+2 ≤ k ≤ 5(i−1)+6 } M(k, k − 1)

calculating the i-th frame header estimation value k̂_i, where M(k, k − 1) represents the spatial mean of the absolute value of the difference frame between the k-th received video frame and the (k−1)-th received video frame, argmin{ } represents the value of k at which M(k, k − 1) takes its minimum (for the first group, under the condition 2 ≤ k ≤ 6), and i is a positive integer;

using relation two

var{ M(k̂_i, k̂_i − 1) }

calculating the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimation values, where var{ } represents the variance function;
judging whether the variance is smaller than a preset threshold value;
if so, determining the k̂_i-th received video frames to be the received video frames serving as frame headers;
if not, using relation three

k̂_i = argmin_{ k̂_1 + 5(i−1) − 1 ≤ k ≤ k̂_1 + 5(i−1) + 1 } M(k, k − 1)

calculating the i-th frame header estimation value, where M(k, k − 1) is as defined above, argmin{ } represents the value of k at which M(k, k − 1) takes its minimum under the stated condition, and k̂_1 represents the first frame header estimation value calculated according to relation one;
determining the k̂_i-th received video frames to be the received video frames serving as frame headers.
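The frame-header search of relation one can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: frames are held in a 0-based Python list whose indices 1..6 stand in for the 1-based frame indices k, and only the search window 2 ≤ k ≤ 6 for the first header is shown (the variance check and relation three are omitted).

```python
import numpy as np

def spatial_mean_abs_diff(frames, k):
    """M(k, k-1): spatial mean of the absolute value of the
    difference frame between frames[k] and frames[k-1]."""
    return float(np.mean(np.abs(frames[k] - frames[k - 1])))

def first_frame_header_estimate(frames, k_min=2, k_max=6):
    """First frame-header estimate: the k in [k_min, k_max] minimizing
    M(k, k-1).  A minimal consecutive difference suggests that both
    mixed frames were imaged from the same original video frame."""
    return min(range(k_min, k_max + 1),
               key=lambda k: spatial_mean_abs_diff(frames, k))
```

Subsequent headers would then be searched in windows advanced by the group length A+2 = 5 frames, in the spirit of relation three.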
Preferably, the receiving end determines a received video frame as a frame header from among the received video frames of the frames, and divides the (a +2) frame received video frames into a group from the received video frame as the frame header, and further includes:
the receiving end respectively carries out spatial synchronization on each group of received video frames;
the receiving end determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames as the optimal difference frame, and the method comprises the following steps:
and the receiving end determines the difference frame with the maximum spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
A secondary imaging hybrid frame information parsing system, comprising: a sending end and a receiving end;
the sending end is used for determining an embedded information video frame group, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame, each frame video frame in the embedded information video frame group is used as a video frame to be sent, and the video frame to be sent is sent to a receiving end, wherein A is an integer larger than 0;
the receiving end is configured to:
receiving the video frame to be sent, taking the received video frame as a received video frame, wherein the received video frame and the video frame to be sent meet a mixed frame model;
determining a received video frame as a frame header from among the received video frames of the frames, and dividing the (A +2) frame received video frames into a group from the received video frame as the frame header;
determining a difference frame with the maximum spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and carrying out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
Preferably, said A is equal to 3;
the sending end is specifically configured to utilize the embedded information model

x_k^m(i, j) = v_k^m(i, j),  m = 1, 2, 3

x_k^m(i, j) = v_k^m(i, j) + (−1)^m · Δ · d_k(i, j),  m = 4, 5

to determine the set of embedded information video frames;
where v_k^m(i, j) represents the m-th frame video frame of the k-th group in the original video; d_k(i, j) represents the embedded data; x_k^m(i, j) is the video frame obtained by superposing the original video frame and the embedded data; i and j represent the horizontal and vertical coordinates of the image pixel in the m-th frame video frame of the k-th group in the original video, with 1 ≤ i ≤ M and 1 ≤ j ≤ N, where M and N represent the maximum horizontal and vertical coordinates of the image pixels in that frame; d_k(i, j) takes the values 1 and −1 with equal probability and is constant within each block, the image in the m-th frame video frame of the k-th group being divided into B_1 blocks in the vertical direction and B_2 blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 indicates that the 1st, 2nd and 3rd frame video frames are the original video frames; m = 4 and m = 5 indicate that the 4th and 5th frame video frames are the complementary embedded information video frames.
Preferably, the hybrid frame model is:
y_k(i, j) = λ_k(i, j) · x_k(i, j) + (1 − λ_k(i, j)) · x_{k+1}(i, j) + n_k(i, j)

where y_k(i, j) represents the received video frame; λ_k(i, j) represents the mixing factor, with 0 ≤ λ_k(i, j) ≤ 1; n_k(i, j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i, j) represents the k-th frame video frame sent by the sending end; and x_{k+1}(i, j) represents the (k+1)-th frame video frame sent by the sending end.
Preferably, the process of determining, by the receiving end, the received video frame as the frame header from among the received video frames of each frame specifically includes:
using relation one

k̂_i = argmin_{ 5(i−1)+2 ≤ k ≤ 5(i−1)+6 } M(k, k − 1)

calculating the i-th frame header estimation value k̂_i, where M(k, k − 1) represents the spatial mean of the absolute value of the difference frame between the k-th received video frame and the (k−1)-th received video frame, argmin{ } represents the value of k at which M(k, k − 1) takes its minimum (for the first group, under the condition 2 ≤ k ≤ 6), and i is a positive integer;

using relation two

var{ M(k̂_i, k̂_i − 1) }

calculating the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimation values, where var{ } represents the variance function;
judging whether the variance is smaller than a preset threshold value;
if so, determining the k̂_i-th received video frames to be the received video frames serving as frame headers;
if not, using relation three

k̂_i = argmin_{ k̂_1 + 5(i−1) − 1 ≤ k ≤ k̂_1 + 5(i−1) + 1 } M(k, k − 1)

calculating the i-th frame header estimation value, where M(k, k − 1) is as defined above, argmin{ } represents the value of k at which M(k, k − 1) takes its minimum under the stated condition, and k̂_1 represents the first frame header estimation value calculated according to relation one;
determining the k̂_i-th received video frames to be the received video frames serving as frame headers.
Preferably, the receiving end is further configured to:
respectively carrying out spatial synchronization on each group of received video frames;
the process that the receiving end determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames as the optimal difference frame specifically comprises the following steps:
and the receiving end determines the difference frame with the maximum spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
Compared with the prior art, the beneficial effects of the present application are as follows:
in the method, an embedded information video frame group is determined through a sending end, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, and embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame; the sending end takes each frame video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to a receiving end; the receiving end receives the video frame to be sent and takes the received video frame as a received video frame; the receiving end determines a received video frame serving as a frame header from the received video frames of each frame, and divides the (A +2) frame received video frames into a group from the received video frame serving as the frame header; the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame; and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result, wherein the decoding judgment result is embedded information in the video frame sent by the sending end, and the receiving end can be ensured to analyze the information transmitted by the sending end.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the description of the embodiments are briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive labor.
FIG. 1 is a flow chart of a method for analyzing secondary imaging mixed frame information provided by the present application;
FIG. 2 is another flow chart of a method for analyzing secondary imaging mixed frame information provided by the present application;
fig. 3 is a schematic logical structure diagram of a secondary imaging hybrid frame information analysis apparatus according to the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The embodiment of the application discloses a secondary imaging mixed frame information analysis method, which comprises the following steps: a sending end determines an embedded information video frame group, wherein the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame, and A is an integer greater than 0; the sending end takes each frame video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to a receiving end; the receiving end receives the video frame to be sent and takes the received video frame as a received video frame; the receiving end determines a received video frame serving as a frame header from the received video frames of each frame, and divides the (A +2) frame received video frames into a group from the received video frame serving as the frame header; the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame; and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result. The method and the device can ensure that the receiving end can analyze the information transmitted by the transmitting end.
The embodiment of the present application discloses a method for analyzing secondary imaging mixed frame information, please refer to fig. 1, which may include:
step S11, the sending end determines an embedded information video frame group, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, and embedded information included in the first embedded information video frame and embedded information included in the second embedded information video frame are complementary in positive and negative directions.
Here, A is an integer greater than 0.
In this embodiment, the first and second embedded information video frames are illustrated by an example: the first embedded information video frame is V + D and the second embedded information video frame is V − D, where V can be understood as an original video frame and D as the embedded information.
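The V + D / V − D construction above can be sketched in a few lines of Python. The frame size, data pattern and strength Δ = 4.0 below are illustrative values, not values from the application:

```python
import numpy as np

def build_frame_group(v, d, delta=4.0):
    """One embedded-information video frame group for A = 3:
    three unmodified original frames V, then V + delta*d and
    V - delta*d, whose embedded information is positive/negative
    complementary."""
    return [v, v, v, v + delta * d, v - delta * d]

# Toy example: a flat 8x8 gray frame carrying a single +1 bit.
v = np.full((8, 8), 128.0)
d = np.ones((8, 8))
group = build_frame_group(v, d)
```

Averaging the two complementary frames recovers V, which is what keeps the embedded data visually imperceptible, while differencing them isolates 2·Δ·d, which is what the receiving end exploits.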
Step S12, the sending end takes each video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to the receiving end.
Step S13, the receiving end receives the video frame to be sent, and uses the received video frame as a received video frame, and the received video frame and the video frame to be sent satisfy a mixed frame model.
Step S14, the receiving end determines the received video frame as the frame header from the received video frames of each frame, and divides the (a +2) received video frames into a group starting from the received video frame as the frame header.
Dividing the received video frames into groups of (A+2) frames ensures that the number of frames in each divided group of received video frames is consistent with the number of frames in each group of video frames sent by the sending end.
Step S15, the receiving end determines the difference frame with the largest spatial mean of the absolute values in each group of received video frames as the best difference frame.
A difference frame can be understood as the difference between two different video frames.
The difference frame is a matrix; taking the absolute value of each element and averaging these absolute values yields the spatial mean of absolute values.
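Under the assumption that difference frames are taken between consecutive received frames within a group (the description only says "two different video frames", so this pairing is an illustrative choice), selecting the best difference frame might look like:

```python
import numpy as np

def best_difference_frame(group):
    """Return (difference frame, k) where frames[k] - frames[k-1]
    maximizes the spatial mean of absolute values over the
    consecutive pairs of one received group."""
    diffs = [group[k] - group[k - 1] for k in range(1, len(group))]
    means = [float(np.mean(np.abs(d))) for d in diffs]
    k_best = int(np.argmax(means))
    return diffs[k_best], k_best + 1
```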
Step S16, the receiving end performs the decoding decision on the best difference frame to obtain a decoding decision result.
It should be noted that the decoding decision result is embedded information in the video frames (the first embedded information video frame and the second embedded information video frame) sent by the sending end.
The receiving end performs the decoding decision on the best difference frame; this maximizes the signal-to-noise ratio at the receiving end and thereby ensures the accuracy of the decoding decision result.
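A plausible decoding decision for the block-wise ±1 embedding is a per-block sign test on the best difference frame. This sketch is an assumption, not the claimed decision rule: it presumes the frame divides evenly into B1 × B2 blocks and ignores the overall sign ambiguity of the difference (which frame of the complementary pair came first).

```python
import numpy as np

def decode_blocks(diff, b1, b2):
    """Recover one +/-1 bit per block as the sign of the block mean
    of the best difference frame (b1 blocks vertically, b2 blocks
    horizontally)."""
    h, w = diff.shape
    bh, bw = h // b1, w // b2
    bits = np.empty((b1, b2), dtype=int)
    for p in range(b1):
        for q in range(b2):
            block = diff[p * bh:(p + 1) * bh, q * bw:(q + 1) * bw]
            bits[p, q] = 1 if block.mean() >= 0 else -1
    return bits
```

Averaging over a whole block before taking the sign is what makes the decision robust to the additive Gaussian noise of the mixed frame model.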
In the method, an embedded information video frame group is determined through a sending end, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, and embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame; the sending end takes each frame video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to a receiving end; the receiving end receives the video frame to be sent, the received video frame is used as a received video frame, and the received video frame and the video frame to be sent meet a mixed frame model; the receiving end determines a received video frame serving as a frame header from the received video frames of each frame, and divides the (A +2) frame received video frames into a group from the received video frame serving as the frame header; the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame; and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result, wherein the decoding judgment result is embedded information in the video frame sent by the sending end, and the receiving end can be ensured to analyze the information transmitted by the sending end.
In another embodiment of the present application, the value of A may be specified as follows:
Preferably, A may be equal to 3. Of course, the size of A can be adjusted according to the sending frame model to achieve the best possible transmission performance.
Corresponding to the embodiment where a is equal to 3, the determining, by the transmitting end, the set of embedded information video frames may include:
the sending end determines the set of embedded information video frames by using the embedded information model

s_k^m(i,j) = x_k^m(i,j), m = 1, 2, 3,
s_k^m(i,j) = x_k^m(i,j) + (−1)^m·Δ·d_k(i,j), m = 4, 5.

Wherein x_k^m(i,j) represents the mth frame video frame of the kth group in the original video; d_k(i,j) represents the embedded data; s_k^m(i,j) is the video frame obtained by superimposing the embedded data on the original video frame; i represents the horizontal coordinate of the image pixels in the kth group, mth frame video frame of the original video, and j represents the vertical coordinate, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixels in the kth group, mth frame video frame of the original video; d_k(i,j) takes the values 1 and −1 with equal probability and is constant within each spatial block, B1 representing the number of blocks into which the image in the kth group, mth frame video frame is divided in the vertical direction and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 indicates that the 1st, 2nd and 3rd frame video frames are the original video frames; and m = 4 and m = 5 indicate that the 4th and 5th frame video frames are the complementary video frames.
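As a sketch of the A = 3 case, constructing one five-frame group (three original frames plus two complementary embedded-information frames) can be illustrated as follows. This assumes numpy; the function name `build_frame_group`, the choice of the last original frame as the base for the embedded frames, and the block layout are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def build_frame_group(originals, data_bits, delta):
    """Build one 5-frame group: 3 original frames plus two frames carrying
    the same embedded data with opposite signs (positive/negative
    complementary), a sketch of the A = 3 case.

    originals : list of 3 arrays of shape (M, N), the original frames
    data_bits : array of shape (B1, B2) with entries +1/-1 (equiprobable)
    delta     : embedding strength
    """
    M, N = originals[0].shape
    B1, B2 = data_bits.shape
    # Expand the B1 x B2 data matrix so each bit covers one spatial block.
    pattern = np.kron(data_bits, np.ones((M // B1, N // B2)))
    base = originals[2]  # assumption: embedded frames reuse the last original
    plus = base + delta * pattern   # first embedded-information frame
    minus = base - delta * pattern  # second, sign-complementary frame
    return originals + [plus, minus]

rng = np.random.default_rng(0)
originals = [rng.uniform(0, 255, (8, 8)) for _ in range(3)]
bits = rng.choice([-1.0, 1.0], size=(2, 2))
group = build_frame_group(originals, bits, delta=4.0)
# The two embedded frames cancel: their mean equals the base frame.
assert np.allclose((group[3] + group[4]) / 2, originals[2])
```

The cancellation checked at the end is exactly the positive/negative complementarity the method relies on: averaging the two embedded frames removes the data and leaves the original content.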
Based on the introduction of the preset sending frame model in the foregoing embodiment, in another embodiment of the present application, the hybrid frame model may specifically be:

y_k(i,j) = λ_k(i,j)·x_k(i,j) + (1 − λ_k(i,j))·x_{k+1}(i,j) + n_k(i,j)

wherein y_k(i,j) represents the received video frame; λ_k(i,j) represents a mixing factor, with 0 ≤ λ_k(i,j) ≤ 1; n_k(i,j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i,j) represents the kth frame video frame sent by the sending end; and x_{k+1}(i,j) represents the (k+1)th frame video frame sent by the sending end.

The mixing factor represents the proportion in which two adjacent video frames at the sending end are blended in the received video frame at the receiving end.
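The mixed frame model above can be sketched numerically as follows (numpy assumed; the name `mix_frames` is illustrative, and a spatially constant mixing factor is used for simplicity even though λ_k(i,j) is pixel-wise in general):

```python
import numpy as np

def mix_frames(x_k, x_k1, lam, sigma, rng):
    """Secondary-imaging mixed frame model (sketch): the received frame is a
    pixel-wise blend of two adjacent transmitted frames plus additive white
    Gaussian noise.  lam plays the role of lambda_k(i,j), 0 <= lam <= 1."""
    noise = rng.normal(0.0, sigma, x_k.shape)
    return lam * x_k + (1.0 - lam) * x_k1 + noise

rng = np.random.default_rng(1)
x_k = np.full((4, 4), 10.0)
x_k1 = np.full((4, 4), 20.0)
y = mix_frames(x_k, x_k1, lam=0.25, sigma=0.0, rng=rng)
# With no noise and lam = 0.25 the received frame is 0.25*10 + 0.75*20 = 17.5
assert np.allclose(y, 17.5)
```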
Based on the introduction of the hybrid frame model in the foregoing embodiment, another embodiment of the present application introduces the process by which the receiving end determines the received video frame serving as the frame header from among the received video frames, which may specifically include:

A11, calculating the ith frame header estimate k̂_i using relation one

k̂_i = argmin_{2 ≤ k ≤ 6} M(k, k−1),

where M(k, k−1) represents the spatial mean of the absolute values of the difference frame between the kth received video frame and the (k−1)th received video frame; argmin{·} represents the value of the variable k at which M(k, k−1) attains its minimum under the condition 2 ≤ k ≤ 6; and k̂_i denotes the ith frame header estimate, with i an integer greater than 1.

Relation one computes the frame header estimate by exploiting the property that the spatial mean of the absolute values of the starting difference frame is minimal.
In this embodiment, the determination of M(k, k−1) may specifically proceed as follows. From the mixed frame model, the spatial mean of the absolute values of the difference frame between the kth received video frame and the (k−1)th received video frame, denoted M(k, k−1), is

M(k, k−1) = (1/(M·N)) Σ_{i=1}^{M} Σ_{j=1}^{N} |y_k(i,j) − y_{k−1}(i,j)|,

where i and j represent the pixel coordinates in the two-dimensional image matrix.
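The spatial mean M(k, k−1) reduces to a one-line computation; the sketch below assumes numpy and an illustrative function name:

```python
import numpy as np

def abs_diff_mean(y_k, y_km1):
    """Spatial mean of the absolute value of the difference frame:
    M(k, k-1) = (1 / (M*N)) * sum over i, j of |y_k(i,j) - y_{k-1}(i,j)|."""
    return np.mean(np.abs(y_k - y_km1))

a = np.zeros((2, 2))
b = np.array([[1.0, -1.0], [3.0, -3.0]])
# |0-1| + |0-(-1)| + |0-3| + |0-(-3)| = 8, divided by 4 pixels = 2.0
assert abs_diff_mean(a, b) == 2.0
```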
Assume that a random variable X (namely, the pixel-wise value of the difference frame) obeys a normal distribution, denoted X ~ N(μ, σ²). Then

E{|X|} = σ·√(2/π)·exp(−μ²/(2σ²)) + μ·(1 − 2Q(μ/σ)).

When X ~ N(−μ, σ²), the same formula holds with μ replaced by −μ. That is, when X ~ N(±μ, σ²),

E{|X|} = σ·√(2/π)·exp(−μ²/(2σ²)) + |μ|·(1 − 2Q(|μ|/σ)),

where μ represents the mean, N(·) represents the normal distribution, and Q(·) represents the Q-function used in communications, i.e.

Q(x) = (1/√(2π)) ∫_x^∞ exp(−t²/2) dt.
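The folded-normal mean E{|X|} and the Q-function can be checked numerically. The sketch below is an illustration of the formulas above, not code from the patent; it uses the standard identity Q(x) = erfc(x/√2)/2 and the symmetry in the sign of μ that the text notes.

```python
import math

def Q(x):
    """Gaussian Q-function, Q(x) = 1 - Phi(x), via the complementary
    error function: Q(x) = erfc(x / sqrt(2)) / 2."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def folded_normal_mean(mu, sigma):
    """E|X| for X ~ N(mu, sigma^2).  By symmetry the result depends only on
    |mu|, which is why X ~ N(+mu, sigma^2) and X ~ N(-mu, sigma^2) give the
    same value."""
    mu = abs(mu)
    return (sigma * math.sqrt(2.0 / math.pi)
            * math.exp(-mu * mu / (2.0 * sigma * sigma))
            + mu * (1.0 - 2.0 * Q(mu / sigma)))

# mu = 0 reduces to the half-normal mean, sigma * sqrt(2/pi).
assert math.isclose(folded_normal_mean(0.0, 1.0), math.sqrt(2.0 / math.pi))
# Symmetry in the sign of mu.
assert math.isclose(folded_normal_mean(2.0, 1.0), folded_normal_mean(-2.0, 1.0))
```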
A12, calculating, using relation two

var{ M(k̂_i, k̂_i − 1) },

the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimates, where k̂_i represents the ith frame header estimate and var{·} represents the variance function.

Relation two can be understood as follows: the variance is calculated by using the property that the spatial mean of the absolute values of the starting difference frame remains constant.
A13, judging whether the variance is smaller than a preset threshold value. If yes, performing step A14; if not, performing step A15.

A14, determining that the k̂_i-th received video frames are the received video frames serving as frame headers.
A15, calculating the ith frame header estimate using relation three

k̂_i = argmin_{k ∈ K_i} M(k, k−1),

where M(k, k−1) represents the spatial mean of the absolute values of the difference frame between the kth received video frame and the (k−1)th received video frame; argmin{·} represents the value of the variable k at which M(k, k−1) attains its minimum over the search range K_i, a range determined by the first frame header estimate k̂_1 calculated according to relation one; and k̂_i denotes the ith frame header estimate, with i an integer greater than 1.

This step excludes the erroneous judgments that may be caused when the variance test of step A13 fails.

A16, determining that the k̂_i-th received video frames are the received video frames serving as frame headers.
This embodiment determines the received video frame serving as the frame header by using both the property that the spatial mean of the absolute values of the starting difference frame is minimal and the property that it remains constant, which improves the reliability of frame header identification.
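The frame-header search of steps A11–A16 can be sketched in a minimal form. The code below implements only the argmin step of relation one on synthetic frames (numpy assumed; the function name, the synthetic test setup, and 1-indexed frame numbering are illustrative assumptions):

```python
import numpy as np

def find_first_header(frames):
    """Estimate the first frame header as the k (2 <= k <= 6, 1-indexed)
    minimising M(k, k-1), the spatial mean of the absolute difference frame.
    A sketch of relation one; the window bound 6 follows from the A = 3,
    five-frame group structure."""
    scores = {k: np.mean(np.abs(frames[k - 1] - frames[k - 2]))
              for k in range(2, 7)}
    return min(scores, key=scores.get)

# Synthetic check: make the 4th and 5th frames (1-indexed) nearly identical,
# so the starting difference frame with the smallest mean occurs at k = 5.
rng = np.random.default_rng(2)
frames = [rng.uniform(0, 255, (8, 8)) for _ in range(8)]
frames[4] = frames[3] + 0.01
assert find_first_header(frames) == 5
```

In a full receiver this estimate would then be checked with the variance test of steps A12 and A13 before grouping frames.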
Based on the foregoing embodiments, another embodiment of the present application introduces the process by which the receiving end determines the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame, which specifically includes:

When the mixing factors satisfy the first condition (given in the original as an equation image), the difference frame with the largest spatial mean of absolute values is y_{k4} − y_{k3}, whose expansion follows from the mixed frame model. Here y_{k3} is the k3-th received video frame, y_{k4} is the k4-th received video frame, λ_{k4} is the mixing factor of the k4-th received video frame, n_{k4} is the additive white Gaussian noise of the k4-th received video frame, n_{k3} is the additive white Gaussian noise of the k3-th received video frame, and d_{k4} is the embedded data in the k4-th received video frame.

When the mixing factors satisfy the second condition (given in the original as an equation image), the difference frame with the largest spatial mean of absolute values is y_{k5} − y_{k4}. Here y_{k5} is the k5-th received video frame, y_{k4} is the k4-th received video frame, λ_{k5} is the mixing factor of the k5-th received video frame, n_{k4} is the additive white Gaussian noise of the k4-th received video frame, n_{k5} is the additive white Gaussian noise of the k5-th received video frame, and d_{k4} is the embedded data in the k4-th received video frame.
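Selecting the optimal difference frame reduces to scanning adjacent difference frames within a group and keeping the one with the largest spatial mean of absolute values. A minimal sketch (numpy assumed; names and the toy group are illustrative):

```python
import numpy as np

def best_difference_frame(group):
    """Within one group of received frames, return the difference frame
    whose absolute values have the largest spatial mean, i.e. the 'optimal
    difference frame' carrying the strongest embedded-data signature."""
    diffs = [group[k + 1] - group[k] for k in range(len(group) - 1)]
    means = [np.mean(np.abs(d)) for d in diffs]
    return diffs[int(np.argmax(means))]

group = [np.zeros((4, 4)), np.ones((4, 4)), np.full((4, 4), 10.0)]
best = best_difference_frame(group)
# The 1 -> 10 step has the larger mean absolute difference (9 versus 1).
assert np.allclose(best, 9.0)
```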
Based on the foregoing embodiments, another embodiment of the present application introduces the decoding decision performed by the receiving end on the optimal difference frame to obtain a decoding decision result, which specifically includes:

The receiving end divides the optimal difference frame spatially into B1 × B2 blocks and calculates the mean value of each block. If a block mean is greater than 0, that block is decided as 1; if a block mean is less than 0, it is decided as 0. The decoding decision result can then be understood as the matrix formed by the block decisions.
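The block-wise decoding decision just described can be sketched as follows (numpy assumed; the function name and the synthetic difference frame are illustrative):

```python
import numpy as np

def decode_difference_frame(diff, B1, B2):
    """Split the optimal difference frame into B1 x B2 spatial blocks and
    decide 1 where a block mean is positive, 0 where it is negative."""
    M, N = diff.shape
    bits = np.empty((B1, B2), dtype=int)
    for b1 in range(B1):
        for b2 in range(B2):
            block = diff[b1 * M // B1:(b1 + 1) * M // B1,
                         b2 * N // B2:(b2 + 1) * N // B2]
            bits[b1, b2] = 1 if block.mean() > 0 else 0
    return bits

# A synthetic 8x8 difference frame with 2x2 blocks of means 3, -2, -1, 4.
diff = np.kron(np.array([[3.0, -2.0], [-1.0, 4.0]]), np.ones((4, 4)))
assert (decode_difference_frame(diff, 2, 2) == [[1, 0], [0, 1]]).all()
```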
In another embodiment of the present application, another method for parsing secondary imaging mixed frame information is introduced, and referring to fig. 2, the method may include:
step S21, the sending end determines an embedded information video frame group, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, and embedded information included in the first embedded information video frame and embedded information included in the second embedded information video frame are complementary in positive and negative directions.
And A is an integer greater than 0.
And step S22, the sending end takes each frame video frame in the embedded information video frame group as a video frame to be sent and sends the video frame to be sent to a receiving end.
Step S23, the receiving end receives the video frame to be sent, and uses the received video frame as a received video frame, and the received video frame and the video frame to be sent satisfy a mixed frame model.
Step S24, the receiving end determines the received video frame serving as the frame header from among the received video frames of each frame, and divides the (A + 2) frames of received video frames into a group starting from the received video frame serving as the frame header.
Steps S21-S24 are the same as steps S11-S14 in the previous embodiment, and the detailed procedures of steps S21-S24 can be referred to the related descriptions of steps S11-S14, and are not described herein again.
Step S25, the receiving end performs spatial synchronization on each group of received video frames respectively.
The receiving end performs spatial synchronization on each group of received video frames separately, which corrects each group of received video frames and improves the reliability of their content.
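The patent does not fix a particular spatial synchronization method. One simple possibility, shown purely as an assumption, is an exhaustive integer-shift alignment of each frame against a reference:

```python
import numpy as np

def align_by_shift(ref, frame, max_shift=2):
    """Illustrative spatial synchronisation: find the integer (dy, dx) shift
    of `frame` that best matches `ref` by exhaustive search over a small
    window, then undo it with np.roll.  Real receivers might instead use
    cross-correlation or feature matching."""
    best, best_err = (0, 0), np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((np.roll(frame, (dy, dx), axis=(0, 1)) - ref) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return np.roll(frame, best, axis=(0, 1))

rng = np.random.default_rng(3)
ref = rng.uniform(0, 255, (8, 8))
shifted = np.roll(ref, (1, -2), axis=(0, 1))
assert np.allclose(align_by_shift(ref, shifted), ref)
```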
Step S26, the receiving end determines the difference frame with the largest spatial mean of the absolute values in each group of received video frames after spatial synchronization, as the optimal difference frame.
Step S26 is a specific implementation manner of step S15 in the previous embodiment.
And step S27, the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
Step S27 is the same as step S16 in the previous embodiment, and the detailed process of step S27 can be referred to the related description of step S16, which is not repeated herein.
Next, a description is given of a secondary imaging mixed frame information analysis system provided in the present application, and a secondary imaging mixed frame information analysis system described below and a secondary imaging mixed frame information analysis method described above may be referred to in correspondence with each other.
Referring to fig. 3, a schematic diagram of a logical structure of a secondary imaging hybrid frame information parsing system provided in the present application is shown, where the secondary imaging hybrid frame information parsing system includes: a transmitting end 11 and a receiving end 12.
The sending end 11 is configured to determine an embedded information video frame group, where the embedded information video frame group includes A frames of original video, a first embedded information video frame and a second embedded information video frame, the embedded information included in the first embedded information video frame being positive and negative complementary with the embedded information included in the second embedded information video frame; to take each frame video frame in the embedded information video frame group as a video frame to be sent; and to send the video frame to be sent to the receiving end 12, where A is an integer greater than 0.
The receiving end 12 is configured to:
receiving the video frame to be sent, and taking the received video frame as a received video frame;
determining a received video frame as a frame header from among the received video frames of the frames, and dividing the (A +2) frame received video frames into a group from the received video frame as the frame header;
determining a difference frame with the maximum spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and performing decoding judgment on the optimal difference frame to obtain a decoding judgment result, and determining embedded data in the video frame sent by the sending end 11 according to the decoding judgment result.
In this embodiment, A may be equal to 3. Accordingly, the sending end is particularly adapted to determine the set of embedded information video frames by using the embedded information model

s_k^m(i,j) = x_k^m(i,j), m = 1, 2, 3,
s_k^m(i,j) = x_k^m(i,j) + (−1)^m·Δ·d_k(i,j), m = 4, 5.

Wherein x_k^m(i,j) represents the mth frame video frame of the kth group in the original video; d_k(i,j) represents the embedded data; s_k^m(i,j) is the video frame obtained by superimposing the embedded data on the original video frame; i represents the horizontal coordinate of the image pixels in the kth group, mth frame video frame of the original video, and j represents the vertical coordinate, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixels in the kth group, mth frame video frame of the original video; d_k(i,j) takes the values 1 and −1 with equal probability and is constant within each spatial block, B1 representing the number of blocks into which the image in the kth group, mth frame video frame is divided in the vertical direction and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 indicates that the 1st, 2nd and 3rd frame video frames are the original video frames; and m = 4 and m = 5 indicate that the 4th and 5th frame video frames are the complementary video frames.
In this embodiment, the hybrid frame model may be:

y_k(i,j) = λ_k(i,j)·x_k(i,j) + (1 − λ_k(i,j))·x_{k+1}(i,j) + n_k(i,j)

wherein y_k(i,j) represents the received video frame; λ_k(i,j) represents a mixing factor, with 0 ≤ λ_k(i,j) ≤ 1; n_k(i,j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i,j) represents the kth frame video frame sent by the sending end 11; and x_{k+1}(i,j) represents the (k+1)th frame video frame sent by the sending end 11.
In this embodiment, the process by which the receiving end 12 determines the received video frame serving as the frame header from among the received video frames may specifically include:

calculating the ith frame header estimate k̂_i using relation one

k̂_i = argmin_{2 ≤ k ≤ 6} M(k, k−1),

where M(k, k−1) represents the spatial mean of the absolute values of the difference frame between the kth received video frame and the (k−1)th received video frame, argmin{·} represents the value of the variable k at which M(k, k−1) attains its minimum under the condition 2 ≤ k ≤ 6, and k̂_i denotes the ith frame header estimate, with i an integer greater than 1;

calculating, using relation two

var{ M(k̂_i, k̂_i − 1) },

the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimates, where var{·} represents the variance function;

judging whether the variance is smaller than a preset threshold value;

if so, determining that the k̂_i-th received video frames are the received video frames serving as frame headers;

if not, calculating the ith frame header estimate using relation three

k̂_i = argmin_{k ∈ K_i} M(k, k−1),

where the search range K_i is determined by the first frame header estimate k̂_1 calculated according to relation one; and

determining that the k̂_i-th received video frames are the received video frames serving as frame headers.
In this embodiment, the receiving end 12 may further be configured to:
and respectively carrying out spatial synchronization on each group of received video frames.
Correspondingly, the process of determining, by the receiving end 12, the difference frame with the largest spatial mean of the absolute values in each group of received video frames as the optimal difference frame may specifically include:
the receiving end 12 determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
Finally, it should also be noted that, herein, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functionality of the units may be implemented in one or more software and/or hardware when implementing the present application.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus necessary general hardware platform. Based on such understanding, the technical solutions of the present application may be essentially or partially implemented in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the embodiments or some parts of the embodiments of the present application.
The method and the system for analyzing the secondary imaging mixed frame information provided by the application are described in detail above, specific examples are applied in the description to explain the principle and the implementation of the application, and the description of the above embodiments is only used to help understand the method and the core idea of the application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (6)

1. A secondary imaging mixed frame information analysis method is characterized by comprising the following steps:
a sending end determines an embedded information video frame group, wherein the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame, and A is an integer greater than 0;
the sending end takes each frame of video frames in the embedded information video frame group as video frames to be sent and sends the video frames to be sent to a receiving end;
the receiving end receives the video frame to be sent and takes the received video frame as a received video frame, the relation between the received video frame and the video frame to be sent satisfying a mixed frame model, wherein the mixed frame model is:

y_k(i,j) = λ_k(i,j)·x_k(i,j) + (1 − λ_k(i,j))·x_{k+1}(i,j) + n_k(i,j)

wherein y_k(i,j) represents the received video frame; λ_k(i,j) represents a mixing factor, and 0 ≤ λ_k(i,j) ≤ 1; n_k(i,j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i,j) represents the kth frame video frame sent by the sending end; x_{k+1}(i,j) represents the (k+1)th frame video frame sent by the sending end;
the receiving end determines a received video frame serving as a frame header from among the received video frames of each frame, and divides the (A + 2) frames of received video frames into a group starting from the received video frame serving as the frame header, the determining by the receiving end of the received video frame serving as the frame header comprising: calculating the ith frame header estimate k̂_i using relation one

k̂_i = argmin_{2 ≤ k ≤ 6} M(k, k−1),

wherein M(k, k−1) represents the spatial mean of the absolute values of the difference frame between the kth received video frame and the (k−1)th received video frame, argmin{·} represents the value of the variable k at which M(k, k−1) attains its minimum under the condition 2 ≤ k ≤ 6, and k̂_i denotes the ith frame header estimate, i being an integer greater than 1; calculating, using relation two

var{ M(k̂_i, k̂_i − 1) },

the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimates, wherein var{·} represents the variance function; judging whether the variance is smaller than a preset threshold value; if so, determining that the k̂_i-th received video frames are the received video frames serving as frame headers; if not, calculating the ith frame header estimate using relation three

k̂_i = argmin_{k ∈ K_i} M(k, k−1),

wherein the search range K_i is determined by the first frame header estimate k̂_1 calculated according to relation one; and determining that the k̂_i-th received video frames are the received video frames serving as frame headers;
the receiving end determines a difference frame with the largest spatial mean value of absolute values in each group of received video frames as an optimal difference frame;
and the receiving end carries out decoding judgment on the optimal difference frame to obtain a decoding judgment result.
2. The method of claim 1, wherein A is equal to 3;
the determining of the embedded information video frame group by the sending end comprises the following steps:
the sending end determines the set of embedded information video frames by using the embedded information model

s_k^m(i,j) = x_k^m(i,j), m = 1, 2, 3,
s_k^m(i,j) = x_k^m(i,j) + (−1)^m·Δ·d_k(i,j), m = 4, 5;

wherein x_k^m(i,j) represents the mth frame video frame of the kth group in the original video; d_k(i,j) represents the embedded data; s_k^m(i,j) is the video frame obtained by superimposing the embedded data on the original video frame; i represents the horizontal coordinate of the image pixels in the kth group, mth frame video frame of the original video, and j represents the vertical coordinate, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate and N the maximum vertical coordinate of the image pixels in the kth group, mth frame video frame of the original video; d_k(i,j) takes the values 1 and −1 with equal probability and is constant within each spatial block, B1 representing the number of blocks into which the image in the kth group, mth frame video frame is divided in the vertical direction and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 indicates that the 1st, 2nd and 3rd frame video frames are the original video frames; and m = 4 and m = 5 indicate that the 4th and 5th frame video frames are the complementary video frames.
3. The method according to claim 1, wherein after the receiving end determines a received video frame serving as a frame header from among the received video frames of each frame, and divides the (A + 2) frames of received video frames into a group starting from the received video frame serving as the frame header, the method further comprises:
the receiving end respectively carries out spatial synchronization on each group of received video frames;
the receiving end determines the difference frame with the largest spatial mean value of the absolute values in each group of received video frames as the optimal difference frame, and the method comprises the following steps:
and the receiving end determines the difference frame with the maximum spatial mean value of the absolute values in each group of received video frames after spatial synchronization as the optimal difference frame.
4. A secondary imaging hybrid frame information parsing system, comprising: a sending end and a receiving end;
the sending end is used for determining an embedded information video frame group, the embedded information video frame group comprises an A frame original video frame, a first embedded information video frame and a second embedded information video frame, embedded information included in the first embedded information video frame is positive and negative complementary with embedded information included in the second embedded information video frame, each frame video frame in the embedded information video frame group is used as a video frame to be sent, and the video frame to be sent is sent to a receiving end, wherein A is an integer larger than 0;
the receiving end is configured to:
receiving the video frame to be sent, and taking the received video frame as a received video frame, wherein the received video frame and the video frame to be sent satisfy a mixed frame model, the mixed frame model being:

y_k(i,j) = λ_k(i,j)·x_k(i,j) + (1 − λ_k(i,j))·x_{k+1}(i,j) + n_k(i,j)

wherein y_k(i,j) represents the received video frame; λ_k(i,j) represents a mixing factor, and 0 ≤ λ_k(i,j) ≤ 1; n_k(i,j) represents additive white Gaussian noise with mean 0 and variance σ²; x_k(i,j) represents the kth frame video frame sent by the sending end; x_{k+1}(i,j) represents the (k+1)th frame video frame sent by the sending end;
determining, from the received video frames, the received video frame serving as a frame header, and dividing the received video frames into groups of (A+2) frames starting from the received video frame serving as the frame header; wherein determining the received video frame serving as the frame header from the video frames received by the receiving end specifically comprises:

calculating the i-th frame header estimate using relation 1

k̂_i = arg min_{2 ≤ k ≤ 6} M(k, k−1)

wherein M(k, k−1) represents the spatial mean of the absolute value of the difference frame between the k-th and (k−1)-th received video frames, arg min{·} returns the value of k at which M(k, k−1) attains its minimum under the condition 2 ≤ k ≤ 6, and k̂_i denotes the i-th frame header estimate, i being an integer greater than 1;

calculating, using relation 2

var{ M(k̂_i, k̂_i − 1) }

the variance of the spatial means of the absolute values of the difference frames between the received video frames corresponding to the frame header estimates, wherein k̂_i denotes the i-th frame header estimate and var{·} denotes the variance function;

judging whether the variance is smaller than a preset threshold; if so, determining the k̂_i-th received video frame to be the received video frame serving as the frame header; if not, calculating the i-th frame header estimate using relation 3

k̂_i = arg min_k M(k, k−1)

wherein M(k, k−1) represents the spatial mean of the absolute value of the difference frame between the k-th and (k−1)-th received video frames, arg min{·} returns the value of k at which M(k, k−1) attains its minimum over a search window determined from the first frame header estimate k̂_1 calculated according to relation 1, and k̂_i denotes the i-th frame header estimate, i being an integer greater than 1; and determining the k̂_i-th received video frame to be the received video frame serving as the frame header;
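The frame-header search above reduces to scanning the spatial mean of absolute difference frames for its minimum, then checking whether the estimates are consistent via their variance. The sketch below shows the idea; the indices, toy data, and threshold are illustrative, not the patent's exact windowing.

```python
import numpy as np

def spatial_mean_abs_diff(frames, k):
    """M(k, k-1): spatial mean of |frame_k - frame_{k-1}| (k is 1-based)."""
    return float(np.mean(np.abs(frames[k - 1] - frames[k - 2])))

def first_header_estimate(frames):
    """Relation 1 sketch: argmin over 2 <= k <= 6 of M(k, k-1)."""
    return min(range(2, 7), key=lambda k: spatial_mean_abs_diff(frames, k))

def headers_consistent(frames, header_ks, threshold):
    """Relation 2 sketch: the variance of M at the estimated headers
    must fall below a preset threshold for the estimates to be trusted."""
    ms = [spatial_mean_abs_diff(frames, k) for k in header_ks]
    return float(np.var(ms)) < threshold

# Toy stream: frames 3 and 4 (1-based) are identical, so the minimal
# difference -- the header position -- lands at k = 4.
rng = np.random.default_rng(0)
frames = [rng.normal(size=(8, 8)) for _ in range(8)]
frames[3] = frames[2].copy()
print(first_header_estimate(frames))  # 4
```

A repeated frame produces a near-zero difference frame, which is why the minimum of M(k, k−1) marks the group boundary.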
determining the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame; and
performing a decoding decision on the optimal difference frame to obtain a decoding decision result.
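The last two receiver steps can be sketched as picking the consecutive difference frame with the largest spatial mean of absolute values, then deciding each embedded symbol from the sign of its block mean. This blockwise sign rule is an illustrative decoder consistent with the ±Δ block embedding of claim 5, not necessarily the patent's exact decision rule.

```python
import numpy as np

def optimal_difference_frame(group):
    """Return the consecutive difference frame with the largest spatial
    mean of absolute values within one group of received frames."""
    diffs = [group[k] - group[k - 1] for k in range(1, len(group))]
    return max(diffs, key=lambda d: float(np.mean(np.abs(d))))

def decode_blocks(diff, b1, b2):
    """Blockwise decision: the sign of each block's mean recovers the
    embedded +1/-1 symbol (illustrative decision rule)."""
    m, n = diff.shape
    bits = np.empty((b1, b2), dtype=int)
    for u in range(b1):
        for v in range(b2):
            block = diff[u * m // b1:(u + 1) * m // b1,
                         v * n // b2:(v + 1) * n // b2]
            bits[u, v] = 1 if block.mean() >= 0 else -1
    return bits

# Toy group: only the last pair differs, and it carries a 2x2 block pattern.
s = np.array([[1, -1], [-1, 1]])
group = [np.zeros((8, 8)), np.zeros((8, 8)), np.kron(s, np.ones((4, 4))) * 0.5]
d = optimal_difference_frame(group)
print((decode_blocks(d, 2, 2) == s).all())  # True
```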
5. The system of claim 4, wherein A is equal to 3;
the sending end is specifically configured to determine the embedded-information video frame group using the embedded information model

x̃_k^m(i,j) = x_k^m(i,j),  m = 1, 2, 3

x̃_k^m(i,j) = x_k^m(i,j) + (−1)^m·Δ·s_k(u,v),  m = 4, 5

wherein x_k^m(i,j) represents the m-th video frame of the k-th group in the original video; Δ·s_k(u,v) represents the embedded data; x̃_k^m(i,j) represents the video frame obtained by superimposing the original video frame and the embedded data; i represents the horizontal coordinate of an image pixel in the m-th video frame of the k-th group in the original video, and j the vertical coordinate of that pixel, with 1 ≤ i ≤ M and 1 ≤ j ≤ N; M represents the maximum horizontal coordinate of an image pixel in that frame, and N the maximum vertical coordinate; s_k(u,v) takes the values 1 and −1 with equal probability, where (u, v) indexes the block, among the B1 × B2 blocks, that contains pixel (i, j); B1 represents the number of blocks into which the image in the m-th video frame of the k-th group in the original video is divided in the vertical direction, and B2 the number of blocks in the horizontal direction; Δ represents the strength of the embedded data; m = 1, 2, 3 indicates that the 1st, 2nd and 3rd video frames are the original video frames; m = 4 and m = 5 indicate that the 4th and 5th video frames are the complementary video frames.
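Under the model of claim 5 (A = 3, blockwise ±1 symbols scaled by Δ), one embedded-information group can be generated as below. The sketch repeats a single original frame for simplicity and assumes the frame dimensions divide evenly into B1 × B2 blocks; the function name and values are illustrative.

```python
import numpy as np

def build_group(x, delta, s):
    """One embedded-information group: three original frames, then the two
    complementary frames x + delta*d and x - delta*d, where d expands the
    B1 x B2 block symbols s (each +1 or -1) to pixel resolution."""
    m, n = x.shape
    b1, b2 = s.shape
    d = np.kron(s, np.ones((m // b1, n // b2)))  # block symbols -> pixels
    return [x, x, x, x + delta * d, x - delta * d]

s = np.array([[1, -1], [-1, 1]])
group = build_group(np.full((8, 8), 100.0), 2.0, s)
print(group[3][0, 0], group[4][0, 0])  # 102.0 98.0
```

The mean of the two complementary frames equals the original (the embedding cancels, keeping it visually implicit), while their difference isolates 2Δ·d, which is what the receiver's optimal difference frame exploits.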
6. The system of claim 4, wherein the receiving end is further configured to:
perform spatial synchronization on each group of received video frames;
wherein the receiving end determining the difference frame with the largest spatial mean of absolute values in each group of received video frames as the optimal difference frame specifically comprises:
the receiving end determining, as the optimal difference frame, the difference frame with the largest spatial mean of absolute values in each group of spatially synchronized received video frames.
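The claim does not spell out how spatial synchronization is performed. A common approach, shown here purely as an illustrative sketch, is to estimate an integer pixel offset between a reference frame and a captured frame by maximizing their overlap correlation over a small search range.

```python
import numpy as np

def estimate_shift(ref, obs, max_shift=3):
    """Find the integer (dy, dx) shift of obs relative to ref that
    maximizes the correlation, searching |dy|, |dx| <= max_shift."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Undo the candidate shift and score the alignment.
            shifted = np.roll(np.roll(obs, -dy, axis=0), -dx, axis=1)
            score = float(np.sum(ref * shifted))
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

rng = np.random.default_rng(0)
ref = rng.normal(size=(16, 16))
obs = np.roll(np.roll(ref, 2, axis=0), 1, axis=1)  # shifted down 2, right 1
print(estimate_shift(ref, obs))  # (2, 1)
```

Once the offset is known, each received frame in the group can be shifted back before the difference frames are computed, so that the per-pixel subtraction compares corresponding pixels.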
CN201910660860.0A 2019-07-22 2019-07-22 Secondary imaging mixed frame information analysis method and system Active CN110366004B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910660860.0A CN110366004B (en) 2019-07-22 2019-07-22 Secondary imaging mixed frame information analysis method and system


Publications (2)

Publication Number Publication Date
CN110366004A CN110366004A (en) 2019-10-22
CN110366004B true CN110366004B (en) 2021-08-13

Family

ID=68220430

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910660860.0A Active CN110366004B (en) 2019-07-22 2019-07-22 Secondary imaging mixed frame information analysis method and system

Country Status (1)

Country Link
CN (1) CN110366004B (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8184164B2 (en) * 2005-06-25 2012-05-22 Huawei Technologies Co., Ltd. Method for measuring multimedia video communication quality
CN105846898A (en) * 2016-05-20 2016-08-10 中国人民解放军信息工程大学 Visible light communication method, sending device, receiving device and system
CN106570816A (en) * 2016-10-31 2017-04-19 努比亚技术有限公司 Method and device for sending and receiving information
CN107911167A (en) * 2017-11-29 2018-04-13 中国人民解放军信息工程大学 A kind of visual light imaging communication means and system
CN108391028A (en) * 2018-02-09 2018-08-10 东莞信大融合创新研究院 A kind of implicit imaging communication method of the visible light of adaptive shooting direction
CN109104243A (en) * 2018-08-01 2018-12-28 北京邮电大学 A kind of pixel communication means, information send terminal and information receiving terminal

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016133434A1 (en) * 2015-02-17 2016-08-25 Telefonaktiebolaget Lm Ericsson (Publ) Operation of a communication unit in a wireless local area network, wlan, environment


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"面向二次成像混合的可见光隐式通信系统设计";李明超;《信息工程大学学报》;20190228;全文 *
"一种可见光隐式成像通信帧同步补偿算法";李明超;《光学学报》;20180131;第38卷(第1期);全文 *


Similar Documents

Publication Publication Date Title
US10796685B2 (en) Method and device for image recognition
CN110572622B (en) Video decoding method and device
US20200162725A1 (en) Video Quality Assessment Method and Apparatus
CN110418177B (en) Video encoding method, apparatus, device and storage medium
EP2786342B1 (en) Texture masking for video quality measurement
CN105611291B (en) The method and apparatus that mark information is added in the video frame and detects frame losing
EP3472957A1 (en) Methods, systems, and media for transmitting data in a video signal
CN106550240A (en) A kind of bandwidth conservation method and system
CN111182303A (en) Encoding method and device for shared screen, computer readable medium and electronic equipment
EP3598386A1 (en) Method and apparatus for processing image
CN111182300B (en) Method, device and equipment for determining coding parameters and storage medium
CN114245209A (en) Video resolution determination method, video resolution determination device, video model training method, video coding device and video coding device
CN113469869B (en) Image management method and device
CN110366004B (en) Secondary imaging mixed frame information analysis method and system
CN111954034B (en) Video coding method and system based on terminal equipment parameters
CN110365985A (en) Image processing method and device
Choi et al. Video QoE models for the compute continuum
US8503822B2 (en) Image quality evaluation system, method, and program utilizing increased difference weighting of an area of focus
CN106921840B (en) Face beautifying method, device and system in instant video
CN114339252B (en) Data compression method and device
CN114466224B (en) Video data encoding and decoding method and device, storage medium and electronic equipment
CN112804469B (en) Video call processing method, device, equipment and storage medium
US10986337B2 (en) Systems and methods for selective transmission of media content
CN113038179A (en) Video encoding method, video decoding method, video encoding device, video decoding device and electronic equipment
US20190306500A1 (en) Bit rate optimization system and method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant