CN112954448A - Live broadcast content image feature code extraction method and live broadcast content consistency comparison method - Google Patents


Info

Publication number
CN112954448A
Authority
CN
China
Prior art keywords
image feature
feature code
comparison
nxn
extraction method
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201911256174.3A
Other languages
Chinese (zh)
Inventor
吴雪波
翁昌清
杨文昊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dekscom Technologies Ltd
Original Assignee
Dekscom Technologies Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dekscom Technologies Ltd filed Critical Dekscom Technologies Ltd
Priority to CN201911256174.3A
Publication of CN112954448A
Legal status: Withdrawn (current)

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462 - Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a live broadcast content image feature code extraction method and a live broadcast content consistency comparison method. The image feature code extraction method comprises the following steps: step S1, graying the picture decoded and restored from the ES video, namely converting the color pixel points in the picture into gray pixel points; step S2, evenly dividing the grayed picture into blocks on an [NxN] grid, wherein N is a positive integer greater than 2; step S3, calculating the average brightness value of the pixel points in each block and storing it into an image feature code matrix Lumi[NxN]; and step S4, encrypting the image feature code matrix Lumi[NxN] and attaching the UTC time label of the current frame to form the image feature code. By dividing the image into blocks and forming a matrix, the invention can effectively locate the region where a feature code change occurs.

Description

Live broadcast content image feature code extraction method and live broadcast content consistency comparison method
Technical Field
The invention belongs to the technical field of network security, relates to a live broadcast system, and particularly relates to a live broadcast content image feature code extraction method and a live broadcast content consistency comparison method.
Background
In recent years, with the nationwide rollout of China's "tri-network convergence" policy, IPTV construction and management has adapted to the broader trend of informatization and network development and grown rapidly. With a user base approaching three hundred million, IPTV has become an important force for spreading positive messages and promoting mainstream values.
IPTV is an important extension of broadcast television into new media, and an important platform for publicity, ideology and culture. The central authorities have laid down clear rules and specific requirements for IPTV construction and management in a series of major policy documents, such as the overall plan and the promotion plan for tri-network convergence. IPTV construction and management bears on national political security, cultural and information security and ideological security, and on meeting the people's new expectations for a better life. Security is therefore essential: IPTV must remain manageable and controllable, its content security, transmission security and broadcast security must be guaranteed, and high-quality cultural supply should strengthen the people's sense of gain and happiness.
In recent years, television services have repeatedly been attacked by hackers and hostile actors, with extremely adverse effects (for example, the Sinosat (Xinnuo) satellite was illegally attacked in June 2002, and 465,000 set-top boxes in Wenzhou were illegally attacked on August 1, 2014). Because the IPTV live broadcast service reaches a very wide audience, especially during major national meetings, celebrations, holidays and sports events, the adverse effects of illegally tampered live content would be immeasurable. IPTV operators must therefore treat the security of live content as a matter of the highest importance, and once a potential security problem is found, security handling within seconds (one-key shutdown or switching between main and standby content) is required.
An IPTV live content consistency detection system mainly comprises two components: feature code extraction equipment and a comparison server. The feature code extraction equipment is deployed at different detection points of the IPTV network (generally including a source node and monitoring nodes); it extracts live video feature codes and uploads them to the comparison server, which synchronously compares the feature codes uploaded from the different detection points and judges whether the live content is consistent. At present, two types of feature code extraction are mainly used in the industry: TS code stream layer feature codes and image layer feature codes.
The TS code stream layer feature code is obtained by computing over the MPEG2-TS transport stream commonly used in the broadcasting field, for example by calculating the MD5 value of the TS packet payload as the feature code. This method is simple and efficient, and suits content consistency detection in scenarios where the TS stream carried in the IP video stream is not modified at the upstream and downstream nodes of the IPTV network. Although common IP network devices (such as switches and routers) do not modify the payload (TS stream) of an IP packet, many IPTV network devices (such as codecs, multiplexers and IP stream matrices) do modify TS-layer information (for example, inserting PSI tables or changing audio/video PID information). This changes the TS-layer feature code even though the audio/video content itself is unchanged, and thus causes false alarms.
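As a minimal illustration of this idea (not taken from the patent), the following sketch hashes fixed-length 188-byte MPEG2-TS packets with MD5 to obtain a code-stream-layer feature code; the function and constant names are assumptions of this sketch.

    import hashlib

    TS_PACKET_SIZE = 188  # standard MPEG2-TS packet length in bytes

    def ts_layer_feature_code(ts_chunk: bytes) -> str:
        """Return the MD5 hex digest over all complete TS packets in the chunk."""
        usable = len(ts_chunk) - (len(ts_chunk) % TS_PACKET_SIZE)  # ignore a trailing partial packet
        return hashlib.md5(ts_chunk[:usable]).hexdigest()

Any re-multiplexing that rewrites even a few bytes of the TS layer changes this digest, which is exactly the false-alarm problem described above.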
The image layer feature code is obtained by decoding the IPTV live video stream, restoring it to images, and then processing and computing over the pixels of each image. This extraction method is unaffected by changes in the additional information of the TS packets (such as PSI tables and PID information) or in the encoding format (such as H.264/H.265/AVS), and its consistency comparison result matches what the human eye perceives, so it is applicable to a wide range of live content consistency detection and live tamper-detection scenarios.
When comparing the content consistency of the main and standby live signals of a television station or an IPTV operator, the two signals are usually provided by different content providers or over different paths. Although the picture content of the same channel (e.g. CCTV1) looks identical to the human eye in both signals, the encoding formats (e.g. H.264/H.265/AVS) and the code stream layer parameters (e.g. TS and ES layer parameters) are completely different. Both live signals must therefore be video-decoded and restored to images before image layer feature codes are extracted; only then can the content of the two live signals be compared for consistency.
Extracting image layer feature codes brings its own problems and challenges. First, the amount of information in an image is huge; expressing the image feature code in the smallest possible number of bytes is crucial for reducing the transmission bandwidth needed to upload data from the feature code extraction equipment to the comparison server, and it also reduces the computation load on the comparison server and improves its performance (i.e. more live channels can be compared at the same time). Second, even when the main area of the video is consistent, the content of some regions may still change (for example a replaced station logo or added subtitles), so the feature code must be able to express which regions have changed.
In view of the above, there is a need to design a feature code extraction and comparison method for live broadcast systems that overcomes the drawbacks of the existing methods.
Disclosure of Invention
The invention provides a live broadcast content image feature code extraction method and a live broadcast content consistency comparison method.
In order to solve the technical problem, according to one aspect of the present invention, the following technical solutions are adopted:
a live content image feature code extraction method comprises the following steps:
step S1, graying the picture restored by ES (Elementary Stream) video decoding, that is, converting the color pixel points in the picture into gray pixel points;
step S2, evenly dividing the grayed picture into blocks on an [NxN] grid, wherein N is a positive integer greater than 2;
step S3, calculating the average brightness value of the pixel points in each block and storing it into an image feature code matrix Lumi[NxN];
and step S4, encrypting the image feature code matrix Lumi[NxN] and attaching the UTC time label of the current frame to form the image feature code.
As an embodiment of the present invention, the method further includes step S5, returning the image feature code to the upper level calling module.
In one embodiment of the present invention, in step S4, the image feature code matrix Lumi[NxN] is encrypted using the SM4 encryption algorithm.
In one embodiment of the present invention, N is 64.
A live broadcast content consistency comparison method comprises the image feature code extraction method.
As an embodiment of the present invention, the comparison method further includes:
an image feature code synchronization step, which outputs an image synchronization identifier Syn_Pic indicating whether the image feature codes of the source node and the monitoring node can be synchronized;
an image feature code cyclic comparison module, which is used for cyclically acquiring and comparing the image feature codes of the video streams of the source node and the monitoring node and outputting a comparison detection result Result;
if Result = 1, the video content of the source node and the monitoring node sampled in the current 5 seconds is consistent, and the program continues to execute the feature code cyclic comparison step;
if Result = 2, the video content of the source node and the monitoring node sampled in the current 5 seconds is inconsistent, and the program outputs a 'comparison abnormal' alarm;
if Result = 3, the video content of the source node and the monitoring node has been inconsistent for 6 consecutive seconds of sampling; the program outputs a 'comparison long-term abnormal' alarm and the ES/image feature code synchronization step is executed again.
The beneficial effects of the invention are as follows: the live broadcast content image feature code extraction method and the live broadcast content consistency comparison method provided by the invention gray the key frame pictures of an IPTV live video, divide the grayed picture pixels into blocks on an NxN grid to form a matrix, calculate the average of the brightness values of all pixels in each block, and store the averages in the corresponding matrix positions to form the image feature code. This feature code can be used to build an efficient image-level IPTV live video stream content consistency detection system. By dividing the image into blocks and forming a matrix, the invention can effectively locate the region where a feature code change occurs.
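To make the localization benefit concrete, the short sketch below compares two decrypted Lumi matrices and reports the grid positions of the blocks that differ; the function name and the luminance tolerance are assumptions of this sketch, not values given in the patent.

    import numpy as np

    def changed_blocks(lumi_a: np.ndarray, lumi_b: np.ndarray, tol: int = 8):
        """Return (row, col) grid positions whose average block luminance differs by more than tol."""
        diff = np.abs(lumi_a.astype(int) - lumi_b.astype(int)) > tol
        return list(zip(*np.nonzero(diff)))

Each reported (row, col) pair maps directly to a region of the original picture, so a replaced station logo or an inserted subtitle shows up as a small cluster of differing blocks.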
Drawings
Fig. 1 is a flowchart of an image feature code extraction method according to an embodiment of the present invention.
Detailed Description
Preferred embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
For a further understanding of the invention, reference will now be made to the preferred embodiments of the invention by way of example, and it is to be understood that the description is intended to further illustrate features and advantages of the invention, and not to limit the scope of the claims.
The description in this section covers only several exemplary embodiments, and the present invention is not limited to the scope of the embodiments described. Interchanging some features of the embodiments with the same or similar prior art means also falls within the scope of the present disclosure and protection.
In this specification, UTC refers to Coordinated Universal Time.
The invention discloses a live content image feature code extraction method, and FIG. 1 is a flow chart of the image feature code extraction method in an embodiment of the invention; referring to fig. 1, the method includes:
Step S1, graying the picture decoded and restored from the ES video, namely converting the color pixel points in the picture into gray pixel points;
Step S2, evenly dividing the grayed picture into blocks on an [NxN] grid, wherein N is a positive integer greater than 2; in one embodiment, N is 64.
Step S3, calculating the average brightness value of the pixel points in each block and storing it into an image feature code matrix Lumi[NxN];
Step S4, encrypting the image feature code matrix Lumi[NxN] and attaching the UTC time label of the current frame to form the image feature code. In one embodiment, the image feature code matrix Lumi[NxN] is encrypted using the SM4 encryption algorithm.
In an embodiment of the present invention, the method further includes step S5, returning the image feature code to the upper level calling module.
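A minimal sketch of steps S1 to S4 is given below. It assumes decoded frames arrive as OpenCV BGR arrays, uses the default N = 64 from the embodiment, crops any edge pixels that do not divide evenly into the grid (a simplification not spelled out in the patent), and stands in a placeholder for the SM4 step; the function names, the layout of the returned feature code, and the key handling are all assumptions of this sketch.

    import time
    import numpy as np
    import cv2

    N = 64  # default block grid size from the embodiment

    def sm4_encrypt(data: bytes) -> bytes:
        # Placeholder only: substitute a real SM4 implementation (e.g. from the gmssl package).
        return data

    def extract_image_feature_code(frame_bgr: np.ndarray, n: int = N) -> dict:
        # Step S1: graying - convert color pixel points to gray pixel points.
        gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

        # Step S2: evenly divide the grayed picture into an n x n grid of blocks.
        h, w = gray.shape
        gray = gray[: h - h % n, : w - w % n]        # crop so the grid divides evenly (assumption)
        blocks = gray.reshape(n, h // n, n, w // n)

        # Step S3: average brightness of each block -> image feature code matrix Lumi[n x n].
        lumi = blocks.mean(axis=(1, 3)).astype(np.uint8)

        # Step S4: encrypt Lumi and attach the UTC time label of the current frame.
        return {"utc": int(time.time()), "feature": sm4_encrypt(lumi.tobytes())}

At N = 64 the matrix holds 4096 one-byte averages, so each frame's feature code is only a few kilobytes before encryption, which addresses the bandwidth concern raised in the background section.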
The invention also discloses a live broadcast content consistency comparison method, which comprises the image feature code extraction method.
In an embodiment of the present invention, the comparing method further includes:
step 1, extracting image feature codes; please refer to the description of the above embodiments.
Step 2, image feature code synchronization: outputting an image synchronization identifier Syn_Pic that indicates whether the image feature codes of the source node and the monitoring node can be synchronized.
Step 3, image feature code cyclic comparison: cyclically acquiring and comparing the image feature codes of the video streams of the source node and the monitoring node, and outputting a comparison detection result Result;
if Result = 1, the video content of the source node and the monitoring node sampled in the current 5 seconds is consistent, and the program continues to execute the feature code cyclic comparison step;
if Result = 2, the video content of the source node and the monitoring node sampled in the current 5 seconds is inconsistent, and the program outputs a 'comparison abnormal' alarm;
if Result = 3, the video content of the source node and the monitoring node has been inconsistent for 6 consecutive seconds of sampling; the program outputs a 'comparison long-term abnormal' alarm and the image feature code synchronization step is executed again.
In an embodiment of the present invention, the step 2 (image feature code synchronization step) specifically includes the following steps:
and step 21, the probe devices of the source node and the monitoring node respectively acquire the live broadcast ES video streams of the monitoring points, locate the I-frame position of the live broadcast ES video streams, and restore the I-frame position to a picture.
Step 22, the source node probe acquires the image feature code of the video once per second and writes it into the source node buffer BufP_s; the monitoring node probe likewise acquires the image feature code of the video once per second and writes it into the monitoring node buffer BufP_d.
In an embodiment of the present invention, collecting the image feature code comprises: graying the picture decoded and restored from the ES video, namely converting the color pixel points in the picture into gray pixel points; evenly dividing the grayed picture into blocks on an [NxN] grid, where N defaults to 64; calculating the average brightness value of the pixel points in each block and storing it into the image feature code matrix Lumi[NxN]; and encrypting the image feature code matrix Lumi[NxN] with the SM4 encryption algorithm and attaching the UTC time label of the current frame to form the image feature code.
Step 23, if Len(BufP_s) >= 60 and Len(BufP_d) >= 30, the number of image feature codes buffered in the source node buffer and the monitoring node buffer has reached the preset target required for the synchronization comparison.
Step 24, the synchronization comparison process is executed: a pattern-matching search is performed over the feature codes in the source node buffer BufP_s to detect whether any 5 consecutive seconds of feature code values in the monitoring node buffer BufP_d are equal to 5 consecutive seconds of feature code values in BufP_s.
Step 25, if the synchronization match succeeds, i.e. the same 5 consecutive seconds of feature codes exist in both the source node buffer and the monitoring node buffer, the module outputs Syn_Pic = true; otherwise it outputs Syn_Pic = false.
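A sketch of the synchronization search in steps 23 to 25 follows, assuming BufP_s and BufP_d are oldest-first sequences of per-second feature codes (bytes), and using the buffer targets (60 and 30) and the 5-second window from the text; the function and constant names are illustrative.

    from collections import deque

    WINDOW = 5                        # consecutive seconds that must match
    SRC_TARGET, MON_TARGET = 60, 30   # buffered codes required before a synchronization attempt

    def try_synchronize(buf_p_s: deque, buf_p_d: deque) -> bool:
        """Return Syn_Pic: True if some 5-second run in BufP_d also appears in BufP_s."""
        if len(buf_p_s) < SRC_TARGET or len(buf_p_d) < MON_TARGET:
            return False                               # step 23: preset target not reached yet
        src, mon = list(buf_p_s), list(buf_p_d)
        for i in range(len(mon) - WINDOW + 1):
            window = mon[i:i + WINDOW]
            for j in range(len(src) - WINDOW + 1):
                if src[j:j + WINDOW] == window:        # step 24: pattern-matching search
                    return True                        # step 25: Syn_Pic = true
        return False                                   # step 25: Syn_Pic = false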
In an embodiment of the present invention, the step 3 includes:
Step 31, according to the image feature code synchronization result described in step 2, the image feature codes of the source node are written into a source image feature code buffer BufX_s, and the image feature codes of the monitoring node are written into a monitoring image feature code buffer BufX_d;
Step 32, if Len(BufX_s) >= 60 or Len(BufX_d) >= 30, the number of feature codes in the source node buffer or the monitoring node buffer exceeds the buffer threshold, i.e. an overflow has occurred; Result = 3 is output and the process exits;
if Len(BufX_s) >= 5 and Len(BufX_d) >= 5, the feature codes in the source node buffer and the monitoring node buffer cover 5 seconds; the earliest 5 seconds of feature codes in the two buffers are compared, and after the comparison is completed the compared 5-second feature codes are removed from the buffers;
if the 5-second feature codes of the source node and the monitoring node are consistent, Result = 1 is output and the process exits;
if the feature code comparison has been inconsistent for 30 consecutive seconds, Result = 3 is output; otherwise Result = 2 is output, and the process exits.
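A sketch of this cyclic comparison is shown below. The Result codes (1 = consistent, 2 = inconsistent, 3 = overflow or long-term inconsistency), the buffer thresholds and the 5-second window follow the text above; the explicit mismatch counter, the None return for "not enough data yet", and the function signature are assumptions of this sketch.

    from collections import deque
    from typing import Optional, Tuple

    def cyclic_compare(buf_x_s: deque, buf_x_d: deque,
                       mismatch_seconds: int) -> Tuple[Optional[int], int]:
        """Run one round of step 32; return (Result or None, updated mismatch_seconds)."""
        if len(buf_x_s) >= 60 or len(buf_x_d) >= 30:
            return 3, mismatch_seconds                        # buffer overflow: long-term anomaly

        if len(buf_x_s) >= 5 and len(buf_x_d) >= 5:
            window_s = [buf_x_s.popleft() for _ in range(5)]  # earliest 5 seconds, removed after use
            window_d = [buf_x_d.popleft() for _ in range(5)]
            if window_s == window_d:
                return 1, 0                                   # consistent; reset the mismatch run
            mismatch_seconds += 5
            if mismatch_seconds >= 30:
                return 3, mismatch_seconds                    # inconsistent for 30 consecutive seconds
            return 2, mismatch_seconds                        # inconsistent in this 5-second sample

        return None, mismatch_seconds                         # not enough buffered feature codes yet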
In summary, the live broadcast content image feature code extraction method and the live broadcast content consistency comparison method provided by the invention gray the key frame pictures of an IPTV live video, divide the grayed picture pixels into blocks on an NxN grid to form a matrix, calculate the average of the brightness values of all pixels in each block, and store the averages in the corresponding matrix positions to form the image feature code. This feature code can be used to build an efficient image-level IPTV live video stream content consistency detection system.
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; nevertheless, any combination of them that contains no contradiction should be considered within the scope of this specification.
The description and applications of the invention herein are illustrative and are not intended to limit the scope of the invention to the embodiments described above. Variations and modifications of the embodiments disclosed herein are possible, and alternative and equivalent various components of the embodiments will be apparent to those skilled in the art. It will be clear to those skilled in the art that the present invention may be embodied in other forms, structures, arrangements, proportions, and with other components, materials, and parts, without departing from the spirit or essential characteristics thereof. Other variations and modifications of the embodiments disclosed herein may be made without departing from the scope and spirit of the invention.

Claims (6)

1. A live content image feature code extraction method is characterized by comprising the following steps:
step S1, graying the picture restored by elementary stream (ES) video decoding, namely converting the color pixel points in the picture into gray pixel points;
step S2, evenly dividing the grayed picture into blocks on an [NxN] grid, wherein N is a positive integer greater than 2;
step S3, calculating the average brightness value of the pixel points in each block and storing it into an image feature code matrix Lumi[NxN];
and step S4, encrypting the image feature code matrix Lumi[NxN] and attaching the UTC time label of the current frame to form the image feature code.
2. The live content image feature code extraction method according to claim 1, characterized in that:
the method further includes step S5, returning the image feature code to the upper level calling module.
3. The live content image feature code extraction method according to claim 1, characterized in that:
in step S4, the image feature code matrix Lumi [ NxN ] is encrypted using the SM4 encryption algorithm.
4. The live content image feature code extraction method according to claim 1, characterized in that:
wherein N is 64.
5. A live content consistency comparison method, characterized in that the comparison method comprises the image feature code extraction method of any one of claims 1 to 4.
6. The live content consistency comparison method according to claim 5, characterized in that:
the comparison method further comprises the following steps:
an image feature code synchronization step, which outputs an image synchronization identifier Syn_Pic indicating whether the image feature codes of the source node and the monitoring node can be synchronized;
an image feature code cyclic comparison module, which is used for cyclically acquiring and comparing the image feature codes of the video streams of the source node and the monitoring node and outputting a comparison detection result Result;
if Result = 1, the video content of the source node and the monitoring node sampled in the current 5 seconds is consistent, and the program continues to execute the feature code cyclic comparison step;
if Result = 2, the video content of the source node and the monitoring node sampled in the current 5 seconds is inconsistent, and the program outputs a 'comparison abnormal' alarm;
if Result = 3, the video content of the source node and the monitoring node has been inconsistent for 6 consecutive seconds of sampling; the program outputs a 'comparison long-term abnormal' alarm and the ES/image feature code synchronization step is executed again.
CN201911256174.3A 2019-12-10 2019-12-10 Live broadcast content image feature code extraction method and live broadcast content consistency comparison method Withdrawn CN112954448A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911256174.3A CN112954448A (en) 2019-12-10 2019-12-10 Live broadcast content image feature code extraction method and live broadcast content consistency comparison method

Publications (1)

Publication Number Publication Date
CN112954448A true CN112954448A (en) 2021-06-11

Family

ID=76225362

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911256174.3A Withdrawn CN112954448A (en) 2019-12-10 2019-12-10 Live broadcast content image feature code extraction method and live broadcast content consistency comparison method

Country Status (1)

Country Link
CN (1) CN112954448A (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040022313A1 (en) * 2002-07-30 2004-02-05 Kim Eung Tae PVR-support video decoding system
CN101222626A (en) * 2008-02-01 2008-07-16 中国传媒大学 Digital television signal code stream characteristic extraction and recognition method and equipment thereof
CN106454426A (en) * 2016-10-27 2017-02-22 四川长虹电器股份有限公司 Method for identifying analog channel of intelligent television
CN106658071A (en) * 2016-11-28 2017-05-10 北京蓝拓扑电子技术有限公司 Method and device for determining transmission state of bit stream
US10127631B1 (en) * 2017-03-02 2018-11-13 Snap Inc. Automatic image inpainting using local patch statistics
CN107767350A (en) * 2017-10-17 2018-03-06 青岛海信医疗设备股份有限公司 Video image restoring method and device
CN109325480A (en) * 2018-09-03 2019-02-12 平安普惠企业管理有限公司 The input method and terminal device of identity information
CN109729390A (en) * 2019-02-01 2019-05-07 浪潮软件集团有限公司 A kind of IPTV program monitoring method, apparatus and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117596407A (en) * 2024-01-19 2024-02-23 慧盾信息安全科技(苏州)股份有限公司 Video stream tampering detection system and method based on feature code layered embedding
CN117596407B (en) * 2024-01-19 2024-03-26 慧盾信息安全科技(苏州)股份有限公司 Video stream tampering detection system and method based on feature code layered embedding

Similar Documents

Publication Publication Date Title
CN102160375B (en) Method for delivery of digital linear TV programming using scalable video coding
CN102037731B (en) Signalling and extraction in compressed video of pictures belonging to interdependency tiers
EP2731346A2 (en) Micro-filtering of streaming entertainment content based on parental control setting
WO2009088669A1 (en) Systems and methods for determining attributes of media items accessed via a personal media broadcaster
US9445137B2 (en) Method for conditioning a network based video stream and system for transmitting same
US11677795B2 (en) Methods, systems, and apparatuses for improved content delivery
US20220239972A1 (en) Methods and systems for content synchronization
CN102326403A (en) Accelerating channel change time with external picture property markings
CN112954448A (en) Live broadcast content image feature code extraction method and live broadcast content consistency comparison method
US20220210518A1 (en) Methods and systems for content output adjustment
EP2341680B1 (en) Method and apparatus for adaptation of a multimedia content
CA3011330A1 (en) Reduced content manifest size
CN112954371A (en) Live broadcast content ES feature code extraction method and live broadcast content consistency comparison method
CN114827617B (en) Video coding and decoding method and system based on perception model
US11893090B2 (en) Synchronization of digital rights management data
US20230328308A1 (en) Synchronization of multiple content streams
Go et al. Secure video transmission framework for battery-powered video devices
CN113038146A (en) Method and system for detecting consistency of self-adaptive IPTV live broadcast content
CN110633592B (en) Image processing method and device
US11743439B2 (en) Methods and systems for managing content items
EP3360334A1 (en) Digital media splicing system and method
Iqbal et al. Compressed-domain spatial adaptation resilient perceptual encryption of live H. 264 video
EP4030768A1 (en) Systems and methods for analyzing streamed media to determine common source encoders
US11627296B2 (en) Methods and systems for condition mitigation
Su et al. A Source video identification algorithm based on features in video stream

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210611)