US20220353524A1 - Parallel forensic marking apparatus and method - Google Patents

Parallel forensic marking apparatus and method

Info

Publication number
US20220353524A1
Authority
US
United States
Prior art keywords
decoding
unit
content
forensic
transform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US17/256,828
Other languages
English (en)
Inventor
Yun-Ha Park
Dae-Soo Kim
Jae-Hyun Jun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wookyoung Information Technology Co Ltd
Original Assignee
Wookyoung Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wookyoung Information Technology Co Ltd filed Critical Wookyoung Information Technology Co Ltd
Assigned to WOOKYOUNG INFORMATION TECHNOLOGY CO., LTD. reassignment WOOKYOUNG INFORMATION TECHNOLOGY CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: JUN, JAE-HYUN, KIM, DAE-SOO, PARK, YUN-HA

Classifications

    • H ELECTRICITY; H04 ELECTRIC COMMUNICATION TECHNIQUE; H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/119 Adaptive subdivision aspects, e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
    • H04N19/167 Position within a video image, e.g. region of interest [ROI]
    • H04N19/172 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a picture, frame or field
    • H04N19/176 Adaptive coding characterised by the coding unit, the unit being an image region, the region being a block, e.g. a macroblock
    • H04N19/423 Implementation details or hardware specially adapted for video compression or decompression, characterised by memory arrangements
    • H04N19/436 Implementation details or hardware specially adapted for video compression or decompression, using parallelised computational arrangements
    • H04N19/467 Embedding additional information in the video signal during the compression process, the embedded information being invisible, e.g. watermarking
    • H04N19/60 Coding, decoding, compressing or decompressing digital video signals using transform coding
    • H04N19/91 Entropy coding, e.g. variable length coding [VLC] or arithmetic coding
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/2343 Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/2389 Multiplex stream processing, e.g. multiplex stream encrypting
    • H04N21/23892 Multiplex stream processing involving embedding information at multiplex stream level, e.g. embedding a watermark at packet level

Definitions

  • a forensic marking technique inserts information on a seller, a copyright proprietor, or a buyer into multimedia content, so that when the content is illegally distributed, the disseminator can be identified by extracting the inserted information (the forensic mark).
  • if a forensic mark is identified in leaked content, the first disseminator may be held liable for the leak.
  • OSP: online service provider
  • a forensic marking apparatus of the present disclosure may include a split part configured to split one frame of content, compressed using a set method, into a plurality of regions; a plurality of decoding parts, each configured to have one of the split regions assigned thereto and to entropy-decode the assigned region; and a synchronization part configured to complete the frame by synchronizing the regions input to and output from the plurality of decoding parts.
  • a forensic marking method of the present disclosure may include a parsing step of identifying a first input and output format between entropy decoding and an insertion operation of a forensic mark and a second input and output format between the insertion operation and entropy encoding by pre-decoding content compressed using a set method; a decoding step of entropy-decoding the content and transforming the entropy-decoded content according to the first input and output format; a marking step of receiving the content transformed into the first input and output format and inserting a forensic mark into the content; and an encoding step of transforming the content into which the forensic mark has been inserted into the second input and output format necessary for the entropy encoding and entropy-encoding the content transformed into the second input and output format.
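The four steps of the method above (parsing, decoding, marking, encoding) can be sketched in simplified form. All function names, the XOR stand-in for entropy coding, and the way the mark is appended are invented for illustration; they are not the patented implementation.

```python
# Hypothetical sketch of the four-step forensic marking method described above.
# The byte-level "formats" and all operations are illustrative stand-ins only.

def pre_decode(compressed: bytes) -> dict:
    """Parsing step: identify the input/output formats without full decoding."""
    return {"first_format": "a5", "second_format": "b1"}

def entropy_decode(compressed: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in compressed)   # stand-in for entropy decoding

def entropy_encode(payload: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in payload)      # inverse of the stand-in above

def insert_mark(payload: bytes, mark: bytes) -> bytes:
    return payload + mark                        # trivially append the forensic mark

def forensic_marking_method(compressed: bytes, mark: bytes) -> bytes:
    formats = pre_decode(compressed)             # parsing step
    decoded = entropy_decode(compressed)         # decoding step (+ format transform)
    marked = insert_mark(decoded, mark)          # marking step
    return entropy_encode(marked)                # encoding step (+ format transform)
```

The point of the sketch is the control flow: nothing between entropy decoding and mark insertion except a format transform, mirroring the claim.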
  • various processing processes performed between an entropy-decoding member for compressed content and a forensic marking member can be omitted.
  • various processing processes performed between an entropy-decoding member for forensic-marked content and a forensic marking member can be omitted.
  • the corresponding processing processes include quantization, a transform, motion compensation, intra/inter prediction, etc., and may require substantial processing time. According to the present disclosure, the total time taken to insert a forensic mark can be significantly reduced because at least some of these processing processes are excluded.
  • a code of entropy-decoded content may be directly provided to the forensic marking member owing to the exclusion of the various processing processes. In this case, however, it is difficult for the forensic marking member to accept the code of the entropy-decoded content due to a difference between formats.
  • the transform unit and the marking unit may be provided in order to synchronize the code formats.
  • the transform unit may transform a code format of the entropy-decoded content into a first input and output format which may be input to the forensic marking member.
  • the transform unit transforms the code format based on a reference. The corresponding reference may be provided by the marking unit.
  • the marking unit may perform only a small portion of the various processing processes that would otherwise need to be performed between the entropy-decoding member and the forensic marking member.
  • the various processing processes may include a process of processing multimedia data and a process related to a syntax structure.
  • the marking unit may obtain a syntax element that forms a syntax structure by performing only the process related to the syntax structure.
  • the present disclosure can process entropy decoding and entropy encoding in parallel using a plurality of GPUs.
  • the present disclosure may split a frame into tiles, and may process the plurality of split tiles in parallel using a plurality of GPUs.
  • the codec processing time, including the entropy decoding and the entropy encoding, can be significantly reduced by the parallel processing.
  • using the synchronization part, the present disclosure can normally restore the regions, split in plural before and after the codec processing process, into one frame.
  • the synchronization part may perform synchronization between the frame and the regions using a syntax structure identified by the parsing unit.
  • the forensic marking apparatus and method of the present disclosure enable real-time forensic marking for a large amount of content compressed using the HEVC method. Furthermore, the present disclosure may be applied to various compression methods and compression algorithms using a single codec in addition to the HEVC method.
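The tile-splitting, parallel decoding, and synchronization just described can be sketched as follows. This is an illustrative stand-in, not the patented implementation: a thread pool takes the place of multiple GPUs, and a trivial byte reversal takes the place of real per-tile entropy decoding.

```python
# Illustrative sketch: split a frame into tiles, "decode" each tile in
# parallel, then let a synchronization step reassemble the original order.
from concurrent.futures import ThreadPoolExecutor

def split_frame(frame: bytes, n_tiles: int) -> list[bytes]:
    size = max(1, len(frame) // n_tiles)
    return [frame[i:i + size] for i in range(0, len(frame), size)]

def decode_tile(tile: bytes) -> bytes:
    return tile[::-1]            # stand-in for per-tile entropy decoding

def synchronize(tiles: list[bytes]) -> bytes:
    return b"".join(tiles)       # restore the frame from the ordered tiles

def parallel_decode(frame: bytes, n_tiles: int = 4) -> bytes:
    tiles = split_frame(frame, n_tiles)
    with ThreadPoolExecutor(max_workers=n_tiles) as pool:
        # map() yields results in submission order, so tile order is preserved
        decoded = list(pool.map(decode_tile, tiles))
    return synchronize(decoded)
```

The ordering guarantee of `Executor.map` is doing the synchronization part's job here; a multi-GPU version would need an explicit ordered gather.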
  • FIG. 1 is a schematic diagram illustrating a forensic marking system of the present disclosure.
  • FIG. 2 is a block diagram illustrating a forensic marking apparatus of the present disclosure.
  • FIG. 3 is a schematic diagram illustrating an operation of a parsing unit.
  • FIG. 4 is a schematic diagram illustrating an operation of the forensic marking apparatus of the present disclosure.
  • FIG. 5 is a flowchart illustrating a forensic marking method of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a frame split into a plurality of regions by a parallel forensic marking apparatus of the present disclosure.
  • FIG. 7 is a schematic diagram illustrating the parallel forensic marking apparatus of the present disclosure.
  • FIG. 8 is a diagram illustrating a computing device according to an embodiment of the present disclosure.
  • a term such as “include” or “have” is intended to indicate that a characteristic, a number, a step, an operation, a component, a part, or a combination thereof described in the specification is present, and does not exclude in advance the presence or possible addition of one or more other characteristics, numbers, steps, operations, components, parts, or combinations thereof.
  • the term “and/or” includes a combination of a plurality of described items or any one of a plurality of described items.
  • “A or B” may include all of “A”, “B”, or “A and B.”
  • FIG. 1 is a schematic diagram illustrating a forensic marking system of the present disclosure.
  • the forensic marking system illustrated in FIG. 1 may include a content server 50 , a forensic marking server 100 , a terminal 90 , and a forensic server 70 .
  • the content server 50 may compress and store multimedia content, such as an image and a sound, using a set method.
  • the terminal 90 may receive content compressed using the set method, in a wired or wireless manner, by requesting the content from the content server 50.
  • a communication module 200 communicating with the forensic server 70 upon forensic mark-related processing may be provided in the terminal 90 .
  • the forensic marking server 100 may insert a forensic mark into the content provided to the terminal 90 .
  • the forensic server 70 may obtain user information from the terminal 90 , and may generate the forensic mark including the user information. The forensic server 70 may provide the forensic mark to the forensic marking server 100 .
  • the forensic marking server 100 may decompress compressed content stored in the content server 50 . After inserting the forensic mark into the decompressed content, the forensic marking server 100 may compress the corresponding content again and transmit the compressed content to the terminal 90 in a form, such as a bit stream.
  • a heavy load may be imposed on the forensic marking server 100 by the process of decompressing previously compressed content and then compressing the decompressed content again for the marking process of inserting a forensic mark into the content.
  • because of this heavy load, the insertion of a forensic mark may be practically difficult in a high-compression method targeting a large amount of content, such as content of 4K or 8K resolution.
  • FIG. 2 is a block diagram illustrating a forensic marking apparatus of the present disclosure.
  • the forensic marking apparatus illustrated in FIG. 2 may correspond to the forensic marking server 100 of FIG. 1 .
  • the forensic marking apparatus may include a first unit 110 , a marking unit 130 , a second unit 120 , and a transform unit 150 .
  • the first unit 110 may decode content compressed using a set method.
  • the decoding of the content may solely include entropy decoding or may include a first processing process, such as inverse quantization, in addition to the entropy decoding.
  • Content may be compressed or decompressed in various manners, such as MPEG, high efficiency video codec (HEVC, H.265), and VP9.
  • the content compressed using the set method may be decompressed by the decoding performed in the first unit 110 .
  • the marking unit 130 may insert a forensic mark into the content decoded by the first unit 110 .
  • the forensic mark may include information on a seller, a copyright proprietor, or a buyer of the content.
  • Content requested by the terminal 90 may be a compression copy compressed using a set method.
  • the corresponding compression copy may be in a state in which it has been decompressed by the first unit 110.
  • the second unit 120 may encode the content into which the forensic mark has been inserted.
  • the encoding of the content solely includes an entropy encoding process, and may include a second processing process, such as quantization, in addition to the entropy encoding.
  • the content whose compression has been decompressed by the encoding task of the second unit 120 and into which the forensic mark has been inserted may be changed into the original compression copy format requested by the terminal 90 .
  • a compression copy of content looks like that it is externally input and output.
  • the compression copy output by the forensic marking apparatus may have a state in which the forensic mark has been inserted into the compression copy unlike the existing compression copy.
  • Content input to the first unit 110 in the state in which the content has been compressed using a set method is hereinafter defined as a first compression copy.
  • content compressed and output by the second unit 120 is defined as a second compression copy.
  • the second compression copy may be content in which a forensic mark has been inserted into the first compression copy.
  • a high-speed member 1000 may be used to reduce the decoding processing time of the first unit 110 or to reduce the encoding processing time of the second unit 120 .
  • the high-speed member 1000 may include the transform unit 150 and a parsing unit 170 .
  • the parsing unit 170 may be positioned in front of the first unit or may be connected to the first unit in parallel.
  • the transform unit 150 may synchronize a first input and output format between the first unit 110 and the marking unit 130 . Alternatively, the transform unit 150 may synchronize a second input and output format between the second unit 120 and the marking unit 130 .
  • entropy decoding and an additional post-processing process may be performed.
  • inverse quantization, inverse transform, inverse motion compensation, and inverse intra/inter prediction tasks may be sequentially performed after entropy decoding.
  • the marking unit 130 may be formed to receive a code output through the inverse intra/inter prediction task.
  • the marking unit 130 configured to receive a code output through the inverse intra/inter prediction task cannot receive a code output through any one of entropy decoding, an inverse transform, and inverse motion compensation.
  • the transform unit 150 of FIG. 2 may synchronize the first input and output format between the first unit 110 and the marking unit 130 .
  • the transform unit 150 may transform a format of a code, output through the entropy decoding process, into the first input and output format which may be received by the marking unit 130 .
  • the transform unit 150 may transform a format of a code, output through the inverse transform process, into the first input and output format.
  • a code output through entropy decoding, a code output through an inverse transform, and a code output through inverse motion compensation may be directly input to the marking unit 130 without the intervention of inverse intra/inter prediction in a stage right before the marking unit 130 .
  • first means for performing inverse quantization, an inverse transform, inverse motion compensation, and inverse intra/inter prediction does not need to be provided in the first unit 110 .
  • the corresponding first means may be essentially necessary for the normal playback of previously compressed content.
  • the forensic marking apparatus of the present disclosure has an object of inserting a forensic mark into content without a need to play the content back. Accordingly, the corresponding first means may be excluded without a problem. According to the forensic marking apparatus equipped with the transform unit 150, a forensic mark may be inserted into previously compressed content even in the state in which the separate first means has been entirely excluded.
  • the first unit 110 may be solely provided with decoding parts 111 , 112 , 113 , and 114 for entropy decoding content. Alternatively, the first unit 110 may be provided with some first means among a plurality of pieces of means for first processing content along with the decoding part.
  • the second unit 120 may be solely provided with encoding parts 121 , 122 , 123 , and 124 for entropy encoding content into which a forensic mark has been inserted.
  • the second unit 120 may be provided with some second means among a plurality of pieces of means for second processing content into which a forensic mark has been inserted along with the encoding part.
  • the transform unit 150 may transform an output code of the decoding part or the first means into a first format ① which may be input to the marking unit 130.
  • the transform unit 150 may transform an output code of the marking unit 130 or the second means into a second format ② which may be input to the encoding part.
  • the marking unit 130 may be formed to receive the code having the first format ① from a first post-processing member that post-processes the content entropy-decoded by the first unit 110.
  • the second unit 120 may be formed to receive the code having the second format ② from a second post-processing member that post-processes the content into which the forensic mark has been inserted by the marking unit 130.
  • the transform unit 150 may transform the code, output by the first unit 110, into the code having the first format ①, and may directly provide the code to the marking unit 130.
  • the transform unit 150 may transform the code, output by the marking unit 130, into the code having the second format ②, and may directly provide the code to the second unit 120.
  • various first processing means which need to be present between the first unit 110 and the marking unit 130 may be excluded.
  • various second processing means which need to be present between the marking unit 130 and the second unit 120 may be excluded.
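The direct path through the transform unit can be illustrated with a minimal sketch. The dict-based "formats" and every function name here are invented for the example; real code formats would be codec-specific syntax structures.

```python
# Hedged sketch of the transform unit: two adapters bridge the entropy
# decoder's native output into the marker's expected first format and the
# marker's output into the encoder's expected second format, so that no
# intermediate processing means (inverse quantization, prediction, etc.)
# sit between the units.

def first_transform(decoder_output: list[int]) -> dict:
    """First transform part: decoder code -> first format expected by the marker."""
    return {"format": "first", "coeffs": decoder_output}

def second_transform(marked: dict) -> list[int]:
    """Second transform part: marker code -> second format expected by the encoder."""
    return marked["coeffs"] + marked["mark"]

def marking_unit(code: dict, mark: list[int]) -> dict:
    assert code["format"] == "first"   # the marker only accepts the first format
    return {"coeffs": code["coeffs"], "mark": mark}

# Decoder output flows straight to the marker and on to the encoder,
# with only the two format transforms in between.
encoder_input = second_transform(marking_unit(first_transform([1, 2, 3]), [9]))
```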
  • the marking unit 130 may be formed to receive the code having the first format ① output through the first processing process performed along with entropy decoding. According to the present embodiment, the marking unit 130 may have a versatile property which may be applied to other forensic marking devices.
  • the first processing process may include at least one of inverse quantization, an inverse transform, inverse motion compensation, and inverse intra/inter prediction.
  • the first processing process may be performed by one or more first processing means.
  • the transform unit 150 may be provided with a first transform part 151 that transforms a code, output by the decoding part, into the code having the first format ① and provides the code to the marking unit 130.
  • the second unit 120 may be formed to receive the code having the second format ② output through the second processing process performed after the forensic mark is inserted.
  • the second processing process may include at least one of intra/inter prediction, motion compensation, a transform, and quantization.
  • the second processing process may be performed by one or more second processing means.
  • the transform unit 150 may be provided with a second transform part 152 that transforms the code, output by the marking unit 130, into the code having the second format ② and provides the code to the encoding part.
  • the transform unit 150 may use a transform algorithm or a transform routine.
  • the transform algorithm or the transform routine may be different for each frame.
  • the parsing unit 170 for establishing the transform algorithm in real time may be provided.
  • the parsing unit 170 may identify the first input and output format or the second input and output format and provide the format to the transform unit 150 .
  • the parsing unit 170 may identify the first input and output format while performing pre-decoding for decompressing content compressed using a set method. Alternatively, the parsing unit 170 may identify the second input and output format while compressing content, decompressed through pre-decoding, using a set method.
  • the parsing unit 170 may perform, in advance, actual decoding or actual encoding between content compressed using a set method and content of a playback level. Decoding content up to a playback level means that all the processing processes actually necessary for playback are performed. However, if all the actual decoding and actual encoding were performed without omission, the parsing unit 170 would consume as much processing time as the comparison embodiment disclosed in FIG. 3.
  • the parsing unit 170 may extract a syntax element that forms a syntax structure for each process of actual decoding or actual encoding or may extract a transform code whose code format is transformed. In this case, a processing process for an image or a sound itself may be excluded.
  • the parsing unit 170 may provide the syntax element or the transform code to the transform unit 150 .
  • the transform unit 150 may synchronize the first input and output format or the second input and output format using the syntax element or the transform code.
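As one hedged illustration of extracting syntax elements while never touching the image or sound data itself, consider a mock bitstream whose 8-byte header layout is entirely invented for this example:

```python
# Illustrative sketch: the parsing unit reads only syntax-level fields from a
# (mock) bitstream header and deliberately skips the media payload.
import struct

def parse_syntax_elements(bitstream: bytes) -> dict:
    # invented header: width (uint16), height (uint16), tile columns (uint16),
    # tile rows (uint16), all big-endian
    width, height, cols, rows = struct.unpack(">HHHH", bitstream[:8])
    # bitstream[8:] (the pixel payload) is never processed here
    return {"width": width, "height": height, "tiles": cols * rows}

header = struct.pack(">HHHH", 3840, 2160, 4, 2)
info = parse_syntax_elements(header + b"\x00" * 1024)   # payload is ignored
```

Skipping the payload is what lets the parsing unit run far faster than full decoding while still recovering the structure the transform unit needs.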
  • FIG. 3 is a schematic diagram illustrating an operation of the parsing unit 170 .
  • an operation of the parsing unit 170 is described by taking, as an example, a comparison embodiment in which no processing process is omitted.
  • the first unit 110 may entropy-decode the first compression copy.
  • the results of the entropy decoding may be output in an (a1)-th format.
  • a result value of the entropy decoding may sequentially pass through inverse quantization means 11 , inverse transform means 12 , inverse motion compensation means 13 , and inverse intra/inter prediction means 14 .
  • the inverse quantization means may receive a code having the (a1)-th format, and may perform inverse quantization, in other words, may convert digital information into analog information.
  • the results of the inverse quantization may be output in an (a2)-th format.
  • the parsing unit 170 may divide the inverse quantization process into an inverse quantization-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code c12 for transforming the code having the (a1)-th format into a code having the (a2)-th format may be obtained.
  • the inverse transform means may receive the code having the (a2)-th format, and may perform an inverse transform, in other words, a frequency inverse transform.
  • the results of the inverse transform may be output in an (a3)-th format.
  • the parsing unit 170 may divide the inverse transform process into an inverse transform-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code c23 for transforming the code having the (a2)-th format into a code having the (a3)-th format may be obtained.
  • the inverse motion compensation means may receive the code having the (a3)-th format, and may perform inverse motion compensation.
  • the results of the inverse motion compensation may be output in an (a4)-th format.
  • the parsing unit 170 may divide the inverse motion compensation process into an inverse motion compensation-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code c34 for transforming the code having the (a3)-th format into a code having the (a4)-th format may be obtained.
  • the inverse intra/inter prediction means may receive the code having the (a4)-th format, and may perform inverse intra/inter prediction. The results of the inverse intra/inter prediction may be output in an (a5)-th format.
  • the parsing unit 170 may divide the inverse intra/inter prediction process into an inverse intra/inter prediction-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code c45 for transforming the code having the (a4)-th format into a code having the (a5)-th format may be obtained.
  • the (a5)-th format may correspond to the first format ① which may be received by the marking unit 130.
  • the content into which the forensic mark has been inserted by the marking unit 130 may be output in a (b1)-th format.
  • the content into which the forensic mark has been inserted may sequentially pass through intra/inter prediction means 15 , motion compensation means 16 , transform means 17 , and quantization means 18 .
  • the intra/inter prediction means may receive the code having the (b1)-th format, and may perform intra/inter prediction.
  • the results of the intra/inter prediction may be output in a (b2)-th format.
  • the parsing unit 170 may divide the intra/inter prediction process into an intra/inter prediction-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code d12 for transforming the code having the (b1)-th format into a code having the (b2)-th format may be obtained.
  • the motion compensation means may receive the code having the (b2)-th format, and may perform motion compensation.
  • the results of the motion compensation may be output in a (b3)-th format.
  • the parsing unit 170 may divide the motion compensation process into a motion compensation-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code d23 for transforming the code having the (b2)-th format into a code having the (b3)-th format may be obtained.
  • the transform means may receive the code having the (b3)-th format, and may perform a frequency transform.
  • the results of the transform may be output in a (b4)-th format.
  • the parsing unit 170 may divide the transform process into a transform-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code d34 for transforming the code having the (b3)-th format into the (b4)-th format may be obtained.
  • the quantization means may receive the code having the (b4)-th format and perform quantization.
  • the results of the quantization may be output in a (b5)-th format.
  • the parsing unit 170 may divide the quantization process into a quantization-inherent process and a format transform process. Through the extraction and analysis of the format transform process, an algorithm or transform code d45 for transforming the code having the (b4)-th format into a code having the (b5)-th format may be obtained.
  • the (b5)-th format may correspond to the second format ② which may be received by the encoding part of the second unit 120.
  • FIG. 4 is a schematic diagram illustrating an operation of the forensic marking apparatus of the present disclosure.
  • the parsing unit 170 may pre-decode content compressed using a set method or pre-encode pre-decoded content.
  • the parsing unit 170 may perform only some processes of obtaining a syntax element or a transform code necessary to identify only the first input and output format among pre-decoding processes. Alternatively, the parsing unit 170 may perform only some processes of obtaining a syntax element or a transform code necessary to identify only the second input and output format among the pre-decoding processes.
  • the parsing unit 170 may perform only a format transform process by thoroughly excluding the inverse quantization-inherent process, the inverse transform-inherent process, the inverse motion compensation-inherent process, and the inverse intra/inter prediction-inherent process described with reference to FIG. 3 .
  • the format transform process has a much shorter processing time than the first processing-inherent or second processing-inherent process. Accordingly, the time taken for the pre-decoding or pre-encoding process performed by the parsing unit 170 is also short.
  • the parsing unit 170 may obtain transform codes c12, c23, c34, c45, d12, d23, d34, and d45 by performing the format transform process, and may generate a transform code for synchronizing the first input and output format or the second input and output format.
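The chaining above can be sketched as function composition, with toy stand-ins for the transform codes; the names and the dictionary representation are illustrative assumptions, not from the patent:

```python
from functools import reduce

def compose(*transforms):
    """Compose format-transform codes left to right:
    compose(c12, c23)(x) == c23(c12(x))."""
    return lambda code: reduce(lambda acc, t: t(acc), transforms, code)

# Toy stand-ins: each transform code retags the payload with its target format.
def make_transform(target_format):
    return lambda code: {"format": target_format, "payload": code["payload"]}

c12, c23, c34, c45 = (make_transform(f) for f in ("a2", "a3", "a4", "a5"))

# Synchronizing transform for the first input and output format.
to_first_format = compose(c12, c23, c34, c45)

decoded = {"format": "a1", "payload": b"entropy-decoded syntax elements"}
print(to_first_format(decoded)["format"])  # -> a5 (the first format)
```

The d12 through d45 codes for the encoding side would compose the same way, mapping the (b1)-th format to the (b5)-th.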
  • the parsing unit 170 may primarily decode content compressed using a set method.
  • the first unit 110 may secondarily decode content compressed using a set method.
  • in the primary decoding, only an operation of extracting a syntax element necessary to identify the first input and output format may be performed in each decoding process defined in the set method.
  • the operation of extracting a syntax element may indicate a format transform process.
  • the results of the secondary decoding may be synchronized to the first input and output format and directly output to the marking unit 130.
  • FIG. 4( a ) may indicate the state in which the first or second input and output format is synchronized using a transform code that is fixed and input in advance or generated by machine learning.
  • FIG. 4( b ) may indicate the state in which, for transform codes varying in real time, all transform codes are prepared by collecting only the corresponding transform codes. Each transform code, or all of the transform codes, may also be obtained by identifying a syntax structure.
  • the first transform part 151 capable of synchronizing the first input and output format may be formed.
  • the second transform part 152 capable of synchronizing the second input and output format may be formed.
  • the parsing unit 170 may not perform the primary encoding process on the primary-decoded content, and may identify the second input and output format by analyzing a syntax structure generated using a syntax element.
  • content into which a forensic mark has been inserted may be synchronized to the second input and output format and directly entropy-encoded.
  • the second transform part 152 may be obtained without any change. Accordingly, if a single codec has been applied, the first encoding process is excluded, and only the first decoding process needs to be performed.
  • FIG. 5 is a flowchart illustrating a forensic marking method of the present disclosure.
  • the forensic marking method of the present disclosure may include a parsing step S 510 , a decoding step S 520 , a marking step S 530 , and an encoding step S 540 .
  • the steps may be performed by the forensic marking apparatus illustrated in FIG. 1.
  • the parsing step S 510 may identify the first input and output format between entropy decoding and an operation of inserting a forensic mark and the second input and output format between the insertion operation and entropy encoding by pre-decoding content compressed using a set method.
  • the decoding step S 520 may entropy-decode the content, and may transform the entropy-decoded content according to the first input and output format.
  • the marking step S 530 may receive the content transformed into the first input and output format, and may insert a forensic mark.
  • the encoding step S 540 may transform the content into which the forensic mark has been inserted into the second input and output format necessary for entropy encoding, and may entropy-encode the content transformed into the second input and output format.
  • a separate first processing process between the decoding step and the marking step may be excluded. Furthermore, a separate second processing process between the marking step and the encoding step may be excluded.
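Steps S510 to S540 can be sketched as a minimal pipeline; every helper below is a toy stand-in (the patent does not specify these functions or this data representation):

```python
# Toy stand-ins for the codec stages (illustrative only, not the patent's API).
def parse_formats(compressed):                      # S510: pre-decoding
    return "first-format", "second-format"

def entropy_decode(compressed):                     # S520: entropy decoding
    return {"format": "a1", "payload": compressed}

def transform_to(content, fmt):                     # format transform only
    return {**content, "format": fmt}

def insert_mark(content, user_id):                  # S530: marking
    return {**content, "mark": f"forensic:{user_id}"}

def entropy_encode(content):                        # S540: entropy encoding
    return content

def forensic_mark(compressed, user_id):
    fmt1, fmt2 = parse_formats(compressed)                    # S510
    decoded = transform_to(entropy_decode(compressed), fmt1)  # S520
    marked = insert_mark(decoded, user_id)                    # S530
    return entropy_encode(transform_to(marked, fmt2))         # S540

out = forensic_mark(b"bitstream", user_id=42)
print(out["format"], out["mark"])  # -> second-format forensic:42
```

Note that no inverse quantization, inverse transform, or prediction stage appears between the steps: the first and second processing processes are excluded, which is the source of the speed-up claimed above.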
  • the speed of forensic marking can be significantly improved by excluding the first processing process and the second processing process. Through this speed improvement, a forensic mark can be inserted in real time into multimedia content played back at 8K resolution and 60 frames/second or more.
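As a rough sanity check on the real-time requirement, the pixel throughput implied by 8K at 60 frames/second can be computed directly (8K here assumed to mean 7680×4320):

```python
width, height, fps = 7680, 4320, 60

# Pixels the whole pipeline must handle per second.
pixels_per_second = width * height * fps

# Hard deadline for processing one frame.
frame_deadline_ms = 1000 / fps

print(f"{pixels_per_second:,} pixels/s")    # 1,990,656,000 pixels/s
print(f"{frame_deadline_ms:.2f} ms/frame")  # 16.67 ms/frame
```

Roughly two billion pixels per second, with under 17 ms per frame, is why both the excluded processing stages and the parallel tile scheme below matter.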
  • a parallel processing method may be introduced.
  • the content server 50 may store content having a standard proposed by a compression technology using a set method. For example, in the HEVC standard, upon the first encoding of the original content, it is better to encode the content in parallel. The reason is that, if the content is not initially encoded in parallel, parallel processing becomes difficult because coding tree units (CTUs) having different conditions for each image have an association with one another. Accordingly, the content server 50 may register the original raw content in a content registration procedure, may receive a CTU size and a tile size as parameters, and may encode the content into tiles.
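A minimal sketch of the tile split such a registration procedure might perform; the frame and tile sizes are illustrative (the patent does not fix these values), and for simplicity the tile size is assumed to divide the frame evenly:

```python
def split_into_tiles(frame_w, frame_h, tile_w, tile_h):
    """Return (tile_number, x, y) entries covering the frame in raster order."""
    tiles = []
    n = 0
    for y in range(0, frame_h, tile_h):
        for x in range(0, frame_w, tile_w):
            tiles.append((n, x, y))
            n += 1
    return tiles

# A 3840x2160 frame split into four 1920x1080 tiles (F0..F3):
tiles = split_into_tiles(3840, 2160, 1920, 1080)
print(tiles)  # [(0, 0, 0), (1, 1920, 0), (2, 0, 1080), (3, 1920, 1080)]
```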
  • the content that has been processed in parallel and stored in the content server 50 may be input to the parallel forensic marking apparatus of the present disclosure.
  • FIG. 6 is a schematic diagram illustrating a frame split into a plurality of regions by a parallel forensic marking apparatus of the present disclosure.
  • FIG. 7 is a schematic diagram illustrating the parallel forensic marking apparatus of the present disclosure.
  • the parallel forensic marking apparatus illustrated in FIG. 7 may correspond to the forensic marking server 100 of FIG. 1 .
  • the parallel forensic marking apparatus of the present disclosure may include a split part 180 , a plurality of decoding parts 111 , 112 , 113 , and 114 , and a synchronization part 160 .
  • the split part 180 may split one frame of content, compressed using a set method, into a plurality of regions.
  • the plurality of decoding parts may perform entropy decoding on the split regions assigned thereto, respectively.
  • the synchronization part 160 may complete the frame by synchronizing the regions input to and output from the plurality of decoding parts.
  • the content compressed using the set method may be content encoded from raw content using a tile method, with a coding tree unit (CTU) size and a tile size as parameters.
  • the split part 180 may split the frame into the plurality of regions according to a tile method.
  • the plurality of decoding parts may independently entropy-decode the split regions.
  • a first frame may be split into four regions by the split part 180.
  • the split part 180 may assign different tile numbers F 0 , F 1 , F 2 , and F 3 to the split regions.
  • Each tile may include a plurality of CTUs.
  • the size of a tile and the size of a CTU may be fixed, but the coding units (CUs) included in a CTU may differ for each CTU or tile.
  • the processing speed of each decoding part that entropy-decodes each region may be different due to a difference between the CUs included in each region. Due to this difference in processing speed, although the plurality of tiles F0, F1, F2, and F3 split from the one frame are simultaneously input to the decoding parts, the timing at which each tile is output from its decoding part may be different.
  • the tile F 0 may be input to the first decoding part 111 .
  • Tile F 1 may be input to the second decoding part 112 .
  • the tile F 2 may be input to the third decoding part 113 .
  • the tile F 3 may be input to the fourth decoding part 114 .
  • although the input timing is the same, the timing at which the tile is output from each decoding part may be different.
  • the synchronization part 160 may assemble one frame by matching the split and entropy-decoded regions using their tile numbers.
  • a storage part 139 may be provided for storing the regions output from the respective decoding parts at different timings.
  • the synchronization part 160 may complete the frame by collecting into one the regions stored in the storage part 139.
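The behavior described above, tiles finishing out of order and then being reassembled by tile number, can be sketched with a thread pool standing in for the decoding parts (names and timings are illustrative, not from the patent):

```python
import concurrent.futures as cf
import random
import time

def entropy_decode_tile(tile_no, data):
    # Simulate the CU-dependent, variable decode time per tile.
    time.sleep(random.uniform(0.01, 0.05))
    return tile_no, f"decoded({data})"

tiles = [(0, "F0"), (1, "F1"), (2, "F2"), (3, "F3")]
storage = {}  # plays the role of the storage part 139

with cf.ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(entropy_decode_tile, n, d) for n, d in tiles]
    for fut in cf.as_completed(futures):  # tiles complete in arbitrary order
        n, result = fut.result()
        storage[n] = result

# Synchronization: assemble the frame in tile-number order.
frame = [storage[n] for n in sorted(storage)]
print(frame)  # ['decoded(F0)', 'decoded(F1)', 'decoded(F2)', 'decoded(F3)']
```

As soon as a worker's future resolves, that worker is idle and could be handed a tile of the next frame, which is the reuse described below.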
  • each region of a specific frame may be output from its decoding part at a different timing due to a difference between CUs.
  • the synchronization part 160 may input a region of a next frame into a specific decoding part that has entered the idle state.
  • for example, if the first decoding part becomes idle, the synchronization part 160 may input a tile of a next frame into the first decoding part. For processing-speed balancing of all of the decoding parts, it is better to input, to the first decoding part, the tile predicted to consume the most processing time in the next frame.
  • the parsing unit 170 may be used to predict the processing time of each tile or equally maintain the processing speed and processing time of each decoding part.
  • the parsing unit 170 may identify a syntax structure through a pre-decoding process.
  • the synchronization part 160 may synchronize the plurality of regions using the syntax structure.
  • the parsing unit 170 may divide pre-decoding into multimedia data processing and syntax-related processing.
  • the multimedia data processing may include the aforementioned inverse quantization-inherent process, inverse transform-inherent process, inverse motion compensation-inherent process, and inverse intra/inter prediction-inherent process.
  • the syntax-related processing may include the aforementioned format transform process.
  • the parsing unit 170 may obtain a syntax element necessary to identify the syntax structure by performing only the syntax-related processing to the exclusion of the multimedia data processing.
  • the synchronization part 160 that has received the syntax element or the syntax structure from the parsing unit 170 may predict a decoding time for each region with respect to the frame prior to the decoding.
  • the synchronization part 160 may assign a split region of each frame to each decoding part so as to minimize the difference between the total task times of the decoding parts over the entire content, using the predicted times.
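One way to realize such an assignment is greedy longest-processing-time-first (LPT) scheduling over the predicted per-tile decoding times; the patent does not name a specific algorithm, so this is only an illustrative sketch:

```python
import heapq

def assign_tiles(predicted_ms, n_decoders):
    """Greedy LPT: give each tile to the decoder with the least total work."""
    # Min-heap of (accumulated_time, decoder_index).
    heap = [(0.0, d) for d in range(n_decoders)]
    heapq.heapify(heap)
    assignment = {d: [] for d in range(n_decoders)}
    # Longest tiles first, so the heaviest work is balanced early.
    for tile, ms in sorted(predicted_ms.items(), key=lambda kv: -kv[1]):
        total, d = heapq.heappop(heap)
        assignment[d].append(tile)
        heapq.heappush(heap, (total + ms, d))
    return assignment

# Hypothetical per-tile decoding times predicted from the syntax structure.
predicted = {"F0": 12.0, "F1": 7.0, "F2": 9.0, "F3": 11.0}
print(assign_tiles(predicted, 2))  # {0: ['F0', 'F1'], 1: ['F3', 'F2']}
```

With two decoders the totals come out as 19 ms and 20 ms, close to the balanced optimum for these inputs.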
  • a format of a code output by the decoding part may be different from the first input and output format necessary for the marking unit 130 .
  • a transform unit 150 may be provided for transforming an output code of the decoding part into the first input and output format using the syntax structure, and for providing the marking unit 130 with the output code transformed into the first input and output format.
  • the split part 180 and the synchronization part 160 may be formed in the central processing unit (CPU) of a computer device.
  • the decoding part may be formed in the graphics processing unit (GPU) of the computer device.
  • FIG. 8 is a diagram illustrating a computing device according to an embodiment of the present disclosure.
  • the computing device TN 100 of FIG. 8 may be an apparatus (e.g., the forensic marking apparatus and the parallel forensic marking apparatus) described in the present disclosure.
  • the computing device TN 100 may include at least one processor TN 110 , a transmission and reception device TN 120 , and a memory TN 130 . Furthermore, the computing device TN 100 may further include a storage device TN 140 , an input interface device TN 150 , an output interface device TN 160 , etc. The components included in the computing device TN 100 are connected by a bus TN 170 , and may perform communication with each other.
  • the processor TN 110 may execute a program command stored in at least one of the memory TN 130 and the storage device TN 140 .
  • the processor TN 110 may mean a central processing unit (CPU), a graphics processing unit (GPU), or a dedicated processor for performing the methods according to an embodiment of the present disclosure.
  • the processor TN 110 may be configured to implement the procedures, functions, and methods described in relation to the embodiments of the present disclosure.
  • the processor TN 110 may control the components of the computing device TN 100 .
  • Each of the memory TN 130 and the storage device TN 140 may store various types of information related to an operation of the processor TN 110 .
  • Each of the memory TN 130 and the storage device TN 140 may be configured as at least one of a volatile storage medium and a non-volatile storage medium.
  • the memory TN 130 may be configured as at least one of a read only memory (ROM) and a random access memory (RAM).
  • the transmission and reception device TN 120 may transmit or receive a wired signal or a radio signal.
  • the transmission and reception device TN 120 is connected to a network, and may perform communication.
  • the embodiments of the present disclosure are not implemented only by the method and apparatus described so far; they may also be implemented through a program that realizes a function corresponding to a construction according to an embodiment of the present disclosure, or through a recording medium on which the program is recorded. Such an implementation may be evident, from the description of the embodiments, to those skilled in the art to which the present disclosure pertains.


Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR10-2019-0155718 2019-11-28
KR1020190155718A KR102192631B1 (ko) 2019-11-28 2019-11-28 병렬 포렌식 마킹 장치 및 방법
PCT/KR2020/016997 WO2021107651A1 (ko) 2019-11-28 2020-11-26 병렬 포렌식 마킹 장치 및 방법

Publications (1)

Publication Number Publication Date
US20220353524A1 true US20220353524A1 (en) 2022-11-03

Family

ID=74089842

Family Applications (1)

Application Number Title Priority Date Filing Date
US17/256,828 Abandoned US20220353524A1 (en) 2019-11-28 2020-11-26 Parallel forensic marking apparatus and method

Country Status (4)

Country Link
US (1) US20220353524A1 (ko)
JP (1) JP2022515946A (ko)
KR (1) KR102192631B1 (ko)
WO (1) WO2021107651A1 (ko)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7352877B2 (en) * 2003-02-04 2008-04-01 Hitachi, Ltd. Digital-watermark-embedding and picture compression unit
US20130136191A1 (en) * 2011-11-30 2013-05-30 Samsung Electronics Co., Ltd. Image processing apparatus and control method thereof
US20130194384A1 (en) * 2012-02-01 2013-08-01 Nokia Corporation Method and apparatus for video coding
US20150326883A1 (en) * 2012-09-28 2015-11-12 Canon Kabushiki Kaisha Method, apparatus and system for encoding and decoding the transform units of a coding unit
US20160100196A1 (en) * 2014-10-06 2016-04-07 Microsoft Technology Licensing, Llc Syntax structures indicating completion of coded regions
US20160212433A1 (en) * 2015-01-16 2016-07-21 Microsoft Technology Licensing, Llc Encoding/decoding of high chroma resolution details
US20170150186A1 (en) * 2015-11-25 2017-05-25 Qualcomm Incorporated Flexible transform tree structure in video coding
US20180152702A1 (en) * 2011-10-31 2018-05-31 Mitsubishi Electric Corporation Video decoding device and video decoding method
US20190075328A1 (en) * 2016-03-16 2019-03-07 Mediatek Inc. Method and apparatus of video data processing with restricted block size in video coding

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06245080A (ja) * 1993-02-18 1994-09-02 Fujitsu Ltd 画像データ圧縮伸長方式
KR20030010694A (ko) * 2001-04-12 2003-02-05 코닌클리케 필립스 일렉트로닉스 엔.브이. 워터마크 삽입
JP2006513659A (ja) * 2003-01-23 2006-04-20 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ 符号化された信号にウォーターマークを埋め込む方法
WO2007136093A1 (ja) * 2006-05-24 2007-11-29 Panasonic Corporation 画像復号装置
WO2010143226A1 (en) * 2009-06-09 2010-12-16 Thomson Licensing Decoding apparatus, decoding method, and editing apparatus
US10244246B2 (en) * 2012-02-02 2019-03-26 Texas Instruments Incorporated Sub-pictures for pixel rate balancing on multi-core platforms
KR101992779B1 (ko) 2012-05-09 2019-06-26 한국전자통신연구원 실시간 콘텐츠 서비스를 위한 포렌식 마킹 장치 및 방법
JP6080405B2 (ja) * 2012-06-29 2017-02-15 キヤノン株式会社 画像符号化装置、画像符号化方法及びプログラム、画像復号装置、画像復号方法及びプログラム
JP6242139B2 (ja) * 2013-10-02 2017-12-06 ルネサスエレクトロニクス株式会社 動画像復号処理装置およびその動作方法
KR101847899B1 (ko) * 2014-02-12 2018-04-12 주식회사 칩스앤미디어 동영상 처리 방법 및 장치
KR20160094498A (ko) * 2015-01-30 2016-08-10 한국전자통신연구원 고속 병렬처리를 위한 차등 부호화 방법 및 장치


Also Published As

Publication number Publication date
WO2021107651A1 (ko) 2021-06-03
KR102192631B1 (ko) 2020-12-17
JP2022515946A (ja) 2022-02-24


Legal Events

Date Code Title Description
AS Assignment

Owner name: WOOKYOUNG INFORMATION TECHNOLOGY CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:PARK, YUN-HA;KIM, DAE-SOO;JUN, JAE-HYUN;REEL/FRAME:055458/0716

Effective date: 20201229

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION