WO2007114995A1 - Procédé et dispositif de prétraitement - Google Patents
Preprocessing method and device (Procédé et dispositif de prétraitement)
- Publication number
- WO2007114995A1 WO2007114995A1 PCT/US2007/063929 US2007063929W WO2007114995A1 WO 2007114995 A1 WO2007114995 A1 WO 2007114995A1 US 2007063929 W US2007063929 W US 2007063929W WO 2007114995 A1 WO2007114995 A1 WO 2007114995A1
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- frame
- information
- metadata
- frames
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0112—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level one of the standards corresponding to a cinematograph film standard
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/102—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or selection affected or controlled by the adaptive coding
- H04N19/103—Selection of coding mode or of prediction mode
- H04N19/107—Selection of coding mode or of prediction mode between spatial and temporal predictive coding, e.g. picture refresh
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/134—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
- H04N19/146—Data rate or code amount at the encoder output
- H04N19/147—Data rate or code amount at the encoder output according to rate distortion criteria
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/169—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding
- H04N19/17—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
- H04N19/176—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/10—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
- H04N19/189—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding
- H04N19/19—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding using optimisation based on Lagrange multipliers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/503—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
- H04N19/51—Motion estimation or motion compensation
- H04N19/523—Motion estimation or motion compensation with sub-pixel accuracy
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/86—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/85—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
- H04N19/87—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/144—Movement detection
- H04N5/145—Movement estimation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/14—Picture signal circuitry for video frequency region
- H04N5/147—Scene change detection
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N7/00—Television systems
- H04N7/01—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level
- H04N7/0117—Conversion of standards, e.g. involving analogue television standards or digital television standards processed at pixel level involving conversion of the spatial resolution of the incoming video signal
- H04N7/012—Conversion between an interlaced and a progressive signal
Definitions
- a method of processing multimedia data comprises receiving interlaced video frames, converting the interlaced video frames to progressive video, generating metadata associated with the progressive video, and providing the progressive video and at least a portion of the metadata to an encoder for use in encoding the progressive video.
- the method can further include encoding the progressive video using the metadata
- the interlaced video frames comprise NTSC video.
- an apparatus for processing multimedia data includes means for receiving interlaced video frames, means for converting the interlaced video frames to progressive video, means for generating metadata associated with the progressive video, and means for providing the progressive video and at least a portion of the metadata to an encoder for use in encoding the progressive video
- the converting means comprises an inverse teleciner and/or a spatio- temporal deinterlacer
- the generating means is configured to perform shot detection and generate compression information based on the shot detection.
- the generating means is configured to generate bandwidth information
- the generating includes means for resampling to resize a progressive frame.
- Another aspect comprises a machine-readable medium comprising instructions for processing multimedia data that upon execution cause a machine to receive interlaced video frames, convert the interlaced video frames to progressive video, generate metadata associated with the progressive video, and provide the progressive video and at least a portion of the metadata to an encoder for use in encoding the progressive video.
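The receive, convert, generate-metadata, and provide steps described in the aspects above can be outlined as a short sketch. This is illustrative only; all names (`deinterlace`, `preprocess`, the metadata keys) are hypothetical and not taken from the patent.

```python
def deinterlace(fields):
    # Placeholder deinterlacer: weave the two fields of an interlaced frame
    # into one progressive frame (top field on even rows, bottom on odd rows).
    top, bottom = fields
    frame = []
    for t, b in zip(top, bottom):
        frame.append(t)
        frame.append(b)
    return frame

def preprocess(interlaced_frames):
    # Convert interlaced frames to progressive video, then attach metadata.
    # The metadata dict stands in for the bandwidth/complexity/shot
    # information described in the text; its contents here are illustrative.
    progressive = [deinterlace(f) for f in interlaced_frames]
    metadata = {"frame_count": len(progressive)}
    return progressive, metadata  # both would be provided to an encoder
```

In the claimed arrangement, the progressive frames and at least part of the metadata would then be handed to the encoder.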
- the metadata can include bandwidth information, bi-directional motion information, complexity information such as temporal or spatial complexity information based on content, and/or compression information.
BRIEF DESCRIPTION OF THE DRAWINGS
- Figure 1 is a block diagram of a communications system for delivering streaming multimedia data.
- Figure 2 is a block diagram of a digital transmission facility that includes a preprocessor.
- Figure 6 is a flow diagram illustrating a process of inverting telecined video.
- Figure 9 is a flow diagram illustrating how the metrics of Figure 8 are created.
- Figure 11 is a dataflow diagram illustrating a system for generating decision variables.
- Figure 12 is a block diagram depicting variables that are used to evaluate the branch information.
- FIG. 21 illustrates one aspect of an aperture for determining static areas of multimedia data
- Figure 25 is a flow diagram illustrating a method of deinterlacing multimedia data
- FIG. 31 is a flow diagram illustrating a process for shot detection
- Figure 33 is a flow diagram illustrating a process for assigning frame compression schemes to video frames based on shot detection results.
- Figure 34 is a flow diagram illustrating a process for determining abrupt scene changes.
- the preprocessor 202 can also include other appropriate modules that may be used to process the video and metadata, including memory 308 and a communications module 309.
- a software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
- An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
- the processor and the storage medium may reside in an ASIC
- the ASIC may reside in a user terminal
- the processor and the storage medium may reside as discrete components in a user terminal
- process 300 can end.
- Figure 3C is a block diagram illustrating means for processing multimedia data. Shown here, such means are incorporated in a preprocessor 202.
- the preprocessor 202 includes means for receiving video such as module 330
- the preprocessor 202 also includes means for converting interlaced data to progressive video such as module 332.
- Such means can include, for example.
- the preprocessor 202 can use obtained metadata (e.g., obtained from the decoder 20i or from another source) for one or more of the preprocessing operations.
- Metadata can include information relating to, describing, or classifying the content of the multimedia data ("content information"). In particular, the metadata can include a content classification.
- the metadata does not include content information desired for encoding operations
- the preprocessor 202 can be configured to determine content information and use the content information for preprocessing operations and/or provide the content information to other components, e.g., the decoder 203.
- the preprocessor 202 can use such content information to influence GOP partitioning, determine appropriate type of filtering, and/or determine encoding parameters that are communicated to an encoder.
- Film, on the other hand, is shot at 24 frames/sec, each frame consisting of a complete image. This may be referred to as a "progressive" format.
- "progressive" video is converted into an "interlaced" video format via a telecine process.
- the system advantageously determines when video has been telecined and performs an appropriate transform to regenerate the original progressive frames.
- Figure 4 shows the effect of telecining progressive frames into interlaced video. F1, F2, F3, and F4 are progressive images that are the input to a teleciner.
- phase selector 1090 uses the applicable phase to either invert the telecined video or deinterlace it as shown. This is a more explicit statement of the operation of phase detector 404 in Figure 4. In one aspect the processing of Figure 10 is performed by the phase detector 404 of Figure 4. Starting at step 1030, detector 404 determines a plurality of metrics by the process described above with reference to Figure 8, and continues through steps 1083, 1085, 1087, 1089, 1090, and 1091. Flowchart 1000 illustrates a process for estimating the current phase.
- Figure 16 shows how the inverse telecine process proceeds once the pull down phase is determined
- fields 1605 and 1605' are identified as representing the same field of video.
- the two fields are averaged together, and combined with field 1606 to reconstruct frame 1620.
- the reconstructed frame is 1620'.
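The field-averaging reconstruction just described can be sketched as follows. This is a minimal illustration, assuming fields are represented as lists of pixel rows; the function name and representation are hypothetical, not from the patent.

```python
def inverse_telecine_frame(dup_field_a, dup_field_b, other_field):
    # The 3:2 pulldown repeats one film field; average the two copies
    # (as with fields 1605 and 1605' in the text), then weave the result
    # with the complementary field (1606) to rebuild the progressive frame.
    averaged = [[(a + b) / 2 for a, b in zip(ra, rb)]
                for ra, rb in zip(dup_field_a, dup_field_b)]
    frame = []
    for avg_row, other_row in zip(averaged, other_field):
        frame.append(avg_row)    # line from the averaged field
        frame.append(other_row)  # line from the complementary field
    return frame
```

Averaging the duplicated fields rather than discarding one also reduces noise in the recovered frame.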
- modern pixel-based displays (e.g., LCD, DLP, LCOS, plasma, etc.)
- modern pixel-based displays are progressive scan and display progressively scanned video sources (whereas many older video devices use the older interlaced scan technology).
- deinterlacing algorithms are described in "Scan rate up-conversion using adaptive weighted median filtering," P. Haavisto, J. Juhola, and Y. Neuvo, Signal Processing of HDTV II, pp. 703-710, 1990, and "Deinterlacing of HDTV images for Multimedia Applications," R. Simonetti, S. Carrato, G. Ramponi, and A.
- motion estimation and compensation uses luma (intensity or brightness of the pixels) and chroma data (color information of the pixels) to improve deinterlacing of regions of the selected frame where the brightness level is almost uniform but the color differs.
- a denoising filter can be used to increase the accuracy of motion estimation.
- the denoising filter can be applied to Wmed deinterlaced provisional frames to remove alias artifacts generated by Wmed filtering.
- the deinterlacing methods and systems described below produce good deinterlacing results and have a relatively low computational complexity that allow fast running deinterlacing implementations, making such implementations suitable for a wide variety of deinterlacing applications, including systems that are used to provide data to cell phones, computers and other types of electronic or communication devices utilizing a display
- FIG. 25 illustrates a process 2500 for processing multimedia data to produce a sequence of progressive frames from a sequence of interlaced frames.
- a progressive frame is produced by the deinterlacer 405 illustrated in Figure 4.
- process 2500 (process "A") generates spatio-temporal information for a selected frame. Spatio-temporal information can include information used to categorize the motion levels of the multimedia data and generate a motion intensity map, and includes the Wmed provisional deinterlaced frame and information used to generate the frame (e.g., information used in Equations 26-33).
- This process can be performed by the Wmed filter 2054, as illustrated in the upper portion of Figure 20, and its associated processing, which is described in further detail below.
- L is the Luminance of a pixel E located in the Current Field
- Threshold T1 can be predetermined and set at a particular value, determined by a process other than deinterlacing and provided (for example, as metadata for the video being deinterlaced), or it can be dynamically determined during deinterlacing.
- the threshold T2 can also be predetermined and set at a particular value, determined by a process other than deinterlacing and provided (for example, as metadata for the video being deinterlaced), or it can be dynamically determined during deinterlacing.
- the Wmed filtered provisional deinterlaced frame is provided for further processing in conjunction with motion estimation and motion compensation processing, as illustrated in the lower portion of Figure 20.
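The two thresholds above partition pixel neighbourhoods by motion level. A minimal sketch of such a classification, with illustrative names and the assumption that "diff" is the spatio-temporal difference being thresholded:

```python
def motion_category(diff, t1, t2):
    # Classify by comparing a spatio-temporal difference against the two
    # thresholds: below T1 -> static, between T1 and T2 -> slow motion,
    # at or above T2 -> fast motion. T1/T2 may be preset, supplied as
    # metadata, or determined dynamically, as the text notes.
    if diff < t1:
        return "static"
    if diff < t2:
        return "slow"
    return "fast"
```

The resulting per-pixel categories are the kind of information a motion intensity map would aggregate.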
- the bi-directional ME/MC 2068 can use the sum of squared errors (SSE) to measure the similarity between a predicting block and a predicted block for the Wmed current frame 2060 relative to the Wmed next frame 2058 and the deinterlaced current frame 2070. The generation of the motion compensated current frame 2066 then uses pixel information from the most similar matching blocks to fill in the missing data between the original pixel lines.
- the bi-directional ME/MC 2068 biases or gives more weight to the pixel information from the deinterlaced previous frame 2070 because it was generated by motion compensation information and Wmed information, while the Wmed next frame 2058 is only deinterlaced by spatio-temporal filtering.
- a metric can be used that includes the contribution of pixel values of one or more luma groups of pixels to measure the similarity between a predicting block and a predicted block for the Wmed current frame 2060 relative to the Wmed next frame 2058.
- a combiner 2062 typically merges the Wmed Current Frame 2060 and the MC Current Frame 2066 by using at least a portion of the Wmed Current Frame 2060 and the MC Current Frame 2066 to generate a Current Deinterlaced Frame 2064.
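The SSE similarity measure used for block matching above can be sketched as follows; `sse` and `best_match` are illustrative names, and blocks are assumed to be small lists of pixel rows.

```python
def sse(block_a, block_b):
    # Sum of squared errors between two equally sized pixel blocks:
    # smaller values mean the blocks are more similar.
    return sum((a - b) ** 2
               for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def best_match(target, candidates):
    # Pick the candidate block most similar to the target under SSE,
    # as motion estimation does when searching for a matching block.
    return min(range(len(candidates)), key=lambda i: sse(target, candidates[i]))
```

Pixel data from the winning block would then fill in the missing lines, as described in the text.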
- if any one of the four luma pixels has a fast motion level, the chroma motion level shall be fast-motion; otherwise, if any one of the four luma pixels has a slow motion level, the chroma motion level shall be slow-motion; otherwise the chroma motion level is static.
- the conservative approach may not achieve the highest PSNR, but it avoids the risk of using INTER prediction wherever there is ambiguity in chroma motion level. Multimedia data sequences were deinterlaced using the described Wmed algorithm alone and the combined Wmed and motion compensated algorithm described herein. The same multimedia data sequences were also deinterlaced using a pixel blending (or averaging) algorithm and a "no-deinterlacing" case where the fields were merely combined without any interpolation or blending. The resulting frames were analyzed to determine the PSNR, which is shown in the following table.
- the safe title area is defined as the area where "all the useful information can be confined to ensure visibility on the majority of home television receivers." For example, as illustrated in Figure 43, the safe action area 4310 occupies the center 90% of the screen, giving a 5% border all around. The safe title area 4305 occupies the center 80% of the screen, giving a 10% border.
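The chroma motion-level rule stated above (any fast luma pixel wins, then any slow one, else static) is simple to express directly. A minimal sketch, assuming the four luma motion levels are given as strings:

```python
def chroma_motion_level(luma_levels):
    # luma_levels: motion levels of the four luma pixels associated with
    # one chroma sample. Priority order mirrors the rule in the text:
    # any "fast" -> fast-motion; else any "slow" -> slow-motion; else static.
    if "fast" in luma_levels:
        return "fast"
    if "slow" in luma_levels:
        return "slow"
    return "static"
```

This is the conservative choice the text describes: chroma is never treated as more static than its most-moving luma neighbour.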
- Bandwidth Map Generation
- Human visual quality V can be a function of both encoding complexity C and allocated bits B (also referred to as bandwidth).
- Figure 29 is a graph illustrating this relationship. It should be noted that the encoding complexity metric C considers spatial and temporal frequencies from the human vision point of view. For distortions more sensitive to human eyes, the complexity value is correspondingly higher. It can typically be assumed that V is monotonically decreasing in C, and monotonically increasing in B.
- FIG. 31 illustrates an example of a process for obtaining metrics of the video. Figure 31 illustrates certain steps that occur in block 3042 of Figure 30.
- process A obtains or determines bidirectional motion estimation and compensation information of the video.
- the motion compensator 2832 of Figure 28 can be configured to perform bi-directional motion estimation on the frames and determine motion compensation information that can be used for subsequent shot classification.
- Process A then proceeds to block 3154, where it generates luminance information including a luminance difference histogram for a current or selected frame and one or more adjacent frames. Lastly, process A continues to block 3156, where a metric is calculated that is indicative of the shot contained in the frame.
- Reconstruction of a P-frame can be started after the reference frame (or a portion of a picture or frame that is being referenced) is reconstructed.
- the encoded quantized coefficients are dequantized 4050 and then 2D Inverse DCT, or IDCT, 4052 is performed resulting in decoded or reconstructed residual error 4054.
- Encoded motion vector 4040 is decoded and used to locate the already reconstructed best matching macroblock 4056 in the already reconstructed reference picture 4032. Reconstructed residual error 4054 is then added to reconstructed best matching macroblock 4056 to form reconstructed macroblock 4058.
- Reconstructed macroblock 4058 can be stored in memory, displayed independently or in a picture with other reconstructed macroblocks, or processed further for image enhancement.
- SADP and SADN are the sum of absolute differences of the forward and the backward difference metric, respectively.
- the denominator contains a small positive number ε to prevent the "divide-by-zero" error.
- the numerator also contains an ε to balance the effect of the unity in the denominator. For example, if the previous frame, the current frame, and the next frame are identical, motion search should yield SADP = SADN = 0. In this case, the above calculation generates 1 instead of 0 or infinity.
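Consistent with the description above, a ratio of the form (ε + SADP)/(ε + SADN) has exactly the stated behaviour; the sketch below assumes that form, since the excerpt does not reproduce the equation itself.

```python
def motion_ratio(sad_p, sad_n, eps=1.0):
    # The eps in the denominator prevents divide-by-zero; the matching eps
    # in the numerator balances it, so identical frames (SADP = SADN = 0)
    # yield exactly 1 rather than 0 or infinity.
    return (eps + sad_p) / (eps + sad_n)
```

For genuinely different frames the ratio reflects the asymmetry between forward and backward prediction error.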
- a luminance histogram can be calculated for every frame.
- the multimedia images have a luminance depth (e.g., number of "bins") of eight bits.
- the luminance depth used for calculating the luminance histogram according to some aspects can be set to 16 to obtain the histogram.
- the luminance depth can be set to an appropriate number which may depend upon the type of data being processed, the computational power available, or other predetermined criteria.
- the luminance depth can be set dynamically based on a calculated or received metric, such as the content of the data
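Bucketing 8-bit luminance values into 16 bins, as described above, can be sketched as follows; the function name and frame representation (lists of pixel rows) are illustrative.

```python
def luma_histogram(frame, bins=16, depth=8):
    # Bucket luminance values of an 8-bit frame into 16 bins:
    # each bin covers 256 / 16 = 16 consecutive luminance values.
    hist = [0] * bins
    step = (1 << depth) // bins
    for row in frame:
        for y in row:
            hist[y // step] += 1
    return hist
```

Differencing such histograms between adjacent frames yields the luminance difference histogram used by the shot-detection metric.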
- Figure 34 is a flow diagram illustrating a process of determining abrupt scene changes. Figure 34 further elaborates certain steps that can occur in some aspects of block 3262 of Figure 32. At block 3482, the process checks if the frame difference metric D meets the criterion shown in Equation 51.
- FIG. 35 further illustrates further details of some aspects that can occur in block 3264 of Figure 32.
- process E determines if the frame is part of a series of frames depicting a slow scene change.
- Process E determines that the current frame is a cross-fading or other slow scene change if the frame difference metric D is less than the first threshold value T1 and greater than or equal to a second threshold value T2, as illustrated in Equation 52:
- process F returns
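The two-threshold test described above separates abrupt changes, slow changes, and ordinary frames. A minimal sketch, assuming D is the frame difference metric and T1 > T2:

```python
def classify_frame(d, t1, t2):
    # D >= T1       -> abrupt scene change (Equation 51's criterion)
    # T2 <= D < T1  -> cross-fade or other slow scene change (Equation 52)
    # D < T2        -> no scene change
    if d >= t1:
        return "abrupt"
    if d >= t2:
        return "slow"
    return "none"
```

The resulting label can then drive the choice of frame compression scheme, as in Figure 33.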
- a "B" frame (B stands for bi-directional) can use the previous and next I or P pictures either individually or simultaneously as reference.
- the number of bits used to encode an I-frame on average exceeds the number of bits used to encode a P-frame; likewise, the number of bits used to encode a P-frame on average exceeds that of a B-frame. A skipped frame, if it is used, may use no bits for its representation.
- the GOP partitioner 412 operates by assigning picture types to frames as they are received.
- the picture type indicates the method of prediction that may be used to code each block
- Each functional component of flowchart 4100, including the preprocessor 4135, the bidirectional motion compensator 4133, the forward and backward difference metric modules 4136 and 4137, the histogram difference module 4141, and the frame difference metric combiner 4143, may be realized as a standalone component, incorporated as hardware, firmware, or middleware in a component of another device, or be implemented in microcode or software that is executed on the processor, or a combination thereof.
- the program code or code segments that perform the desired tasks may be stored in a machine readable medium such as a storage medium.
- α is a scaler
- SADP is the SAD with forward motion compensation
- MVP is the sum of lengths measured in pixels of the motion vectors from the forward motion compensation
- s and m are two threshold numbers that render the frame encoding complexity indicator to zero if SADP is lower than s or MVP is lower than m.
- M* would be used in place of the current frame difference in flowchart 4200 of Figure 41. As can be seen, M* is different from M only if the forward motion compensation shows a low level of movement. In this case, M* is smaller than M.
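The excerpt does not reproduce the exact formula for M*, so the sketch below implements only the stated gating property: the complexity indicator is zeroed when SADP falls below s or MVP falls below m, which makes M* no larger than M. All names are taken from the surrounding definitions; the multiplicative form is an assumption.

```python
def gated_frame_difference(m, sad_p, mv_p, s, m_thresh):
    # Gate the frame difference metric M by the forward motion-compensation
    # results: low forward SAD or short total motion-vector length means
    # little movement, so the complexity contribution is suppressed and
    # M* <= M, matching the behaviour described in the text.
    indicator = 0 if (sad_p < s or mv_p < m_thresh) else 1
    return m * indicator
```

When motion is above both thresholds, M* simply equals M and the flowchart proceeds unchanged.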
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computer Graphics (AREA)
- Compression Or Coding Systems Of Tv Signals (AREA)
- Television Systems (AREA)
- Studio Devices (AREA)
- Microscopes, Condenser (AREA)
Abstract
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2009504372A JP2009532741A (ja) | 2006-04-03 | 2007-03-13 | 顕微鏡スライド自動読み取りシステム |
KR1020127017181A KR101377370B1 (ko) | 2006-04-03 | 2007-03-13 | 전처리기 방법 및 장치 |
KR1020117026505A KR101373896B1 (ko) | 2006-04-03 | 2007-03-13 | 전처리기 방법 및 장치 |
KR1020137034600A KR20140010190A (ko) | 2006-04-03 | 2007-03-13 | 전처리기 방법 및 장치 |
KR1020107022928A KR101127432B1 (ko) | 2006-04-03 | 2007-03-13 | 전처리기 방법 및 장치 |
EP07758479A EP2002650A1 (fr) | 2006-04-03 | 2007-03-13 | Procede et dispositif de pretraitement |
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US78904806P | 2006-04-03 | 2006-04-03 | |
US60/789,048 | 2006-04-03 | ||
US78937706P | 2006-04-04 | 2006-04-04 | |
US78926606P | 2006-04-04 | 2006-04-04 | |
US60/789,377 | 2006-04-04 | ||
US60/789,266 | 2006-04-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007114995A1 true WO2007114995A1 (fr) | 2007-10-11 |
Family
ID=38121947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2007/063929 WO2007114995A1 (fr) | 2006-04-03 | 2007-03-13 | Procédé et dispositif de prétraitement |
Country Status (7)
Country | Link |
---|---|
EP (1) | EP2002650A1 (fr) |
JP (3) | JP2009532741A (fr) |
KR (5) | KR101377370B1 (fr) |
CN (1) | CN104159060B (fr) |
AR (1) | AR060254A1 (fr) |
TW (1) | TW200803504A (fr) |
WO (1) | WO2007114995A1 (fr) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012100117A1 (fr) * | 2011-01-21 | 2012-07-26 | Thomson Licensing | Système et procédé pour transcodage distant amélioré à l'aide d'un profilage de contenu |
US20130266080A1 (en) * | 2011-10-01 | 2013-10-10 | Ning Lu | Systems, methods and computer program products for integrated post-processing and pre-processing in video transcoding |
US10847116B2 (en) | 2009-11-30 | 2020-11-24 | Semiconductor Energy Laboratory Co., Ltd. | Reducing pixel refresh rate for still images using oxide transistors |
CN114125346A (zh) * | 2021-12-24 | 2022-03-01 | 成都索贝数码科技股份有限公司 | 视频转换方法及装置 |
CN114363638A (zh) * | 2021-12-08 | 2022-04-15 | 慧之安信息技术股份有限公司 | 基于h.265熵编码二值化的视频加密方法 |
EP4030341A4 (fr) * | 2020-05-11 | 2023-01-25 | Tencent Technology (Shenzhen) Company Limited | Procédé de reconnaissance d'image, procédé de lecture de vidéo, dispositif associé et support |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI396975B (zh) * | 2008-08-06 | 2013-05-21 | Realtek Semiconductor Corp | 可調適緩衝裝置及其方法 |
TWI392335B (zh) * | 2009-08-14 | 2013-04-01 | Sunplus Technology Co Ltd | 在縮放器中去除一影像訊號之環形雜訊之濾波系統及方法 |
KR101906946B1 (ko) | 2011-12-02 | 2018-10-12 | 삼성전자주식회사 | 고밀도 반도체 메모리 장치 |
US10136147B2 (en) | 2014-06-11 | 2018-11-20 | Dolby Laboratories Licensing Corporation | Efficient transcoding for backward-compatible wide dynamic range codec |
JP6883218B2 (ja) * | 2016-03-07 | 2021-06-09 | ソニーグループ株式会社 | 符号化装置および符号化方法 |
JP7228917B2 (ja) * | 2018-01-02 | 2023-02-27 | キングス カレッジ ロンドン | 局在化顕微鏡法のための方法及びシステム |
CN112949449B (zh) * | 2021-02-25 | 2024-04-19 | 北京达佳互联信息技术有限公司 | 交错判断模型训练方法及装置和交错图像确定方法及装置 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5619272A (en) | 1992-12-30 | 1997-04-08 | Thomson-Csf | Process for deinterlacing the frames of a moving image sequence |
US5864369A (en) | 1997-06-16 | 1999-01-26 | Ati International Srl | Method and apparatus for providing interlaced video on a progressive display |
EP1005227A2 (fr) | 1998-11-25 | 2000-05-31 | Sharp Kabushiki Kaisha | Conversion entrelacé-progressif avec moyennage de trames et à faible retard d'un signal vidéo issu d'une source cinématographique |
WO2001069936A2 (fr) | 2000-03-13 | 2001-09-20 | Sony Corporation | Methode et dispositif permettant de generer des metadonnees compactes sur des indices de transcodage |
EP1164792A2 (fr) | 2000-06-13 | 2001-12-19 | Samsung Electronics Co., Ltd. | Convertisseur de format utilisant des vecteurs de mouvement bidirectionnels et méthode correspondante |
US20020196362A1 (en) | 2001-06-11 | 2002-12-26 | Samsung Electronics Co., Ltd. | Apparatus and method for adaptive motion compensated de-interlacing of video data |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1995027362A2 (fr) * | 1994-04-05 | 1995-10-12 | Philips Electronics Nv | Conversion of an interlaced video signal into a progressively scanned signal |
JP2832927B2 (ja) * | 1994-10-31 | 1998-12-09 | Victor Company of Japan, Ltd. | Scanning-line interpolation apparatus and motion-vector detection apparatus for scanning-line interpolation |
JPH09284770A (ja) * | 1996-04-13 | 1997-10-31 | Sony Corp | Image encoding apparatus and method |
JP3649370B2 (ja) * | 1998-02-25 | 2005-05-18 | Victor Company of Japan, Ltd. | Motion-compensated coding apparatus and motion-compensated coding method |
JP3588564B2 (ja) * | 1999-03-31 | 2004-11-10 | Toshiba Corp | Video data recording apparatus |
JP2001204026A (ja) * | 2000-01-21 | 2001-07-27 | Sony Corp | Image information conversion apparatus and method |
US6970513B1 (en) * | 2001-06-05 | 2005-11-29 | At&T Corp. | System for content adaptive video decoding |
US6784942B2 (en) * | 2001-10-05 | 2004-08-31 | Genesis Microchip, Inc. | Motion adaptive de-interlacing method and apparatus |
JP4016646B2 (ja) * | 2001-11-30 | 2007-12-05 | Victor Company of Japan, Ltd. | Progressive-scan conversion apparatus and progressive-scan conversion method |
KR100446083B1 (ko) * | 2002-01-02 | 2004-08-30 | Samsung Electronics Co., Ltd. | Apparatus and method for motion estimation and mode decision |
KR100850706B1 (ко) * | 2002-05-22 | 2008-08-06 | Samsung Electronics Co., Ltd. | Adaptive video encoding and decoding method and apparatus therefor |
KR20060011281A (ко) * | 2004-07-30 | 2006-02-03 | Han Jong-Ki | Resolution conversion apparatus and method for a transcoder |
JP2006074684A (ja) * | 2004-09-06 | 2006-03-16 | Matsushita Electric Ind Co Ltd | Image processing method and apparatus |
- 2007
- 2007-03-13 EP EP07758479A patent/EP2002650A1/fr not_active Withdrawn
- 2007-03-13 JP JP2009504372A patent/JP2009532741A/ja not_active Withdrawn
- 2007-03-13 KR KR1020127017181A patent/KR101377370B1/ko not_active IP Right Cessation
- 2007-03-13 KR KR1020107022928A patent/KR101127432B1/ko not_active IP Right Cessation
- 2007-03-13 KR KR1020137034600A patent/KR20140010190A/ko not_active Application Discontinuation
- 2007-03-13 KR KR1020117026505A patent/KR101373896B1/ko not_active IP Right Cessation
- 2007-03-13 CN CN201410438251.8A patent/CN104159060B/zh not_active Expired - Fee Related
- 2007-03-13 WO PCT/US2007/063929 patent/WO2007114995A1/fr active Application Filing
- 2007-03-13 KR KR1020087026885A patent/KR101019010B1/ko not_active IP Right Cessation
- 2007-03-26 TW TW096110382A patent/TW200803504A/zh unknown
- 2007-03-30 AR ARP070101371A patent/AR060254A1/es unknown
- 2012
- 2012-07-23 JP JP2012162714A patent/JP5897419B2/ja not_active Expired - Fee Related
- 2014
- 2014-12-25 JP JP2014263408A patent/JP6352173B2/ja not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP2002650A1 * |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10847116B2 (en) | 2009-11-30 | 2020-11-24 | Semiconductor Energy Laboratory Co., Ltd. | Reducing pixel refresh rate for still images using oxide transistors |
US11636825B2 (en) | 2009-11-30 | 2023-04-25 | Semiconductor Energy Laboratory Co., Ltd. | Liquid crystal display device, method for driving the same, and electronic device including the same |
US11282477B2 (en) | 2009-11-30 | 2022-03-22 | Semiconductor Energy Laboratory Co., Ltd. | Liquid crystal display device, method for driving the same, and electronic device including the same |
US9681091B2 (en) | 2011-01-21 | 2017-06-13 | Thomson Licensing | System and method for enhanced remote transcoding using content profiling |
KR102013461B1 (ko) | 2011-01-21 | 2019-08-22 | InterDigital Madison Patent Holdings | System and method for enhanced remote transcoding using content profiling |
WO2012100117A1 (fr) * | 2011-01-21 | 2012-07-26 | Thomson Licensing | System and method for enhanced remote transcoding using content profiling |
KR20140005261A (ко) * | 2011-01-21 | 2014-01-14 | Thomson Licensing | System and method for enhanced remote transcoding using content profiling |
TWI637627B (zh) * | 2011-10-01 | 2018-10-01 | Intel Corporation | Systems, methods and computer program products for integrated post-processing and pre-processing in video transcoding |
US20130266080A1 (en) * | 2011-10-01 | 2013-10-10 | Ning Lu | Systems, methods and computer program products for integrated post-processing and pre-processing in video transcoding |
EP4030341A4 (fr) * | 2020-05-11 | 2023-01-25 | Tencent Technology (Shenzhen) Company Limited | Image recognition method, video playback method, related device, and medium |
CN114363638A (zh) * | 2021-12-08 | 2022-04-15 | 慧之安信息技术股份有限公司 | Video encryption method based on H.265 entropy-coding binarization |
CN114125346A (zh) * | 2021-12-24 | 2022-03-01 | Chengdu Sobey Digital Technology Co., Ltd. | Video conversion method and apparatus |
CN114125346B (zh) * | 2021-12-24 | 2023-08-29 | Chengdu Sobey Digital Technology Co., Ltd. | Video conversion method and apparatus |
Also Published As
Publication number | Publication date |
---|---|
KR20140010190A (ko) | 2014-01-23 |
EP2002650A1 (fr) | 2008-12-17 |
JP2015109662A (ja) | 2015-06-11 |
JP5897419B2 (ja) | 2016-03-30 |
JP2013031171A (ja) | 2013-02-07 |
TW200803504A (en) | 2008-01-01 |
KR101127432B1 (ko) | 2012-07-04 |
JP6352173B2 (ja) | 2018-07-04 |
KR20090006159A (ko) | 2009-01-14 |
KR20110128366A (ko) | 2011-11-29 |
CN104159060A (zh) | 2014-11-19 |
KR20120091423A (ko) | 2012-08-17 |
KR101019010B1 (ko) | 2011-03-04 |
CN104159060B (zh) | 2017-10-24 |
JP2009532741A (ja) | 2009-09-10 |
AR060254A1 (es) | 2008-06-04 |
KR101377370B1 (ko) | 2014-03-26 |
KR101373896B1 (ko) | 2014-03-12 |
KR20100126506A (ko) | 2010-12-01 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9131164B2 (en) | Preprocessor method and apparatus | |
WO2007114995A1 (fr) | Preprocessor method and apparatus | |
US8750372B2 (en) | Treating video information | |
US8238421B2 (en) | Apparatus and method for estimating compression modes for H.264 codings | |
US6862372B2 (en) | System for and method of sharpness enhancement using coding information and local spatial features | |
JP2009532741A6 (ja) | Preprocessor method and apparatus | |
EP1938580A1 (fr) | Method and apparatus for shot detection in video streaming | |
EP1938590A2 (fr) | Method and apparatus for spatio-temporal deinterlacing aided by motion compensation for field-based video | |
EP1980115A2 (fr) | Method and apparatus for determining an encoding method based on a distortion value related to error concealment | |
WO2007047755A1 (fr) | Adaptive GOP structure in video streaming | |
US7031388B2 (en) | System for and method of sharpness enhancement for coded digital video | |
JP2010232734A (ja) | Image encoding apparatus and image encoding method | |
Segall et al. | Super-resolution from compressed video | |
CN101411183A (zh) | Preprocessor method and apparatus | |
Jo et al. | Hybrid error concealments based on block content | |
Manimaraboopathy et al. | Frame Rate Up-Conversion using Trilateral Filtering For Video Processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 07758479 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009504372 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1995/MUMNP/2008 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 200780010753.9 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2007758479 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020087026885 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: KR Ref document number: 1020107022928 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020117026505 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020127017181 Country of ref document: KR |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020137034600 Country of ref document: KR |