KR102050423B1 - method for playing video - Google Patents

method for playing video Download PDF

Info

Publication number
KR102050423B1
KR102050423B1 (application KR1020130048519A)
Authority
KR
South Korea
Prior art keywords
image
information
image data
complexity
quality
Prior art date
Application number
KR1020130048519A
Other languages
Korean (ko)
Other versions
KR20140129777A (en)
Inventor
이현규
Original Assignee
한화테크윈 주식회사
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 한화테크윈 주식회사 filed Critical 한화테크윈 주식회사
Priority to KR1020130048519A priority Critical patent/KR102050423B1/en
Publication of KR20140129777A publication Critical patent/KR20140129777A/en
Application granted granted Critical
Publication of KR102050423B1 publication Critical patent/KR102050423B1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/136Incoming video signal characteristics or properties
    • H04N19/14Coding unit complexity, e.g. amount of activity or edge presence estimation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to an image monitoring system and an image reproduction method thereof.
The image reproducing method of the present invention comprises the steps of determining the presence or absence of complexity and quality information of the image data in units of blocks; and selecting a scaling and color space conversion method of the image data on a block basis based on the complexity and quality information.

Description

How to play video {method for playing video}

The present invention relates to an image monitoring system and an image reproduction method thereof.

Unlike conventional video, which is viewed on a single screen, CCTV users watch multiple CCTV cameras at the same time. Network CCTV cameras use video compression techniques to reduce the amount of data transmitted over the network. Before an image appears on the monitor, a large amount of data must be compressed, transmitted, and decompressed, which complicates the computation process and places a high load on the overall system. Compression is handled separately by the CPU of each camera, so that load is naturally distributed; decompression, however, must handle many images on many monitors at once, so the overload is dealt with either by dividing the work across multiple PC systems or by displaying the images on multiple monitors from a single high-performance PC system. Various techniques have therefore been introduced to reduce the load of image decompression. In addition, recent systems have moved from low-resolution (1024x768) monitors to high-resolution (1920x1080 or higher) monitors, and from one monitor to four or more, so the total display resolution has grown rapidly. Network CCTV cameras themselves have likewise moved from low resolution (640x480) to high resolution (1920x1080 or higher), and from 30 to 60 video frames per second. Despite advances in technology to reduce the display load, displaying more images on wider monitors at higher frame rates has made the display path a bottleneck for the overall system.

KR 2007-0016976

The present invention is to provide a method for improving the performance of the monitoring system by reducing the load of updating the image to the monitor after image decompression.

An image reproducing method according to an exemplary embodiment of the present invention includes determining complexity and quality information of image data in units of blocks; And selecting a scaling and color space conversion method of the image data on a block basis based on the complexity and quality information.

The method may include determining whether there is change information of the image data in units of blocks; And determining whether to render the image data in units of blocks based on the change information.

The complexity, quality, and change information may be generated in one process of compression of the image data, decompression of the image data, and analysis of the decompressed image data.

The present invention builds the complexity, quality, and change information of an image or video screen for each image block, partially updates (renders) the image using the change information of the screen, and renders the blocks in different ways using the complexity and quality information. This reduces the rendering load and improves the performance of the monitoring system.

1 is a block diagram schematically illustrating an image monitoring system according to an embodiment of the present invention.
2 is a diagram illustrating an example of a method of generating an image frame.
3 is a diagram illustrating an example of a method of extracting motion information.
4 is a diagram illustrating an example of a method of blocking an image for image compression.
5A and 5B illustrate an example of a screen rendering method.
6 to 8 are flowcharts schematically illustrating an image rendering method for image reproduction according to an embodiment of the present invention.

Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings.

1 is a block diagram schematically illustrating an image monitoring system according to an exemplary embodiment of the present invention.

Referring to FIG. 1, the image monitoring system 10 of the present invention includes an image processing device 20 and a display device 80. The image processing apparatus 20 includes a decoder 30, a renderer 40, a D / A converter 50, and a memory 60.

The decoder 30 decompresses the compressed video and restores the original video. The reconstructed image data is stored in the memory 60. In the embodiment of the present invention, the decompression method is not particularly limited, and a corresponding decompression method may be applied according to various compression methods.

Video compression and decompression methods for computer video playback typically use the MPEG standard or a variant of it; even across variants, the key data-saving methods, based on lossy compression, are much the same. First, in the MPEG approach, the original RGB image is subsampled into the YUV (4:2:0) color format, which reduces the resolution of the color-difference components and roughly halves the amount of data. In addition, a low-pass filter removes high-frequency components at a level imperceptible to humans, further discarding data. In methods that use a temporal model, as shown in FIG. 2, images that change over time are cross-referenced so that only the information of the changed parts is kept, while unchanged parts reuse information from an intra frame or the previous screen. As shown in FIG. 3, only the motion of the parts that changed relative to the previous or reference image is detected, and those parts are described by coordinate (motion) information rather than image information; on decompression, the same coordinate information is used to reconstruct them. Depending on the compression scheme, various block sizes such as 16x16, 16x8, 8x16, 8x8, 8x4, 4x8, and 4x4 may be used for block detection, as shown in FIG. 4. Each block then undergoes a further lossy compression step through DCT transform and quantization. The image information compressed and transmitted per block in this manner is restored to the original image through the decompression process of the decoder 30.
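As a rough illustration of the data savings from the 4:2:0 subsampling described above, the following sketch compares raw frame sizes. An 8-bit depth is assumed, and the function names are illustrative, not from the patent:

```python
def frame_bytes_rgb(w, h):
    # 24-bit RGB: 3 bytes per pixel
    return w * h * 3

def frame_bytes_yuv420(w, h):
    # YUV 4:2:0: full-resolution luma plus quarter-resolution U and V planes
    return w * h + 2 * (w // 2) * (h // 2)

rgb = frame_bytes_rgb(1920, 1080)     # 6,220,800 bytes
yuv = frame_bytes_yuv420(1920, 1080)  # 3,110,400 bytes: half the data
```

This halving happens before any DCT, quantization, or entropy coding, which is why subsampling is listed first among the data-saving steps.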

The renderer 40 reads the restored image from the memory 60 and renders it for display on the display device 80. The renderer 40 performs a scaling function for adjusting the screen size and a color space conversion function for converting the color display format. The scaled and color-space-converted image data is stored (copied) in the memory 60 in a form suitable for output.

In general, conversion from the YUV color space to the RGB color space is widely used. The color space conversion has both a real-valued (floating-point) formula and a fast implementation converted to integer arithmetic. For scaling, an interpolation method such as Nearest Neighbor, Bi-Linear, Bi-Cubic, Lanczos, or Gauss may be used; the present invention is not limited to these, and various scaling interpolation methods may be applied. Unlike the color space conversion, which can be omitted if the whole pipeline is unified on YUV, scaling is necessarily performed unless the image size and the monitor size match 1:1.
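The two forms of YUV-to-RGB conversion mentioned above — a real-valued formula and a fast integer variant — can be sketched as follows. The BT.601-style full-range coefficients are one common choice, an assumption here rather than something the patent specifies:

```python
def yuv_to_rgb_float(y, u, v):
    # Real-valued (floating-point) form of the conversion
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))
    return clamp(r), clamp(g), clamp(b)

def yuv_to_rgb_int(y, u, v):
    # Fast fixed-point variant: coefficients pre-scaled by 256,
    # result shifted back down, avoiding floating-point per pixel
    d, e = u - 128, v - 128
    clamp = lambda x: max(0, min(255, x))
    r = clamp((256 * y + 359 * e + 128) >> 8)
    g = clamp((256 * y - 88 * d - 183 * e + 128) >> 8)
    b = clamp((256 * y + 454 * d + 128) >> 8)
    return r, g, b
```

The integer form trades a little precision for speed, which is exactly the low-quality/high-quality split the renderer exploits later in this document.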

As shown in FIGS. 5A and 5B, there are a progressive scan method, which updates the entire screen (FIG. 5A), and an interlaced scan method, which reduces load by alternately updating odd and even lines (FIG. 5B). Conventional rendering always incurs the same rendering load in proportion to the resolution and the number of frames, regardless of the quality and complexity of the video. A motionless image filled with a single color and an image of a complex city center differ in compression load and image complexity, yet under conventional rendering techniques they carry the same rendering load. Likewise, even for footage of the same complex city center, an image degraded by a high lossy compression ratio and an image whose complexity is preserved through high-quality compression are rendered with the same load. In compression and decompression, both the amount of data and the computational load scale with complexity and quality, from simple screens to complex ones and from low-quality compression to high-quality compression. Conventional rendering, however, operates independently of this internal quality of the image. The scan method, too, had to either use progressive scan regardless of the internal quality of the image, or use interlaced scan, which halves the rendering load at the cost of quality degradation in moving parts.

The video decompression process of the decoder 30 and the scaling and color space conversion processes of the renderer 40 consume the most performance, so their load should be reduced in an appropriate manner. In general, to address performance issues, the scaler of the renderer 40 may use a low-quality, performance-first method such as Nearest Neighbor, or various variants may be applied to remove the color space conversion load. The essential problem, however, is that the rendering module consumes a fixed load for scaling and color space conversion regardless of the actual quality of the video. As the number of monitors increases, so does this fixed load. The rendering load is consumed equally even when the user uses highly compressed, low-quality video to accommodate a low-end system. In video from fixed-mount cameras such as CCTV cameras, motion does not appear across the whole screen; the region of the image in which motion occurs is only a part of it. This is the opposite of video with varied changes over the entire screen, such as camcorder footage or a movie. Likewise, the region of high complexity within the image is limited, the opposite of complicated urban photography or a spectacular commercial video. Conventional rendering methods, however, take no advantage of this image structure, nor do they exploit the loss of image information inevitably introduced during the compression process.

The renderer 40 according to an embodiment of the present invention may reduce the rendering load by exploiting the image-change and lossy-compression characteristics of a CCTV camera, using screen change information and complexity and quality information for each block of the image. The screen change information and the complexity and quality information of the image may be obtained while compressing the image, while decompressing it, or while analyzing the compressed data.

Complexity and quality information of an image may be obtained in various ways, for example as follows. During image compression, when low-pass filtering is performed to remove high-frequency components, the high-frequency level may be measured for each image block and the degree of complexity recorded in a data structure. Alternatively, during compression, the high-frequency level may be measured when the DCT is performed for each block. Alternatively, during compression, the quality may be recorded according to the loss level introduced by the quantization of each block. Alternatively, data produced by the MPEG compression method may be parsed, and the DCT information and quantization parameters of each block analyzed, to build the complexity and quality information into a data structure. Alternatively, during decompression, the complexity and quality information of the image may be built into a data structure with reference to the DCT information and the quantization parameter of each block.
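A minimal sketch of how per-block complexity and quality flags might be built from values a codec could expose — high-frequency energy per block and the quantization parameter. The thresholds and names are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch: hf_energy and qp are 2-D grids, one value per block.
# A block is "complex" when its high-frequency DCT energy is large, and
# "high quality" when its quantizer step (QP) is small (little loss).
def build_block_info(hf_energy, qp, hf_threshold=1000.0, qp_threshold=30):
    complexity = [[1 if e >= hf_threshold else 0 for e in row]
                  for row in hf_energy]
    quality = [[1 if q <= qp_threshold else 0 for q in row]
               for row in qp]
    return complexity, quality
```

Either map can then be serialized into one of the agreed data structures described below.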

For the screen change information, for example, during image compression the difference image between the previous image and the current image may be computed, the image divided into blocks in the manner shown in FIG. 4, and whether each block changed recorded in an agreed (predefined) data structure. Alternatively, the macroblock and motion vector information produced by the MPEG compression scheme may be recorded in the agreed data structure during compression. Alternatively, since data produced by the MPEG compression method is a set of macroblocks and motion vectors carrying the image information, the compressed data may be parsed to record, in the agreed data structure, which image blocks undergo a screen change. Alternatively, the same information may be recorded in the agreed data structure during image decompression.
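The difference-image variant described first can be sketched as follows, marking a block as changed when any pixel in it differs from the previous frame beyond a threshold. The block size and threshold here are illustrative choices:

```python
def build_change_map(prev, curr, block=16, threshold=0):
    # prev/curr: 2-D grids of pixel values. Returns one 0/1 flag per block.
    h, w = len(curr), len(curr[0])
    change = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            changed = any(
                abs(curr[y][x] - prev[y][x]) > threshold
                for y in range(by, min(by + block, h))
                for x in range(bx, min(bx + block, w))
            )
            row.append(1 if changed else 0)
        change.append(row)
    return change
```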

The agreed data structure describing the complexity and quality information and the screen change information may, for example, fix the block size at 16x16 pixels and allocate 1 bit per block, describing the information as a one-dimensional byte array; for the complexity information, a bit of 0 means low complexity and 1 means high complexity, and for the screen change information, a bit of 0 means no screen change. Alternatively, the block size may be one of seven kinds — 16x16, 16x8, 8x16, 8x8, 8x4, 4x8, or 4x4 pixels — described in 4 bits, with 1 bit then allocated per block according to the number of blocks described, again as a one-dimensional byte array with the same bit meanings as above. Alternatively, the block size may be fixed at 16x16 pixels with 1 byte allocated per block, describing the information as a two-dimensional byte array; for the complexity information, a byte value of 1 means low complexity and 2 means high complexity, and for the screen change information, 1 means no screen change and 2 means a screen change.
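The first layout described above — 16x16 blocks, 1 bit per block, packed into a one-dimensional byte array — might be handled as in this sketch. The bit ordering within a byte is an assumption; the patent does not specify it:

```python
# Pack one 0/1 flag per block (complexity or screen change, depending on
# which map is stored) into a flat byte array, least significant bit first.
def pack_bits(flags):
    out = bytearray((len(flags) + 7) // 8)
    for i, f in enumerate(flags):
        if f:
            out[i // 8] |= 1 << (i % 8)
    return bytes(out)

def unpack_bit(packed, i):
    # Read back the flag for block i
    return (packed[i // 8] >> (i % 8)) & 1
```

At 1 bit per 16x16 block, a 1920x1080 frame needs only 8,160 bits (about 1 KB) of side information, which is why such a structure is cheap to carry alongside the image.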

The above-described image information acquisition and data structure is exemplary, and the present invention is not particularly limited thereto. Of course, the image information may be obtained and the data structure may be described in various ways according to the compression and decompression method performed.

The D / A converter 50 digital-analog converts the image data for output from the memory 60 and outputs it to a display device 80 such as a monitor.

The memory 60 may include a decompressed image data storage area and a monitor image data storage area to be output to the display device 80 in one or more storage means.

6 to 8 are flowcharts schematically illustrating an image rendering method for image reproduction according to an embodiment of the present invention.

Referring to FIG. 6, the renderer 40 receives image data of the current frame from the memory 60 (S61) and checks whether the agreed image information is present in the image data (S63). The image information includes the screen change information and the complexity and quality information of the image. It may be information extracted and generated during image compression, during decompression, or by parsing the compressed data.

If the agreed image information is present in the image data, the renderer 40 examines the screen change information of the image data for each block to determine whether there is a screen change in the image block (S63). For an image block with a screen change, scaling and/or color space conversion is performed (S64). An image block without a screen change, on the other hand, is not rendered and thus not stored in the monitor memory (S66); the corresponding image block of the previous frame remains on the display device 80.

The renderer 40 determines whether an image block requires high-quality interpolation (S71), scales blocks that do using the high-quality interpolation method (S73), and scales the other blocks with the low-quality interpolation method (S75). For example, the low-quality interpolation method may be set to the performance-first Nearest Neighbor method, and the high-quality interpolation method to Bi-Linear or another higher-order interpolation method.
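For illustration, the two interpolation levels mentioned above can be sketched as per-pixel sampling functions; this is a generic textbook formulation, not code from the patent:

```python
def sample_nearest(img, fx, fy):
    # Performance-first: pick the single closest source pixel
    return img[int(round(fy))][int(round(fx))]

def sample_bilinear(img, fx, fy):
    # Higher quality: weighted average of the four surrounding pixels
    x0, y0 = int(fx), int(fy)
    x1 = min(x0 + 1, len(img[0]) - 1)
    y1 = min(y0 + 1, len(img) - 1)
    dx, dy = fx - x0, fy - y0
    top = img[y0][x0] * (1 - dx) + img[y0][x1] * dx
    bot = img[y1][x0] * (1 - dx) + img[y1][x1] * dx
    return top * (1 - dy) + bot * dy
```

Nearest Neighbor costs one read per output pixel versus four reads and several multiplies for bilinear, so reserving the latter for complex, high-quality blocks is where the savings come from.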

Similarly, the renderer 40 determines whether an image block requires high-quality color space conversion (S81); blocks that do are color-space converted using the high-quality arithmetic method (S83), while the others are converted using the low-quality arithmetic method (S85). For example, the low-quality arithmetic method may be set to the performance-first integer arithmetic method, and the high-quality arithmetic method to the real-number (floating-point) arithmetic method or another high-quality method.

The renderer 40 stores the scaled and / or color space converted image block in the monitor memory (S65).

On the other hand, if the agreed image information is not present in the image data, the renderer 40 performs scaling and/or color space conversion in a designated manner for all image data (S67).

In the embodiment of the present invention, when rendering an image, the block size and the presence or absence of a screen change are checked so that rendering of blocks without a screen change is skipped and rendering moves on to the next block. When rendering proceeds line by line, the position is advanced by the horizontal size of each unchanged block and rendering resumes at the next position. The scaling and color space conversion methods are determined dynamically, based on the complexity and quality information of each image block, before rendering it.
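The block-skipping pass described above might look like the following sketch, where the change and complexity maps come from the agreed data structure and the two render callbacks stand in for the high- and low-quality scaling/conversion paths. All names are illustrative:

```python
def render_frame(blocks_x, blocks_y, change, complexity, render_hq, render_lq):
    # Returns how many blocks were actually rendered this frame.
    rendered = 0
    for by in range(blocks_y):
        for bx in range(blocks_x):
            if not change[by][bx]:
                continue            # no screen change: previous output stays
            if complexity[by][bx]:
                render_hq(bx, by)   # complex block: high-quality path
            else:
                render_lq(bx, by)   # simple block: cheap path
            rendered += 1
    return rendered
```

On typical CCTV footage, where motion covers only part of the frame, most iterations hit the `continue`, which is the load reduction the embodiment claims.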

Unlike image compression technology, conventional rendering of video information has proceeded without referring to the quality characteristics inside the image at all. As a result, the rendering stage has the unreasonable property of carrying the same heavy load even for a low-quality image or a simple screen. Partial rendering according to an embodiment of the present invention, by contrast, incurs only the rendering load warranted by the quality and complexity of the image, so a more efficient system can be built and operated. That is, when rendering according to an embodiment of the present invention is applied, high-quality scaling and high-quality color space conversion are performed only on high-quality regions of genuinely high complexity, so the total amount of computation, and thus the load on the system, is reduced.

In addition, rendering video information is, at bottom, copying the image data produced by the decoder into the memory area used for display. Memory reads and writes are therefore unavoidable, and the operation is limited by memory bandwidth; rendering a large area saturates that bandwidth, so a load reduction method is required. Expressed as pseudocode, the rendering process is a nested loop, as below.

Loop (height)
    Loop (width)
        Memory Copy (original image pixel -> memory area for monitor)
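A line-by-line variant of the loop above, with the horizontal block skipping described earlier, might look like this sketch (16-pixel blocks and per-row change flags are assumed):

```python
BLOCK = 16  # horizontal block size in pixels (an assumption for this sketch)

def copy_row(src_row, dst_row, changed):
    # changed[b] is the 0/1 flag for the b-th 16-pixel block of this row.
    x = 0
    while x < len(src_row):
        b = x // BLOCK
        if changed[b]:
            # copy only the changed block into the monitor memory area
            dst_row[x:x + BLOCK] = src_row[x:x + BLOCK]
        x += BLOCK  # unchanged block: skip ahead by the block width
    return dst_row
```

Because the copy is what saturates memory bandwidth, skipping unchanged runs cuts bandwidth use roughly in proportion to the fraction of the screen that did not change.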

Some systems require image data decoded in the YUV color format to be converted into the RGB color format before it can be displayed on the screen. This color conversion must be performed pixel by pixel, which puts a heavy load on the system. When rendering according to an embodiment of the present invention is applied, however, the color conversion and memory copying are performed only for the parts that have actually changed, reducing the total amount of work and thus the load on the system.

Although a preferred embodiment of the present invention has been described in detail above with reference to the accompanying drawings, the present invention is not limited to this example. Those skilled in the art to which the present invention pertains will clearly be able to conceive of various changes or modifications within the scope of the technical idea described in the claims, and it is understood that these, too, belong to the technical scope of the present invention.

Claims (3)

Decompressing the compressed image data to restore it;
determining the presence or absence of image information, including complexity and quality information, for the reconstructed image data;
if the image information does not exist, performing scaling and color space conversion in a designated manner for all of the reconstructed image data;
if the image information exists, determining whether the image data has changed in units of blocks; and
selecting a scaling method and a color space conversion method, in units of blocks based on the image information, for the image data of the blocks determined to have changed.
(Claim 2: deleted)
The method of claim 1,
Wherein the complexity, quality and change information are obtained in one process of compression of the image data, decompression of the image data and analysis of the decompressed image data.
KR1020130048519A 2013-04-30 2013-04-30 method for playing video KR102050423B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020130048519A KR102050423B1 (en) 2013-04-30 2013-04-30 method for playing video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020130048519A KR102050423B1 (en) 2013-04-30 2013-04-30 method for playing video

Publications (2)

Publication Number Publication Date
KR20140129777A KR20140129777A (en) 2014-11-07
KR102050423B1 true KR102050423B1 (en) 2019-11-29

Family

ID=52454971

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020130048519A KR102050423B1 (en) 2013-04-30 2013-04-30 method for playing video

Country Status (1)

Country Link
KR (1) KR102050423B1 (en)

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100481495B1 (en) * 2001-09-25 2005-04-07 주식회사 코디소프트 Apparatus and method for capturing image signals based on significance and apparatus and method for compressing and de-compressing significance-based captured image signals
KR100850705B1 (en) * 2002-03-09 2008-08-06 삼성전자주식회사 Method for adaptive encoding motion image based on the temperal and spatial complexity and apparatus thereof
KR100737857B1 (en) * 2004-12-31 2007-07-12 삼성전자주식회사 Apparatus and method for deinterlacing using optimal filter based on multi-resolution
KR20070016976A (en) 2005-08-05 2007-02-08 알프스 덴키 가부시키가이샤 Movable contact, switch device using the same and method of manufacturing the same
EP2144432A1 (en) * 2008-07-08 2010-01-13 Panasonic Corporation Adaptive color format conversion and deconversion
KR101805622B1 (en) * 2011-06-08 2017-12-08 삼성전자주식회사 Method and apparatus for frame rate control

Also Published As

Publication number Publication date
KR20140129777A (en) 2014-11-07


Legal Events

Date Code Title Description
A201 Request for examination
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant