TWI424371B - Video processing device and processing method thereof - Google Patents


Info

Publication number
TWI424371B
TWI424371B (application TW98146005A)
Authority
TW
Taiwan
Prior art keywords
video
frame
image
result
processing
Prior art date
Application number
TW98146005A
Other languages
Chinese (zh)
Other versions
TW201123070A (en)
Inventor
Po Jung Lin
Shuei Lin Chen
Original Assignee
Altek Corp
Priority date
Filing date
Publication date
Application filed by Altek Corp filed Critical Altek Corp
Priority to TW98146005A priority Critical patent/TWI424371B/en
Publication of TW201123070A publication Critical patent/TW201123070A/en
Application granted granted Critical
Publication of TWI424371B publication Critical patent/TWI424371B/en

Landscapes

  • Studio Devices (AREA)
  • Image Processing (AREA)

Description

Video processing device and processing method thereof

The present invention relates to a video processing device and a processing method thereof, and more particularly to a video processing device and processing method that can reduce the required memory size and bandwidth.

A user can use an image/video capture device such as a digital camera or camcorder to capture images or video and obtain an image/video output file that can be played back directly. However, the data initially obtained through the sensor of a digital camera or similar device is raw data, which must undergo considerable processing before it can be viewed by the user.

Today's digital image processing (DIP) technology mostly uses a pipeline system to process the raw data of images/video, where the pipeline system performs a series of operations on a single image. A pipeline system usually has multiple processing stages and can process the input image step by step, for example by applying filters. For instance, the pipeline system can use a filter to convert the input image/video into an RGB color space, or convert the raw file into a common image format.
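
As a rough illustration of such a staged pipeline, the following sketch (in Python with NumPy, which the patent itself does not prescribe) chains a few stand-in stages and applies them to a frame in sequence; the stage names and operations are illustrative assumptions, not steps taken from the patent.

import numpy as np

def demosaic_stub(raw):
    # Stand-in "raw to RGB" stage: replicate the single raw channel into R, G, B.
    return np.repeat(raw[..., np.newaxis], 3, axis=2)

def white_balance(rgb):
    # Scale each channel so its mean matches the overall mean (simple gray-world balance).
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)
    return np.clip(rgb * gains, 0, 255)

class Pipeline:
    def __init__(self, stages):
        self.stages = stages              # ordered list of processing stages

    def process(self, frame):
        for stage in self.stages:         # apply each stage in sequence, like pipeline stages
            frame = stage(frame)
        return frame

raw = np.random.randint(0, 256, (480, 640)).astype(np.float32)
out = Pipeline([demosaic_stub, white_balance]).process(raw)
print(out.shape)                          # (480, 640, 3)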

However, because the sensor frame rate used by a conventional image sensor is roughly the same as the video frame rate of the video output, the video processing must be completed within one frame time, which implicitly constrains the hardware speed of the pipeline system and its memory requirements. In particular, the multi-scale and multi-frame imaging techniques that are very common today further increase the hardware cost of the pipeline system.

Multi-scale or multi-frame processing techniques need multiple input frames to produce one output frame. For example, when processing the source images provided by an image sensor as a continuous stream of frames, the previous frame is first stored in an input buffer and then read back from that buffer when the next frame is processed, so that the two can be handled together. The pipeline system therefore needs at least one additional input buffer to retain the frames supplied by the image sensor for subsequent processing, which is a very large challenge in terms of both memory size and bandwidth.
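
The buffering cost described above can be sketched as follows; this toy Python/NumPy example assumes a simple two-frame blend and exists only to show where the extra full-frame input buffer appears in the conventional flow.

import numpy as np

def blend(prev, curr):
    return ((prev.astype(np.uint16) + curr) // 2).astype(np.uint8)   # toy two-frame operation

def process_with_input_buffer(frames):
    outputs, input_buffer = [], None
    for frame in frames:
        if input_buffer is None:
            input_buffer = frame                         # extra memory: one full raw frame is retained
        else:
            outputs.append(blend(input_buffer, frame))   # the buffer is read back for each pair
            input_buffer = None
    return outputs

frames = [np.full((480, 640), v, np.uint8) for v in (10, 30, 50, 70)]
print(len(process_with_input_buffer(frames)))            # 2 outputs from 4 inputs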

In addition, with the advancement of technology, the resolution of images and video keeps increasing. This means the input buffer needs a larger capacity; in other words, the input buffer costs more, and the bandwidth required to read source images from or write them to memory grows as the number of processing stages increases. Furthermore, multi-frame image processing requires a large number of memory accesses, which also causes a frame delay problem in conventional video processing methods.

To solve the above problems of high pipeline-system cost and frame delay, the present invention provides a video processing device and a processing method thereof for capturing a framing area as a video result. The video processing device and processing method provided by the invention eliminate the need for an input buffer, thereby reducing the required hardware cost, solving the frame delay problem, and lowering the read/write bandwidth required of the memory.

The video processing device provided by the present invention comprises a video sensor, a temporary memory, and a video pipeline. The video sensor captures the framing area at a sensor frame rate and generates a video comprising a plurality of consecutive frames, and the video pipeline receives one of these frames directly from the video sensor as a first frame. The video pipeline processes the first frame to generate a temporary result frame, and then generates the video result at a video frame rate according to the temporary result frame and a second frame received directly from the video sensor, where the second frame is the frame following the first frame and the video frame rate is lower than the sensor frame rate.
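
A minimal sketch of this claimed flow, assuming a sensor running at twice the output rate and using NumPy arrays as frames, might look like the following; the helper names preprocess and blend are illustrative stand-ins for the pipeline's internal processing and for the blending step.

import numpy as np

def preprocess(frame):
    return frame.astype(np.float32) * 0.5                # stand-in for first-frame processing

def blend(temp_result, frame):
    return np.clip(temp_result + frame * 0.5, 0, 255).astype(np.uint8)

def run_pipeline(sensor_frames):
    temporary_memory = None                              # holds one processed result, never a raw input frame
    video_results = []
    for i, frame in enumerate(sensor_frames):
        if i % 2 == 0:                                   # "first frame"
            temporary_memory = preprocess(frame)
        else:                                            # "second frame": produce one video result
            video_results.append(blend(temporary_memory, frame))
    return video_results                                 # half as many outputs as sensor frames

sensor = [np.random.randint(0, 256, (120, 160), np.uint8) for _ in range(60)]
print(len(run_pipeline(sensor)))                         # 30 results from 60 sensor frames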

According to an embodiment of the present invention, the video pipeline may be one of, or a combination of, an image processing unit, an image scaling unit, an image blending unit, a frame rate conversion unit, and an image compression unit.

According to another embodiment of the present invention, the image blending unit of the video pipeline generates the video result according to the temporary result frame and the second frame. Preferably, the video processing device further includes a result memory, and the video pipeline stores the video result in the result memory.

The video processing method provided by the present invention comprises: capturing the framing area and generating a video, wherein the video comprises a plurality of consecutive frames; directly receiving one of the frames as a first frame and processing the first frame to generate a temporary result frame; directly receiving a second frame, wherein the second frame is the frame following the first frame; and generating a video result according to the second frame and the temporary result frame.

Preferably, in the video processing method, a video pipeline receives one of the frames as the first frame and processes the first frame to generate the temporary result frame. The step of generating the video result according to the second frame and the temporary result frame may include: processing the second frame and the temporary result frame with an image blending unit of the video pipeline to generate the video result.

In addition, the video processing method may further include storing the temporary result frame in a temporary memory, and may also include storing the video result in a result memory. The video pipeline may alternately treat the remaining directly received frames as the first frame and the second frame and process them until all the frames have been processed.

In summary, the video processing device and processing method according to the present invention use a video sensor with a relatively high sensor frame rate to obtain images, and the video sensor transmits the images (their frames) directly to the video pipeline. The video pipeline therefore obtains the frames it needs directly, without requesting them from an input buffer. According to this processing method, the video processing device can perform multi-scale or multi-frame image processing without any input buffer, effectively reducing the overall memory size and the read/write bandwidth required of the memory.

The detailed features and advantages of the present invention are described in the embodiments below in sufficient detail to enable anyone skilled in the related art to understand the technical content of the invention and implement it accordingly. Based on the content disclosed in this specification, the claims, and the drawings, anyone skilled in the related art can readily understand the objects and advantages of the present invention.

The present invention provides a video processing device and a processing method thereof for capturing a framing area as a video result. Please refer to FIG. 1A, which is a block diagram of a video processing device according to an embodiment of the present invention. As shown in FIG. 1A, the video processing device 20 includes a video sensor 22, a temporary memory 24, a video pipeline 26, and a result memory 28. The video processing device 20 obtains the raw data of a video from the framing area through the video sensor 22, and the video pipeline 26 then processes the video into video results.

The video sensor 22 may also be called an image sensor, and may be, for example, the image capture unit or image sensing element of a digital camera, mobile phone, or camcorder. For example, the video sensor 22 may be the charge-coupled device (CCD) of a digital camera, or a complementary metal-oxide-semiconductor (CMOS) sensing element. In more detail, when a user captures video of the surrounding scenery with a digital camera, the video sensor 22 captures, as the video, the reflected light from the scene entering the digital camera through the lens; the framing area is the scenery that can be captured by the CCD or CMOS of the digital camera.

The video captured by the video sensor 22 may include a plurality of consecutive frames and may also include audio. The video sensor 22 captures images of the framing area at a relatively high sensor frame rate, for example 60 frames per second or 90 frames per second. As technology advances, the sensor frame rate of the video sensor 22 may even exceed 120 frames per second. Note that the sensor frame rate of the video sensor 22 must be greater than the video frame rate of the video result; preferably, the sensor frame rate is at least twice the video frame rate.

In addition, the video processing device 20 and processing method provided by the present invention mainly process the frames of the video; the way the audio is processed is not limited.

The video pipeline 26 sequentially receives, along the time axis, the frames captured by the video sensor 22, and can apply various digital image processing (DIP) operations to the received frames to obtain the video results.

According to an embodiment of the present invention, a video result refers to a frame that has been processed by the video pipeline 26, and the processed frames can be combined into an output video. According to another embodiment, the video processing device 20 may receive a plurality of frames and produce only one video result as output, in which case the output is a still image. Although this specification mostly takes output video as an example, the video processing device and processing method provided by the present invention can also be used to process still images.

The video pipeline 26 may include several different processing units according to its functions, and basically the video pipeline 26 includes at least one image processing unit. Besides the image processing unit, the video pipeline 26 may also include processing units such as an image scaling unit, an image blending unit, a frame rate conversion unit, and an image compression unit.

These processing units are briefly described below.

The image scaling unit is used to downsize (down-scale) or upsize (up-scale) a frame. When the user does not require a high resolution for the video result, the video processing device 20 can use the image scaling unit to reduce the resolution of the video in order to save the space needed to store the video result. The image scaling unit is also needed for digital image processing such as super resolution. As another example, the image scaling unit can process one image (or frame) into different resolutions to obtain the image characteristics of the image at different resolutions.

The image blending unit is used to blend multiple (usually two) frames into a new frame. The image blending unit can compute the RGB color or luminance of the new frame from the RGB color or luminance of each pixel in the frames being blended, producing a variety of blending effects.
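
For example, a pixel-wise weighted blend of two frames could be sketched as below; this is a minimal NumPy example, and the equal 0.5 weights are only one of the many possible blending effects mentioned above.

import numpy as np

def blend_frames(a, b, weight_a=0.5):
    mixed = a.astype(np.float32) * weight_a + b.astype(np.float32) * (1.0 - weight_a)
    return np.clip(mixed, 0, 255).astype(np.uint8)       # new frame computed pixel by pixel

a = np.random.randint(0, 256, (480, 640, 3), np.uint8)
b = np.random.randint(0, 256, (480, 640, 3), np.uint8)
print(blend_frames(a, b).shape)                          # (480, 640, 3)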

The frame rate conversion unit is used to raise or lower the video frame rate of the output video within a certain range. It can reduce the number of video results contained in the output video to lower the video frame rate, or generate in-between frames by interpolation and add them to the output video to raise the video frame rate. The frame rate conversion unit can also be implemented in software, without an additional hardware unit.
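
A simple sketch of both directions of frame rate conversion is given below; the factor-of-two cases and the linear interpolation used for the in-between frames are illustrative assumptions, not requirements of the patent.

import numpy as np

def halve_frame_rate(frames):
    return frames[::2]                                   # keep every second frame

def double_frame_rate(frames):
    out = []
    for cur, nxt in zip(frames, frames[1:]):
        out.append(cur)
        tween = ((cur.astype(np.uint16) + nxt) // 2).astype(np.uint8)
        out.append(tween)                                # interpolated in-between frame
    out.append(frames[-1])
    return out

frames = [np.full((4, 4), v, np.uint8) for v in range(0, 60, 10)]
print(len(halve_frame_rate(frames)), len(double_frame_rate(frames)))   # 3 11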

The image compression unit may use lossy compression, that is, reduce the quality of the frames to decrease the storage space occupied by the video results. The image compression unit can also compress the output video into different video formats, for example the MPEG-2 format defined by the Moving Picture Experts Group (MPEG), or the Blu-ray format, which emphasizes picture quality.
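
As a toy illustration of lossy compression, the sketch below simply quantizes pixel values coarsely: picture quality drops but far fewer distinct values remain to be stored. A real image compression unit would use MPEG-2 or a similar codec rather than this simplification.

import numpy as np

def quantize(frame, step=16):
    return (frame // step) * step                        # discard low-order detail

frame = np.random.randint(0, 256, (480, 640), np.uint8)
print(len(np.unique(frame)), len(np.unique(quantize(frame))))   # e.g. 256 vs 16 distinct levels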

The image processing unit can apply a variety of operations to an image, such as sharpening, color correction, red-eye removal, automatic white balance, and tone processing. The filters and algorithms used by the image processing unit vary with the required functions and are not limited by the present invention. The image processing unit can also use filters to remove noise such as salt-and-pepper noise or high-ISO noise from a frame to obtain better picture quality. For example, a simple filter may be a median filter or a linear filter.
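
A naive 3x3 median filter of the kind mentioned above might be sketched as follows; this version ignores the image border and is intended only to show the idea, not an efficient implementation.

import numpy as np

def median_filter_3x3(img):
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = np.median(img[y - 1:y + 2, x - 1:x + 2])
    return out

noisy = np.random.randint(0, 256, (32, 32), np.uint8)
noisy[::7, ::7] = 255                                    # inject salt noise
print(median_filter_3x3(noisy).shape)                    # (32, 32)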

FIG. 1B is a block diagram of a video processing device according to another embodiment of the present invention. As shown in FIG. 1B, in addition to the video sensor 22, the temporary memory 24, the video pipeline 26, and the result memory 28, the video processing device 20 further includes a sensor controller 221, a microprocessor 40, a codec 42, a display engine unit 44, and an input/output unit 46.

The sensor controller 221 is used to generate high-speed control signals to control the video sensor 22.

The microprocessor 40 controls the overall operation of the video processing device 20, for example by issuing various commands so that the video pipeline 26 and other components cooperate in processing the images captured by the video sensor 22.

The codec 42 is used to encode and compress the images, for example converting them into video formats such as the Audio Video Interleave (AVI) format or the Moving Picture Experts Group (MPEG) format.

The display engine unit 44 is used to display the images captured by the video sensor 22, or images read from an external memory, on a display device 48 connected to the video processing device 20. The display device 48 outputs the video at the video frame rate, which is lower than the sensor frame rate; preferably, the sensor frame rate is at least twice the video frame rate. Furthermore, the display device 48 may be built into the video processing device 20, such as a liquid crystal display (LCD), or externally connected to the video processing device 20, such as a television screen.

The video processing device 20 may further include an input/output unit 46, for example an external memory card control unit, for storing the processed video data on a memory card, where the memory card may be a Secure Digital card (SD card), a Memory Stick memory card (MS card), a Compact Flash memory card (CF card), or the like.

Through the video pipeline 26 with the above processing units, the frames of the video captured by the video sensor 22 are converted into video results, and the multiple video results form the output video. While the video pipeline 26 is processing frames, a partially processed frame can also serve as a temporary result frame and be stored in the temporary memory 24. According to an embodiment of the present invention, the temporary memory 24 may be arranged inside the video pipeline 26; that is, the temporary memory 24 may be an internal storage of the video pipeline 26 or a level-2 (L2) cache.

In more detail, the video pipeline 26 directly receives one of the frames captured by the video sensor 22 as a first frame, processes the first frame to generate a temporary result frame, and may store the temporary result frame in the temporary memory 24. The video pipeline 26 then directly receives the frame following the first frame from the video sensor 22 as a second frame, and generates the video result according to the temporary result frame and the second frame.

According to an embodiment of the present invention, the video pipeline 26 can store the completed video results (and the output video) in the result memory 28, and the result memory 28 may be an external storage of the video pipeline 26. According to another embodiment of the present invention, the temporary memory 24 may also be the same memory as the result memory 28, separated by memory addresses. In other words, the temporary memory 24 and the result memory 28 can be storage spaces at different memory addresses of the same memory.

Please refer to FIG. 1A, FIG. 1B, and FIG. 2 together, where FIG. 2 is a flow block diagram of a video processing device according to an embodiment of the present invention. In this embodiment, the sensor frame rate of the video sensor 22 is twice the video frame rate of the output video. As shown in FIG. 2, the video sensor 22 captures a framing area according to the sensor frame rate to generate a video 30, and the video 30 includes a plurality of frames. The video pipeline 26 includes an image scaling unit 261, an image processing unit 262, and an image blending unit 263.

The video pipeline 26 receives one of these frames directly from the video sensor 22 as a first frame 32, processes the first frame 32 to produce a temporary result frame 36, and stores the temporary result frame 36 in the temporary memory 24. The video pipeline 26 then receives and processes a second frame 34, and the image blending unit 263 blends the processed second frame 34 with the temporary result frame 36 read from the temporary memory 24 into a video result 38. Afterwards, the video result 38 can be stored in the result memory 28.

As time passes, the video pipeline 26 receives another first frame 32' and generates a temporary result frame 36'; the second frame 34' and the temporary result frame 36' are then blended by the image blending unit 263 to produce a video result 38'. The video pipeline 26 repeats these receiving and processing steps until all the frames transmitted by the video sensor 22 have been processed, so as to obtain the video results 38 corresponding to the video 30.

In addition, the video result 38 can be decoded by the codec 42 and displayed on the display device 48 via the display engine unit 44, and the display device 48 outputs the video results 38 at a video frame rate lower than the sensor frame rate. For example, when the sensor frame rate used by the video sensor 22 is 60 frames per second, the display device 48 outputs the video results 38 at a video frame rate of 30 frames per second.

Please refer to FIG. 3, which is a flowchart of a video processing method according to another embodiment of the present invention. As shown in FIG. 3, the video processing method may include step S100: capturing a framing area and generating a video, where the video comprises a plurality of consecutive frames; step S110: receiving one of the frames as a first frame and processing the first frame to generate a temporary result frame; step S120: receiving a second frame, where the second frame is the frame following the first frame; step S130: generating a video result according to the second frame and the temporary result frame; and step S140: repeating steps S110, S120, and S130 until all the frames have been processed.
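
Steps S100 to S140 can be mirrored in a short sketch such as the one below, assuming the sensor delivers twice as many frames as the output needs; the helpers process_first and blend are hypothetical stand-ins for the video pipeline and the image blending unit.

import numpy as np

def process_first(frame):                                # S110: process the first frame
    return frame.astype(np.float32)

def blend(temp_result, second):                          # S130: combine with the second frame
    return ((temp_result + second) / 2).astype(np.uint8)

def video_processing_method(sensor_frames):
    results = []
    it = iter(sensor_frames)                             # S100: frames of the captured video
    for first in it:
        temp = process_first(first)                      # S110: temporary result frame
        second = next(it, None)                          # S120: the next frame is the second frame
        if second is None:
            break
        results.append(blend(temp, second))              # S130: one video result
    return results                                       # S140: repeated until the frames ran out

frames = [np.random.randint(0, 256, (8, 8), np.uint8) for _ in range(10)]
print(len(video_processing_method(frames)))              # 5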

Step S110 can be performed by the video pipeline 26, and step S130 can be performed by the image blending unit 263. More preferably, after the temporary result frame 36 is obtained in step S110, the video processing method may further include the step of storing the temporary result frame 36 in the temporary memory 24. In addition, after the video result 38 is obtained in step S130, the video processing method may further include the step of storing the video result 38 in the result memory 28.

Note that the first frame 32 and the second frame 34 are received by the video pipeline 26 directly from the video sensor 22.

Steps S100 to S130 above are the steps by which the video processing method generates a single video result 38. In step S140, the video processing method repeats these steps until all the frames of the video 30 have been processed and the output video including all the video results 38 is obtained. That is, the video pipeline 26 can alternately treat the remaining directly received frames as the first frame 32 and the second frame 34 and process them until all the frames of the video 30 have been processed.

In more detail, the video pipeline 26 alternately takes the remaining frames of the video 30, in order, as the first frame 32 and the second frame 34, and processes the first frame 32 to generate the temporary result frame 36. The video pipeline 26 then generates the video result 38 according to the temporary result frame 36 and the second frame 34 received directly from the video sensor 22, and outputs the video results 38 at a video frame rate lower than the sensor frame rate, until all the frames of the video 30 have been processed.

Next, please refer to FIG. 4 and FIG. 5, which are flow block diagrams of a multi-scale application example and a multi-frame application example according to the present invention, respectively. The embodiments of FIG. 4 and FIG. 5 are multi-scale and multi-frame application examples implemented with the video processing device 20 provided by the present invention.

In the embodiment of FIG. 4, the video sensor 22 provides a first frame 32 and a second frame 34 to the video pipeline 26, and the video pipeline 26 processes them in two stages to produce a video result 38. In this embodiment, the sensor frame rate of the video sensor 22 is twice the video frame rate required for output, and the video pipeline 26 contains an image scaling unit 261, an image scaling unit 261', an image processing unit 262, and an image blending unit 263, where the image scaling unit 261 serves as a first image scaling unit and the image scaling unit 261' serves as a second image scaling unit. The two units form a pair and both support enlarging and reducing images: when one of them enlarges an image, the other reduces it.

In the first stage of processing, the video pipeline 26 can use the image scaling unit 261 to first downscale the first frame 32, and use the image processing unit 262 to extract an image feature of the downscaled first frame 32. The image feature may be, for example, the edges of the first frame 32 obtained by an edge-detection method, or the low-frequency part of the first frame 32 obtained with a low-pass filter; the processed first frame 32 then serves as the temporary result frame 36 and is stored in the temporary memory 24. In the second-stage processing, before the temporary result frame 36 (the downscaled first frame 32) is sent to the image blending unit 263 for blending, the video pipeline 26 first uses the image scaling unit 261' to enlarge the downscaled first frame 32 back to the original resolution.

Also in the second stage of processing, the video pipeline 26 receives the second frame 34 and processes it with the image processing unit 262. Finally, the temporary result frame 36 is read from the temporary memory 24, and since the temporary result frame 36 is a downscaled first frame 32, it is first enlarged back to the original resolution by the image scaling unit 261' and then sent to the image blending unit 263 to be blended with the processed second frame 34 into the video result 38.
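
A rough sketch of this two-stage multi-scale flow is given below: the first frame is down-scaled, a low-frequency "feature" is extracted, scaled back up to the original resolution, and blended with the second frame. The 2x factor, the box filters, and the equal blend weights are illustrative assumptions rather than details taken from FIG. 4.

import numpy as np

def downscale_2x(img):
    img = img[:img.shape[0] // 2 * 2, :img.shape[1] // 2 * 2].astype(np.float32)
    return (img[::2, ::2] + img[1::2, ::2] + img[::2, 1::2] + img[1::2, 1::2]) / 4

def upscale_2x(img):
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def low_pass(img):                                       # crude 3x3 box blur as the "image feature"
    pad = np.pad(img, 1, mode='edge')
    acc = sum(pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3))
    return acc / 9.0

first = np.random.randint(0, 256, (240, 320)).astype(np.float32)
second = np.random.randint(0, 256, (240, 320)).astype(np.float32)

temp_result = low_pass(downscale_2x(first))              # stage 1, stored in the temporary memory
restored = upscale_2x(temp_result)                       # scaled back to the original resolution
video_result = np.clip(0.5 * restored + 0.5 * second, 0, 255).astype(np.uint8)
print(video_result.shape)                                # (240, 320)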

Similarly, the remaining frames are processed repeatedly in the manner described above, which is not repeated here.

According to another embodiment of the present invention, the video pipeline 26 may also select only the image feature portion and enlarge only the image feature back to the original resolution, so that it can later be blended with the processed second frame 34 by the image blending unit 263 into the video result 38.

In more detail, the image feature may have an original size (that is, the original resolution of the first frame 32 and of the image feature). In the first stage, the image scaling unit 261 serves as the first image scaling unit and changes the size of the first frame 32. The image processing unit 262 then selects the image feature from the resized first frame 32 as the temporary result frame 36. Next, in the second stage of processing, the video pipeline 26 receives the second frame 34 and processes it with the image processing unit 262. Finally, the temporary result frame 36 is read from the temporary memory 24, and since the temporary result frame 36 is an image feature whose size (resolution) has been changed, the image scaling unit 261', serving as the second image scaling unit, first restores the image feature to its original size (original resolution), and it is then sent to the image blending unit 263 to be blended with the processed second frame 34 into the video result 38.

Similarly, the remaining frames are processed repeatedly in the manner described above, which is not repeated here.

When the image scaling unit 261 reduces the image, the image scaling unit 261' enlarges it; conversely, when the image scaling unit 261 enlarges the image, the image scaling unit 261' reduces it.

The foregoing embodiment uses the image scaling units 261 and 261' together, but since a single image scaling unit 261 itself has both enlargement and reduction functions, the same result can also be achieved, as required, with only one image scaling unit 261. For example, if the video pipeline 26 has only the image scaling unit 261, the image scaling unit 261 receives the first frame 32 and changes its size, after which the image processing unit 262 selects the image feature from the resized first frame 32 as the temporary result frame 36. The image scaling unit 261 then restores the image feature to its original size and sends it to the image blending unit 263.

By comparison, conventional multi-scale applications need an additional input buffer to hold the frame captured by the image sensor, so that the pipeline can perform the first-stage and second-stage processing on the frame in the input buffer. Because the video sensor 22 of the video processing device 20 provided by the present invention has a relatively high sensor frame rate, the video sensor 22 can continuously supply the first frame 32 and the second frame 34 in real time. Therefore, compared with the conventional approach, the video processing device 20 provided by the present invention does not need the support of an input buffer.

In the multi-frame application embodiment of FIG. 5, the sensor frame rate of the video sensor 22 is twice the video frame rate required for output. For example, the first frame 32 is captured through the lens of a digital camera with an exposure time of 1/45 second, and the second frame 34 is captured with an exposure time of 1/90 second. The processed video result 38 can then be an image with an exposure time of 1/30 second, and the quality of the video result 38 is better than that of a frame captured directly with a 1/30-second exposure time. For example, the video result 38 may have less noise or sharper contrast than a frame captured with a 1/30-second exposure time.
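
The exposure arithmetic in this example can be checked directly: 1/45 s plus 1/90 s equals 2/90 s plus 1/90 s, that is, 3/90 s = 1/30 s, so the two captures together gather the same light as a single 1/30 s exposure. The sketch below verifies this and uses a plain sum of the two frames as a simplified stand-in for the blending actually performed by the pipeline.

from fractions import Fraction
import numpy as np

t_first, t_second = Fraction(1, 45), Fraction(1, 90)
assert t_first + t_second == Fraction(1, 30)                 # combined exposure time

first = np.random.randint(0, 128, (120, 160), np.uint16)     # the 1/45 s capture
second = np.random.randint(0, 64, (120, 160), np.uint16)     # the 1/90 s capture
combined = np.clip(first + second, 0, 255).astype(np.uint8)  # behaves like a single 1/30 s frame
print(combined.shape)                                        # (120, 160)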

As in the embodiment of FIG. 4, because the video sensor 22 has a relatively high sensor frame rate, the video processing device 20 does not need the conventional input buffer or the bandwidth for reading from and writing to it. With the input buffer removed, the frame delay problem caused by the input buffer also disappears.

In addition, because the first frame 32 and the second frame 34 are different images, that is, the first frame 32 and the second frame 34 may contain different image information, the video pipeline 26 can obtain more image information to produce better video results.

Moreover, the video processing device and processing method according to the present invention are applicable to various digital image processing techniques, such as video text detection, sport event detection, blocking-artifact reduction, motion detection/compensation, super resolution, blur deconvolution, face recognition, and video stabilization (also known as anti-shake).

In summary, the video processing device and processing method according to the present invention use a video sensor with a relatively high sensor frame rate to obtain images, and the video sensor transmits the images (their frames) directly to the video pipeline. The video pipeline takes the multiple frames it needs directly as input and processes them, without requesting the same frames from an input buffer, which effectively reduces the overall memory size and lowers the bandwidth needed to read images from or write them to memory. According to this processing method, the video processing device can perform multi-scale or multi-frame image processing without any input buffer. In other words, the video processing device and processing method of the present invention solve the problems of high hardware cost and frame delay caused by the input buffer required by conventional pipeline systems.

Although the present invention is disclosed above with the foregoing preferred embodiments, they are not intended to limit the invention. Anyone skilled in the related art may make various changes and refinements without departing from the spirit and scope of the present invention; therefore, the scope of patent protection of the present invention shall be defined by the claims appended to this specification.

20 ... Video processing device
22 ... Video sensor
221 ... Sensor controller
24 ... Temporary memory
26 ... Video pipeline
261, 261' ... Image scaling unit
262 ... Image processing unit
263 ... Image blending unit
28 ... Result memory
30 ... Video
32, 32' ... First frame
34, 34' ... Second frame
36, 36' ... Temporary result frame
38, 38' ... Video result
40 ... Microprocessor
42 ... Codec
44 ... Display engine unit

FIG. 1A is a block diagram of a video processing device according to an embodiment of the present invention;

FIG. 1B is a block diagram of a video processing device according to another embodiment of the present invention;

FIG. 2 is a flow block diagram of a video processing device according to an embodiment of the present invention;

FIG. 3 is a flowchart of a video processing method according to another embodiment of the present invention;

FIG. 4 is a flow block diagram of a multi-scale application example according to the present invention; and

FIG. 5 is a flow block diagram of a multi-frame application example according to the present invention.

Claims (18)

1. A video processing device, comprising: a video sensor that captures a framing area at a sensor frame rate and generates a video, wherein the video comprises a plurality of frames; and a video pipeline that receives one of the frames directly from the video sensor as a first frame and processes the first frame to generate a temporary result frame, the video pipeline then generating a video result according to the temporary result frame and a second frame received directly from the video sensor and outputting the video result at a video frame rate lower than the sensor frame rate, wherein the second frame is the frame following the first frame.

2. The video processing device of claim 1, wherein the video pipeline alternately takes the remaining frames, in order, as the first frame and the second frame, processes the first frame to generate the temporary result frame, then generates the video result according to the temporary result frame and the second frame received directly from the video sensor, and outputs the video result at the video frame rate lower than the sensor frame rate, until all the frames have been processed.

3. The video processing device of claim 1, further comprising a temporary memory, wherein the video pipeline stores the temporary result frame in the temporary memory.

4. The video processing device of claim 1, further comprising a result memory, wherein the video pipeline stores the video result in the result memory.

5. The video processing device of claim 1, wherein the video pipeline comprises an image blending unit that blends the temporary result frame and the second frame to generate the video result.

6. The video processing device of claim 5, wherein the first frame has at least one image feature having an original size, and the video pipeline further comprises: a first image scaling unit that receives the first frame and changes the size of the first frame; and an image processing unit that selects the image feature from the resized first frame as the temporary result frame, the first image scaling unit then restoring the temporary result frame to the original size and sending the image feature restored to the original size to the image blending unit to be blended with the second frame to generate the video result.
7. The video processing device of claim 5, wherein the first frame has at least one image feature having an original size, and the video pipeline further comprises: a first image scaling unit that receives the first frame and changes the size of the first frame; an image processing unit that selects the image feature from the resized first frame as the temporary result frame; and a second image scaling unit that restores the temporary result frame to the original size and sends the image feature restored to the original size to the image blending unit to be blended with the second frame to generate the video result.

8. The video processing device of claim 1, wherein the first frame and the second frame have different exposure times.

9. The video processing device of claim 1, wherein the video pipeline is one of, or a combination of, an image processing unit, an image scaling unit, an image blending unit, a frame rate conversion unit, and an image compression unit.

10. A video processing method for capturing a framing area as a video result, comprising: capturing the framing area at a sensor frame rate and generating a video, wherein the video comprises a plurality of frames; (a) directly receiving one of the frames as a first frame and processing the first frame to generate a temporary result frame; (b) directly receiving a second frame, wherein the second frame is the frame following the first frame; and (c) generating the video result according to the second frame and the temporary result frame and outputting the video result at a video frame rate lower than the sensor frame rate, wherein the video frame rate is lower than the sensor frame rate.

11. The video processing method of claim 10, further comprising: alternately taking the remaining frames as the first frame and the second frame; and repeating steps (a), (b), and (c) until all the remaining frames have been processed.

12. The video processing method of claim 10, wherein step (a) is performed by a video pipeline.

13. The video processing method of claim 12, wherein the video pipeline is one of, or a combination of, an image processing unit, an image scaling unit, an image blending unit, a frame rate conversion unit, and an image compression unit.

14. The video processing method of claim 12, wherein step (c) comprises: processing the second frame and the temporary result frame with an image blending unit of the video pipeline to generate the video result.
15. The video processing method of claim 14, wherein step (a) comprises: changing the size of the first frame with a first image scaling unit of the video pipeline; and selecting, with an image processing unit of the video pipeline, an image feature of the first frame from the resized first frame as the temporary result frame; and step (c) comprises: restoring, with the first image scaling unit, the temporary result frame to an original size of the image feature and sending the image feature restored to the original size to the image blending unit; and blending, with the image blending unit, the image feature restored to the original size and the second frame to generate the video result.

16. The video processing method of claim 14, wherein step (a) comprises: changing the size of the first frame with a first image scaling unit of the video pipeline; and selecting, with an image processing unit of the video pipeline, an image feature of the first frame from the resized first frame as the temporary result frame; and step (c) comprises: restoring, with a second image scaling unit of the video pipeline, the temporary result frame to an original size of the image feature and sending the image feature restored to the original size to the image blending unit; and blending, with the image blending unit, the image feature restored to the original size and the second frame to generate the video result.

17. The video processing method of claim 10, further comprising: storing the temporary result frame in a temporary memory.

18. The video processing method of claim 10, further comprising: storing the video result in a result memory.
TW98146005A 2009-12-30 2009-12-30 Video processing device and processing method thereof TWI424371B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW98146005A TWI424371B (en) 2009-12-30 2009-12-30 Video processing device and processing method thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
TW98146005A TWI424371B (en) 2009-12-30 2009-12-30 Video processing device and processing method thereof

Publications (2)

Publication Number Publication Date
TW201123070A TW201123070A (en) 2011-07-01
TWI424371B true TWI424371B (en) 2014-01-21

Family

ID=45046526

Family Applications (1)

Application Number Title Priority Date Filing Date
TW98146005A TWI424371B (en) 2009-12-30 2009-12-30 Video processing device and processing method thereof

Country Status (1)

Country Link
TW (1) TWI424371B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI549500B (en) * 2014-09-12 2016-09-11 聚晶半導體股份有限公司 Method of capturing images and device of capturing images using the method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200509673A (en) * 2003-07-22 2005-03-01 Omnivision Tech Inc CMOS image sensor using high frame rate with frame addition and movement compensation
US20050047676A1 (en) * 2003-04-29 2005-03-03 Microsoft Corporation System and process for generating high dynamic range video
TW200726234A (en) * 2005-12-30 2007-07-01 Altek Corp Image magnifying system and method thereof for a digigal camera
TW200729965A (en) * 2005-12-27 2007-08-01 Mediatek Inc Video processing method and computer-readable storage medium therefor
TW200951809A (en) * 2008-05-29 2009-12-16 Axis Semiconductor Inc Method & apparatus for real-time data processing.


Also Published As

Publication number Publication date
TW201123070A (en) 2011-07-01


Legal Events

Date Code Title Description
MM4A Annulment or lapse of patent due to non-payment of fees