TWI741305B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
TWI741305B
TWI741305B (application TW108119192A)
Authority
TW
Taiwan
Prior art keywords
image
frame
images
video
offset
Prior art date
Application number
TW108119192A
Other languages
Chinese (zh)
Other versions
TW202009876A (en)
Inventor
雷華
陳凱
王隸楨
Original Assignee
大陸商虹軟科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 大陸商虹軟科技股份有限公司
Publication of TW202009876A
Application granted
Publication of TWI741305B

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • H04N9/646 Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/92
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/64 Circuits for processing colour signals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/10 Image enhancement or restoration by non-spatial domain filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2628 Alteration of picture size, shape, position or orientation, e.g. zooming, rotation, rolling, perspective, translation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/77 Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20024 Filtering details
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; Image merging
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/14 Picture signal circuitry for video frequency region
    • H04N5/21 Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H04N5/211 Ghost signal cancellation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Picture Signal Circuits (AREA)

Abstract

An embodiment of the present invention discloses an image processing method and device. The method includes: adjusting the brightness of each frame of a video; performing offset compensation on each brightness-adjusted frame; and applying temporal filtering to the pixels of each offset-compensated frame. With this scheme, brightness or colour fluctuations present in the video are effectively removed.

Description

Image processing method and device

Related applications

The present application is based on and claims priority to Chinese patent application No. 201810961926.5, filed on 2018/8/22, the entire content of which is incorporated herein by reference.

Embodiments of the present invention relate to image processing technology, and in particular to an image processing method and device.

Because light sources are unstable (for example, in everyday lighting the power supply is alternating current, so the brightness of the light source varies with the amplitude of the AC), video captured by a camera (including digital cameras, mobile-phone cameras and the like) shows frame-to-frame brightness fluctuations, and even colour fluctuations, that follow the light source. The phenomenon is especially obvious when the capture frame rate (the frame rate used when recording the video, as distinct from the frame rate used for playback) is high, i.e. greater than or equal to 240 frames per second.

At present, for a video captured under a 60 Hz light source at 240 frames per second, the brightness and colour fluctuations can be noticeably improved by superimposing and averaging adjacent pairs of frames. In principle, as long as the capture frame rate is an integer multiple of the light-source frequency, superimposing several adjacent frames can alleviate both the brightness fluctuation (the average brightness of each frame, or the brightness of each pixel, oscillating from frame to frame) and the colour fluctuation (the colour changing with the brightness because the brightness fluctuations of the individual colour channels are inconsistent).

However, this approach is only effective for videos whose capture frame rate is an integer multiple of the light-source frequency; for other videos the improvement is not obvious. Moreover, when the scene contains fast-moving objects, the processed video shows ghosting of the moving objects.

At least some embodiments of the present invention provide an image processing method and device capable of removing the brightness or colour fluctuations present in a video.

In one embodiment of the present invention, an image processing method is provided, including: adjusting the brightness of each frame of a video; performing offset compensation on each brightness-adjusted frame; and applying temporal filtering to the pixels of each offset-compensated frame.

In an optional embodiment, adjusting the brightness of each frame of the video includes performing the following on each frame: computing the mean value of each of the three colour channels, the three colour channels being the red R channel, the green G channel and the blue B channel; and, using a preset first filtering scheme, applying a first temporal filter to each colour channel according to that channel's mean.

In an optional embodiment, performing offset compensation on each brightness-adjusted frame includes: using a preset offset-compensation algorithm to obtain the offset between any two adjacent frames of the brightness-adjusted video, and compensating that offset so that the image content of any two adjacent frames remains consistent at the same image coordinates.

In an optional embodiment, applying temporal filtering to the pixels of each offset-compensated frame includes: using a preset second filtering scheme to apply a second temporal filter to every pixel of each video frame, so that the current frame is linearly superimposed with the frames preceding it.

In an optional embodiment, the method further includes: after performing offset compensation on each brightness-adjusted frame, judging whether the image contains a moving object.

In an optional embodiment, judging whether the image contains a moving object includes: dividing the current frame and the previous frame into blocks to obtain a plurality of first block images; calculating, with a preset difference-calculation algorithm, the difference between each pair of corresponding first block images in the current frame and the previous frame; comparing that difference with a preset difference threshold; when the difference between the two first block images is greater than or equal to the threshold, judging the two first block images to be dissimilar and judging that the image regions they occupy in the current frame and the previous frame contain a moving object; and when the difference is smaller than the threshold, judging the two first block images to be similar and judging that those image regions do not contain a moving object.

In an optional embodiment, obtaining the offset between any two adjacent frames of the brightness-adjusted video includes: dividing the brightness-adjusted current frame and previous frame into blocks to obtain a plurality of second block images; calculating the offset between each pair of corresponding second block images in the current frame and the previous frame; and excluding the second block images that contain a moving object, then taking the average of the offsets of the remaining second block images as the offset between the current frame and the previous frame.

In an optional embodiment, the method further includes: after judging that any frame contains a moving object, generating a mask image for that frame, in which pixels of the image regions containing the moving object have value 1 and pixels of the image regions not containing it have value 0; and, according to a preset fusion algorithm, using the mask image to fuse the corresponding frame whose pixels have undergone the temporal filtering with the corresponding frame whose pixels have not, so that the image regions containing the moving object are preserved.

In an optional embodiment, the method further includes: performing spatial filtering on each fused frame.

In one embodiment of the present invention, an image processing device is also provided, including a processor and a computer-readable storage medium storing instructions which, when executed by the processor, implement any one of the image processing methods described above.

Embodiments of the present invention include: adjusting the brightness of each frame of a video; performing offset compensation on each brightness-adjusted frame; and applying temporal filtering to the pixels of each offset-compensated frame. With this scheme, the brightness or colour fluctuations present in the video are effectively removed.

Other features and advantages of the embodiments of the present invention will be set out in the following description and will in part become apparent from it, or will be understood by practising the invention. The objectives and other advantages of the embodiments can be realised and obtained through the structures particularly pointed out in the description, the claims and the drawings.

To make the objectives, technical solutions and advantages of the present invention clearer, embodiments of the invention are described in detail below with reference to the drawings. It should be noted that, where no conflict arises, the embodiments and the features in the embodiments may be combined with one another arbitrarily.

The steps shown in the flowcharts of the drawings may be executed in a computer system, for example as a set of computer-executable instructions. Moreover, although a logical order is shown in the flowcharts, in some cases the steps shown or described may be performed in an order different from the one given here.

Embodiment 1

An image processing method, as shown in FIG. 1, may include steps S101 to S103: S101: adjust the brightness of each frame of the video; S102: perform offset compensation on each brightness-adjusted frame; S103: apply temporal filtering to the pixels of each offset-compensated frame.

In this embodiment of the present invention, each frame of the video first undergoes an overall brightness adjustment, after which image matching is performed, i.e. offset compensation is applied to each brightness-adjusted frame; temporal filtering is then applied to the matched result. This effectively reduces the brightness-fluctuation effect produced by high-speed capture under a fluctuating light source and removes the brightness or colour fluctuations present in the video.

In this embodiment, no restriction is placed on the specific methods, algorithms or devices used for brightness adjustment, image matching and temporal filtering; any existing method, algorithm or device may be used according to the application scenario.

It should be noted that the scheme of this embodiment is applicable to high frame rates (e.g. frame rates greater than or equal to 240 frames per second), including but not limited to videos with a frame rate greater than or equal to 240 and less than or equal to 960 frames per second. The frequency of the light source may include, but is not limited to, 60 Hz and 50 Hz. Any video to which the scheme of the embodiments can be applied falls within the protection scope of the embodiments of the present invention.

Embodiment 2

On the basis of Embodiment 1, this embodiment gives a specific implementation of the brightness adjustment.

In this embodiment, as shown in FIG. 2, adjusting the brightness of each frame of the video may include performing steps S201 and S202 on each frame: S201: compute the mean value of each of the three colour channels, the three colour channels being the red R channel, the green G channel and the blue B channel; S202: using a preset first filtering scheme, apply a first temporal filter to each colour channel according to that channel's mean.

In this embodiment, the means of the three colour channels are computed for each frame. Suppose the means of the R, G and B channels of the current frame are $\bar{R}_0$, $\bar{G}_0$ and $\bar{B}_0$ respectively. A preset first filtering scheme can then be used to apply a first temporal filter to each of the three channel means.

In this embodiment, the first filtering scheme may include, but is not limited to, a finite impulse response (FIR) filter or an infinite impulse response (IIR) filter.

In this embodiment, taking $\bar{R}_0$ as an example, temporal filtering with a finite impulse response filter works as follows. Let $\bar{R}_i$ denote the R-channel mean of the frame that is $i$ frames before the current frame. Temporal filtering of the R-channel mean can then be described by

$$\hat{R} = \sum_{i=0}^{n} w_i \bar{R}_i,$$

where $\hat{R}$ is the result of the temporal filtering of the R channel and $w_i$ are the filter coefficients. Let $s = \hat{R} / \bar{R}_0$; the R-channel value of every pixel in the current frame is multiplied by $s$ and taken as that pixel's new R-channel value.

In this embodiment, analogous operations are performed on the other channels (the G channel and the B channel), so that the mean of each colour channel of the current frame is temporally filtered and the filtered result is used to adjust the value of that channel for every pixel. In this way the brightness of every frame of the video is adjusted; the details are not repeated here.
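For illustration, the sketch below implements this channel-mean FIR filtering and per-channel gain correction in Python with NumPy. The function name `adjust_brightness`, the normalisation of the coefficients, and the handling of the first frames (using only the frames available so far) are assumptions made for this sketch, not prescribed by the patent.

```python
import numpy as np

def adjust_brightness(frames, weights):
    """FIR filtering of the per-channel means, then a per-channel gain correction.

    frames:  list of HxWx3 uint8 RGB frames
    weights: FIR coefficients w_0..w_n, with w_0 applying to the current frame
    """
    n = len(weights)
    # Per-frame mean of each colour channel, shape (T, 3).
    means = np.array([f.reshape(-1, 3).mean(axis=0) for f in frames])
    out = []
    for t, frame in enumerate(frames):
        k = min(t + 1, n)                       # only the frames available so far
        w = np.asarray(weights[:k], dtype=np.float64)
        w = w / w.sum()
        # means[t], means[t-1], ... matched against w_0, w_1, ...
        filtered_mean = (w[:, None] * means[t - k + 1:t + 1][::-1]).sum(axis=0)
        gain = filtered_mean / np.maximum(means[t], 1e-6)
        out.append(np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8))
    return out
```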

Embodiment 3

On the basis of Embodiment 1 or Embodiment 2, this embodiment gives a specific implementation of the image matching.

In this embodiment, performing offset compensation on each brightness-adjusted frame may include: using a preset offset-compensation algorithm to obtain the offset between any two adjacent frames of the brightness-adjusted video, and compensating that offset so that the image content of any two adjacent frames remains consistent at the same image coordinates.

In this embodiment, because of camera shake (for example, hand tremor when the camera is hand-held), the current frame has a certain offset relative to the previous frame, so the image content at the same image coordinates in the two frames is inconsistent; this would adversely affect the subsequent temporal filtering. Besides translation, camera rotation may also be present, but when only two adjacent frames are considered the rotation can be ignored. The purpose of image matching is to find the offset between the images and, by compensating it, to eliminate the inconsistency of the image content.

In this embodiment, "the image content of any two adjacent frames remains consistent at the same image coordinates" may mean that, between any two adjacent frames, any two regions with the same content occupy the same position under the same image coordinates, or that the deviation between their positions is less than or equal to a preset deviation threshold.

In this embodiment, the offset-compensation algorithm may include, but is not limited to, a template matching algorithm and/or a feature-point-based matching algorithm; the two algorithms are described in detail below.

In this embodiment, the simplest offset-compensation algorithm may be the classic template matching algorithm. Its basic principle can be described briefly as follows: a region of the same size as the template image is cut out of the reference image to form a cropped image, the template is compared with the crop, and their difference is computed. Metrics for evaluating the image difference may include, but are not limited to, normalised cross-correlation, mean absolute difference, sum of squared errors and sum of absolute errors. The range of starting positions of the crop within the reference image can be set manually; the differences between the template and all crops within this range are computed, and the starting position of the crop with the smallest difference is taken as the offset between the template image and the reference image.
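A minimal sketch of template-matching offset estimation is shown below, using OpenCV's normalised cross-correlation. The helper names, the choice of a central crop of the current frame as the template, and the roughly +/- `margin` search range are assumptions made for illustration.

```python
import cv2
import numpy as np

def estimate_offset_template(ref, cur, margin=32):
    """Estimate the translation (dx, dy) that aligns `cur` with `ref`.

    A central crop of `cur` is used as the template and searched for inside `ref`
    within roughly +/- `margin` pixels, scored by normalised cross-correlation.
    """
    ref_g = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    cur_g = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    h, w = cur_g.shape
    tpl = cur_g[margin:h - margin, margin:w - margin]   # template: centre of the current frame
    res = cv2.matchTemplate(ref_g, tpl, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(res)               # position of the best match in `ref`
    dx = max_loc[0] - margin
    dy = max_loc[1] - margin
    return dx, dy

def compensate_offset(cur, dx, dy):
    """Translate `cur` by (dx, dy) so its content lines up with the reference frame."""
    m = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(cur, m, (cur.shape[1], cur.shape[0]))
```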

In this embodiment, the basic principle of the feature-point-based matching algorithm is as follows: feature points are extracted from the two images to be matched, matching pairs of feature points are obtained with a feature-point matching algorithm, and the offset between the two images is computed from these matching pairs.

In this embodiment, there are many algorithms for extracting feature points, including but not limited to the classic SIFT algorithm (Scale-Invariant Feature Transform) and the HARRIS algorithm.

In this embodiment, the feature-point matching algorithm may include, but is not limited to, the SIFT algorithm, the SURF (Speeded Up Robust Features) algorithm and the like.

In this embodiment, besides computing the offset between the two images from the feature-point matching pairs, the optical flow of the feature points may also be computed (for example with the classic Lucas-Kanade algorithm). Abnormal flows are excluded (a simple method is to set a threshold: flows greater than or equal to the threshold are regarded as abnormal, flows below it as normal), and the optical flows of the remaining feature points are averaged and taken as the offset of the whole image.
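A minimal sketch of this optical-flow variant follows, using OpenCV corner detection and Lucas-Kanade tracking; the corner-detector parameters and the simple distance threshold used to reject abnormal flows are assumptions chosen for illustration.

```python
import cv2
import numpy as np

def estimate_offset_flow(prev, cur, flow_thresh=20.0):
    """Estimate the global offset between two frames from sparse optical flow.

    Corners are tracked from `prev` to `cur` with the Lucas-Kanade method; flows
    longer than `flow_thresh` pixels are treated as abnormal (e.g. caused by a
    moving object) and discarded, and the remaining flows are averaged.
    """
    prev_g = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    cur_g = cv2.cvtColor(cur, cv2.COLOR_BGR2GRAY)
    pts = cv2.goodFeaturesToTrack(prev_g, maxCorners=200, qualityLevel=0.01, minDistance=10)
    if pts is None:
        return 0.0, 0.0
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_g, cur_g, pts, None)
    flow = (nxt - pts).reshape(-1, 2)[status.ravel() == 1]
    keep = np.linalg.norm(flow, axis=1) < flow_thresh    # reject abnormal flows
    if not keep.any():
        return 0.0, 0.0
    dx, dy = flow[keep].mean(axis=0)
    return float(dx), float(dy)
```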

Embodiment 4

On the basis of Embodiment 3, and in order to eliminate the influence of moving objects on the offset-compensation process, this embodiment gives another specific way of obtaining the image offset.

In this embodiment, as shown in FIG. 3, obtaining the offset between any two adjacent frames of the brightness-adjusted video may include steps S301 to S303: S301: divide the brightness-adjusted current frame and previous frame into blocks to obtain a plurality of second block images; S302: calculate the offset between each pair of corresponding second block images in the current frame and the previous frame; S303: exclude the second block images that contain a moving object, and take the average of the offsets of the remaining second block images as the offset between the current frame and the previous frame.

In this embodiment, to eliminate the influence of moving objects on the offset compensation, the image can first be divided into blocks, the offset of each block (i.e. each second block image) computed separately, the blocks affected by moving objects excluded, and the average of the offsets of the remaining blocks taken as the offset of the whole image.
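A sketch of this block-wise averaging is shown below. The patent leaves the per-block offset method open; phase correlation is used here purely as one possible choice, and the grid size and the `motion_mask` input (produced by the moving-object judgment described later) are assumptions.

```python
import cv2
import numpy as np

def blockwise_offset(prev_gray, cur_gray, motion_mask, grid=(4, 4)):
    """Average the per-block shifts, skipping blocks flagged as containing motion.

    prev_gray, cur_gray: grayscale frames of the same size
    motion_mask: boolean array of shape `grid`, True where a block contains a moving object
    """
    h, w = prev_gray.shape
    bh, bw = h // grid[0], w // grid[1]
    shifts = []
    for r in range(grid[0]):
        for c in range(grid[1]):
            if motion_mask[r, c]:
                continue                                  # exclude blocks affected by motion
            p = prev_gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(np.float32)
            q = cur_gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(np.float32)
            (dx, dy), _ = cv2.phaseCorrelate(p, q)        # sub-pixel shift of this block
            shifts.append((dx, dy))
    if not shifts:
        return 0.0, 0.0
    dx, dy = np.mean(shifts, axis=0)
    return float(dx), float(dy)
```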

Embodiment 5

On the basis of Embodiment 3 or Embodiment 4, this embodiment gives a specific implementation of the further temporal filtering of the offset-compensated video.

In this embodiment, applying temporal filtering to the pixels of each offset-compensated frame may include: using a preset second filtering scheme to apply a second temporal filter to every pixel of each video frame, so that the current frame is linearly superimposed with the images preceding it.

In this embodiment, the temporal filtering in this step (the second temporal filtering) is similar to the temporal filtering in step S101 (the first temporal filtering); the difference is that here the filtering is applied to every individual pixel.

It should be noted that the terms first temporal filtering and second temporal filtering are used only to distinguish the temporal filtering performed in two different steps; they do not restrict the specific schemes or the order of implementation. The first and second temporal filtering may use the same or different temporal filtering schemes.

In this embodiment, the second filtering scheme may include, but is not limited to, a finite impulse response filter or an infinite impulse response filter.

In this embodiment, filtering with a finite impulse response filter is taken as an example and described in detail. Specifically, FIR filtering can be expressed by the following relation:

$$\hat{I}(x, y) = \sum_{i=0}^{k} w_i\, I_i(x, y),$$

where $I_0$ is the current frame, $I_i$ is the frame that is $i$ frames before the current frame, $I_k$ is the first frame of the video, $\hat{I}$ is the result of the temporal filtering, $(x, y)$ are the image pixel coordinates, and $w_i$ are the filter coefficients.

In this embodiment, temporal filtering is applied to every frame of the video; the filtered result is a linear superposition of the current frame and the historical frames (i.e. the frames preceding the current frame).
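The per-pixel FIR filtering can be sketched as a weighted sum over the aligned frames, as below; the coefficient normalisation and the handling of the first frames of the video are illustrative assumptions.

```python
import numpy as np

def temporal_filter(aligned_frames, weights):
    """Per-pixel FIR temporal filtering: a weighted sum of the current frame and the
    aligned frames preceding it.

    aligned_frames: list of HxWx3 float32 frames, aligned_frames[-1] being the current frame
    weights: FIR coefficients w_0..w_n, with w_0 applying to the current frame
    """
    n = min(len(weights), len(aligned_frames))
    w = np.asarray(weights[:n], dtype=np.float32)
    w = w / w.sum()                                  # keep the overall brightness unchanged
    out = np.zeros_like(aligned_frames[-1], dtype=np.float32)
    for i in range(n):
        out += w[i] * aligned_frames[-1 - i]         # the frame i frames before the current one
    return out
```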

Embodiment 6

On the basis of Embodiment 3 or Embodiment 4, in order to protect moving objects in the video from being blurred or ghosted, and so that the scheme of Embodiment 4 above can be implemented smoothly, this embodiment judges whether the image contains a moving object and gives a specific implementation.

In this embodiment, the method may further include: after performing offset compensation on each brightness-adjusted frame, judging the moving objects in the image.

In this embodiment, as shown in FIG. 4, judging whether the image contains a moving object may include steps S401 to S404: S401: divide the current frame and the previous frame into blocks to obtain a plurality of first block images; S402: according to a preset difference-calculation algorithm, calculate the difference between each pair of corresponding first block images in the current frame and the previous frame; S403: compare that difference with a preset difference threshold; S404: when the difference between the two first block images is greater than or equal to the threshold, judge the two first block images to be dissimilar and judge that the image regions they occupy in the current frame and the previous frame contain a moving object; when the difference is smaller than the threshold, judge the two first block images to be similar and judge that those image regions do not contain a moving object.

In this embodiment, before the current frame is compared with the previous frame, both frames are first divided into blocks. Within each block the difference between the current frame and the previous frame is computed (the difference may be obtained with, but not limited to, normalised cross-correlation, mean absolute difference, sum of squared errors or sum of absolute errors), and a threshold can be preset to judge whether the two frames are similar within this block: when the difference is greater than or equal to the threshold they are judged dissimilar, otherwise similar. If the result is dissimilar, the region can be considered to contain a moving object; otherwise it is judged not to contain one.
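A minimal sketch of this block-wise motion judgment, using the mean absolute difference as the block difference measure, is given below; the grid size and the threshold value are assumptions for illustration.

```python
import numpy as np

def detect_motion_blocks(prev_gray, cur_gray, grid=(8, 8), diff_thresh=8.0):
    """Flag blocks whose mean absolute difference between consecutive frames reaches
    the threshold, i.e. blocks that likely contain a moving object.
    Returns a boolean array of shape `grid` (True = motion)."""
    h, w = prev_gray.shape
    bh, bw = h // grid[0], w // grid[1]
    motion = np.zeros(grid, dtype=bool)
    for r in range(grid[0]):
        for c in range(grid[1]):
            p = prev_gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(np.float32)
            q = cur_gray[r * bh:(r + 1) * bh, c * bw:(c + 1) * bw].astype(np.float32)
            mad = np.abs(p - q).mean()               # mean absolute difference of the block
            motion[r, c] = mad >= diff_thresh        # >= threshold -> dissimilar -> motion
    return motion
```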

It should be noted that the first block images of this embodiment and the second block images of the preceding embodiment are merely two different labels, used to distinguish the block images obtained in the two blocking operations, which serve different purposes; no order, size or other attribute is implied.

It should also be noted that this step may be performed either after or before the offset compensation of each brightness-adjusted frame; its exact timing and order are not restricted in detail.

Embodiment 7

On the basis of Embodiment 6, and in order to protect moving objects in the video from being blurred or ghosted, this embodiment gives a further specific implementation.

In this embodiment, as shown in FIG. 5, the method may further include steps S501 and S502: S501: after judging that any frame contains a moving object, generate a mask image for that frame, in which pixels of the image regions containing the moving object have value 1 and pixels of the image regions not containing it have value 0; S502: according to a preset fusion algorithm, use the mask image to fuse the corresponding frame whose pixels have undergone the temporal filtering (i.e. the second temporal filtering) with the corresponding frame whose pixels have not, so that the image regions containing the moving object are preserved.

In this embodiment, when any frame is judged to contain a moving object, a mask image $M$ of that frame can be generated. The mask $M$ is a binary image in which the pixel value is 1 in regions containing a moving object and 0 in regions without one.

In this embodiment, according to the result of the moving-object judgment, the mask image $M$ can be used to fuse the temporally filtered frame with the corresponding frame whose pixels have not been temporally filtered, so that the image regions containing moving objects are preserved; the blurring, ghosting and similar effects that temporal filtering would cause on moving objects are thereby eliminated.

In this embodiment, both the frame whose pixels have undergone the temporal filtering (i.e. the second temporal filtering) and the frame whose pixels have not refer to the frames corresponding to the above mask image.

In this embodiment, given the mask image $M$, image fusion can be carried out with the following relation:

$$F(x, y) = M(x, y)\, I_0(x, y) + \bigl(1 - M(x, y)\bigr)\, \hat{I}(x, y),$$

where $F$ is the fused result, $\hat{I}$ is the result of the temporal filtering, $I_0$ is the current frame, and $(x, y)$ are the image coordinates.

In this embodiment, the above relation performs a simple superposition of the temporally filtered current frame and the current frame without temporal filtering: regions containing moving objects take the pixel values of the current frame, while regions without moving objects take the pixel values of the result obtained by the temporal filtering.
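This mask-based blend can be written directly, as in the sketch below; the function name and the NumPy broadcasting over the colour channels are implementation details assumed for illustration.

```python
import numpy as np

def fuse_with_mask(filtered, current, mask):
    """F = M * I0 + (1 - M) * I_hat: moving regions keep the unfiltered pixels, so they
    are neither blurred nor ghosted.

    mask: HxW array with value 1 where a moving object was detected, 0 elsewhere.
    """
    m = mask.astype(np.float32)[..., None]           # broadcast over the colour channels
    return m * current.astype(np.float32) + (1.0 - m) * filtered.astype(np.float32)
```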

In this embodiment, the preset fusion algorithm may also include the classic Laplacian pyramid fusion algorithm.

In this embodiment, the basic principle of the classic Laplacian pyramid fusion algorithm is as follows. First a series of blurred images $G_0, G_1, \dots, G_n$ is generated, where $G_0$ is the original image and each subsequent level $G_k$ is produced by blurring the previous level $G_{k-1}$ with a convolution kernel and downsampling it; for example, $G_1$ is obtained from $G_0$ by convolution blurring and downsampling. The blur kernel is usually a Gaussian kernel, so this series of images is also called a Gaussian pyramid. For simplicity the term Gaussian pyramid is used here for the blurred-image sequence $G_0, G_1, \dots, G_n$, even though the sequence is not always produced by a Gaussian blur kernel. Denote the Laplacian pyramid by $L_0, L_1, \dots, L_n$; each level of the Laplacian pyramid is then obtained from

$$L_k = G_k - \mathrm{up}(G_{k+1}),$$

where the function $\mathrm{up}(\cdot)$ can be understood as upsampling. That is, each level of the Laplacian pyramid is the corresponding level of the Gaussian pyramid minus the upsampled image of the next Gaussian level. Note that the last level is $L_n = G_n$. Reconstructing an image from the Laplacian pyramid is the inverse of this process. The steps of fusion via the Laplacian pyramid are therefore as follows:
1. Build Laplacian pyramids $L^{\hat{I}}$ and $L^{I_0}$ for the temporally filtered result $\hat{I}$ and the current frame $I_0$ respectively.
2. Build a Gaussian pyramid for the mask image $M$, denoted $G^{M}$.
3. Construct a new Laplacian pyramid $L^{F}$: $L^{F}_k(x, y) = G^{M}_k(x, y)\, L^{I_0}_k(x, y) + \bigl(1 - G^{M}_k(x, y)\bigr)\, L^{\hat{I}}_k(x, y)$, where $k$ is the pyramid-level index and $(x, y)$ are the image coordinates.
4. Reconstruct the image from $L^{F}$ to obtain the resulting image.
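A compact sketch of this pyramid construction and fusion using OpenCV's `pyrDown`/`pyrUp` follows; the number of levels and the float32 representation are assumptions made for illustration.

```python
import cv2
import numpy as np

def laplacian_pyramid(img, levels):
    """Laplacian pyramid L_0..L_{levels-1}; the last entry is the coarsest Gaussian level."""
    gp = [img.astype(np.float32)]
    for _ in range(levels - 1):
        gp.append(cv2.pyrDown(gp[-1]))
    lp = []
    for k in range(levels - 1):
        up = cv2.pyrUp(gp[k + 1], dstsize=(gp[k].shape[1], gp[k].shape[0]))
        lp.append(gp[k] - up)                        # L_k = G_k - up(G_{k+1})
    lp.append(gp[-1])                                # L_n = G_n
    return lp

def pyramid_fuse(filtered, current, mask, levels=4):
    """Fuse the filtered and unfiltered frames level by level under a Gaussian pyramid
    of the motion mask (mask = 1 keeps the current frame), then collapse the pyramid."""
    lf = laplacian_pyramid(filtered, levels)
    lc = laplacian_pyramid(current, levels)
    gm = [mask.astype(np.float32)]
    for _ in range(levels - 1):
        gm.append(cv2.pyrDown(gm[-1]))
    fused = []
    for k in range(levels):
        m = gm[k] if gm[k].ndim == lc[k].ndim else gm[k][..., None]
        fused.append(m * lc[k] + (1.0 - m) * lf[k])
    out = fused[-1]                                  # reconstruct from the coarsest level up
    for k in range(levels - 2, -1, -1):
        out = cv2.pyrUp(out, dstsize=(fused[k].shape[1], fused[k].shape[0])) + fused[k]
    return out
```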

Embodiment 8

On the basis of Embodiment 7, and in order to remove the residual artefacts left after the preceding steps, this embodiment gives a further specific implementation.

In this embodiment, the method may further include: performing spatial filtering on each fused frame.

In this embodiment, temporal filtering describes filtering between frames, whereas spatial filtering means filtering a single frame; its main purpose is to remove the residual artefacts left after the preceding steps.

In this embodiment, the spatial filtering method may include, but is not limited to, edge-preserving filters, e.g. the guided filter, the bilateral filter and the like; spatial filtering is applied to the result of the image fusion to obtain the final result.
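For illustration, a bilateral filter (one of the edge-preserving filters mentioned above) could be applied to each fused frame as in the sketch below; the filter parameters are assumptions.

```python
import cv2
import numpy as np

def spatial_filter(fused_frame):
    """Edge-preserving spatial filtering of a single fused frame to suppress residual
    artefacts; bilateral filtering is one possible choice of edge-preserving filter."""
    img = np.clip(fused_frame, 0, 255).astype(np.float32)
    return cv2.bilateralFilter(img, d=9, sigmaColor=25, sigmaSpace=9)
```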

Embodiment 9

On the basis of any of the above embodiments, this embodiment gives a specific implementation in which the video frames are processed on the basis of downscaled video images.

In this embodiment, the main flow of Embodiment 9 is basically the same as that of Embodiments 1 to 8; the main difference is that most of the operations of Embodiment 9 are performed on a small (downscaled) image. The flow is shown in FIG. 6 and proceeds as follows. After the overall brightness adjustment, the image is downscaled; the brightness-adjusted image is denoted $I_b$ and its downscaled version $I_s$. Image alignment (i.e. offset compensation), the second temporal filtering, moving-object judgment, image fusion and spatial filtering are then applied to $I_s$ (these steps are the same as in Embodiments 1 to 8), and the result is denoted $\hat{I}_s$. From this a difference map $D_s$ can be obtained:

$$D_s(x', y') = \hat{I}_s(x', y') - I_s(x', y'),$$

where $(x', y')$ are the coordinates of the small image. The difference map $D_s$ is then enlarged to the same size as $I_b$, giving the enlarged difference map $D$, and $D$ is superimposed on $I_b$ to obtain the final result $\hat{I}$, where $(x, y)$ are the image coordinates:

$$\hat{I}(x, y) = I_b(x, y) + D(x, y).$$

In this embodiment, most of the processing is carried out on the downscaled image, and the difference between the processed result and the unprocessed small image is then enlarged and applied to the full-size image; on the premise of preserving the effect, this scheme greatly reduces the computation time.
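The small-image pipeline of this embodiment can be sketched as below; the scale factor, the interpolation modes, and the `process_fn` callable standing in for the alignment, filtering and fusion steps are assumptions made for illustration.

```python
import cv2
import numpy as np

def process_on_small_image(full, process_fn, scale=0.25):
    """Run the heavy steps on a downscaled copy and transfer the correction to the
    full-resolution frame through an upscaled difference map.

    full:       brightness-adjusted full-resolution frame, float32 HxWx3
    process_fn: callable performing alignment, temporal filtering, motion handling,
                fusion and spatial filtering on the small image (assumed to exist)
    """
    h, w = full.shape[:2]
    small = cv2.resize(full, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    small_out = process_fn(small)
    diff_small = small_out - small                   # D_s = processed small image - small image
    diff = cv2.resize(diff_small, (w, h), interpolation=cv2.INTER_LINEAR)
    return full + diff                               # final result = full-size frame + upscaled D
```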

Embodiment 10

An image processing device 1, as shown in FIG. 7, includes a processor 11 and a computer-readable storage medium 12, the computer-readable storage medium 12 storing instructions which, when executed by the processor 11, implement the image processing method of any of the above embodiments.

Embodiments of the present invention include: adjusting the brightness of each frame of a video; performing offset compensation on each brightness-adjusted frame; and applying temporal filtering to the pixels of each offset-compensated frame. With this scheme, the brightness or colour fluctuations present in the video are effectively removed, and moving objects are well preserved, remaining sharp and free of afterimages or ghosting.

The technology disclosed in the embodiments of the present invention can be applied to still images and moving images (e.g. video), and to any suitable type of image processing device, such as digital cameras, mobile phones, electronic equipment with an integrated digital camera, security or video surveillance systems, medical imaging systems and so on.

Combination with hardware

A person of ordinary skill in the art will understand that all or some of the steps of the methods disclosed above, and the functional modules/units of the systems and devices, may be implemented as software, firmware, hardware or an appropriate combination thereof. In a hardware implementation, the division between the functional modules/units mentioned above does not necessarily correspond to the division of physical components; for example, one physical component may have several functions, or one function or step may be carried out by several physical components in cooperation. Some or all of the components may be implemented as software executed by a processor such as a digital signal processor or microprocessor, as hardware, or as an integrated circuit such as an application-specific integrated circuit. Such software may be distributed on computer-readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those skilled in the art, the term computer storage media includes volatile and non-volatile, removable and non-removable media implemented in any method or technology for the storage of information (such as computer-readable instructions, data structures, program modules or other data). Computer storage media include, but are not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile discs (DVD) or other optical disc storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store the desired information and that can be accessed by a computer. Furthermore, as is well known to those skilled in the art, communication media typically embody computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism, and may include any information delivery media.

1: image processing device
11: processor
12: computer-readable storage medium
S101~S103: steps
S201~S202: steps
S301~S303: steps
S401~S404: steps
S501~S502: steps

The drawings are provided to give a further understanding of the technical solution of the present invention and form part of the specification; together with the embodiments of the present application they serve to explain the technical solution of the invention and do not limit it.

FIG. 1 is a flowchart of an image processing method according to an embodiment of the present invention.

FIG. 2 is a flowchart of a method for adjusting the brightness of each frame of a video according to an embodiment of the present invention.

FIG. 3 is a flowchart of a method for obtaining the offset between any two adjacent frames of a brightness-adjusted video according to an embodiment of the present invention.

FIG. 4 is a flowchart of a method for judging whether an image contains a moving object according to an embodiment of the present invention.

FIG. 5 is a flowchart of a method for fusing images containing a moving object according to an embodiment of the present invention.

FIG. 6 is a flowchart of a method for processing video images on the basis of downscaled video images according to an embodiment of the present invention.

FIG. 7 is a block diagram of an image processing device according to an embodiment of the present invention.

S101~S103: steps

Claims (10)

1. An image processing method, comprising the following steps: performing brightness adjustment on each frame of image in a video; reducing each frame of image after the brightness adjustment; performing offset compensation on each reduced frame of image that has undergone the brightness adjustment; and performing temporal filtering on the pixels of each reduced frame of image that has undergone the offset compensation.

2. The image processing method according to claim 1, wherein performing brightness adjustment on each frame of image in the video comprises performing the following processing on each frame of image: computing the color mean of each of three color channels, the three color channels being the red (R) channel, the green (G) channel and the blue (B) channel; and applying a preset first filtering scheme to perform first temporal filtering on each color channel according to the color mean of that channel.

3. The image processing method according to claim 1, wherein performing offset compensation on each reduced frame of image that has undergone the brightness adjustment comprises: using a preset offset compensation algorithm to obtain the offset between any two adjacent frames of the brightness-adjusted video, and compensating for the offset so that the image content of any two adjacent frames of the video remains consistent at the same image coordinates.

4. The image processing method according to claim 1, wherein performing temporal filtering on the pixels of each reduced frame of image that has undergone the offset compensation comprises: applying a preset second filtering scheme to perform second temporal filtering on every pixel of each frame of the video, so that the current frame image is linearly superimposed with the frame images preceding the current frame.

5. The image processing method according to claim 3, wherein the method further comprises: after performing offset compensation on each reduced frame of image that has undergone the brightness adjustment, determining whether the image contains a moving object.

6. The image processing method according to claim 5, wherein determining whether the image contains a moving object comprises: dividing the current frame image and the previous frame image into blocks to obtain a plurality of first block images; calculating, according to a preset difference calculation algorithm, the difference between each pair of corresponding first block images in the current frame image and the previous frame image; comparing the difference between the two first block images with a preset difference threshold; when the difference between the two first block images is greater than or equal to the difference threshold, determining that the two first block images are dissimilar and that the image regions corresponding to the two first block images in the current frame image and the previous frame image contain a moving object; and when the difference between the two first block images is less than the difference threshold, determining that the two first block images are similar and that the image regions corresponding to the two first block images in the current frame image and the previous frame image do not contain a moving object.

7. The image processing method according to claim 3, wherein obtaining the offset between any two adjacent frames of the brightness-adjusted video comprises: dividing the brightness-adjusted current frame image and previous frame image into blocks to obtain a plurality of second block images; calculating the offset between each pair of corresponding second block images in the current frame image and the previous frame image; excluding, from the plurality of second block images, the second block images that contain a moving object; and taking the average of the offsets of the remaining second block images as the offset between the current frame image and the previous frame image.

8. The image processing method according to claim 6, wherein the method further comprises: after determining that any frame image contains a moving object, generating a mask image of that frame image, wherein pixels in the mask image that belong to the image regions containing the moving object have a value of 1 and pixels in the image regions not containing the moving object have a value of 0; and fusing, according to a preset fusion algorithm and using the mask image, the corresponding frame image whose pixels have undergone the temporal filtering with the corresponding frame image whose pixels have not undergone the temporal filtering, so that the image regions containing the moving object are preserved.

9. The image processing method according to claim 8, wherein the method further comprises: performing spatial filtering on each fused frame of image.

10. An image processing device, comprising a processor and a computer-readable storage medium storing instructions, wherein, when the instructions are executed by the processor, the image processing method according to any one of claims 1 to 9 is implemented.
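Illustrative sketch (not part of the patent disclosure). The claims above only require "preset" filtering schemes, an unspecified offset compensation algorithm, and an unspecified difference calculation; the Python sketch below therefore fills those gaps with assumptions chosen purely for demonstration: exponential smoothing weights alpha and beta, a 0.5 reduction factor, OpenCV phase correlation as the offset estimator, and a mean-absolute-difference block test. It shows one way claims 1 to 4 could fit together, not the claimed implementation.

import numpy as np
import cv2  # OpenCV and NumPy are assumed to be available


def adjust_brightness(frame, prev_means, alpha=0.9):
    # Claim 2 (sketch): compute the per-channel (R, G, B) color means and
    # temporally filter them; "alpha" is an assumed smoothing weight.
    means = frame.reshape(-1, 3).mean(axis=0)
    smoothed = means if prev_means is None else alpha * prev_means + (1 - alpha) * means
    gain = smoothed / np.maximum(means, 1e-6)
    adjusted = np.clip(frame.astype(np.float32) * gain, 0, 255).astype(np.uint8)
    return adjusted, smoothed


def denoise_video(frames, scale=0.5, beta=0.7):
    # Claims 1, 3 and 4 (sketch): brightness adjustment -> reduction ->
    # offset compensation -> temporal filtering that linearly superimposes
    # the current frame with the aligned accumulation of earlier frames.
    prev_means, prev_small, accum = None, None, None
    outputs = []
    for frame in frames:  # each frame: H x W x 3, uint8, RGB
        adjusted, prev_means = adjust_brightness(frame, prev_means)
        small = cv2.resize(adjusted, None, fx=scale, fy=scale,
                           interpolation=cv2.INTER_AREA)
        if accum is None:
            accum = small.astype(np.float32)
        else:
            gray_c = cv2.cvtColor(small, cv2.COLOR_RGB2GRAY).astype(np.float32)
            gray_p = cv2.cvtColor(prev_small, cv2.COLOR_RGB2GRAY).astype(np.float32)
            # Phase correlation is used here as one possible "preset offset
            # compensation algorithm" for a global translation between frames.
            (dx, dy), _ = cv2.phaseCorrelate(gray_p, gray_c)
            m = np.float32([[1, 0, dx], [0, 1, dy]])
            accum = cv2.warpAffine(accum, m, (small.shape[1], small.shape[0]))
            accum = beta * accum + (1 - beta) * small.astype(np.float32)
        prev_small = small
        outputs.append(np.clip(accum, 0, 255).astype(np.uint8))
    return outputs

Claims 5 to 8 add a moving-object test and a mask-based fusion. The block size, threshold and difference measure below are likewise assumptions standing in for the patent's "preset difference calculation algorithm" and "preset fusion algorithm".

def motion_mask(curr, prev, block=16, thresh=10.0):
    # Claim 6 (sketch): split both frames into blocks, compare corresponding
    # blocks, and mark a block as containing a moving object when the mean
    # absolute difference reaches the threshold.
    h, w = curr.shape[:2]
    mask = np.zeros((h, w), dtype=np.float32)
    c, p = curr.astype(np.float32), prev.astype(np.float32)
    for y in range(0, h, block):
        for x in range(0, w, block):
            diff = np.abs(c[y:y + block, x:x + block] -
                          p[y:y + block, x:x + block]).mean()
            if diff >= thresh:
                mask[y:y + block, x:x + block] = 1.0
    return mask


def fuse(filtered, unfiltered, mask):
    # Claim 8 (sketch): keep the unfiltered pixels where the mask is 1 so
    # that moving regions are preserved, and the temporally filtered pixels
    # everywhere else.
    m = mask[..., None]  # broadcast the single-channel mask over RGB
    return (m * unfiltered.astype(np.float32) +
            (1 - m) * filtered.astype(np.float32)).astype(np.uint8)

In this reading, static regions receive the full benefit of the temporal filter while the mask prevents moving regions from being smeared by it, and the reduction step before offset estimation mainly keeps the per-frame cost low.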
TW108119192A 2018-08-22 2019-06-03 Image processing method and device TWI741305B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810961926.5 2018-08-22
CN201810961926.5A CN110858895B (en) 2018-08-22 2018-08-22 Image processing method and device

Publications (2)

Publication Number Publication Date
TW202009876A TW202009876A (en) 2020-03-01
TWI741305B TWI741305B (en) 2021-10-01

Family

ID=69586383

Family Applications (1)

Application Number Title Priority Date Filing Date
TW108119192A TWI741305B (en) 2018-08-22 2019-06-03 Image processing method and device

Country Status (5)

Country Link
US (1) US11373279B2 (en)
JP (1) JP6814849B2 (en)
KR (1) KR102315471B1 (en)
CN (1) CN110858895B (en)
TW (1) TWI741305B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109767401B (en) * 2019-01-15 2021-02-12 深圳看到科技有限公司 Picture optimization method, device, terminal and corresponding storage medium
CN111935425B (en) * 2020-08-14 2023-03-24 字节跳动有限公司 Video noise reduction method and device, electronic equipment and computer readable medium
CN115359085B (en) * 2022-08-10 2023-04-04 哈尔滨工业大学 Dense clutter suppression method based on detection point space-time density discrimination

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102547301A (en) * 2010-09-30 2012-07-04 苹果公司 System and method for processing image data using an image signal processor
US20120240005A1 (en) * 2007-07-04 2012-09-20 Lg Electronics Inc. Digital broadcasting system and method of processing data
TW201611564A (en) * 2010-12-27 2016-03-16 Rohm Co Ltd Transmitter/receiver unit and receiver unit
CN106464857A (en) * 2014-03-26 2017-02-22 驼鹿科技公司 Compact 3D depth capture systems

Family Cites Families (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2687670B2 (en) 1990-04-19 1997-12-08 松下電器産業株式会社 Motion detection circuit and image stabilization device
JPH11122513A (en) 1997-10-09 1999-04-30 Matsushita Electric Ind Co Ltd Fluorescent light flicker correction device and camera using it
US6747757B1 (en) * 1998-05-20 2004-06-08 Fuji Photo Film Co., Ltd. Image processing method and apparatus
KR100405150B1 (en) 2001-06-29 2003-11-10 주식회사 성진씨앤씨 Method of adaptive noise smoothing/restoration in spatio-temporal domain and high-definition image capturing device thereof
US7570305B2 (en) * 2004-03-26 2009-08-04 Euresys S.A. Sampling of video data and analyses of the sampled data to determine video properties
JP2006024993A (en) * 2004-07-06 2006-01-26 Matsushita Electric Ind Co Ltd Image pickup signal processing apparatus
JP2007180808A (en) * 2005-12-27 2007-07-12 Toshiba Corp Video image encoding device, video image decoding device, and video image encoding method
JP2008109370A (en) 2006-10-25 2008-05-08 Sanyo Electric Co Ltd Image correcting device and method, and imaging apparatus
JP4245045B2 (en) * 2006-12-26 2009-03-25 ソニー株式会社 Imaging apparatus, imaging signal processing method, and program
TWI351212B (en) * 2006-12-28 2011-10-21 Altek Corp Brightness adjusting method
JP2009069185A (en) * 2007-09-10 2009-04-02 Toshiba Corp Video processing apparatus and method
US8484028B2 (en) * 2008-10-24 2013-07-09 Fuji Xerox Co., Ltd. Systems and methods for document navigation with a text-to-speech engine
US8538200B2 (en) * 2008-11-19 2013-09-17 Nec Laboratories America, Inc. Systems and methods for resolution-invariant image representation
US8681185B2 (en) * 2009-03-05 2014-03-25 Ostendo Technologies, Inc. Multi-pixel addressing method for video display drivers
JP5374220B2 (en) * 2009-04-23 2013-12-25 キヤノン株式会社 Motion vector detection device, control method therefor, and imaging device
KR101089029B1 (en) 2010-04-23 2011-12-01 동명대학교산학협력단 Crime Preventing Car Detection System using Optical Flow
CN101964863B (en) * 2010-05-07 2012-10-24 镇江唐桥微电子有限公司 Self-adaptive time-space domain video image denoising method
US8683377B2 (en) * 2010-05-12 2014-03-25 Adobe Systems Incorporated Method for dynamically modifying zoom level to facilitate navigation on a graphical user interface
US20130169834A1 (en) * 2011-12-30 2013-07-04 Advanced Micro Devices, Inc. Photo extraction from video
JP5362878B2 (en) 2012-05-09 2013-12-11 株式会社日立国際電気 Image processing apparatus and image processing method
US9292959B2 (en) * 2012-05-16 2016-03-22 Digizig Media Inc. Multi-dimensional stacking with self-correction
US9495783B1 (en) * 2012-07-25 2016-11-15 Sri International Augmented reality vision system for tracking and geolocating objects of interest
JP6429500B2 (en) * 2013-06-14 2018-11-28 キヤノン株式会社 Optical apparatus, interchangeable lens, and image blur correction method
KR101652658B1 (en) 2014-02-07 2016-08-30 가부시키가이샤 모르포 Image processing device, image processing method, image processing program, and recording medium
JP5909711B1 (en) * 2015-06-15 2016-04-27 パナソニックIpマネジメント株式会社 Flow line analysis system and flow line display method
JP6558579B2 (en) * 2015-12-24 2019-08-14 パナソニックIpマネジメント株式会社 Flow line analysis system and flow line analysis method
US20170209125A1 (en) * 2016-01-22 2017-07-27 General Electric Company Diagnostic system and method for obtaining measurements from a medical image
US10497130B2 (en) * 2016-05-10 2019-12-03 Panasonic Intellectual Property Management Co., Ltd. Moving information analyzing system and moving information analyzing method
JP6789682B2 (en) * 2016-06-13 2020-11-25 キヤノン株式会社 Imaging device, its control method, and program
CN106504717B (en) * 2016-12-27 2018-01-12 惠科股份有限公司 The driving method and display device of a kind of display device
CN106878787B (en) * 2017-03-08 2020-02-14 深圳创维-Rgb电子有限公司 Method and device for realizing television cinema mode
US20190028766A1 (en) * 2017-07-18 2019-01-24 Audible Magic Corporation Media classification for media identification and licensing
CA3076494A1 (en) * 2017-09-27 2019-04-04 Equifax Inc. Synchronizing data-entry fields with corresponding image regions

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120240005A1 (en) * 2007-07-04 2012-09-20 Lg Electronics Inc. Digital broadcasting system and method of processing data
CN102547301A (en) * 2010-09-30 2012-07-04 苹果公司 System and method for processing image data using an image signal processor
TW201611564A (en) * 2010-12-27 2016-03-16 Rohm Co Ltd Transmitter/receiver unit and receiver unit
CN106464857A (en) * 2014-03-26 2017-02-22 驼鹿科技公司 Compact 3D depth capture systems

Also Published As

Publication number Publication date
US20200065949A1 (en) 2020-02-27
CN110858895B (en) 2023-01-24
JP6814849B2 (en) 2021-01-20
KR20200022334A (en) 2020-03-03
TW202009876A (en) 2020-03-01
KR102315471B1 (en) 2021-10-20
US11373279B2 (en) 2022-06-28
CN110858895A (en) 2020-03-03
JP2020031422A (en) 2020-02-27

Similar Documents

Publication Publication Date Title
US20200374461A1 (en) Still image stabilization/optical image stabilization synchronization in multi-camera image capture
US9591237B2 (en) Automated generation of panning shots
CN108335279B (en) Image fusion and HDR imaging
TWI741305B (en) Image processing method and device
CN111353948B (en) Image noise reduction method, device and equipment
WO2018082185A1 (en) Image processing method and device
TW201742001A (en) Method and device for image noise estimation and image capture apparatus
US20210243426A1 (en) Method for generating multi-view images from a single image
CN105163047A (en) HDR (High Dynamic Range) image generation method and system based on color space conversion and shooting terminal
US8995784B2 (en) Structure descriptors for image processing
WO2023273868A1 (en) Image denoising method and apparatus, terminal, and storage medium
CN114429191B (en) Electronic anti-shake method, system and storage medium based on deep learning
CN110689565B (en) Depth map determination method and device and electronic equipment
CN108470327B (en) Image enhancement method and device, electronic equipment and storage medium
WO2017143654A1 (en) Method for selecting photo to be outputted, photographing method, device and storage medium
CN110689502B (en) Image processing method and related device
CN108810320B (en) Image quality improving method and device
Manne et al. Asymmetric wide tele camera fusion for high fidelity digital zoom
CN108122214B (en) Method and device for removing false color
CN110827287B (en) Method, device and equipment for determining background color confidence and image processing
US9531943B2 (en) Block-based digital refocusing system and method thereof
Ma et al. Study on the noise removal processing of TV picture at high speed
CN115293998A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN116137665A (en) Enhanced picture generation method and device, storage medium and electronic device
JP2015035698A (en) Image processing system, image processing method and image processing program