TWM625817U - Image simulation system with time sequence smoothness - Google Patents


Info

Publication number
TWM625817U
TWM625817U (application TW110211934U)
Authority
TW
Taiwan
Prior art keywords
image
parameter
depth map
classification
adjusted
Prior art date
Application number
TW110211934U
Other languages
Chinese (zh)
Inventor
朱宏國
黃信霖
Original Assignee
鈊象電子股份有限公司
Priority date
Filing date
Publication date
Application filed by 鈊象電子股份有限公司 filed Critical 鈊象電子股份有限公司
Priority to TW110211934U priority Critical patent/TWM625817U/en
Publication of TWM625817U publication Critical patent/TWM625817U/en


Landscapes

  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

An image simulation system is provided, which includes an image capture circuit, a memory, and a processor. The image capture circuit is configured to capture an image. The memory is configured to store multiple instructions and a weather image. The processor is connected to the image capture circuit and accesses the instructions to perform the following operations: generating a four-connected graph and a classification parameter according to the image, where the classification parameter is related to a background and a foreground in the image; adjusting the four-connected graph according to the classification parameter, and generating a guide mask according to the adjusted four-connected graph, where the guide mask is configured to indicate the background and the foreground in the image; and generating a special effect image according to the image, the guide mask, and the weather image, where the special effect image is configured to simulate, in the image, the weather corresponding to the weather image.

Description

具時序平滑性之影像模擬系統 Image simulation system with temporal smoothness

本揭示有關於一種影像處理技術,且特別是有關於具時序平滑性之破碎深度圖補正系統。 The present disclosure relates to an image processing technique, and more particularly, to a broken depth map correction system with temporal smoothness.

近年來得益於深度學習在電腦視覺領域的快速發展，許多以往難以用方程式描述的複雜問題都能夠利用卷積神經網路來得到不錯的成果。然而，當需要在街景影像產生天氣效果時，目前技術套用在街景影像上面都會產生各式各樣的問題，導致我們合成的效果並不理想。因此，要如何在影像上模擬出接近真實的天氣效果是本領域技術人員急欲解決的問題。 In recent years, thanks to the rapid development of deep learning in computer vision, many complex problems that used to be hard to describe with equations can now be handled well by convolutional neural networks. However, when weather effects need to be generated on street view images, current techniques run into a variety of problems when applied to such images, so the synthesized results are unsatisfactory. How to simulate near-real weather effects in an image is therefore a problem that those skilled in the art are eager to solve.

本揭示的一態樣揭露一種影像模擬系統，包括影像擷取電路、記憶體以及處理器。影像擷取電路用以擷取影像。記憶體用以儲存多個指令以及一破碎深度圖。處理器連接影像擷取電路，並存取多個指令以進行下列操作: 依據該影像產生四聯通圖以及分類參數，其中分類參數相關於該影像中的背景以及前景；依據分類參數調整四聯通圖，並依據調整後的四聯通圖產生引導遮罩，其中引導遮罩用以指示影像中的背景以及前景；以及依據影像、引導遮罩以及破碎深度圖產生特效影像，其中特效影像用以在影像中模擬與破碎深度圖對應的天氣。 An aspect of the present disclosure provides an image simulation system including an image capture circuit, a memory, and a processor. The image capture circuit is used to capture an image. The memory is used to store a plurality of instructions and a broken depth map. The processor is connected to the image capture circuit and accesses the instructions to perform the following operations: generating a four-connected graph and a classification parameter according to the image, where the classification parameter is related to the background and the foreground in the image; adjusting the four-connected graph according to the classification parameter, and generating a guide mask according to the adjusted four-connected graph, where the guide mask is used to indicate the background and the foreground in the image; and generating a special effect image according to the image, the guide mask, and the broken depth map, where the special effect image is used to simulate, in the image, the weather corresponding to the broken depth map.

基於上述，本揭示實施例可依據影像的分類參數以及四聯通圖產生引導遮罩，並利用引導遮罩優化由深度預測產生的深度圖以明確分辨出影像中的前景以及背景，進而防止影像中接近地面的部分的深度會與地面的深度混淆。 Based on the above, embodiments of the present disclosure can generate a guide mask according to the classification parameter of the image and the four-connected graph, and use the guide mask to refine the depth map produced by depth prediction so that the foreground and the background in the image are clearly distinguished, thereby preventing the depth of parts of the image close to the ground from being confused with the depth of the ground.

100:影像模擬系統 100: Image simulation system

110:影像擷取電路 110: Image capture circuit

120:記憶體 120: memory

130:處理器 130: Processor

S210~S230:步驟 S210~S230: Steps

PPM:預處理模型 PPM: Preprocessing Model

MGM:遮罩產生模型 MGM: Mask Generation Model

TSSM:時序平滑化模型 TSSM: Time Series Smoothing Model

IMG:影像 IMG: image

IMG’:另一影像 IMG': another image

SI:語意分割影像 SI: Semantic Segmentation Image

DI:深度圖 DI: Depth Map

GDI:虛擬地面圖 GDI: Virtual Ground Map

GM:引導遮罩 GM: Guide Mask

SYI:特效影像 SYI: special effects images

k:第k個輔助節點 k: the kth secondary node

P_IMG:機率圖 P_IMG: Probability Map

第1圖是本揭示的影像模擬系統的方塊圖。 FIG. 1 is a block diagram of an image simulation system of the present disclosure.

第2圖是本揭示的影像模擬方法的流程圖。 FIG. 2 is a flowchart of the image simulation method of the present disclosure.

第3圖是依據本揭示一些實施例的影像模擬方法的示意圖。 FIG. 3 is a schematic diagram of an image simulation method according to some embodiments of the present disclosure.

第4圖是依據本揭示一些實施例的輔助節點以及分類標籤的示意圖。 FIG. 4 is a schematic diagram of auxiliary nodes and classification labels according to some embodiments of the present disclosure.

第5圖是依據本揭示一些實施例的與陰影機率對應的機率圖的示意圖。 FIG. 5 is a schematic diagram of a probability map corresponding to shadow probability according to some embodiments of the present disclosure.

參照第1圖，第1圖是本揭示的影像模擬系統100的方塊圖。於一實施例中，影像模擬系統100包括影像擷取電路110、記憶體120以及處理器130。影像擷取電路110用以擷取影像。記憶體120用以儲存多個指令以及破碎深度圖。處理器130連接影像擷取電路110以及記憶體120，並用以存取這些指令。 Referring to FIG. 1, FIG. 1 is a block diagram of an image simulation system 100 of the present disclosure. In one embodiment, the image simulation system 100 includes an image capture circuit 110, a memory 120, and a processor 130. The image capture circuit 110 is used to capture images. The memory 120 is used to store a plurality of instructions and the broken depth map. The processor 130 is connected to the image capture circuit 110 and the memory 120, and is used to access these instructions.

在一些實施例中,影像模擬系統100可由電腦、伺服器或處理中心建立。在一些實施例中,影像擷取電路110可以是用以擷取影像的攝影機或可以連續拍照之照相機。在一些實施例中,處理器130可由處理單元、中央處理單元或計算單元實現。在一些實施例中,破碎深度圖可以是與道路上可能會產生的特定天氣對應的深度圖。例如,道路上的下雨、霧氣、下雪或冰雹的深度圖。 In some embodiments, the image simulation system 100 can be established by a computer, a server or a processing center. In some embodiments, the image capture circuit 110 may be a camera for capturing images or a camera capable of taking pictures continuously. In some embodiments, the processor 130 may be implemented by a processing unit, a central processing unit, or a computing unit. In some embodiments, the broken depth map may be a depth map corresponding to specific weather that may occur on the road. For example, a depth map of rain, fog, snow, or hail on a road.

在一些實施例中，影像模擬系統100並不限於包括影像擷取電路110、記憶體120以及處理器130，影像模擬系統100可以進一步包括操作以及應用中所需的其他元件，舉例來說，影像模擬系統100可更包括輸出介面（例如，用於顯示資訊的顯示面板）、輸入介面（例如，觸控面板、鍵盤、麥克風、掃描器或快閃記憶體讀取器）以及通訊電路（例如，WiFi通訊模組、藍芽通訊模組、無線電信網路通訊模組等）。 In some embodiments, the image simulation system 100 is not limited to the image capture circuit 110, the memory 120, and the processor 130; the image simulation system 100 may further include other components required for its operation and applications. For example, the image simulation system 100 may further include an output interface (e.g., a display panel for displaying information), an input interface (e.g., a touch panel, keyboard, microphone, scanner, or flash memory reader), and a communication circuit (e.g., a WiFi communication module, a Bluetooth communication module, a wireless telecommunication network communication module, etc.).

參照第2圖，第2圖是本揭示的影像模擬方法的流程圖。第2圖所示實施例的方法適用於第1圖的影像模擬系統100，但不以此為限。為方便及清楚說明起見，下述同時參照第1圖以及第2圖，以影像模擬系統100中各元件之間的作動關係來說明第2圖所示影像模擬方法的詳細步驟。 Referring to FIG. 2, FIG. 2 is a flowchart of the image simulation method of the present disclosure. The method of the embodiment shown in FIG. 2 is applicable to the image simulation system 100 of FIG. 1, but is not limited thereto. For convenience and clarity, the detailed steps of the image simulation method shown in FIG. 2 are described below with reference to both FIG. 1 and FIG. 2, in terms of the interactions among the components of the image simulation system 100.

在一實施例中，影像模擬方法包括步驟S210~S230，這些步驟皆可藉由處理器130執行。首先，於步驟S210中，擷取影像，並依據影像產生四聯通圖以及分類參數，其中分類參數相關於該影像中的背景以及前景。 In one embodiment, the image simulation method includes steps S210~S230, all of which can be performed by the processor 130. First, in step S210, an image is captured, and a four-connected graph and a classification parameter are generated according to the image, where the classification parameter is related to the background and the foreground in the image.

在一些實施例中,可對影像執行語意分割處理以產生語意分割影像,並依據語意分割影像產生分別與影像的多個像素對應的多個分類標籤。接著,可依據多個像素的RGB值產生與多個數值類別對應的直方圖,並依據多個數值類別以及多個分類標籤產生分類參數。 In some embodiments, a semantic segmentation process may be performed on the image to generate a semantically segmented image, and a plurality of classification labels corresponding to a plurality of pixels of the image may be generated according to the semantically segmented image. Then, a histogram corresponding to a plurality of numerical categories can be generated according to the RGB values of the plurality of pixels, and classification parameters can be generated according to the plurality of numerical categories and a plurality of classification labels.
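As an illustrative sketch only (not the patent's implementation), the histogram over numerical categories can be realized by mapping each pixel's RGB value to one of 16x16x16 bins; here each channel is assumed to be split into 16 equal ranges of width 16, and the function names are hypothetical:

```python
import numpy as np

def rgb_bin_index(pixels):
    """Map Nx3 uint8 RGB values to indices of a 16x16x16 histogram.

    Each channel is split into 16 bins of width 16 (0-15, 16-31, ...),
    so a pixel's bin is (r // 16, g // 16, b // 16), flattened.
    """
    p = np.asarray(pixels, dtype=np.uint16) // 16      # per-channel bin, 0..15
    return p[:, 0] * 256 + p[:, 1] * 16 + p[:, 2]      # flatten to 0..4095

pixels = np.array([[0, 0, 0], [255, 255, 255], [17, 32, 200]], dtype=np.uint8)
idx = rgb_bin_index(pixels)
hist = np.bincount(idx, minlength=16 ** 3)             # 4096-bin histogram
```

Each bin then corresponds to one numerical category, whose foreground/undetermined label counts feed the classification parameter described above.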

在一些實施例中，可依據語意分割影像產生影像中的多個像素之間的權重。接著，可將多個像素做為四聯通圖中的節點，並將這些權重對應於四聯通圖中的各節點之間的連線，其中四聯通圖中的節點依序對應於影像中的像素(例如，四聯通圖中的第一列的節點對應於影像中的第一列的像素)。 In some embodiments, weights between the pixels of the image may be generated according to the semantically segmented image. The pixels can then be used as the nodes of a four-connected graph, with these weights assigned to the connections between nodes, where the nodes of the four-connected graph correspond to the pixels of the image in order (for example, the nodes in the first row of the four-connected graph correspond to the pixels in the first row of the image).
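The node layout of such a four-connected graph can be sketched as follows; this is a minimal illustration in which the helper name and node numbering are assumptions, and the segmentation-derived edge weights are omitted:

```python
import numpy as np

def four_connected_edges(h, w):
    """Enumerate the edges of a 4-connected grid graph over an h x w image.

    Pixel (r, c) gets node id r * w + c; each node connects to its right
    and bottom neighbours, which covers every 4-neighbour pair exactly once.
    """
    ids = np.arange(h * w).reshape(h, w)
    right = np.stack([ids[:, :-1].ravel(), ids[:, 1:].ravel()], axis=1)
    down = np.stack([ids[:-1, :].ravel(), ids[1:, :].ravel()], axis=1)
    return np.concatenate([right, down])

edges = four_connected_edges(3, 4)   # a 3 x 4 image
# a 4-connected h x w grid has h*(w-1) + (h-1)*w edges, here 9 + 8 = 17
```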

在一些實施例中，可將多個數值類別做為多個輔助節點，並將多個輔助節點連接到四聯通圖中與之對應的節點以產生調整四聯通圖。 In some embodiments, the numerical categories can be used as auxiliary nodes, and each auxiliary node can be connected to its corresponding nodes in the four-connected graph to produce an adjusted four-connected graph.

在一些實施例中,多個分類標籤包括前景標籤、後景標籤以及未確定標籤。在一些實施例中,可從與多個數值類別中的各者對應的前景標籤的數量以及未確定標籤的數量之中選擇最小數量。接著,可將多個數值類別各自的最小數量相加以產生最小總和值,並依據最小總和值以及分類成本參數產生分類參數。 In some embodiments, the plurality of classification labels include foreground labels, background labels, and undetermined labels. In some embodiments, the minimum number may be selected from among the number of foreground labels and the number of undetermined labels corresponding to each of the plurality of numerical categories. Next, the respective minimum numbers of the plurality of numerical categories may be added to generate a minimum sum value, and a classification parameter may be generated based on the minimum sum value and the classification cost parameter.

再者，於步驟S220中，依據分類參數調整四聯通圖，並依據調整後的四聯通圖產生引導遮罩，其中引導遮罩用以指示影像中的背景以及前景。 Furthermore, in step S220, the four-connected graph is adjusted according to the classification parameter, and a guide mask is generated according to the adjusted four-connected graph, where the guide mask is used to indicate the background and the foreground in the image.

在一些實施例中,可依據多個像素中的各者的RGB值以及多個像素中的各者的周圍的像素的RGB值產生平滑參數。接著,可依據多個像素中的各者的平滑參數以及分類參數調整四聯通圖。 In some embodiments, the smoothing parameters may be generated from RGB values of each of the plurality of pixels and RGB values of surrounding pixels of each of the plurality of pixels. Next, the quadruplet map may be adjusted according to the smoothing parameter and the classification parameter of each of the plurality of pixels.

在一些實施例中，可將影像由RGB域轉換至HSV域以產生影像的HSV數值。接著，可依據影像的HSV數值以及陰影數值範圍進行距離運算以產生影像的多個像素的陰影機率。接著，可依據多個像素的陰影機率調整分類參數以及平滑參數，並依據調整後的平滑參數以及調整後的分類參數調整多個分類標籤以及四聯通圖。 In some embodiments, the image can be converted from the RGB domain to the HSV domain to obtain the HSV values of the image. A distance calculation can then be performed on the HSV values of the image against a shadow value range to produce a shadow probability for each pixel of the image. The classification parameter and the smoothing parameter can then be adjusted according to the shadow probabilities of the pixels, and the classification labels and the four-connected graph can be adjusted according to the adjusted smoothing parameter and the adjusted classification parameter.

在一些實施例中，陰影數值範圍可包括色相通道的數值範圍、飽和度通道的數值範圍以及明度通道的數值範圍，其中這些數值範圍可以是依據過往經驗當中取得的陰影的平均值、人工給定的陰影的預設值、或是隨機數值。 In some embodiments, the shadow value range can include a value range for the hue channel, a value range for the saturation channel, and a value range for the lightness channel, where these value ranges can be averages of shadows obtained from past experience, manually given preset shadow values, or random values.

在一些實施例中，可對調整後的平滑參數以及調整後的分類參數對四聯通圖進行最大流最小分割運算以調整分別與影像的多個像素對應的多個分類標籤，並依據調整後的多個分類標籤以及四聯通圖產生引導遮罩。 In some embodiments, a max-flow min-cut operation can be performed on the four-connected graph according to the adjusted smoothing parameter and the adjusted classification parameter to adjust the classification labels corresponding to the pixels of the image, and the guide mask can be generated according to the adjusted classification labels and the four-connected graph.

再者，於步驟S230中，依據影像、引導遮罩以及破碎深度圖產生特效影像，其中特效影像用以在影像中模擬與氣候影像對應的天氣。 Furthermore, in step S230, a special effect image is generated according to the image, the guide mask, and the broken depth map, where the special effect image is used to simulate, in the image, the weather corresponding to the weather image.

在一些實施例中,可依據影像產生深度圖,並利用引導遮罩以及深度圖在影像上產生與破碎深度圖對應的天氣。 In some embodiments, a depth map may be generated from the image, and the weather corresponding to the broken depth map may be generated on the image using the guide mask and the depth map.

在一些實施例中，可藉由影像擷取電路110擷取在影像前一幀的另一影像。在一些實施例中，可依據影像產生深度圖，並依據深度圖產生點雲圖。接著，可從點雲圖中辨識與影像中的地平線對應的虛擬地面位置，並依據虛擬地面位置調整引導遮罩。接著，依據影像、另一影像、調整後的引導遮罩以及深度圖產生時序參數以及邊緣參數，其中時序參數用以解決影像與另一影像之間的深度不連續，且邊緣參數用以強化該影像以及另一影像的深度的邊緣。接著，可依據時序參數以及邊緣參數調整深度圖，並依據調整後的深度圖以及破碎深度圖產生特效影像。 In some embodiments, another image, one frame before the image, can be captured by the image capture circuit 110. In some embodiments, a depth map can be generated according to the image, and a point cloud can be generated according to the depth map. A virtual ground position corresponding to the horizon in the image can then be identified from the point cloud, and the guide mask can be adjusted according to the virtual ground position. Next, a timing parameter and an edge parameter are generated according to the image, the other image, the adjusted guide mask, and the depth map, where the timing parameter is used to resolve depth discontinuity between the image and the other image, and the edge parameter is used to strengthen the depth edges of the image and the other image. The depth map can then be adjusted according to the timing parameter and the edge parameter, and the special effect image can be generated according to the adjusted depth map and the broken depth map.

在一些實施例中,可依據時序參數以及邊緣參數進行共軛梯度下降處理以調整深度圖。 In some embodiments, a conjugate gradient descent process can be performed to adjust the depth map according to timing parameters and edge parameters.

在一些實施例中,可依據破碎深度圖產生稀疏點雲圖,並依據調整後的深度圖將稀疏點雲圖與影像進行合成以產生特效影像。 In some embodiments, a sparse point cloud image may be generated according to the broken depth map, and the sparse point cloud image and the image may be synthesized according to the adjusted depth map to generate a special effect image.

藉由上述步驟，可對分類參數進行調整以防止在背景中接近地面的部分的深度會與地面的深度混淆，以利用調整後的分類參數產生引導遮罩。此外，更可對平滑參數進行調整以防止影像中的物件邊緣的雜訊。另外，更可對時序參數以及邊緣參數進行調整以解決影像與前一幀的另一影像之間的深度不連續以及強化該影像以及另一影像的深度的邊緣。 Through the above steps, the classification parameter can be adjusted to prevent the depth of background parts close to the ground from being confused with the depth of the ground, so that the adjusted classification parameter can be used to generate the guide mask. In addition, the smoothing parameter can be adjusted to suppress noise at object edges in the image. Moreover, the timing parameter and the edge parameter can be adjusted to resolve the depth discontinuity between the image and the other image of the previous frame and to strengthen the depth edges of the image and the other image.

以下以記憶體120中的實際模型做為例子以進一步對上述流程進行說明。同時參照第3圖，第3圖是依據本揭示一些實施例的影像模擬方法的示意圖。在一實施例中，記憶體120更可包括預處理模型PPM、遮罩產生模型MGM以及時序平滑化模型TSSM。處理器130可執行這些模型以執行上述第2圖中的步驟。 The above process is further described below using the actual models in the memory 120 as an example. Referring also to FIG. 3, FIG. 3 is a schematic diagram of an image simulation method according to some embodiments of the present disclosure. In one embodiment, the memory 120 may further include a preprocessing model PPM, a mask generation model MGM, and a temporal smoothing model TSSM. The processor 130 can execute these models to carry out the steps in FIG. 2 above.

首先，預處理模型PPM可對由影像擷取電路110所產生的影像IMG進行預處理，其中影像IMG屬於RGB影像，且預處理包括語意分割處理(例如藉由UperNet處理)、深度預測處理(例如藉由MegaDepth處理)以及點雲投射處理(例如藉由PCL處理)。 First, the preprocessing model PPM can preprocess the image IMG generated by the image capture circuit 110, where the image IMG is an RGB image and the preprocessing includes semantic segmentation (e.g., by UperNet), depth prediction (e.g., by MegaDepth), and point cloud projection (e.g., by PCL).

詳細而言，預處理模型PPM可對影像IMG進行語意分割處理以產生語意分割影像SI，其中語意分割影像SI中相同的物件具有相同的權重。接著，預處理模型PPM可對影像IMG進行深度預測處理以產生深度圖DI。接著，預處理模型PPM可對深度圖DI進行點雲投射處理以產生虛擬地面圖GDI，其中虛擬地面圖GDI中的虛擬地面位置具有較低的灰階值。藉此，預處理模型PPM可將語意分割影像SI以及深度圖DI輸入至遮罩產生模型MGM，並將深度圖DI以及虛擬地面圖GDI輸入至時序平滑化模型TSSM。 Specifically, the preprocessing model PPM can perform semantic segmentation on the image IMG to produce a semantically segmented image SI, in which identical objects have identical weights. The preprocessing model PPM can then perform depth prediction on the image IMG to produce a depth map DI, and perform point cloud projection on the depth map DI to produce a virtual ground map GDI, in which the virtual ground positions have lower grayscale values. The preprocessing model PPM can then input the semantically segmented image SI and the depth map DI to the mask generation model MGM, and input the depth map DI and the virtual ground map GDI to the temporal smoothing model TSSM.
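The point cloud projection step can be illustrated with a minimal pinhole back-projection; the document uses PCL, so this NumPy sketch, its function name, and the intrinsic parameters fx, fy, cx, cy are all assumptions for illustration:

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project an h x w depth map to 3D points with a pinhole model.

    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy, Z = depth(v, u),
    where (fx, fy, cx, cy) are assumed camera intrinsics.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

depth = np.full((2, 2), 4.0)                       # toy 2x2 constant depth
pts = depth_to_points(depth, fx=2.0, fy=2.0, cx=0.5, cy=0.5)
```

A plane fitted to the lowest band of such points would then play the role of the virtual ground in the virtual ground map GDI.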

再者，遮罩產生模型MGM可將語意分割影像SI中屬於地面的像素辨識為前景，並把非地面的像素辨識為後景，以及將前景以及後景交界處的像素進一步辨識為未確定區域。接著，遮罩產生模型MGM可依據語意分割影像SI產生影像IMG中的像素的四聯通圖，其中影像IMG中的像素分別對應於四聯通圖中的節點，且四聯通圖中的屬於相同物件的像素之間的連線會有越高的關聯值。 Furthermore, the mask generation model MGM can identify pixels of the semantically segmented image SI that belong to the ground as foreground, identify non-ground pixels as background, and further identify pixels at the boundary between foreground and background as an undetermined region. The mask generation model MGM can then generate a four-connected graph of the pixels of the image IMG according to the semantically segmented image SI, where the pixels of the image IMG correspond to the nodes of the four-connected graph, and connections between pixels belonging to the same object carry higher association values.

再者，遮罩產生模型MGM可依據影像IMG產生紅色通道、綠色通道以及藍色通道的16個類別的直方圖，其中直方圖的尺寸為16x16x16，且直方圖中的數值類別也是16x16x16。接著，遮罩產生模型MGM可將與這些數值類別對應的16x16x16個輔助節點連接至四聯通圖中的對應的節點。舉例而言，紅色通道的一個數值類別的數值範圍為0~30，且在四聯通圖中與此數值範圍對應的節點可連接至此數值類別的輔助節點。 Furthermore, the mask generation model MGM can build a histogram over the red, green, and blue channels of the image IMG with 16 categories per channel, so that the histogram has a size of 16x16x16 and 16x16x16 numerical categories in total. The mask generation model MGM can then connect the 16x16x16 auxiliary nodes corresponding to these numerical categories to the corresponding nodes of the four-connected graph. For example, if one numerical category of the red channel covers the value range 0~30, the nodes of the four-connected graph whose values fall in that range can be connected to the auxiliary node of that category.

再者,遮罩產生模型MGM可利用以下公式(1)計算所有輔助節點的分類成本的總合(即,分類參數)。 Furthermore, the mask generation model MGM can calculate the sum of the classification costs (ie, classification parameters) of all auxiliary nodes using the following formula (1).

E1 = Σ_{k=1}^{N} β_k × min(S_k, S'_k)...(1)

其中E1為分類參數，β_k是預先設定的第k個輔助節點的分類成本參數，S_k是第k個輔助節點所連接的具有前景標籤的節點的像素的數量，S'_k是第k個輔助節點所連接的具有背景標籤的節點的像素的數量，min(,)為取出最小值的函數，以及N為輔助節點的數量(例如上述的16x16x16個)。 where E1 is the classification parameter, β_k is the preset classification cost parameter of the kth auxiliary node, S_k is the number of pixels with foreground labels among the nodes connected to the kth auxiliary node, S'_k is the number of pixels with background labels among those nodes, min(,) is a function returning the minimum, and N is the number of auxiliary nodes (for example, 16x16x16 as above).

舉例而言，以第k個輔助節點為例。同時參照第4圖，第4圖是依據本揭示一些實施例的輔助節點以及分類標籤的示意圖。於第4圖中，有1個連接的節點的像素具有後景標籤，有2個連接的節點的像素具有未確定標籤，以及有4個節點的像素具有前景標籤。因此，可以取1以及4之中的最小值與上述β進行乘積以計算出第k個輔助節點的分類成本為β。基於此，若在第k個輔助節點中的所有節點的分類標籤至多具有一種標籤(前景或後景標籤)，將可使分類成本最小化(即為0)。 For example, consider the kth auxiliary node. Referring also to FIG. 4, FIG. 4 is a schematic diagram of an auxiliary node and classification labels according to some embodiments of the present disclosure. In FIG. 4, one connected pixel has a background label, two connected pixels have undetermined labels, and four have foreground labels. Therefore, the minimum of 1 and 4 is multiplied by the above β, giving β as the classification cost of the kth auxiliary node. On this basis, if the pixels connected to the kth auxiliary node carry at most one kind of label (foreground or background), the classification cost is minimized (i.e., 0).
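The FIG. 4 example of formula (1) can be checked with a small sketch; the function name and label encoding are assumptions for illustration:

```python
def classification_parameter(nodes, beta):
    """E1 of formula (1): for each auxiliary node k, the cost is
    beta_k * min(#foreground-labelled, #background-labelled) among the
    pixels connected to it; undetermined pixels count towards neither.
    """
    e1 = 0.0
    for b, labels in zip(beta, nodes):
        fg = sum(1 for label in labels if label == "fg")
        bg = sum(1 for label in labels if label == "bg")
        e1 += b * min(fg, bg)
    return e1

# FIG. 4 example: one background, two undetermined, four foreground pixels
nodes = [["bg", "und", "und", "fg", "fg", "fg", "fg"]]
e1 = classification_parameter(nodes, beta=[1.0])   # min(4, 1) * beta = beta
```

With a single label kind per auxiliary node the cost collapses to 0, matching the minimization condition stated above.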

參照回第3圖,遮罩產生模型MGM可以以下公式(2)計算各像素的平滑項,並將平滑項的總合做為平滑參數。 Referring back to FIG. 3, the mask generation model MGM can calculate the smoothing term of each pixel by the following formula (2), and use the sum of the smoothing terms as a smoothing parameter.

E2 = 1 - e^(-∥Vp - Vq∥² / (2σ²))...(2)

其中E2為1個像素的平滑項，Vp為1個像素的RGB值，Vq為此像素周圍的其中一個像素的RGB值，以及σ為所有像素的統計上的變異數。值得注意的是，由上述公式(2)可得知，從1個像素可以計算出4個平滑項，並可將這些平滑項相加以做為平滑參數，其中這些平滑項呈現高斯分佈，相鄰兩個像素顏色越相近其平滑項就越小，且反之平滑項就越大。藉由此平滑參數可大大消除像素的分類標籤出現錯誤的情況。 where E2 is the smoothing term of one pixel, Vp is the RGB value of that pixel, Vq is the RGB value of one of its surrounding pixels, and σ is the statistical variance over all pixels. It is worth noting that, from formula (2), four smoothing terms can be calculated for each pixel, and these smoothing terms can be added together as the smoothing parameter. The smoothing terms follow a Gaussian-shaped function of the color difference: the closer the colors of two adjacent pixels, the smaller the smoothing term, and the further apart, the larger. This smoothing parameter greatly reduces mislabeling of pixels.

再者,遮罩產生模型MGM可預先設定一般陰影在影像中的HSV域的陰影數值範圍。例如,色相通道的數值範圍設定為50~250,飽和度通道的數值範圍設定為0~70,以及明度通道的數值範圍設定為25~100。 Furthermore, the mask generation model MGM can preset the shadow value range of the HSV domain for general shadows in the image. For example, the value range of the hue channel is set to 50~250, the value range of the saturation channel is set to 0~70, and the value range of the lightness channel is set to 25~100.

再者,遮罩產生模型MGM可將影像IMG由RGB域轉換至HSV域以產生影像的HSV數值。接著,遮罩產生模型MGM可依據影像的HSV數值以及陰影數值範圍進行距離運算以產生影像IMG的多個像素的陰影機率,並可依據這些陰影機率產生一個機率圖。 Furthermore, the mask generation model MGM can convert the image IMG from the RGB domain to the HSV domain to generate the HSV value of the image. Then, the mask generation model MGM can perform distance calculation according to the HSV value of the image and the range of shadow values to generate shadow probabilities of a plurality of pixels of the image IMG, and can generate a probability map according to the shadow probabilities.
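One plausible reading of this distance computation can be sketched as follows; the text does not specify the exact distance metric, so the per-channel distance-to-range and the exponential mapping below are assumptions:

```python
import colorsys
import numpy as np

# Preset shadow ranges from the description: H 50~250, S 0~70, V 25~100
SHADOW_RANGES = {"h": (50, 250), "s": (0, 70), "v": (25, 100)}

def range_distance(value, lo, hi):
    """Distance of a scalar to a closed interval (0 if inside it)."""
    return max(lo - value, 0.0, value - hi)

def shadow_probability(rgb):
    """Assumed form: exponential decay of the summed per-channel distance
    to the shadow ranges (probability 1.0 when inside all three ranges)."""
    r, g, b = (c / 255.0 for c in rgb)
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    hsv = {"h": h * 255, "s": s * 255, "v": v * 255}
    d = sum(range_distance(hsv[k], *SHADOW_RANGES[k]) for k in hsv)
    return float(np.exp(-d / 50.0))

p_dark = shadow_probability((40, 40, 60))       # dark bluish pixel
p_bright = shadow_probability((250, 240, 80))   # bright yellow pixel
```

A dark, low-saturation pixel lands near or inside the ranges and gets a high probability; a bright, saturated pixel falls far outside and gets a probability near zero.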

舉例而言,參照第5圖,第5圖是依據本揭示一些實施例的與陰影機率對應的機率圖P_IMG的示意圖。如第5圖所示,可依據上述像素的陰影機率將不同的陰影機率對應於不同的灰度值以產生機率圖P_IMG,其中越接近白色的區域越有可能是陰影。 For example, referring to FIG. 5, FIG. 5 is a schematic diagram of the probability map P_IMG corresponding to the shadow probability according to some embodiments of the present disclosure. As shown in FIG. 5 , the probability map P_IMG can be generated by assigning different shading probabilities to different grayscale values according to the shading probabilities of the above-mentioned pixels, wherein the area closer to white is more likely to be a shading.

參照回第3圖,遮罩產生模型MGM可依據上述陰影機率調整公式(1)中的β k的數值如以下公式(3)。 Referring back to FIG. 3 , the mask generation model MGM can adjust the value of β k in the formula (1) according to the above-mentioned shadow probability, as shown in the following formula (3).

β_k = 1 - ShadowProb_k...(3)

其中ShadowProb_k為與第k個輔助節點對應的每個像素的陰影機率。換言之，在陰影機率越高的地方就越不受分類參數的影響，而只受到上述平滑項的控制。 where ShadowProb_k is the shadow probability of each pixel corresponding to the kth auxiliary node. In other words, the higher the shadow probability of a region, the less it is affected by the classification parameter, and the more it is governed solely by the smoothing term above.

再者,遮罩產生模型MGM可依據上述像素的陰影機率產生陰影參數如以下公式(4)。 Furthermore, the mask generation model MGM can generate shadow parameters according to the shadow probability of the above-mentioned pixels, as shown in the following formula (4).

E3 = f(ShadowProb_p, ShadowProb_q)...(4) [formula (4), rendered as an image in the original publication: the shadow parameter E3 as a function of ShadowProb_p and ShadowProb_q]

其中E3為陰影參數，ShadowProb_p為像素的陰影機率，模糊像素為具有未確定標籤且周圍的像素具有背景標籤的像素，以及ShadowProb_q為像素的周圍的像素的陰影機率。值得注意的是，每個像素周圍存在4個像素，故可產生4個陰影參數。基於上述，陰影參數E3將可降低原本語意分割中的地面交界處的成本，讓最終的交界處落在語意分割的邊緣而不是陰影的邊緣。 where E3 is the shadow parameter, ShadowProb_p is the shadow probability of a pixel, a blurred pixel is one with an undetermined label whose surrounding pixels have background labels, and ShadowProb_q is the shadow probability of the pixels surrounding that pixel. It is worth noting that each pixel has four surrounding pixels, so four shadow parameters can be generated. Based on the above, the shadow parameter E3 lowers the cost at the ground junction of the original semantic segmentation, so that the final junction falls on the edge of the semantic segmentation rather than the edge of the shadow.

再者,遮罩產生模型MGM可依據各像素的陰影參數以及一個預設的調整參數調整上述的平滑參數如以下公式(5)。 Furthermore, the mask generation model MGM can adjust the above-mentioned smoothing parameter according to the shadow parameter of each pixel and a preset adjustment parameter, as shown in the following formula (5).

E2' = E2 + w × E3...(5)

其中E2'為調整後的平滑參數，以及w為調整參數。藉由上述調整參數w，將可控制陰影以及影像IMG的顏色之間的關係。 where E2' is the adjusted smoothing parameter and w is the adjustment parameter. The adjustment parameter w controls the relationship between the shadow and the colors of the image IMG.

再者，遮罩產生模型MGM可對E1、各像素的調整後的平滑參數E2'、多個輔助節點以及四聯通圖中的關聯值進行最大流最小分割運算以產生調整後的分類標籤以及調整後的四聯通圖。接著，遮罩產生模型MGM可依據調整後的分類標籤以及調整後的四聯通圖產生二元引導遮罩GM，其中二元引導遮罩GM中與背景標籤對應的位置的數值可被設定為0，且二元引導遮罩GM中與前景標籤對應的位置的數值可被設定為1。藉此，遮罩產生模型MGM可將二元引導遮罩GM輸入至時序平滑化模型TSSM。 Furthermore, the mask generation model MGM can perform a max-flow min-cut operation on E1, the adjusted smoothing parameters E2' of the pixels, the auxiliary nodes, and the association values of the four-connected graph to produce adjusted classification labels and an adjusted four-connected graph. The mask generation model MGM can then generate a binary guide mask GM according to the adjusted classification labels and the adjusted four-connected graph, where positions in the binary guide mask GM corresponding to background labels can be set to 0 and positions corresponding to foreground labels can be set to 1. The mask generation model MGM can thereby input the binary guide mask GM to the temporal smoothing model TSSM.
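The max-flow min-cut step can be illustrated with a tiny self-contained Edmonds-Karp solver on a toy source/sink graph; the graph layout and capacities below are assumptions for illustration, not the patent's actual construction:

```python
from collections import deque

def min_cut_labels(n, edges, source, sink):
    """Tiny Edmonds-Karp max-flow; returns a 0/1 label per node, where 1
    means the node stays on the source (foreground) side of the min cut.

    edges: list of (u, v, capacity) for an undirected graph on n nodes.
    """
    cap = [[0] * n for _ in range(n)]
    for u, v, c in edges:
        cap[u][v] += c
        cap[v][u] += c
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[source] = source
        q = deque([source])
        while q:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[sink] == -1:
            break                      # no augmenting path: flow is maximal
        # find the bottleneck along the path and push flow
        path, v = [], sink
        while v != source:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[u][v] for u, v in path)
        for u, v in path:
            cap[u][v] -= f
            cap[v][u] += f
    # nodes still reachable from the source in the residual graph = foreground
    reach = [False] * n
    reach[source] = True
    q = deque([source])
    while q:
        u = q.popleft()
        for v in range(n):
            if not reach[v] and cap[u][v] > 0:
                reach[v] = True
                q.append(v)
    return [1 if reach[v] else 0 for v in range(n)]

# 4 pixel nodes (0-3) plus source 4 and sink 5: pixels 0 and 1 are tied
# to the source (foreground), 2 and 3 to the sink, with one weak link across
edges = [(4, 0, 10), (4, 1, 10), (0, 1, 5), (1, 2, 1), (2, 3, 5),
         (3, 5, 10), (2, 5, 10)]
labels = min_cut_labels(6, edges, source=4, sink=5)
```

The cut severs the cheap (1, 2) link, so pixels 0 and 1 come out labelled 1 (foreground) and pixels 2 and 3 labelled 0, which is exactly the 0/1 assignment written into a binary mask like GM.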

再者,時序平滑化模型TSSM可預先儲存影像IMG以及影像IMG前一幀的另一影像IMG’。時序平滑化模型TSSM可從點雲圖中辨識與影像IMG中的地平線對應的虛擬地面位置,並依據虛擬地面位置調整二元引導遮罩GM,進而利用調整後的二元引導遮罩GM對深度圖DI進行濾波以產生過濾後的深度圖DI。 Furthermore, the time series smoothing model TSSM can pre-store the image IMG and another image IMG' of the previous frame of the image IMG. The time series smoothing model TSSM can identify the virtual ground position corresponding to the horizon in the image IMG from the point cloud image, and adjust the binary guidance mask GM according to the virtual ground position, and then use the adjusted binary guidance mask GM to adjust the depth map. DI is filtered to produce a filtered depth map DI.

再者,時序平滑化模型TSSM可依據影像IMG前一幀的另一影像IMG’產生光度參數如以下公式(6)。 Furthermore, the time series smoothing model TSSM can generate the photometric parameters according to the following formula (6) according to another image IMG' of the previous frame of the image IMG.

min ∫ w(x) × ∥O - warp(O')∥² dx...(6)

其中x為在影像IMG中的各像素的位置，O為位置x的各像素預期輸出的深度值，warp(,)是使用光流映射函數(DIS flow)對原始未處理過的影片計算前後幀之間每個像素的位移所得的映射(即，warp(O')為將前一幀的預期輸出深度值O'依據此位移對應至位置x)，以及w(x)如以下公式(7)所示。 where x is the position of each pixel in the image IMG, O is the expected output depth value of the pixel at position x, and warp(,) maps values between consecutive frames using the per-pixel displacement computed on the original unprocessed video by an optical flow function (DIS flow); that is, warp(O') maps the previous frame's expected output depth value O' to position x according to this displacement. w(x) is given by formula (7) below.

w(x) = λ × e^(-∥V - warp(V')∥)...(7)

其中λ為預設的參數，V為在影像IMG中的位置x的像素的RGB值。在位置x的像素上如果和前一幀的顏色差距越多，越可能在物體的交界處，這就越不需要被平滑化。 where λ is a preset parameter and V is the RGB value of the pixel at position x in the image IMG. The larger the color difference between the pixel at position x and the previous frame, the more likely the pixel lies at an object boundary, and the less it needs to be smoothed.

再者,當連續影像持續播放時間拉長時,從上一幀投影過來的資訊會不停的疊加,導致整體深度趨近於某個數值。因此,時序平滑化模型TSSM可調整光度參數以產生時序參數如以下公式(8)。 Furthermore, when the continuous playback time of the continuous image is prolonged, the information projected from the previous frame will be continuously superimposed, causing the overall depth to approach a certain value. Therefore, the temporal smoothing model TSSM can adjust the photometric parameters to generate the temporal parameters as the following formula (8).

E3 = min ∫ w(x) × ∥O - warp(O')∥² + s × ∥O - P∥² dx...(8)

其中E3為時序參數，P為深度圖DI中與位置x對應的深度值，以及s為0.1。這將使得預期得到的深度值O與深度圖DI中的深度值P之間的差距不會太大，且s的數值將不會使原本預期的數值偏離太大。 where E3 is the timing parameter, P is the depth value corresponding to position x in the depth map DI, and s is 0.1. This keeps the expected depth value O from deviating too far from the depth value P in the depth map DI, and the value of s keeps the result from drifting too far from the originally expected value.
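Ignoring the edge term of formula (9), the integrand of formula (8) decouples per pixel, and its minimizer is a weighted average of the warped previous depth and the current depth; the sketch below works under that assumption, with illustrative function and variable names:

```python
import numpy as np

def temporal_blend(prev_warped, depth, color_diff, lam=1.0, s=0.1):
    """Per-pixel minimiser of w*(O - warp(O'))^2 + s*(O - P)^2, which is
    O = (w * warp(O') + s * P) / (w + s), with w = lam * exp(-||dV||) as
    in formula (7). This is only the decoupled per-pixel part of the
    full objective; the edge term is ignored.
    """
    w = lam * np.exp(-color_diff)
    return (w * prev_warped + s * depth) / (w + s)

# identical colours (w = 1): the output leans towards the previous frame
o_same = temporal_blend(np.array([2.0]), np.array([3.0]), np.array([0.0]))
# very different colours (w ~ 0): the output stays near the current depth P
o_diff = temporal_blend(np.array([2.0]), np.array([3.0]), np.array([50.0]))
```

This reproduces the behaviour stated above: large colour differences (likely object boundaries) suppress the temporal term, so the depth follows the current frame.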

Furthermore, the temporal smoothing model TSSM can perform a gradient computation on the depth value P corresponding to position x in depth map DI and on the depth values of the surrounding pixels to generate the edge parameter as in formula (9) below.

E4 = e × ∥∇O − ∇P∥² ... (9)

where ∇O is the gradient of the depth value expected to be output for the pixel at position x, ∇P is the gradient of the depth value corresponding to position x in depth map DI, and e is given by formula (10) below.

e = 0 if a boundary exists around the pixel, and e = −1 otherwise ... (10)

As formula (10) shows, when a boundary exists around the pixel, e is 0, meaning the current value is entirely decoupled from the boundary; otherwise e is −1.
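A sketch of the edge term in formulas (9) and (10), assuming a precomputed boolean `boundary_mask` that marks pixels surrounded by a boundary (how that mask is obtained is not specified here):

```python
import numpy as np

def edge_energy(O, P, boundary_mask):
    """Formula (9): E4 = e * ||grad(O) - grad(P)||^2, with e taken from
    formula (10): e = 0 where a boundary surrounds the pixel, else -1.

    O:             (H, W) expected output depth values.
    P:             (H, W) depth map DI values.
    boundary_mask: (H, W) bool, True where a boundary surrounds the pixel.
    """
    e = np.where(boundary_mask, 0.0, -1.0)
    # Per-axis finite-difference gradients of both depth fields.
    gOy, gOx = np.gradient(O.astype(np.float64))
    gPy, gPx = np.gradient(P.astype(np.float64))
    grad_diff = (gOx - gPx) ** 2 + (gOy - gPy) ** 2
    return e * grad_diff
```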

Furthermore, the temporal smoothing model TSSM can add the temporal parameter E3 and the edge parameter E4. Since every term of the sum is quadratic, the minimization can be treated as a matrix least-squares problem. The temporal smoothing model TSSM can therefore use conjugate gradient descent to iterate toward the expected output depth values and adjust the depth values in depth map DI to these expected output depth values.
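The least-squares view can be illustrated with the E3 term alone, whose minimizer solves a sparse symmetric positive-definite system that conjugate gradients handles well. The use of SciPy and the restriction to the per-pixel (diagonal) case are simplifying assumptions, since the full system would also include the E4 gradient term:

```python
import numpy as np
from scipy.sparse import diags, identity
from scipy.sparse.linalg import cg

def solve_depth(warped_prev_O, P, w, s=0.1):
    """Minimize the quadratic E3 of formula (8) per pixel with conjugate
    gradient. Every term is quadratic, so the minimizer solves the normal
    equations (diag(w) + s*I) O = diag(w) * warp(O') + s * P."""
    n = P.size
    A = diags(w.ravel()) + s * identity(n)          # sparse SPD system
    b = w.ravel() * warped_prev_O.ravel() + s * P.ravel()
    O, info = cg(A, b, atol=1e-10)                  # conjugate gradient solve
    assert info == 0                                # 0 means converged
    return O.reshape(P.shape)
```

With the E4 term included, A would gain off-diagonal entries coupling each pixel to its neighbors, but it would remain sparse and quadratic, so the same solver applies.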

Finally, the temporal smoothing model TSSM can generate a sparse point cloud from the pre-stored broken depth map (for example, using Structure from Motion (SfM) processing), and composite the sparse point cloud with image IMG according to the adjusted depth map to produce the special-effect image SYI.
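As a loose illustration of this final compositing step, the sketch below draws sparse weather points onto the image with a simple depth-based occlusion test; the function, its arguments, and the occlusion rule are all hypothetical, since the patent does not detail the synthesis:

```python
import numpy as np

def composite_weather(image, scene_depth, points_uv, points_depth, color):
    """Composite sparse weather points (e.g., from the broken depth map's
    point cloud) onto the image. A point is drawn only where it lies in
    front of the adjusted scene depth (a hypothetical occlusion test)."""
    out = image.copy()
    for (u, v), d in zip(points_uv, points_depth):
        inside = 0 <= v < out.shape[0] and 0 <= u < out.shape[1]
        if inside and d < scene_depth[v, u]:
            out[v, u] = color                      # point occludes the scene
    return out
```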

To sum up, the image simulation system of the embodiments of the present disclosure can generate a guide mask according to the classification parameter of the image and the four-connected graph, and use the guide mask to refine the depth map produced by depth prediction so that the foreground and background of the image are clearly distinguished, preventing the depth of parts of the image near the ground from being confused with the depth of the ground. In addition, the temporal parameter and the edge parameter can be used to smooth the depth over time and enhance its edges, preventing the flicker caused by temporal depth discontinuity. In this way, near-realistic weather effects can be effectively simulated on the image.

Although specific embodiments of the present disclosure have been disclosed above, these embodiments are not intended to limit the present disclosure. Various substitutions and modifications can be made by those of ordinary skill in the relevant art without departing from the principles and spirit of the present disclosure. Therefore, the protection scope of the present disclosure is determined by the appended claims.

100: image simulation system

110: image capture circuit

120: memory

130: processor

Claims (10)

1. An image simulation system, comprising: an image capture circuit configured to capture an image; a memory configured to store a plurality of instructions and a broken depth map; and a processor connected to the image capture circuit and the memory, the processor accessing the instructions to perform the following operations: generating a four-connected graph and a classification parameter according to the image, wherein the classification parameter is related to a background and a foreground in the image; adjusting the four-connected graph according to the classification parameter, and generating a guide mask according to the adjusted four-connected graph, wherein the guide mask is configured to indicate the background and the foreground in the image; and generating a special-effect image according to the image, the guide mask, and the broken depth map, wherein the special-effect image is configured to simulate, in the image, weather corresponding to the broken depth map.

2. The image simulation system of claim 1, wherein the operation of generating the four-connected graph and the classification parameter according to the image comprises: performing semantic segmentation on the image to generate a semantically segmented image, and generating, according to the semantically segmented image, a plurality of classification labels respectively corresponding to a plurality of pixels of the image; and generating a histogram corresponding to a plurality of numerical categories according to the RGB values of the pixels, and generating the classification parameter according to the numerical categories and the classification labels.

3. The image simulation system of claim 2, wherein the classification labels comprise a foreground label, a background label, and an undetermined label, and the operation of generating the classification parameter according to the numerical categories and the classification labels comprises: selecting, for each of the numerical categories, a minimum count from among the count of foreground labels and the count of undetermined labels corresponding to that numerical category; and summing the minimum counts of the numerical categories to generate a minimum sum value, and generating the classification parameter according to the minimum sum value and a classification cost parameter.

4. The image simulation system of claim 2, wherein the operation of adjusting the four-connected graph according to the classification parameter comprises: generating a smoothing parameter according to the RGB value of each of the pixels and the RGB values of the pixels surrounding that pixel; and adjusting the four-connected graph according to the smoothing parameter of each of the pixels and the classification parameter.

5. The image simulation system of claim 4, wherein the operation of adjusting the four-connected graph according to the classification parameter comprises: converting the image from the RGB domain to the HSV domain to generate HSV values of the image; performing a distance computation on the HSV values of the image and a shadow value range to generate shadow probabilities for the pixels of the image; and adjusting the classification parameter and the smoothing parameter according to the shadow probabilities of the pixels, and adjusting the classification labels and the four-connected graph according to the adjusted smoothing parameter and the adjusted classification parameter.

6. The image simulation system of claim 5, wherein the operation of adjusting the classification labels and the four-connected graph according to the adjusted smoothing parameter and the adjusted classification parameter comprises: performing a max-flow min-cut computation on the four-connected graph according to the adjusted smoothing parameter and the adjusted classification parameter to adjust the classification labels respectively corresponding to the pixels of the image, and generating the guide mask according to the adjusted classification labels and the four-connected graph.

7. The image simulation system of claim 1, wherein the operation of generating the special-effect image according to the image, the guide mask, and the broken depth map comprises: generating a depth map according to the image, and using the guide mask and the depth map to produce, on the image, the weather corresponding to the broken depth map.

8. The image simulation system of claim 1, wherein the image capture circuit is further configured to capture another image one frame before the image, and the operation of generating the special-effect image according to the image, the guide mask, and the broken depth map comprises: generating a depth map according to the image, and generating a point cloud according to the depth map; identifying, from the point cloud, a virtual ground position corresponding to the horizon in the image, and adjusting the guide mask according to the virtual ground position; generating a temporal parameter and an edge parameter according to the image, the other image, the adjusted guide mask, and the depth map, wherein the temporal parameter is used to resolve depth discontinuity between the image and the other image, and the edge parameter is used to strengthen the edges of the depth of the image and the other image; and adjusting the depth map according to the temporal parameter and the edge parameter, and generating the special-effect image according to the adjusted depth map and the broken depth map.

9. The image simulation system of claim 8, wherein the operation of adjusting the depth map according to the temporal parameter and the edge parameter comprises: performing conjugate gradient descent according to the temporal parameter and the edge parameter to adjust the depth map.

10. The image simulation system of claim 8, wherein the operation of generating the special-effect image according to the adjusted depth map and the broken depth map comprises: generating a sparse point cloud according to the broken depth map, and compositing the sparse point cloud with the image according to the adjusted depth map to generate the special-effect image.
TW110211934U 2021-10-08 2021-10-08 Image simulation system with time sequence smoothness TWM625817U (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
TW110211934U TWM625817U (en) 2021-10-08 2021-10-08 Image simulation system with time sequence smoothness


Publications (1)

Publication Number Publication Date
TWM625817U true TWM625817U (en) 2022-04-21

Family

ID=82198219

Family Applications (1)

Application Number Title Priority Date Filing Date
TW110211934U TWM625817U (en) 2021-10-08 2021-10-08 Image simulation system with time sequence smoothness

Country Status (1)

Country Link
TW (1) TWM625817U (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TWI804001B (en) * 2021-10-08 2023-06-01 鈊象電子股份有限公司 Correction system for broken depth map with time sequence smoothness


Similar Documents

Publication Publication Date Title
Wang et al. Gladnet: Low-light enhancement network with global awareness
CN110598610B (en) Target significance detection method based on neural selection attention
WO2020224428A1 (en) Method for implanting information into video, computer device and storage medium
US11042990B2 (en) Automatic object replacement in an image
CN109325532A (en) The image processing method of EDS extended data set under a kind of small sample
CN112565636B (en) Image processing method, device, equipment and storage medium
CN112184759A (en) Moving target detection and tracking method and system based on video
CN109389569B (en) Monitoring video real-time defogging method based on improved DehazeNet
US20220172331A1 (en) Image inpainting with geometric and photometric transformations
US11978216B2 (en) Patch-based image matting using deep learning
CN109741293A (en) Conspicuousness detection method and device
CN108596992B (en) Rapid real-time lip gloss makeup method
TWM625817U (en) Image simulation system with time sequence smoothness
CN112991236B (en) Image enhancement method and device based on template
CN110111239A (en) A kind of portrait head background-blurring method based on the soft segmentation of tof camera
CN111832508B (en) DIE _ GA-based low-illumination target detection method
CN111597963B (en) Light supplementing method, system and medium for face in image and electronic equipment
CN117611501A (en) Low-illumination image enhancement method, device, equipment and readable storage medium
CN111738964A (en) Image data enhancement method based on modeling
CN113724282A (en) Image processing method and related product
TWI804001B (en) Correction system for broken depth map with time sequence smoothness
CN110992371A (en) Portrait segmentation method and device based on prior information and electronic equipment
CN115526811A (en) Adaptive vision SLAM method suitable for variable illumination environment
Zhang et al. A compensation textures dehazing method for water alike area
CN112508168B (en) Frame regression neural network construction method based on automatic correction of prediction frame