WO2022103171A1 - Method and device for depth densification using an RGB image and a sparse depth

Method and device for depth densification using an RGB image and a sparse depth

Info

Publication number
WO2022103171A1
Authority
WO
WIPO (PCT)
Prior art keywords
encoder
auto
drs
electronic device
map
Prior art date
Application number
PCT/KR2021/016439
Other languages
English (en)
Inventor
Uma MUDENAGUDI
Girish Dattatray HEGDE
Ramesh Ashok Tabib
Soumya Shamarao JAHAGIRDAR
Tushar Irappa PHARALE
Basavaraja Shanthappa Vandrotti
Ankit Dhiman
Vaishakh Suhas NARGUND
Original Assignee
Samsung Electronics Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co., Ltd.
Publication of WO2022103171A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/60 - Image enhancement or restoration using machine learning, e.g. neural networks
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10024 - Color image
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Definitions

  • The present invention relates to imaging, and more specifically to a device and a method of depth densification using a Red Green Blue (RGB) image and a sparse depth.
  • The present application is based on and claims priority from Indian Provisional Application Number 202041049771, filed on 13th November 2020, the disclosure of which is hereby incorporated by reference herein.
  • Augmented Reality (AR) has grown in popularity in today's generation.
  • Users of AR must be able to get a realistic sense of a scene's dense depth.
  • However, the depth obtained from a mobile device is sparse and insufficient for accurate rendering of any virtual object.
  • The dense depth is used by a large number of automation tasks and/or tools. As a result, obtaining the dense depth from a sparse depth is required.
  • Existing methods disclose different ways of generating the dense depth. However, the existing methods are time consuming and involve complex processing.
  • the principal object of the embodiments herein is to provide a system and method for generating a dense depth from an RGB input and a sparse depth.
  • The RGB input and the sparse depth are pre-processed to obtain a plurality of feature maps by passing them through a first network. Further, the feature maps and the RGB input are passed through a second network for generating the dense depth.
  • A method of depth densification in an electronic device (100) comprises: receiving, by the electronic device (100), an RGB image and a sparse depth map of the RGB image as an input; generating, by the electronic device (100), at least one first feature map based on the RGB image and the sparse depth map through a first Neural Network (NN); generating, by the electronic device (100), at least one second feature map by a second network of a Dense Residual Skip (DRS) auto-encoder based on the at least one first feature map and the RGB image; and mapping, by the electronic device (100), the RGB image with the at least one second feature map to generate a dense depth map.
  • the generating, by the electronic device, the at least one first feature map comprises: converting, by the electronic device, the sparse depth map with random sparse points to a uniform depth map with uniform sparse points by creating a quad-tree; providing, by the electronic device, the uniform depth map with the uniform sparse points to a first NN specific to feature mapping as input; performing, by the first NN specific to the feature mapping, a nearest neighbour interpolation of the uniform depth map with the uniform sparse points; performing, by the first NN specific to the feature mapping, a bi-cubic interpolation of the uniform depth map with the uniform sparse points; and concatenating, by the first NN specific to the feature mapping, the RGB image, an output of the nearest neighbour interpolation and an output of the bi-cubic interpolation to generate the at least one first feature map.
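To make the pre-processing concrete, the following is a minimal NumPy/OpenCV sketch of the steps just described. The quad-tree is approximated here by pooling one depth sample per fixed-size grid cell; the function names, the cell size, and the empty-cell handling are illustrative assumptions, not the patent's implementation.

```python
import numpy as np
import cv2

def uniform_grid(sparse_depth: np.ndarray, cell: int = 8) -> np.ndarray:
    """Approximate the quad-tree step: pool the random samples into one
    depth value per cell (cells with no sample stay zero)."""
    h, w = sparse_depth.shape
    grid = np.zeros((h // cell, w // cell), dtype=np.float32)
    for i in range(grid.shape[0]):
        for j in range(grid.shape[1]):
            block = sparse_depth[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            vals = block[block > 0]
            if vals.size:
                grid[i, j] = vals.mean()
    return grid

def first_feature_map(rgb: np.ndarray, sparse_depth: np.ndarray, cell: int = 8) -> np.ndarray:
    """RGB (H, W, 3) plus sparse depth (H, W) -> concatenated (H, W, 5) input."""
    h, w = sparse_depth.shape
    grid = uniform_grid(sparse_depth, cell)
    # Nearest-neighbour and bi-cubic interpolation of the uniform samples.
    s1 = cv2.resize(grid, (w, h), interpolation=cv2.INTER_NEAREST)
    s2 = cv2.resize(grid, (w, h), interpolation=cv2.INTER_CUBIC)
    # Concatenate the RGB image with the two interpolated depths.
    return np.dstack([rgb.astype(np.float32), s1, s2])
```

Resizing the small uniform grid back to full resolution with cv2.INTER_NEAREST and cv2.INTER_CUBIC plays the role of the two interpolators named above.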
  • the DRS auto-encoder comprises a plurality of layers and wherein each layer comprises an encoder associated with a decoder, and wherein each encoder is connected to each decoder.
  • The second neural network of the DRS auto-encoder is trained using a Gradient Aware Mean Square Error (GAMSE) loss.
  • The generating, by the electronic device, of the at least one second feature map comprises: providing, by the electronic device, the at least one first feature map and the RGB image as an input to the DRS auto-encoder; performing, by the DRS auto-encoder, a skip-connection between an encoder and a decoder at the same layer of the DRS auto-encoder; and generating, by the DRS auto-encoder, the at least one second feature map based on the skip-connection.
  • The generating, by the electronic device, of the at least one second feature map comprises: providing, by the electronic device, the at least one first feature map and the RGB image as an input to the DRS auto-encoder; performing, by the DRS auto-encoder, a residual connection between decoders of consecutive layers in the auto-encoder; and generating, by the DRS auto-encoder, the at least one second feature map based on the residual connection.
  • The generating, by the electronic device, of the at least one second feature map comprises: providing, by the electronic device, the at least one first feature map and the RGB image as an input to the DRS auto-encoder; performing, by the DRS auto-encoder, a second-level wavelet decomposition using a wavelet pooling layer; and generating, by the DRS auto-encoder, the at least one second feature map based on the second-level wavelet decomposition.
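The three variants above all describe connectivity inside the DRS auto-encoder. Below is a hedged PyTorch sketch of how same-level encoder-to-decoder skip-connections and residual connections between consecutive decoders can be wired together; the channel widths, block depth, and the use of max-pooling as a stand-in for the wavelet pooling layer (sketched later in this document) are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(c_in: int, c_out: int) -> nn.Sequential:
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1),
                         nn.ReLU(inplace=True))

class DRSAutoEncoder(nn.Module):
    """Illustrative auto-encoder with same-level skip-connections and
    residual links between consecutive decoder layers."""
    def __init__(self, c_in: int = 5, widths: tuple = (32, 64, 128)):
        super().__init__()
        self.encoders = nn.ModuleList()
        c = c_in
        for w in widths:
            self.encoders.append(conv_block(c, w))
            c = w
        rw = list(reversed(widths))
        # Decoder input = upsampled features + same-level encoder skip.
        self.decoders = nn.ModuleList(
            conv_block(c_prev + w, w)
            for c_prev, w in zip([rw[0]] + rw[:-1], rw))
        # 1x1 projections carrying the residual from one decoder to the next.
        self.res_proj = nn.ModuleList(
            nn.Conv2d(rw[i], rw[i + 1], 1) for i in range(len(rw) - 1))
        self.head = nn.Conv2d(rw[-1], 1, 1)  # dense depth map

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        skips = []
        for enc in self.encoders:
            x = enc(x)
            skips.append(x)
            x = F.max_pool2d(x, 2)  # stand-in for the wavelet pooling layer
        prev = None
        for i, (dec, skip) in enumerate(zip(self.decoders, reversed(skips))):
            x = F.interpolate(x, size=skip.shape[-2:], mode="nearest")
            x = dec(torch.cat([x, skip], dim=1))   # same-level skip-connection
            if prev is not None:
                r = F.interpolate(prev, size=x.shape[-2:], mode="nearest")
                x = x + self.res_proj[i - 1](r)    # residual between decoders
            prev = x
        return self.head(x)
```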
  • An electronic device for providing a dense depth comprises: a depth densification controller; a Dense Residual Skip (DRS) auto-encoder; a memory configured to store one or more instructions; and a processor configured to execute the one or more instructions stored in the memory to receive an RGB image and a sparse depth map of the RGB image as an input; wherein the depth densification controller is configured to generate at least one first feature map based on the RGB image and the sparse depth map through a first Neural Network (NN); and wherein the DRS auto-encoder is configured to: generate at least one second feature map based on the at least one first feature map and the RGB image through a second network of the DRS auto-encoder; and map the RGB image with the at least one second feature map to generate a dense depth map.
  • a computer-readable storage medium having a computer program stored thereon that performs, when executed by a processor, the methods above.
  • Fig. 1 is a block diagram of an electronic device for generating a dense depth, according to an embodiment as disclosed herein;
  • Fig. 2 is a block diagram of a pre-processing controller of the electronic device for generating feature maps, according to an embodiment as disclosed herein;
  • Fig. 3 is a block diagram of a DRS auto-encoder of the electronic device for generating the dense depth, according to an embodiment as disclosed herein;
  • Fig. 4 is a flow diagram illustrating a flow for generating a dense depth, according to an embodiment as disclosed herein;
  • Figs. 5a-5d are schematic diagrams illustrating example results using the proposed method and device, according to an embodiment as disclosed herein;
  • Figs. 6a-6d are schematic diagrams illustrating example results using the proposed method and device, according to an embodiment as disclosed herein.
  • circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like.
  • circuits constituting a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block.
  • a processor e.g., one or more programmed microprocessors and associated circuitry
  • Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure.
  • the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
  • the embodiment herein provides an electronic device and a method for generating a dense depth from a sparse depth and the RGB image.
  • the method and the electronic device disclose generating a quad tree for obtaining a uniform depth from the sparse depth.
  • a Nearest Neighbour (NN) interpolation and a bi-cubic interpolation are performed separately on the uniform depth.
  • Feature maps are generated using results of the NN interpolation, the bi-cubic interpolation, and the RGB input.
  • the feature maps are sent to a Dense Residual Skip (DRS) auto-encoder for generating the dense depth.
  • Fig. 1 is a block diagram of an electronic device (100) for generating a dense depth, according to an embodiment as disclosed herein.
  • The electronic device (100) may be, for example, but not limited to, a mobile device, a cellular phone, a smartphone, a Personal Digital Assistant (PDA), a tablet computer, a laptop computer, an Internet of Things (IoT) device, an Artificial Intelligence (AI) device, or the like.
  • the electronic device (100) includes a pre-processor (110), a depth densification controller (120), a Dense Residual Skip (DRS) auto-encoder (130), a memory (140), a processor (150), and a communicator (160).
  • The pre-processor (110) receives an RGB image and a sparse depth map as an input.
  • The sparse depth map is also referred to as a sparse depth.
  • The RGB image is received from a camera of the electronic device (100), and the sparse depth map is received from one of a Light Detection and Ranging (LiDAR) system, an RGB-D camera system, and a Time-of-Flight (TOF) camera system.
  • the sparse depth map is random and irregular.
  • the pre-processor (110) converts the random sparse depth map into a uniform sparse depth map.
  • A depth map has dimensions W x H, indicating a 2D grid.
  • The random sparse depth map is unstructured: if it is visually inspected, no structures are visible. In the uniform depth map, by contrast, a specific structure of the scene is observed.
  • a quad tree is generated using the random sparse depth.
  • the quad tree converts unstructured data into a structured data.
  • the structured data in the present embodiment is the uniform depth map.
  • The uniform depth map is also referred to as a uniform sparse depth.
  • the quad tree is generated using methods known in the art.
  • The pre-processor (110) sends the uniform sparse depth as an input to different interpolators.
  • The interpolators are a Nearest Neighbour (NN) interpolator and a bi-cubic interpolator.
  • The NN interpolator performs an NN interpolation of the uniform sparse depth, as a point of unknown depth in the uniform sparse depth is highly correlated with its neighbours.
  • The bi-cubic interpolator performs a bi-cubic interpolation of the uniform sparse depth, as the depth of any point in the uniform sparse depth depends on its neighbours in all directions, and as the bi-cubic interpolator utilizes a weighted average of four translated pixel values in the sparse depth for each output pixel value.
  • the outputs of the two interpolators along with the RGB image are then sent to the depth densification controller (120).
  • The depth densification controller (120) generates feature maps by concatenating the RGB image with the outputs of the two interpolators.
  • the feature maps are then forwarded to the DRS auto-encoder (130).
  • the DRS auto-encoder (130) comprises a plurality of encoder layers and a plurality of decoder layers at different levels.
  • The DRS auto-encoder (130) performs skip-connections between the encoder layers and the decoder layers at the same levels.
  • the DRS auto-encoder (130) is a neural network which is trained using a Gradient Aware Mean Square Error (GAMSE) loss.
  • GAMSE Gradient Aware Mean Square Error
  • The DRS auto-encoder (130) performs residual connections between subsequent layers in the decoder sub-architecture. Further, the DRS auto-encoder (130) performs a second-level wavelet decomposition using wavelet pooling layers.
  • The memory (140) stores one or more instructions to be executed by the processor (150) for generating the dense depth.
  • Storage elements of the memory (140) may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable memories (EEPROM).
  • the memory (140) may, in some examples, be considered a non-transitory storage medium.
  • the term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted that the memory (140) is non-movable.
  • The memory (140) can, in some examples, be configured to store larger amounts of information.
  • a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache).
  • the memory (140) can be an internal storage or it can be an external storage unit of the electronic device (100), a cloud storage, or any other type of external storage.
  • The processor (150) communicates with the pre-processor (110), the depth densification controller (120), the Dense Residual Skip (DRS) auto-encoder (130), the memory (140), and the communicator (160).
  • the processor (150) is configured to execute instructions stored in the memory (140) for generating the dense depth.
  • The processor (150) may include one or a plurality of processors, and may be a general-purpose processor such as a central processing unit (CPU) or an application processor (AP), a graphics-only processing unit such as a graphics processing unit (GPU) or a visual processing unit (VPU), and/or an Artificial Intelligence (AI) dedicated processor such as a neural processing unit (NPU).
  • the communicator (160) is configured for communicating internally between internal hardware components and with external devices via one or more networks.
  • the communicator (160) includes an electronic circuit specific to a standard that enables wired or wireless communication.
  • Although Fig. 1 shows various hardware components of the electronic device (100), it is to be understood that other embodiments are not limited thereto. In other embodiments, the electronic device (100) may include fewer or more components. Further, the labels or names of the components are used only for illustrative purposes and do not limit the scope of the invention. One or more components can be combined to perform the same or a substantially similar function of generating the dense depth.
  • Fig. 2 is a block diagram of the pre-processor (110) of the electronic device for generating feature maps, according to an embodiment as disclosed herein.
  • the pre-processor (110) is a first network for generating the feature maps.
  • The pre-processor (110) may comprise a quad tree (201), the NN interpolator (202), and the bi-cubic interpolator (203).
  • The quad tree (201) is generated from the sparse depth map (204) with random sparse points using methods known in the art.
  • the quad tree converts the non-uniform depth into uniform depth.
  • the quad tree is the most efficient data structure method for converting non-uniform depth into uniform depth.
  • the uniform depth map (205) with uniform sparse points is sent to the NN interpolator (202) and the bi-cubic interpolator (203).
  • the NN interpolator (202) performs the NN interpolation and the bi-cubic interpolator (203) performs the bi-cubic interpolation.
  • The output of the NN interpolator (202) is the S1 sparse depth, as seen in Fig. 2, and the output of the bi-cubic interpolator (203) is the S2 sparse depth.
  • The S1 and S2 sparse depths, along with the RGB image, are sent as an input to the depth densification controller (120).
  • the depth densification controller (120) comprises a concatenator which concatenates the RGB image (206), the S1 sparse depth, and the S2 sparse depth.
  • the depth densification controller (120) generates the plurality of feature maps based on an output of the concatenator.
  • the plurality of feature maps along with the S1 sparse depth, S2 sparse depth, and the RGB image (206) are sent to the DRS auto-encoder (130).
  • Fig. 3 is a block diagram of the DRS auto-encoder (130) for generating the dense depth from the plurality of feature maps, according to an embodiment as disclosed herein.
  • The DRS auto-encoder (130) is an artificial neural network trained specifically for generating the dense depth.
  • the DRS auto-encoder (130) comprises the plurality of encoder layers and the plurality of decoder layers at different levels.
  • The DRS auto-encoder (130) comprises a decoder sub-architecture and an encoder sub-architecture.
  • The DRS auto-encoder (130) performs a residual connection between subsequent layers in the decoder sub-architecture.
  • The DRS auto-encoder (130) is trained with layers at the same levels and learns model parameters by training against a Gradient Aware Mean Square Error (GAMSE) loss.
  • The DRS auto-encoder (130) generates the dense depth with new residual connections between decoder layers, the addition of wavelet pooling, and the application of the GAMSE loss for better depth densification.
  • The wavelet pooling is a second-level wavelet decomposition using wavelet pooling layers.
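As an illustration, a single Haar decomposition level can be written as a pooling layer and applied twice to obtain the second-level decomposition; treating the patent's wavelet pooling layer as a standard Haar DWT is an assumption of this sketch.

```python
import torch
import torch.nn as nn

class HaarWaveletPool(nn.Module):
    """One level of a Haar DWT used as pooling: halves the resolution while
    keeping all information in four sub-bands (LL, LH, HL, HH), so the
    operation is lossless (invertible), unlike max-pooling."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = x[..., 0::2, 0::2]
        b = x[..., 0::2, 1::2]
        c = x[..., 1::2, 0::2]
        d = x[..., 1::2, 1::2]
        ll = (a + b + c + d) / 2
        lh = (a + b - c - d) / 2
        hl = (a - b + c - d) / 2
        hh = (a - b - c + d) / 2
        return torch.cat([ll, lh, hl, hh], dim=1)

# Second-level decomposition: pool once, then pool the result again.
pool = HaarWaveletPool()
x = torch.randn(1, 32, 64, 64)
level2 = pool(pool(x))  # shape (1, 512, 16, 16); no information is discarded
```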
  • Equations (1), (2), and (3) below are used for depth generation.
  • Equation (1) represents the difference of the gradients of the predicted depth and the ground-truth depth in the horizontal and vertical directions, respectively.
  • Equation (2) represents the Mean Squared Error between the predicted depth and the ground truth.
  • A hyperparameter gamma plays a crucial role in assigning the weightage to gradient regions in the image.
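The equations themselves did not survive extraction into this text; the following LaTeX is a hedged reconstruction from the three descriptions above, with $\hat{D}$ the predicted depth, $D$ the ground truth, and $\gamma$ the hyperparameter gamma. The exact form in the published specification may differ.

```latex
% Hedged reconstruction; the published specification may differ.
\begin{align}
L_{\nabla} &= \big\lVert \nabla_x \hat{D} - \nabla_x D \big\rVert_1
            + \big\lVert \nabla_y \hat{D} - \nabla_y D \big\rVert_1 \tag{1} \\
L_{\mathrm{MSE}} &= \frac{1}{N}\sum_{i=1}^{N}\big(\hat{D}_i - D_i\big)^2 \tag{2} \\
L_{\mathrm{GAMSE}} &= L_{\mathrm{MSE}} + \gamma\, L_{\nabla} \tag{3}
\end{align}
```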
  • Reconstruction problems caused by inaccurate depth prediction at edges are thereby mitigated.
  • the DRS auto-encoder (130) achieves fast convergence, enhanced stability, and edge preservation.
  • the GAMSE loss preserves edges which are high frequency features in a scene.
  • the Wavelet decomposition ensures lossless down-sampling.
  • The residual skip connections mitigate the vanishing gradient problem and allow feature reusability for better generalization.
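A GAMSE-style loss matching the reconstructed equations above can be sketched in PyTorch as follows; the use of simple finite differences for the gradients and the default value of gamma are assumptions.

```python
import torch

def gamse_loss(pred: torch.Tensor, gt: torch.Tensor, gamma: float = 1.0) -> torch.Tensor:
    """pred, gt: (B, 1, H, W) depth maps."""
    mse = torch.mean((pred - gt) ** 2)                          # Eq. (2)
    # Horizontal and vertical finite-difference gradients.
    dx_p = pred[..., :, 1:] - pred[..., :, :-1]
    dx_g = gt[..., :, 1:] - gt[..., :, :-1]
    dy_p = pred[..., 1:, :] - pred[..., :-1, :]
    dy_g = gt[..., 1:, :] - gt[..., :-1, :]
    grad = torch.mean(torch.abs(dx_p - dx_g)) + torch.mean(torch.abs(dy_p - dy_g))  # Eq. (1)
    return mse + gamma * grad                                   # Eq. (3)
```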
  • Fig. 4 is a flow diagram illustrating a flow (400) for generating the dense depth, according to an embodiment as disclosed herein.
  • the electronic device (100) may receive an RGB image and a sparse depth map of the RGB image as an input.
  • the RGB input (RGB image) and the sparse depth map are received as an input.
  • the RGB input (RGB image) and the sparse depth map are received from a multimedia source.
  • the RGB image and the sparse depth map are captured by the electronic device (100).
  • the RGB input (RGB image) and the sparse depth map are received as an input by the pre-processor (110).
  • the electronic device (100) may generate a uniform depth map based on the sparse depth map.
  • The quad tree (201) is generated from the sparse depth map for obtaining a uniformly spaced sparse depth.
  • the electronic device (100) may generate S1 sparse depth and the S2 sparse depth using the NN interpolator (202) and the bi-cubic interpolator (203) respectively based on the uniform depth map.
  • the uniformly spaced sparse depth is sent to the NN interpolator (202) and the bi-cubic interpolator (203).
  • the S1 sparse depth and the S2 sparse depth are obtained using the NN interpolator (202) and the bi-cubic interpolator (203) respectively.
  • the electronic device (100) may generate the plurality of feature maps based on the S1 sparse depth, the S2 sparse depth, and the RGB image.
  • the S1 sparse depth, the S2 sparse depth, and the RGB image are received by the depth densification controller (120).
  • the depth densification controller (120) concatenates the S1 sparse depth, the S2 sparse depth and the RGB image and further generates the plurality of feature maps, wherein the plurality of feature maps generated comprises at least one first feature map.
  • the electronic device (100) may generate the dense depth using the plurality of feature maps along with the S1 sparse depth, S2 sparse depth and the RGB image.
  • the DRS auto-encoder (130) receives the feature maps along with the S1, S2 sparse depth and the RGB image.
  • The DRS auto-encoder (130) generates the dense depth using the received input.
  • The DRS auto-encoder (130) generates a plurality of second feature maps comprising at least one second feature map. Further, the dense depth map is generated using the RGB image and the at least one second feature map.
  • The electronic device (100) may generate the dense depth map based on the RGB image and the at least one second feature map.
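Tying the illustrative snippets together, the end-to-end flow of Fig. 4 might look as follows. It reuses the hypothetical helpers first_feature_map and DRSAutoEncoder sketched earlier; the model is untrained here, so the output is for shape illustration only.

```python
import numpy as np
import torch

# Synthetic inputs standing in for a camera frame and ~500 LiDAR returns.
rgb = np.random.rand(256, 256, 3).astype(np.float32)
sparse = np.zeros((256, 256), dtype=np.float32)
idx = np.random.choice(256 * 256, 500, replace=False)
sparse.flat[idx] = np.random.uniform(0.5, 10.0, idx.size)

features = first_feature_map(rgb, sparse)               # (256, 256, 5)
x = torch.from_numpy(features).permute(2, 0, 1)[None]   # (1, 5, 256, 256)
model = DRSAutoEncoder()                                 # untrained weights
dense = model(x)                                         # (1, 1, 256, 256) dense depth
```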
  • Figs. 5a-5d are schematic diagrams illustrating example results using the proposed method and device, according to an embodiment as disclosed herein.
  • Fig. 5a shows the RGB image.
  • Fig. 5b shows the output of the proposed method, wherein the DRS auto-encoder (130) is trained using the GAMSE loss.
  • Fig. 5c shows the output wherein the DRS auto-encoder (130) is trained without the GAMSE loss, i.e., using a Mean Squared Error (MSE) loss.
  • Fig. 5d shows the actual image, without the proposed method and device.
  • The dense depth predictions of the DRS auto-encoder (130) trained with the GAMSE loss function provide clear, smooth, and better dense depth maps when compared to the network trained using only the MSE loss.
  • Figs. 6a-6d are schematic diagrams illustrating example results using the proposed method and device, according to an embodiment as disclosed herein.
  • Fig. 6a shows the RGB image (602).
  • Fig. 6b shows the output of the proposed method, wherein the DRS auto-encoder (130) is trained using the GAMSE loss.
  • Fig. 6c shows the output wherein the DRS auto-encoder (130) is trained without the GAMSE loss, i.e., using a Mean Squared Error (MSE) loss.
  • Fig. 6d shows the actual image, without the proposed method and device.
  • The dense depth predictions of the DRS auto-encoder (130) trained with the GAMSE loss function provide clear, smooth, and better dense depth maps when compared to the network trained using only the MSE loss.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments herein disclose a method of depth densification in an electronic device (100), the method comprising: receiving, by the electronic device (100), an RGB image and a sparse depth map of the RGB image as an input; generating, by the electronic device (100), at least one first feature map based on the RGB image and the sparse depth map through a first network; generating, by the electronic device (100), at least one second feature map by a second network of a Dense Residual Skip (DRS) auto-encoder based on the at least one first feature map and the RGB image; and mapping, by the electronic device (100), the RGB image with the at least one second feature map to generate a dense depth map.
PCT/KR2021/016439 2020-11-13 2021-11-11 Method and device for depth densification using an RGB image and a sparse depth WO2022103171A1

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IN202041049771 2020-11-13
IN202041049771 2021-10-25

Publications (1)

Publication Number Publication Date
WO2022103171A1

Family

ID=81602669

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2021/016439 WO2022103171A1 2020-11-13 2021-11-11 Method and device for depth densification using an RGB image and a sparse depth

Country Status (1)

Country Link
WO (1) WO2022103171A1 (fr)

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019111006A1 * 2017-12-06 2019-06-13 V-Nova International Ltd Methods and apparatuses for hierarchically encoding and decoding a byte stream

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
BHAGAT SNIGDHA; JOSHI S D; LALL BREJESH: "Image fusion using symmetric skip autoencoder via an Adversarial Regulariser", MDPI, 5 June 2020 (2020-06-05), XP055929943, Retrieved from the Internet <URL:https://arxiv.org/pdf/2005.00447.pdf> [retrieved on 20220610] *
HEGDE GIRISH; PHARALE TUSHAR; JAHAGIRDAR SOUMYA; NARGUND VAISHAKH; TABIB RAMESH ASHOK; MUDENAGUDI UMA; VANDROTTI BASAVARAJA; DHIMA: "DeepDNet: Deep Dense Network for Depth Completion Task", 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), IEEE, 19 June 2021 (2021-06-19), pages 2190 - 2199, XP033967644, DOI: 10.1109/CVPRW53098.2021.00248 *
JARITZ MAXIMILIAN; CHARETTE RAOUL DE; WIRBEL EMILIE; PERROTTON XAVIER; NASHASHIBI FAWZI: "Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation", 2018 INTERNATIONAL CONFERENCE ON 3D VISION (3DV), IEEE, 5 September 2018 (2018-09-05), pages 52 - 60, XP033420090, DOI: 10.1109/3DV.2018.00017 *
ZHAO CHEN; VIJAY BADRINARAYANAN; GILAD DROZDOV; ANDREW RABINOVICH: "Estimating Depth from RGB and Sparse Sensing", ARXIV.ORG, Cornell University Library, Ithaca, NY 14853, 9 April 2018 (2018-04-09), XP080997564, DOI: 10.1007/978-3-030-01225-0_11 *


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21892338

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21892338

Country of ref document: EP

Kind code of ref document: A1