WO2019117393A1 - Learning apparatus and method for generating depth information, apparatus and method for generating depth information, and recording medium therefor - Google Patents

Learning apparatus and method for generating depth information, apparatus and method for generating depth information, and recording medium therefor

Info

Publication number
WO2019117393A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth information
convolution
stereo camera
generating
fusing
Prior art date
Application number
PCT/KR2018/001156
Other languages
English (en)
Korean (ko)
Inventor
손광훈
박기홍
Original Assignee
Industry-Academic Cooperation Foundation, Yonsei University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Industry-Academic Cooperation Foundation, Yonsei University
Publication of WO2019117393A1 publication Critical patent/WO2019117393A1/fr

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01SRADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S13/00Systems using the reflection or reradiation of radio waves, e.g. radar systems; Analogous systems using reflection or reradiation of waves whose nature or wavelength is irrelevant or unspecified
    • G01S13/86Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
    • G01S13/867Combination of radar systems with cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/20Image enhancement or restoration using local operators
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20024Filtering details
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • The present invention relates to a learning apparatus and method for generating depth information, an apparatus and method for generating depth information, and a recording medium therefor.
  • 3D maps can be used directly for identifying autonomous navigation routes for smart cars and drones, and in other applications as well.
  • Augmented Reality (AR), which relies on 3D structure information, also requires a 3D map.
  • The core technology of 3D map production is 3D modeling based on accurately grasped depth information.
  • The conventional method of acquiring depth information using a stereo camera suffers from large errors due to noise and algorithmic limitations,
  • while the method of acquiring depth information with a distance measurement sensor such as LIDAR suffers from low resolution.
  • The present invention provides a learning apparatus and method for depth information generation capable of generating more accurate depth information by fusing LIDAR depth information and stereo camera depth information using learning, an apparatus and method for generating depth information, and a recording medium therefor.
  • Stereo camera depth information, generated by fusing the left and right images acquired from a stereo camera device, is fused with LIDAR depth information acquired from a LIDAR device.
  • The first depth information generator is trained with reference stereo camera depth information and reference LIDAR depth information as input values and reference actual depth information as a label.
  • The second depth information generator is trained with the reference first depth information, which the first depth information generator produces from the reference stereo camera depth information and the reference LIDAR depth information, and the reference standard image as input values, and with the reference actual depth information as a label.
  • In the error back propagation process of the first depth information generator during training, the error value from the error back propagation process of the second depth information generator is further considered.
  • The first depth information generator and the second depth information generator are trained using a convolutional neural network (CNN) algorithm.
  • The first depth information generating unit and the second depth information generating unit use dilated convolution.
  • The first depth information generator comprises: a first filter unit for performing dilated convolution on the LIDAR depth information; a second filter unit for performing dilated convolution on the stereo camera depth information; a first fusion unit for fusing the LIDAR depth information on which dilated convolution has been performed and the stereo camera depth information on which dilated convolution has been performed; and a third filter unit for performing dilated convolution on the depth information fused in the first fusion unit to generate the first depth information.
  • The second depth information generator comprises: a fourth filter unit for performing dilated convolution on the standard image; a fifth filter unit for performing dilated convolution on the first depth information; a second fusion unit for fusing the standard image on which dilated convolution has been performed and the first depth information on which dilated convolution has been performed; and a sixth filter unit for performing dilated convolution on the depth information fused in the second fusion unit to generate the second depth information.
  • The first depth information generator uses an error value calculated with the following equation in the error back propagation process during learning:
  • E_F = E_1 + E_2, where E_F is the error value used in the error back propagation process by the first depth information generator, E_1 is the error value of the first depth information, and E_2 is the error value of the second depth information.
  • Stereo camera depth information, generated by fusing a left image and a right image acquired from a stereo camera device, is fused with LIDAR depth information acquired from a LIDAR device.
  • A first depth information generating unit generates first depth information by fusing the stereo camera depth information and the LIDAR depth information;
  • a second depth information generator generates the second depth information by fusing the first depth information with the standard image, which serves as the basis of the stereo camera depth information fusion, among the left and right images.
  • The first depth information generating unit is trained with the reference stereo camera depth information and the reference LIDAR depth information as input values and the reference actual depth information as a label.
  • The second depth information generating unit is trained with the reference first depth information, generated by the first depth information generator from the reference stereo camera depth information and the reference LIDAR depth information, and the reference standard image as input values, and with the reference actual depth information as a label.
  • A depth information generating apparatus is thus provided in which the first depth information generating unit is further trained by considering the error value of the error back propagation process of the second depth information generating unit.
  • Second depth information is generated by fusing stereo camera depth information, generated by merging left and right images acquired from a stereo camera apparatus, with LIDAR depth information obtained from a LIDAR apparatus.
  • (a) generating first depth information by fusing reference stereo camera depth information and reference LIDAR depth information;
  • (b) generating second depth information by fusing the reference standard image, which serves as the basis of the reference stereo camera depth information fusion among the reference left image and the reference right image, with the first depth information.
  • The step (a) includes: (a1) performing dilated convolution on the reference LIDAR depth information; (a2) performing dilated convolution on the reference stereo camera depth information; (a3) fusing the LIDAR depth information on which dilated convolution has been performed and the stereo camera depth information on which dilated convolution has been performed; and (a4) generating the first depth information by performing dilated convolution on the depth information fused in step (a3).
  • Second depth information is generated by fusing stereo camera depth information, generated by fusing a left image and a right image acquired from a stereo camera apparatus, with LIDAR depth information obtained from a LIDAR apparatus.
  • The method comprises: (a) generating first depth information by fusing the stereo camera depth information and the LIDAR depth information; and (b) generating the second depth information by fusing the standard image, which serves as the basis of the stereo camera depth information fusion among the left and right images, with the first depth information. For step (a), training uses the reference stereo camera depth information and the reference LIDAR depth information as input values and the reference actual depth information as a label; for step (b), training uses the reference first depth information, generated in step (a), and the reference standard image as input values and the reference actual depth information as a label; and the error value of the error back propagation process of step (b) is further considered in the error back propagation process of step (a).
  • A computer-readable recording medium having recorded thereon a program for performing the learning method for depth information generation or the depth information generating method is also provided.
  • The present invention is advantageous in that more accurate depth information can be generated quickly using learning.
  • FIG. 1 is a diagram for explaining a convolutional neural network algorithm.
  • FIG. 2 is a diagram for explaining the convolution method of a convolutional neural network.
  • FIG. 3 is a diagram for explaining the downsampling method of a convolutional neural network.
  • FIG. 5 is a structural diagram of a learning apparatus for generating depth information according to a preferred embodiment of the present invention.
  • FIG. 6 is a diagram for explaining a learning process of a learning apparatus for generating depth information according to a preferred embodiment of the present invention.
  • FIG. 7 is a structural diagram of an apparatus for generating depth information according to an embodiment of the present invention.
  • FIG. 8 is a flowchart illustrating a learning method for generating depth information according to an exemplary embodiment of the present invention.
  • FIG. 9 is a flowchart illustrating a depth information generating method according to an exemplary embodiment of the present invention.
  • Terms such as "first" and "second" may be used to describe various components, but the components should not be limited by these terms; the terms are used only to distinguish one component from another.
  • For example, a first component may be referred to as a second component, and similarly, the second component may also be referred to as a first component.
  • The learning apparatus for generating depth information acquires stereo camera depth information generated by fusing left and right images captured by a stereo camera apparatus, acquires LIDAR depth information using a LIDAR apparatus, and fuses the two to generate more accurate depth information.
  • To this end, the present invention can use a deep learning algorithm,
  • specifically a CNN (Convolutional Neural Network).
  • The learning apparatus for generating depth information according to a preferred embodiment of the present invention uses dilated convolution and comprises two learning networks, a first depth information generating unit and a second depth information generating unit, so that it can learn to generate depth information more quickly and accurately.
  • Although LIDAR depth information is accurate, its low resolution makes it unsuitable for direct application to a real 3D map.
  • Stereo camera depth information has high resolution but large depth errors, making it difficult to produce an accurate 3D map.
  • The learning apparatus for generating depth information learns a method of generating depth information by fusing the LIDAR depth information and the stereo depth information, so that it can generate high-resolution depth information with a relatively small error.
  • Because the learning apparatus for generating depth information uses two learning networks, composed of a first depth information generating unit and a second depth information generating unit, and the first depth information generating unit also considers the error value generated in the second depth information generating unit during error back propagation, learning is more accurate and efficient, and the depth information generation method can be learned more accurately and quickly.
  • FIG. 1 is a diagram for explaining the convolution of a convolutional neural network.
  • The convolutional neural network algorithm performs convolution on an input image to extract a feature map of the input image, and identifies or classifies the input image through the feature map.
  • The feature map includes feature information on the input image.
  • The convolution can be repeated, and the number of iterations can be determined variously according to the embodiment.
  • Once the size of the filter (or kernel) 210 used for the convolution is determined, convolution is performed as a weighted sum: for the region of the input image 200 that the filter overlaps, each pixel value is multiplied by the weight assigned to the corresponding filter position, and the products are summed to give a pixel value 230 of the convolution layer.
  • For example, convolving a 7×7 input image with a 3×3 filter generates a convolution layer of size 5×5;
  • that is, without padding, the size of the convolution layer, i.e., the convolved image, decreases relative to the input image.
  • With zero-padding, a convolution layer having a size of 7×7, equal to the size of the input image, can be generated.
  • The number of convolution layers may be determined according to the number of filters used. A minimal numeric sketch of the convolution arithmetic described above follows.
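To make the weighted-sum arithmetic concrete, here is a minimal Python/NumPy sketch (not part of the patent) reproducing the 7×7 input and 3×3 filter case described above; the loop-based implementation and random data are illustrative assumptions.

```python
import numpy as np

def conv2d(image, kernel, pad=0):
    """Valid 2-D convolution: each output pixel is the weighted sum of the
    kernel and the image patch it overlaps (no kernel flipping, as in CNNs)."""
    if pad:
        image = np.pad(image, pad)
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

image = np.random.rand(7, 7)
kernel = np.random.rand(3, 3)
print(conv2d(image, kernel).shape)         # (5, 5): output shrinks without padding
print(conv2d(image, kernel, pad=1).shape)  # (7, 7): zero-padding preserves the size
```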
  • FIG. 2 is a diagram for explaining dilated convolution.
  • Dilated convolution considers points at nine positions, as shown in FIG. 2(a).
  • In FIG. 2(a), k is set to 2,
  • where k is the distance to a neighboring point among the points at the nine positions.
  • As shown in FIG. 2(b), if the k value is increased and dilated convolution is performed repeatedly, the range that can be considered from one position grows exponentially. Therefore, by repeating dilated convolution while varying k, the entire information can be learned quickly with a small number of convolution operations, as the sketch below illustrates.
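As an illustration of this idea, and not of the patent's exact network, the following PyTorch sketch stacks 3×3 dilated convolutions with k = 1, 2, 4; setting the padding equal to the dilation rate preserves the spatial resolution while the receptive field grows from 3 to 7 to 15 pixels. The channel counts are assumptions.

```python
import torch
import torch.nn as nn

# Stacked 3x3 dilated convolutions with dilation rates k = 1, 2, 4.
# Each layer keeps the spatial size (padding = dilation) while the
# receptive field grows exponentially: 3 -> 7 -> 15 pixels.
layers = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, dilation=1, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, dilation=2, padding=2), nn.ReLU(),
    nn.Conv2d(8, 8, kernel_size=3, dilation=4, padding=4), nn.ReLU(),
)

x = torch.randn(1, 1, 64, 64)
print(layers(x).shape)  # torch.Size([1, 8, 64, 64]) - resolution preserved
```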
  • FIG. 3 is a structural diagram of a learning apparatus for generating depth information according to a preferred embodiment of the present invention,
  • and FIG. 4 is a diagram for explaining the learning process of a learning apparatus for generating depth information according to a preferred embodiment of the present invention.
  • A learning apparatus for generating depth information may include a first depth information generator 110 and a second depth information generator 120.
  • The first depth information generator 110 takes the reference LIDAR depth information 10 and the reference stereo camera depth information 20 as input values and the reference actual depth information 30 as a label, and learns to generate the first depth information 40 from them.
  • The first depth information generator 110 may generate the first depth information 40 using the following equation:
  • [Equation 1] D_F = f(D_L, D_S; Θ_F), where D_F is the first depth information, D_L is the reference LIDAR depth information, D_S is the reference stereo camera depth information, f is the network of the first depth information generator, and Θ_F is an internal parameter of the first depth information generator.
  • The reference LIDAR depth information 10 may be acquired using a LIDAR device, and the reference stereo camera depth information 20 may be generated by fusing the left and right images acquired using the stereo camera device.
  • A known stereo camera 3D image generation method can be used for fusing the left image and the right image.
  • The reference actual depth information 30 means the ground-truth depth, and can be acquired, for example, using a high-cost, low-efficiency precision depth sensor.
  • The first depth information generator 110 can be trained using the above-described convolutional neural network, and can use dilated convolution.
  • The first depth information generator 110 may include a first filter 112, a second filter 114, a first fusion unit 116, and a third filter 118.
  • The first filter 112 may perform dilated convolution on the reference LIDAR depth information 10, and the second filter 114 may perform dilated convolution on the reference stereo camera depth information 20.
  • The first fusion unit 116 fuses the reference LIDAR depth information 10 on which dilated convolution has been performed and the reference stereo camera depth information 20 on which dilated convolution has been performed.
  • The third filter 118 performs dilated convolution on the depth information fused in the first fusion unit 116 to generate the first depth information 40; a minimal sketch of this two-branch structure follows.
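The following PyTorch sketch drafts this two-branch structure; the layer counts, channel widths, dilation rates, and fusion by channel concatenation are illustrative assumptions, since the patent does not specify them.

```python
import torch
import torch.nn as nn

def dilated_block(in_ch, out_ch, rates=(1, 2, 4)):
    """Stack of 3x3 dilated convolutions; depth and widths are illustrative."""
    layers, ch = [], in_ch
    for r in rates:
        layers += [nn.Conv2d(ch, out_ch, 3, dilation=r, padding=r), nn.ReLU()]
        ch = out_ch
    return nn.Sequential(*layers)

class FirstDepthGenerator(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.lidar_branch  = dilated_block(1, feat)  # "first filter 112": LIDAR depth
        self.stereo_branch = dilated_block(1, feat)  # "second filter 114": stereo depth
        self.fusion_head = nn.Sequential(            # "third filter 118"
            dilated_block(2 * feat, feat),
            nn.Conv2d(feat, 1, 3, padding=1),        # single-channel depth output
        )

    def forward(self, d_lidar, d_stereo):
        fused = torch.cat([self.lidar_branch(d_lidar),
                           self.stereo_branch(d_stereo)], dim=1)  # "first fusion unit 116"
        return self.fusion_head(fused)               # first depth information D_F

gen1 = FirstDepthGenerator()
d_first = gen1(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(d_first.shape)  # torch.Size([1, 1, 64, 64])
```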
  • The second depth information generator 120 takes the reference standard image 26 and the first depth information 40 as input values and the reference actual depth information 30 as a label, and learns to generate the second depth information 50.
  • The second depth information generator 120 may generate the second depth information 50 using the following equation:
  • [Equation 2] D* = g(D_F, I_l; Θ_R), where D* is the second depth information, I_l is the reference standard image, g is the network of the second depth information generator, and Θ_R is an internal parameter of the second depth information generator.
  • The reference standard image 26 is the one of the left image and the right image on which the generation of the reference stereo camera depth information 20 is based.
  • The second depth information generator 120 can also be trained using the above-described convolutional neural network, and can use dilated convolution.
  • The first depth information 40 generated by the first depth information generator 110 is comparatively accurate; however, if the depth values in fine edge areas are corrected, the error can be reduced further. Therefore, the second depth information generator 120 can be trained to generate more accurate depth information using the reference standard image 26 captured directly by the stereo camera.
  • The second depth information generator 120 may include a fourth filter 122, a fifth filter 124, a second fusion unit 126, and a sixth filter 128.
  • In the fourth filter 122, dilated convolution may be performed on the reference standard image 26, and in the fifth filter 124, dilated convolution may be performed on the first depth information 40.
  • In the second fusion unit 126, the reference standard image 26 on which dilated convolution has been performed and the first depth information 40 on which dilated convolution has been performed are fused.
  • The sixth filter 128 performs dilated convolution on the depth information fused in the second fusion unit 126 to generate the second depth information 50. For example, the sixth filter 128 may apply the k values of the dilated convolution filters used in the fourth filter 122 and the fifth filter 124 in the reverse order; the sketch below continues the previous one accordingly.
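Continuing the sketch above (it reuses `dilated_block` and the imports defined there), the second depth information generator can be drafted the same way; the 3-channel standard image input and the reversed dilation rates (4, 2, 1) in the fusion head follow the description, while all layer sizes remain assumptions.

```python
import torch
import torch.nn as nn

class SecondDepthGenerator(nn.Module):
    """Refines the first depth map using the standard image.
    Reuses dilated_block() from the previous sketch."""
    def __init__(self, feat=32):
        super().__init__()
        self.image_branch = dilated_block(3, feat)   # "fourth filter 122": RGB standard image
        self.depth_branch = dilated_block(1, feat)   # "fifth filter 124": first depth map
        self.fusion_head = nn.Sequential(            # "sixth filter 128": dilation rates
            dilated_block(2 * feat, feat, rates=(4, 2, 1)),  # applied in reverse order
            nn.Conv2d(feat, 1, 3, padding=1),
        )

    def forward(self, image, d_first):
        fused = torch.cat([self.image_branch(image),
                           self.depth_branch(d_first)], dim=1)  # "second fusion unit 126"
        return self.fusion_head(fused)               # second depth information D*
```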
  • The first depth information generator 110 and the second depth information generator 120 compare the generated depth information with the value input as a label and, through an error back propagation process, learn to generate better depth information values.
  • The error back propagation process starts at the output of the second depth information generator 120: the difference between the second depth information 50 and the reference actual depth information 30 is computed as the second error value and propagated backwards.
  • The second error value is transmitted from the sixth filter 128 of the second depth information generator 120 back through the convolution filters toward the fourth filter 122 and the fifth filter 124,
  • and the second depth information generator 120 is thereby trained to generate second depth information 50 closer to the reference actual depth information 30.
  • The first depth information generator 110 also performs learning through an error back propagation process.
  • In the first depth information generator 110, the difference between the generated first depth information 40 and the reference actual depth information 30 input as a label is computed as the first error value, and the error is then transmitted back in the reverse order.
  • The error value used in the error back propagation process in the first depth information generator 110 may be calculated using the following equation:
  • [Equation 3] E_F = E_1 + E_2, where E_F is the error value used in the error back propagation process in the first depth information generator, E_1 is the first error value, and E_2 is the second error value.
  • That is, both the first error value and the second error value transmitted from the second depth information generator 120 are used in the error back propagation process.
  • Accordingly, the first depth information generator 110 can be trained to generate first depth information 40 that allows the second depth information generator 120 to generate more accurate second depth information 50.
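In automatic-differentiation terms, [Equation 3] amounts to backpropagating the sum of the two error values, so the gradient of the second error reaches the first generator through the first depth map. A minimal training-step sketch follows, reusing the two generator classes from the sketches above; the L1 loss, the Adam optimizer, and the placeholder tensors are assumptions, not part of the patent.

```python
import torch
import torch.nn.functional as F

# Placeholder batch (shapes are illustrative assumptions).
d_lidar, d_stereo = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
ref_image, d_gt   = torch.randn(1, 3, 64, 64), torch.randn(1, 1, 64, 64)

gen1, gen2 = FirstDepthGenerator(), SecondDepthGenerator()  # from the sketches above
opt = torch.optim.Adam(list(gen1.parameters()) + list(gen2.parameters()))

d_first  = gen1(d_lidar, d_stereo)    # first depth information
d_second = gen2(ref_image, d_first)   # second depth information

e1 = F.l1_loss(d_first,  d_gt)        # first error value E_1
e2 = F.l1_loss(d_second, d_gt)        # second error value E_2
loss = e1 + e2                        # [Equation 3]: E_F = E_1 + E_2

opt.zero_grad()
loss.backward()   # e2's gradient reaches gen1 through d_first as well
opt.step()
```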
  • As described above, the learning apparatus for generating depth information uses two learning networks, a first depth information generating unit and a second depth information generating unit, so that it can be trained to generate more accurate depth information quickly.
  • FIG. 5 is a structural diagram of an apparatus for generating depth information according to an embodiment of the present invention.
  • The apparatus for generating depth information may include a first depth information generator 710 and a second depth information generator 720.
  • The depth information generating apparatus may be trained in advance, following the learning process of the learning apparatus for generating depth information according to the preferred embodiment of the present invention described above.
  • The first depth information generator 710 may receive the LIDAR depth information and the stereo camera depth information and generate the first depth information.
  • The second depth information generator 720 may receive the first depth information and the standard image and generate the second depth information.
  • FIG. 6 is a flowchart illustrating a learning method for generating depth information according to an exemplary embodiment of the present invention.
  • The learning method for generating depth information includes a first depth information generation step (S810), a second depth information generation step (S820), a first error back propagation step (S830), and a second error back propagation step (S840).
  • The first depth information generation step (S810) generates the first depth information 40 in the first depth information generating unit 110.
  • The second depth information generation step (S820) generates the second depth information 50 in the second depth information generating unit 120.
  • The first error back propagation step (S830) trains the convolutional neural network through the error back propagation process in the second depth information generating unit 120.
  • The second error back propagation step (S840) trains the convolutional neural network through the error back propagation process in the first depth information generator 110.
  • FIG. 7 is a flowchart illustrating a depth information generating method according to an exemplary embodiment of the present invention.
  • A depth information generating method may include a first depth information generating step (S910) and a second depth information generating step (S920).
  • The first depth information generating step (S910) generates the first depth information in the first depth information generator 710.
  • The second depth information generation step (S920) generates the second depth information in the second depth information generator 720.
  • The above-described technical features may be implemented in the form of program instructions that can be executed through various computer means and recorded in a computer-readable medium.
  • The computer-readable medium may include program instructions, data files, data structures, and the like, alone or in combination.
  • The program instructions recorded on the medium may be those specially designed and constructed for the embodiments, or they may be known and available to those skilled in the art of computer software.
  • Examples of computer-readable media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; magneto-optical media such as floptical disks; and hardware devices specifically configured to store and execute program instructions, such as ROM, RAM, and flash memory.
  • Program instructions include machine language code, such as that produced by a compiler, as well as high-level language code that can be executed by a computer using an interpreter or the like.
  • The hardware device may be configured to operate as one or more software modules to perform the operations of the embodiments, and vice versa.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

Disclosed are a learning apparatus and method for generating depth information, an apparatus and method for generating depth information, and a recording medium therefor. The disclosed learning apparatus for generating depth information learns a second method of generating depth information by fusing stereo camera depth information, generated by fusing a left image and a right image acquired from a stereo camera device, with LIDAR depth information acquired from a LIDAR device, and comprises: a first depth information generating unit for learning a first method of generating depth information by fusing the stereo camera depth information and the LIDAR depth information; and a second depth information generating unit for learning the second method of generating depth information by fusing the first depth information with a standard image, which served as the basis of the stereo camera depth information fusion, among the left image and the right image. The first depth information generating unit is trained using reference stereo camera depth information and reference LIDAR depth information as input values and reference actual depth information as a label; the second depth information generating unit is trained using, as input values, a reference standard image and reference first depth information, which is generated by the first depth information generating unit from the reference stereo camera depth information and the reference LIDAR depth information, and using the reference actual depth information as a label; and the first depth information generating unit is further trained by considering, in an error back propagation step during the learning process, an error value from the error back propagation step of the second depth information generating unit. According to the disclosed apparatus, more accurate depth information can be generated quickly using learning.
PCT/KR2018/001156 2017-12-13 2018-01-26 Learning apparatus and method for generating depth information, apparatus and method for generating depth information, and recording medium therefor WO2019117393A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020170171004A KR101976290B1 (ko) 2017-12-13 2017-12-13 Learning apparatus and method for generating depth information, depth information generating apparatus and method, and recording medium therefor
KR10-2017-0171004 2017-12-13

Publications (1)

Publication Number Publication Date
WO2019117393A1 true WO2019117393A1 (fr) 2019-06-20

Family

ID=66655951

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/001156 WO2019117393A1 (fr) Learning apparatus and method for generating depth information, apparatus and method for generating depth information, and recording medium therefor

Country Status (2)

Country Link
KR (1) KR101976290B1 (fr)
WO (1) WO2019117393A1 (fr)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102229861B1 (ko) 2019-10-24 2021-03-18 Apparatus and method for depth estimation using a low-channel LIDAR and a stereo camera
CN111754798A (zh) * 2020-07-02 2020-10-09 Method for detecting vehicles and surrounding obstacles by fusing roadside LIDAR and video
KR102334332B1 (ko) * 2020-07-31 2021-12-02 Method for improving depth image results based on a deep learning network using guided filtering, and recording medium and apparatus for performing the same
KR20220066690A (ko) * 2020-11-16 2022-05-24 Robot and control method therefor

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160210518A1 (en) * 2015-01-15 2016-07-21 vClick3d, Inc. Systems and methods for controlling the recording, storing and transmitting of video surveillance content
US20170032222A1 (en) * 2015-07-30 2017-02-02 Xerox Corporation Cross-trained convolutional neural networks using multimodal images
KR20170028749A (ko) * 2015-09-04 2017-03-14 한국전자통신연구원 학습 기반 깊이 정보 추출 방법 및 장치
US20170140253A1 (en) * 2015-11-12 2017-05-18 Xerox Corporation Multi-layer fusion in a convolutional neural network for image classification
WO2017176112A1 (fr) * 2016-04-04 2017-10-12 Fugro N.V. Analyse de données spatiales

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101789071B1 (ko) * 2011-01-13 2017-10-24 Samsung Electronics Co., Ltd. Method and apparatus for extracting features from a depth image
KR101825218B1 (ko) * 2016-04-08 2018-02-02 Korea Advanced Institute of Science and Technology Apparatus and method for generating depth information

Also Published As

Publication number Publication date
KR101976290B1 (ko) 2019-05-07

Similar Documents

Publication Publication Date Title
WO2019117393A1 (fr) Learning apparatus and method for generating depth information, apparatus and method for generating depth information, and recording medium therefor
CN112184738B (zh) Image segmentation method, apparatus, device, and storage medium
WO2022068487A1 (fr) Stylized image generation method, model training method, apparatus, device, and medium
WO2021085784A1 (fr) Method for training an object detection model, and object detection device in which the object detection model is executed
WO2020071701A1 (fr) Method and device for real-time object detection using a deep learning network model
WO2019164379A1 (fr) Method and system for facial recognition
CN110163903A (zh) 3D image acquisition and image positioning method, apparatus, device, and storage medium
EP3872764B1 (fr) Method and apparatus for constructing a map
WO2023137913A1 (fr) Video text summarization method based on a multimodal model, device, and storage medium
WO2020246655A1 (fr) Situation recognition method and device for implementing same
WO2023185494A1 (fr) Point cloud data identification method and apparatus, electronic device, and storage medium
CN113076891B (zh) Human pose prediction method and system based on an improved high-resolution network
CN113487608A (zh) Endoscope image detection method and apparatus, storage medium, and electronic device
US20220358662A1 (en) Image generation method and device
WO2019147024A1 (fr) Object detection method using two cameras with different focal lengths, and apparatus therefor
WO2023168955A1 (fr) Method and apparatus for determining acquisition pose information, device, and computer-readable medium
WO2016186236A1 (fr) System and method for color processing of a three-dimensional object
WO2023016111A1 (fr) Key-value matching method and apparatus, readable medium, and electronic device
CN113610034B (zh) Method and apparatus for identifying person entities in video, storage medium, and electronic device
CN109829401A (zh) Traffic sign recognition method and apparatus based on dual capture devices
CN109871890A (zh) Image processing method and apparatus
WO2023237065A1 (fr) Loop closure detection method and apparatus, electronic device, and medium
WO2016021829A1 (fr) Motion recognition method and motion recognition device
WO2022191424A1 (fr) Electronic device and control method therefor
CN112598718B (zh) Unsupervised multi-view multi-modal smart glasses image registration method and apparatus

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18889512

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18889512

Country of ref document: EP

Kind code of ref document: A1