KR20170028591A - Apparatus and method for object recognition with convolution neural network - Google Patents


Info

Publication number
KR20170028591A
Authority
KR
South Korea
Prior art keywords
image
size information
unit
neural network
object recognition
Prior art date
Application number
KR1020150125393A
Other languages
Korean (ko)
Other versions
KR101980360B1 (en)
Inventor
이영운
Original Assignee
한국전자통신연구원 (Electronics and Telecommunications Research Institute)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Electronics and Telecommunications Research Institute (한국전자통신연구원)
Priority to KR1020150125393A priority Critical patent/KR101980360B1/en
Publication of KR20170028591A publication Critical patent/KR20170028591A/en
Application granted granted Critical
Publication of KR101980360B1 publication Critical patent/KR101980360B1/en

Classifications

    • G06K9/66
    • G06K9/4652
    • G06K9/6204
    • G06K9/6215

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The present invention relates to an apparatus and method for recognizing an object using a convolutional neural network. The apparatus includes an image input unit for acquiring and inputting a color image and a depth image; an image processing unit for generating a composite image of the color image and the depth image and correcting the resolution and noise of the generated composite image; a size information extraction unit for extracting size information of an object included in the image using the depth values of the depth image; and an object recognition unit for recognizing the object by applying the composite image corrected by the image processing unit and the size information extracted by the size information extraction unit to the convolutional neural network.

Description

Apparatus and method for object recognition using a convolutional neural network

The present invention relates to an apparatus and method for recognizing an object using a convolutional neural network.

Object recognition technology extracts feature points from camera images and analyzes their distribution to identify the types of objects contained in the images. Representative examples of object recognition technology include face recognition, human recognition, and traffic signal recognition.

Recently, object recognition technology using convolutional neural networks has emerged whose accuracy exceeds the recognition rates of existing object recognition techniques; as a result, research on object recognition with convolutional neural networks is actively under way.

However, existing object recognition technology based on convolutional neural networks does not consider the color image and the depth image simultaneously in the feature point extraction step, so it cannot accurately delimit the region of an object and cannot recognize objects in a scale-invariant manner.

Korean Patent Publication No. 10-2014-0104091

An object of the present invention is to provide an object recognition apparatus and method using a convolutional neural network that clearly distinguishes the object region by applying the convolutional neural network to the color image and the depth image simultaneously when extracting feature points, and that performs robust object recognition by applying absolute size information derived from the depth information to the convolutional neural network.

The technical problems addressed by the present invention are not limited to those mentioned above; other technical problems not mentioned here will be understood by those skilled in the art from the following description.

According to one aspect of the present invention, there is provided an apparatus for recognizing an object using a convolutional neural network, the apparatus including an image input unit for acquiring and inputting a color image and a depth image; an image processing unit for generating a composite image of the color image and the depth image and correcting the generated composite image; a size information extraction unit for extracting size information of an object included in the depth image using the depth values of the depth image; and an object recognition unit for recognizing the object by applying the composite image corrected by the image processing unit and the size information of the object extracted by the size information extraction unit to the convolutional neural network.

According to another aspect of the present invention, there is provided a method of recognizing an object using a convolutional neural network, the method including acquiring and inputting a color image and a depth image; generating a composite image of the color image and the depth image and correcting the generated composite image; extracting size information of an object included in the image using the depth values of the depth image; and recognizing the object by applying the corrected composite image and the extracted size information of the object to the convolutional neural network.

According to the present invention, the object is recognized by applying the convolutional neural network to the composite of the color image and the depth image input from the camera together with the size information of the object included in the image, which has the advantage that the object region is clearly distinguished and the object is recognized robustly against changes in its size.

FIG. 1 is a block diagram of an object recognition apparatus using a convolutional neural network according to the present invention.
FIG. 2 is a diagram illustrating an example of a composite image generated by the object recognition apparatus using a convolutional neural network according to the present invention.
FIGS. 3 and 4 are diagrams illustrating the operation flow of the object recognition method using a convolutional neural network according to the present invention.
FIG. 5 is a diagram illustrating a computing system to which the apparatus according to the present invention is applied.

Hereinafter, some embodiments of the present invention will be described in detail with reference to the exemplary drawings. In adding reference numerals to the elements of the drawings, it should be noted that the same elements are denoted by the same reference numerals wherever possible, even when they are shown in different drawings. In the following description of the embodiments of the present invention, detailed descriptions of known functions and configurations incorporated herein will be omitted when they may obscure the subject matter of the embodiments of the present invention.

In describing the components of the embodiments of the present invention, terms such as first, second, A, B, (a), and (b) may be used. These terms are intended only to distinguish one component from another, and do not limit the nature, sequence, or order of the components. Unless otherwise defined, all terms used herein, including technical and scientific terms, have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. Terms such as those defined in commonly used dictionaries should be interpreted as having meanings consistent with their meaning in the context of the relevant art, and are not to be interpreted in an idealized or overly formal sense unless explicitly so defined in the present application.

FIG. 1 is a block diagram of an object recognition apparatus using a convolutional neural network according to the present invention.

Referring to FIG. 1, an object recognition apparatus (hereinafter, 'object recognition apparatus') 100 using a convolutional neural network according to the present invention includes a control unit 110, an image input unit 120, an input unit 130, an output unit 140, a communication unit 150, a storage unit 160, an image processing unit 170, a size information extraction unit 180, and an object recognition unit 190. Here, the control unit 110 may process the signals transmitted between the respective units of the object recognition apparatus 100.

The image input unit 120 may correspond to a camera that captures and provides a color image and a depth image. Here, the camera may have a separate depth sensor, or may include an RGB image sensor from which the depth image is derived from the color image. For example, the image input unit 120 may correspond to a Kinect, which acquires an RGB image and a depth image in real time.

The color image and the depth image obtained by the image input unit 120 may be transmitted to the image processing unit 170 and the size information extraction unit 180 through the control unit 110.

The input unit 130, as a means for receiving control commands, may be a key button implemented on the outside of the object recognition apparatus 100, or may correspond to a soft key implemented on a display of the object recognition apparatus 100. The input unit 130 may also be an input device such as a mouse, a joystick, a jog shuttle, or a stylus pen.

The output unit 140 may include a display for displaying an operation state of the object recognition apparatus 100, an object recognition result, and the like, and may include a speaker for outputting a voice signal.

Here, the display may be used as an input device in addition to an output device when a sensor for sensing a touch operation is provided. That is, when a touch sensor such as a touch film, a touch sheet, or a touch pad is provided on the display, the display may operate as a touch screen, and the input unit 130 and the output unit 140 may be integrated.

The display may include at least one of a liquid crystal display (LCD), a thin film transistor liquid crystal display (TFT LCD), an organic light-emitting diode (OLED) display, a flexible display, a field emission display (FED), and a 3D display.

The communication unit 150 may include a communication module that supports communication with a camera implemented at a remote location. In addition, the communication unit 150 may include a communication module that supports access to a server or a database implemented in the server.

The communication module may support wireless Internet access, short-range communication, or wired communication. Here, the wireless Internet technologies may include wireless LAN (WLAN), Wireless Broadband (WiBro), Wi-Fi, World Interoperability for Microwave Access (WiMAX), and High Speed Downlink Packet Access (HSDPA); the short-range communication technologies may include Bluetooth, ZigBee, Ultra Wideband (UWB), Radio Frequency Identification (RFID), and Infrared Data Association (IrDA). The wired communication technologies may include Universal Serial Bus (USB) communication and the like.

The storage unit 160 may store the data and programs necessary for the object recognition apparatus 100 to operate. For example, the storage unit 160 may store setting values for image processing, size information extraction, and object recognition in the object recognition apparatus 100, as well as the algorithms for performing each function. In addition, the storage unit 160 may store commands for performing the operations of the object recognition apparatus 100.

Also, the storage unit 160 may store parameter information to be referred to for image processing, size information extraction, and object recognition, and corresponding parameter values.

Here, the storage unit 160 may include at least one storage medium of a flash memory type, a hard disk type, a multimedia card micro type, a card type memory (for example, SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), a programmable read-only memory (PROM), and an electrically erasable programmable read-only memory (EEPROM).

The image processing unit 170 generates a composite image of the color image and the depth image obtained by the image input unit 120. At this time, the image processor 170 may generate a composite image by mapping pixels of the color image and the depth image. For example, the image processor 170 may map the pixels of the color image corresponding to the pixels of the depth image using the depth image values.
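As a concrete illustration, a minimal sketch of this composite-generation step might look like the following (the function name is ours, and it assumes the two images have already been registered to a common resolution; the patent's mapping additionally uses the depth values and the geometry between the two sensors):

    import numpy as np

    def make_composite(color: np.ndarray, depth: np.ndarray) -> np.ndarray:
        """Stack a color image (H, W, 3) and a depth map (H, W) into an
        RGB-D composite (H, W, 4). Assumes the images are already
        registered, i.e. each depth pixel maps to the color pixel at the
        same coordinates; a real device would first reproject the depth
        pixels into the color camera frame using the sensor calibration."""
        if color.shape[:2] != depth.shape[:2]:
            raise ValueError("color and depth must be registered to the same resolution")
        depth_channel = depth.astype(color.dtype)[..., np.newaxis]
        return np.concatenate([color, depth_channel], axis=-1)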

In addition, the image processor 170 corrects the composite image of the color image and the depth image. At this time, the image processor 170 can correct the resolution of the composite image and remove its noise.

Here, the image processor 170 may correct the resolution of the composite image by cutting out an area to which the color image and the depth image are not mapped in the composite image.

On the other hand, the depth image has a lower resolution than the color image. Accordingly, the image processor 170 can correct the resolution of the composite image by increasing the resolution of the depth image. For example, the image processor 170 may increase the resolution of the depth image by upsampling it with a Markov random field (MRF), a spatiotemporal filter, or an edge-preserving bilateral filter.
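One way to realize such edge-preserving upsampling is a joint bilateral filter guided by the color image, as in the sketch below (the MRF and spatiotemporal approaches mentioned in the text are more elaborate; the filter parameters here are illustrative, and cv2.ximgproc requires the opencv-contrib-python package):

    import cv2
    import numpy as np

    def upsample_depth(depth: np.ndarray, color: np.ndarray) -> np.ndarray:
        """Upsample a low-resolution depth map to the color image's
        resolution, then restore sharp depth edges with a joint bilateral
        filter guided by the color image."""
        h, w = color.shape[:2]
        # Plain interpolation blurs depth discontinuities at object borders...
        up = cv2.resize(depth.astype(np.float32), (w, h),
                        interpolation=cv2.INTER_CUBIC)
        # ...so refine them using the color image's edges as guidance.
        guide = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY).astype(np.float32)
        return cv2.ximgproc.jointBilateralFilter(guide, up, d=9,
                                                 sigmaColor=25, sigmaSpace=7)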

The image processor 170 may remove the noise of the depth image using the color image information. For example, the image processor 170 may remove noise from the depth image using a bilateral filter and a median filter. In addition, the image processor 170 can estimate the depth values of holes, i.e., pixels of the depth image for which no depth information was captured, using the color image information. Here, the image processor 170 may estimate the depth value of such a hole by linearly interpolating depth information based on the color information surrounding the hole.
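A minimal sketch of the hole-filling step, assuming holes are encoded as zero depth, could interpolate linearly from the surrounding valid pixels. For brevity this version interpolates from depth alone, whereas the text also exploits the surrounding color information:

    import numpy as np
    from scipy.interpolate import griddata

    def fill_depth_holes(depth: np.ndarray) -> np.ndarray:
        """Estimate depth for hole pixels (value 0, i.e. no sensor reading)
        by linearly interpolating the surrounding valid depth values.
        Note: griddata over a full frame is slow; this is a sketch, not a
        real-time implementation."""
        valid = depth > 0
        if valid.all():
            return depth
        ys, xs = np.nonzero(valid)
        hole_ys, hole_xs = np.nonzero(~valid)
        filled = depth.copy().astype(np.float32)
        filled[~valid] = griddata(
            points=np.column_stack([ys, xs]),
            values=depth[valid].astype(np.float32),
            xi=np.column_stack([hole_ys, hole_xs]),
            method="linear",
            fill_value=float(depth[valid].mean()),  # holes at the image border
        )
        return filled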

In this way, the image processor 170 can generate a corrected composite image by correcting the resolution of the composite image and removing the noise. The corrected synthesized image generated by the image processing unit 170 may be transmitted to the size information extracting unit 180 and the object recognizing unit 190.

The size information extracting unit 180 extracts size information of each pixel or area using the depth value of the depth image. Here, the size information extracting unit 180 can extract information such as the length, angle, and height of each pixel or region using the depth value of the depth image.

For example, the size information extracting unit 180 may extract length information of an object included in an image using Equation (1) below.

[Equation (1) appears as an image in the original publication; it relates the quantities s, d1, and s1 defined below.]

In Equation (1), s denotes the actual length of a specific object included in the depth image, d1 denotes the depth value of the pixel or region where the object is located, and s1 denotes the length of the object on the depth image.

Accordingly, the size information extraction unit 180 extracts the length information of the object included in the depth image through Equation (1), and transmits the extracted length information to the object recognition unit 190.
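Although the exact form of Equation (1) is rendered as an image in the original, under a standard pinhole camera model the relation between these quantities would be s = s1 · d1 / f, where f is the focal length in pixels; the focal length is our assumption and is not stated in the text. A sketch under that assumption:

    def object_length_from_depth(pixel_length: float, depth: float,
                                 focal_length_px: float) -> float:
        """Recover an object's physical length s from its length in pixels
        (s1) and its depth (d1), assuming a pinhole camera model:
            s = s1 * d1 / f
        The focal length f (in pixels) is our assumption; the patent only
        states that s is derived from s1 and d1 via Equation (1)."""
        return pixel_length * depth / focal_length_px

With a Kinect-like focal length of roughly 525 pixels, an object spanning 100 pixels at a depth of 2 m would come out at about 0.38 m.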

The object recognition unit 190 applies the corrected composite image transferred from the image processing unit 170 and the size information transmitted from the size information extraction unit 180 to the convolutional neural network, and recognizes the object.

Here, the convolutional neural network is composed of a feature point extractor, which extracts the feature points of the input image, and a neural network classifier. The feature point extractor can be defined as a series of convolution and sub-sampling operations. The feature point extractor can estimate the camera motion (ego-motion) by tracking corner feature points extracted from the original image, and sets the region of an object having other motion components as a region of interest (ROI). The neural network classifier is composed of a multi-layer neural network, and classifies the objects included in the set ROI.
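For orientation, a toy convolutional network in this spirit, taking the 4-channel composite plus a scalar size feature, might be sketched as follows (PyTorch; all layer sizes are illustrative and not taken from the patent):

    import torch
    import torch.nn as nn

    class RGBDSizeCNN(nn.Module):
        """Minimal sketch of the described pipeline: convolution and
        sub-sampling layers extract features from the 4-channel RGB-D
        composite, and the extracted size information is appended to the
        flattened features before the multi-layer classifier."""
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d((4, 4)),
            )
            self.classifier = nn.Sequential(
                nn.Linear(32 * 4 * 4 + 1, 128), nn.ReLU(),  # +1 for object size
                nn.Linear(128, num_classes),
            )

        def forward(self, rgbd: torch.Tensor, size: torch.Tensor) -> torch.Tensor:
            x = self.features(rgbd).flatten(1)          # (B, 512)
            x = torch.cat([x, size.unsqueeze(1)], dim=1)  # append size feature
            return self.classifier(x)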

At this time, the parameters of the convolutional neural network can be learned in advance from a database included in the storage unit 160 or from a database received from an external server, and the object can be recognized based on the learned parameters of the convolutional neural network. Here, the database may include pre-prepared composite images and ground-truth labels.

Accordingly, the object recognition unit 190 can recognize the object by applying the corrected composite image and the size information of the object to the convolutional neural network.

For example, when the object recognition unit 190 determines from the corrected composite image and the size information that an object included in the composite image is located close to the camera, the weight the object occupies in the composite image is large relative to its absolute size. In this case, the object recognition unit 190 can reduce the color image values in the composite image, thereby reducing the weight of the corresponding region applied to the convolutional neural network, and recognize the object.

As another example, when the object recognition unit 190 determines from the corrected composite image and the size information that an object included in the composite image is located far from the camera, the weight the object occupies in the composite image is small relative to its absolute size. In this case, the object recognition unit 190 can increase the color image values in the composite image or enlarge the region of the object, thereby increasing the weight of the corresponding region applied to the convolutional neural network, and recognize the object.
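As a hedged sketch of this reweighting idea, one could scale the color values of an object's region by its depth relative to some reference; the linear scaling and the parameter names are our guesses, since the text only says the weight is decreased for near objects and increased for far ones:

    import numpy as np

    def reweight_region(composite: np.ndarray, mask: np.ndarray,
                        depth_value: float, reference_depth: float = 2.0) -> np.ndarray:
        """Scale a region's color values so near (large-looking) objects
        contribute less, and far (small-looking) objects contribute more,
        to the network input. `mask` is a (H, W) boolean object region;
        color values are assumed to be on an 8-bit scale."""
        scale = depth_value / reference_depth   # <1 when near, >1 when far
        out = composite.astype(np.float32).copy()
        out[mask, :3] *= scale                  # leave the depth channel untouched
        return np.clip(out, 0, 255)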

The object recognition result produced by the object recognition unit 190 can be presented as shown in FIG. 2.

In this way, the object recognition unit 190 can apply the information of the color image and the information of the depth image to the convolutional neural network simultaneously, making it possible to clearly recognize the region of the object while reflecting changes in the object's size.

The operation flow of the apparatus according to the present invention will now be described in more detail.

FIGS. 3 and 4 are diagrams illustrating the operation flow of the object recognition method using a convolutional neural network according to the present invention.

Referring to FIGS. 3 and 4, when the color image and the depth image are input from the image input means such as a camera (S110), the object recognition apparatus generates a composite image of the input color image and the depth image (S120). In step 'S120', the object recognition apparatus can generate a composite image by mapping the pixels of the color image corresponding to the pixels of the depth image using the depth image values.

In addition, the object recognition apparatus corrects the synthesized image (S130). In step 'S130', the object recognition apparatus corrects the resolution of the synthesized image (S131) and removes the noise (S135), as shown in FIG.

In step 'S131', the object recognition apparatus can correct the resolution of the composite image by cutting out areas where the color image and the depth image are not mapped in the composite image, or by increasing the resolution by upsampling the depth image. Also, in step 'S135', the object recognition apparatus can remove the noise of the composite image by estimating the depth value of the hole in which the depth information is not inputted in the depth image using the color image information.

Then, the object recognition apparatus extracts the size information of the object in the image using the depth value of the depth image (S140).

The object recognition apparatus applies the composite image corrected in step S130 and the size information of the object extracted in step S140 to the convolutional neural network (S150) to recognize the object (S160).

By applying the corrected composite image and the size information of the object to the convolutional neural network simultaneously, the object recognition apparatus can clearly recognize the region of the object while reflecting changes in the object's size.
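Pulling the sketches above together, a minimal end-to-end version of the flow in FIGS. 3 and 4 might read as follows (it reuses the hypothetical helpers defined earlier; the Kinect-like default focal length and the externally supplied object extent in pixels are our assumptions):

    import numpy as np
    import torch

    def recognize_object(color: np.ndarray, depth: np.ndarray,
                         model: torch.nn.Module,
                         object_pixel_length: float,
                         focal_length_px: float = 525.0) -> int:
        """End-to-end sketch of FIGS. 3 and 4: S131 resolution correction,
        S135 noise removal, S120/S130 composite generation, S140 size
        extraction, S150/S160 recognition."""
        depth_hr = fill_depth_holes(upsample_depth(depth, color))       # S131, S135
        composite = make_composite(color.astype(np.float32), depth_hr)  # S120
        size = object_length_from_depth(object_pixel_length,            # S140
                                        float(np.median(depth_hr)),
                                        focal_length_px)
        x = torch.from_numpy(composite).permute(2, 0, 1).unsqueeze(0)   # (1, 4, H, W)
        s = torch.tensor([size], dtype=torch.float32)
        with torch.no_grad():
            logits = model(x, s)                                        # S150
        return int(logits.argmax(dim=1))                                # S160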

FIG. 5 is a diagram illustrating a computing system to which the apparatus according to the present invention is applied.

Referring to FIG. 5, a computing system 1000 includes at least one processor 1100, a memory 1300, a user interface input device 1400, a user interface output device 1500, a storage 1600, and a network interface 1700.

The processor 1100 may be a central processing unit (CPU) or a semiconductor device that processes instructions stored in the memory 1300 and/or the storage 1600. The memory 1300 and the storage 1600 may include various types of volatile or non-volatile storage media. For example, the memory 1300 may include a read-only memory (ROM) and a random access memory (RAM).

Thus, the steps of the method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by the processor 1100, or in a combination of the two. The software module may reside in a storage medium (that is, the memory 1300 and/or the storage 1600) such as RAM, flash memory, ROM, EPROM, EEPROM, a register, a hard disk, a removable disk, or a CD-ROM. An exemplary storage medium is coupled to the processor 1100, which can read information from, and write information to, the storage medium. Alternatively, the storage medium may be integral to the processor 1100. The processor and the storage medium may reside within an application-specific integrated circuit (ASIC). The ASIC may reside within the object recognition device. Alternatively, the processor and the storage medium may reside as discrete components within the object recognition device.

The foregoing description is merely illustrative of the technical idea of the present invention, and various changes and modifications may be made by those skilled in the art without departing from the essential characteristics of the present invention.

Therefore, the embodiments disclosed in the present invention are intended to illustrate rather than limit the scope of the present invention, and the scope of the technical idea of the present invention is not limited by these embodiments. The scope of protection of the present invention should be construed according to the following claims, and all technical ideas within the scope of equivalents should be construed as falling within the scope of the present invention.

100: object recognition device 110: control unit
120: image input unit 130: input unit
140: output unit 150: communication unit
160: storage unit 170: image processing unit
180: Size information extraction unit 190: Object recognition unit

Claims (1)

1. An apparatus for recognizing an object using a convolutional neural network, the apparatus comprising:
an image input unit for acquiring and inputting a color image and a depth image;
an image processing unit for generating a composite image of the color image and the depth image, and correcting the resolution and noise of the generated composite image;
a size information extraction unit for extracting size information of an object included in the image using the depth values of the depth image; and
an object recognition unit for recognizing the object by applying the composite image corrected by the image processing unit and the size information of the object extracted by the size information extraction unit to the convolutional neural network.
KR1020150125393A 2015-09-04 2015-09-04 Apparatus and method for object recognition with convolution neural network KR101980360B1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
KR1020150125393A KR101980360B1 (en) 2015-09-04 2015-09-04 Apparatus and method for object recognition with convolution neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
KR1020150125393A KR101980360B1 (en) 2015-09-04 2015-09-04 Apparatus and method for object recognition with convolution neural network

Publications (2)

Publication Number Publication Date
KR20170028591A true KR20170028591A (en) 2017-03-14
KR101980360B1 KR101980360B1 (en) 2019-08-28

Family

ID=58460101

Family Applications (1)

Application Number Title Priority Date Filing Date
KR1020150125393A KR101980360B1 (en) 2015-09-04 2015-09-04 Apparatus and method for object recognition with convolution neural network

Country Status (1)

Country Link
KR (1) KR101980360B1 (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019083336A1 (en) * 2017-10-27 2019-05-02 전북대학교산학협력단 Method and device for crop and weed classification using neural network learning
KR20190070464A (en) * 2017-12-13 2019-06-21 동국대학교 산학협력단 Apparatus for predicting roasting completion time and operating method thereof
KR20190098812A (en) 2018-01-31 2019-08-23 전남대학교산학협력단 System for recognizing music symbol using deep network and method therefor
CN110337807A (en) * 2017-04-07 2019-10-15 英特尔公司 The method and system of camera apparatus is used for depth channel and convolutional neural networks image and format
KR20190124600A (en) * 2018-04-26 2019-11-05 한국전자통신연구원 Layered protecting apparatus and system for multiple video objects based on neural network learning and method thereof
WO2020045903A1 (en) * 2018-08-28 2020-03-05 포항공과대학교 산학협력단 Method and device for detecting object size-independently by using cnn
WO2020085653A1 (en) * 2018-10-26 2020-04-30 계명대학교 산학협력단 Multiple-pedestrian tracking method and system using teacher-student random fern
WO2020251336A1 (en) * 2019-06-13 2020-12-17 엘지이노텍 주식회사 Camera device and image generation method of camera device
CN112115913A (en) * 2020-09-28 2020-12-22 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
KR20210050707A (en) * 2019-10-29 2021-05-10 오토아이티(주) Apparatus and method for object detection based on color and temperature data
KR20220053988A (en) 2020-10-23 2022-05-02 한국전자통신연구원 Apprartus and method for detecting objects of interest based on scalable deep neural networks
US11386637B2 (en) 2019-07-16 2022-07-12 Samsung Electronics Co., Ltd. Method and apparatus for detecting object

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120052610A (en) * 2010-11-16 2012-05-24 삼성전자주식회사 Apparatus and method for recognizing motion using neural network learning algorithm
KR20140066637A (en) * 2012-11-23 2014-06-02 엘지전자 주식회사 Rgb-ir sensor with pixels array and apparatus and method for obtaining 3d image using the same
KR20140104091A (en) 2013-02-20 2014-08-28 삼성전자주식회사 Apparatus of recognizing an object using a depth image and method thereof
KR20140141174A (en) * 2013-05-31 2014-12-10 한국과학기술원 Method and apparatus for recognition and segmentation object for 3d object recognition
KR101476799B1 (en) * 2013-07-10 2014-12-26 숭실대학교산학협력단 System and method for detecting object using depth information
KR20150008744A (en) * 2013-07-15 2015-01-23 삼성전자주식회사 Method and apparatus processing a depth image
KR20150010248A (en) * 2013-07-18 2015-01-28 주식회사 에스원 Method and apparatus for surveillance by using 3-dimension image data
KR20150039252A (en) * 2013-10-01 2015-04-10 한국전자통신연구원 Apparatus and method for providing application service by using action recognition
KR20160034513A (en) * 2014-09-19 2016-03-30 한국전자통신연구원 Apparatus and method for implementing immersive augmented reality with RGB-D data

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20120052610A (en) * 2010-11-16 2012-05-24 삼성전자주식회사 Apparatus and method for recognizing motion using neural network learning algorithm
KR20140066637A (en) * 2012-11-23 2014-06-02 엘지전자 주식회사 Rgb-ir sensor with pixels array and apparatus and method for obtaining 3d image using the same
KR20140104091A (en) 2013-02-20 2014-08-28 삼성전자주식회사 Apparatus of recognizing an object using a depth image and method thereof
KR20140141174A (en) * 2013-05-31 2014-12-10 한국과학기술원 Method and apparatus for recognition and segmentation object for 3d object recognition
KR101476799B1 (en) * 2013-07-10 2014-12-26 숭실대학교산학협력단 System and method for detecting object using depth information
KR20150008744A (en) * 2013-07-15 2015-01-23 삼성전자주식회사 Method and apparatus processing a depth image
KR20150010248A (en) * 2013-07-18 2015-01-28 주식회사 에스원 Method and apparatus for surveillance by using 3-dimension image data
KR20150039252A (en) * 2013-10-01 2015-04-10 한국전자통신연구원 Apparatus and method for providing application service by using action recognition
KR20160034513A (en) * 2014-09-19 2016-03-30 한국전자통신연구원 Apparatus and method for implementing immersive augmented reality with RGB-D data

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
장영균 et al., "RGB-D Image-based Multiple Object Localization and Recognition: User-assisted Depth-image Clustering for Multiple Object Localization and Color-image-based Multiple Object Recognition", Proceedings of the HCI Society of Korea Conference, pp. 4-7, January 2013. *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110337807A (en) * 2017-04-07 2019-10-15 英特尔公司 The method and system of camera apparatus is used for depth channel and convolutional neural networks image and format
WO2019083336A1 (en) * 2017-10-27 2019-05-02 전북대학교산학협력단 Method and device for crop and weed classification using neural network learning
KR20190070464A (en) * 2017-12-13 2019-06-21 동국대학교 산학협력단 Apparatus for predicting roasting completion time and operating method thereof
KR20190098812A (en) 2018-01-31 2019-08-23 전남대학교산학협력단 System for recognizing music symbol using deep network and method therefor
KR20190124600A (en) * 2018-04-26 2019-11-05 한국전자통신연구원 Layered protecting apparatus and system for multiple video objects based on neural network learning and method thereof
KR20200027078A (en) * 2018-08-28 2020-03-12 포항공과대학교 산학협력단 Method and apparatus for detecting object independently of size using convolutional neural network
WO2020045903A1 (en) * 2018-08-28 2020-03-05 포항공과대학교 산학협력단 Method and device for detecting object size-independently by using cnn
WO2020085653A1 (en) * 2018-10-26 2020-04-30 계명대학교 산학협력단 Multiple-pedestrian tracking method and system using teacher-student random fern
WO2020251336A1 (en) * 2019-06-13 2020-12-17 엘지이노텍 주식회사 Camera device and image generation method of camera device
US11825214B2 (en) 2019-06-13 2023-11-21 Lg Innotek Co., Ltd. Camera device and image generation method of camera device
US11386637B2 (en) 2019-07-16 2022-07-12 Samsung Electronics Co., Ltd. Method and apparatus for detecting object
KR20210050707A (en) * 2019-10-29 2021-05-10 오토아이티(주) Apparatus and method for object detection based on color and temperature data
CN112115913A (en) * 2020-09-28 2020-12-22 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
CN112115913B (en) * 2020-09-28 2023-08-25 杭州海康威视数字技术股份有限公司 Image processing method, device and equipment and storage medium
KR20220053988A (en) 2020-10-23 2022-05-02 한국전자통신연구원 Apprartus and method for detecting objects of interest based on scalable deep neural networks

Also Published As

Publication number Publication date
KR101980360B1 (en) 2019-08-28

Similar Documents

Publication Publication Date Title
KR20170028591A (en) Apparatus and method for object recognition with convolution neural network
EP3152706B1 (en) Image capturing parameter adjustment in preview mode
US9697416B2 (en) Object detection using cascaded convolutional neural networks
US10891473B2 (en) Method and device for use in hand gesture recognition
US20160154469A1 (en) Mid-air gesture input method and apparatus
US20180053293A1 (en) Method and System for Image Registrations
US9928439B2 (en) Facilitating text identification and editing in images
JP6688277B2 (en) Program, learning processing method, learning model, data structure, learning device, and object recognition device
US10839537B2 (en) Depth maps generated from a single sensor
US20170061229A1 (en) Method and system for object tracking
KR20160048140A (en) Method and apparatus for generating an all-in-focus image
US9082039B2 (en) Method and apparatus for recognizing a character based on a photographed image
US20200265569A1 (en) Method of correcting image on basis of category and recognition rate of object included in image and electronic device implementing same
US10122912B2 (en) Device and method for detecting regions in an image
US9400924B2 (en) Object recognition method and object recognition apparatus using the same
US11636608B2 (en) Artificial intelligence using convolutional neural network with Hough transform
US9485416B2 (en) Method and a guided imaging unit for guiding a user to capture an image
US10163212B2 (en) Video processing system and method for deformation insensitive tracking of objects in a sequence of image frames
WO2013085525A1 (en) Techniques for efficient stereo block matching for gesture recognition
US20150261409A1 (en) Gesture recognition apparatus and control method of gesture recognition apparatus
US20160142702A1 (en) 3d enhanced image correction
JP2017120503A (en) Information processing device, control method and program of information processing device
KR20210069686A (en) Object tracking based on custom initialization points
CN108304840B (en) Image data processing method and device
US20150112853A1 (en) Online loan application using image capture at a client device

Legal Events

Date Code Title Description
E902 Notification of reason for refusal
E701 Decision to grant or registration of patent right
GRNT Written decision to grant