EP4148671A1 - Electronic device and method for controlling same - Google Patents

Electronic device and method for controlling same

Info

Publication number
EP4148671A1
EP4148671A1 (application EP21850284.7A)
Authority
EP
European Patent Office
Prior art keywords
depth image
image
depth
composition ratio
electronic device
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
EP21850284.7A
Other languages
German (de)
French (fr)
Other versions
EP4148671A4 (en)
Inventor
Taehee Lee
Sungwon Kim
Saeyoung KIM
Yoojeong LEE
Junghwan Lee
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd filed Critical Samsung Electronics Co Ltd
Publication of EP4148671A1
Publication of EP4148671A4


Classifications

    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/593 Depth or shape recovery from multiple images from stereo images
    • G06T7/596 Depth or shape recovery from three or more stereo images
    • G01S17/87 Combinations of systems using electromagnetic waves other than radio waves
    • G01S17/894 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T7/11 Region-based segmentation
    • G06T7/521 Depth or shape recovery from laser ranging or from the projection of structured light
    • G06T7/90 Determination of colour characteristics
    • G06V10/22 Image preprocessing by selection of a specific region; locating or processing of specific regions to guide detection or recognition
    • G06T2207/10004 Still image; photographic image
    • G06T2207/10012 Stereo images
    • G06T2207/10024 Color image
    • G06T2207/10028 Range image; depth image; 3D point clouds
    • G06T2207/20212 Image combination
    • G06T2207/20221 Image fusion; image merging

Definitions

  • the disclosure relates to an electronic device and a method for controlling the same, and more particularly, to an electronic device for acquiring a depth image and a method for controlling the same.
  • as sensors for acquiring depth information, there are a time of flight (ToF) sensor, which acquires a depth image based on the flight time or phase information of light, a stereo camera, which acquires a depth image based on images captured by two cameras, and the like.
  • the ToF sensor has superior angular resolution for a long distance compared to the stereo camera, but has a limitation in that the accuracy of near-field information is relatively low due to multiple reflections.
  • although the stereo camera may acquire short-distance information with relatively high accuracy, the two cameras need to be far apart from each other for long-distance measurement, so stereo cameras have the disadvantage of being difficult to manufacture in a small size.
  • the disclosure provides an electronic device that is easy to miniaturize and has improved accuracy of distance information for a short distance.
  • an electronic device includes: a first image sensor; a second image sensor; and a processor, in which the processor acquires a first depth image and a confidence map corresponding to the first depth image by using the first image sensor, acquires an RGB image corresponding to the first depth image by using the second image sensor, acquires a second depth image based on the confidence map and the RGB image, and acquires a third depth image by composing the first depth image and the second depth image based on a pixel value of the confidence map.
  • the processor may acquire a grayscale image for the RGB image, and the second depth image may be acquired by stereo matching the confidence map and the grayscale image.
  • the processor may acquire the second depth image by stereo matching the confidence map and the grayscale image based on a shape of an object included in the confidence map and the grayscale image.
  • the processor may determine a composition ratio of the first depth image and the second depth image based on the pixel value of the confidence map; and acquire a third depth image by composing the first depth image and the second depth image based on the determined composition ratio.
  • the processor may determine the first composition ratio and the second composition ratio so that the first composition ratio of the first depth image is greater than the second composition ratio of the second depth image for a region in which a pixel value is greater than a preset value among a plurality of regions of the confidence map, and the first composition ratio and the second composition ratio may be determined so that the first composition ratio is smaller than the second composition ratio for the region in which the pixel value is smaller than the preset value among a plurality of regions of the confidence map.
  • the processor may acquire a depth value of the second depth image as a depth value of the third depth image for a first region in which a depth value is smaller than a first threshold distance among a plurality of regions of the first depth image, and acquire a depth value of the first depth image as a depth value of the third depth image for a second region in which a depth value is greater than a second threshold distance among a plurality of regions of the first depth image.
  • the processor may identify an object included in the RGB image, identify each region of the first depth image and the second depth image corresponding to the identified object, and acquire the third depth image by composing the first depth image and the second depth image at a predetermined composition ratio for each of the regions.
  • the first image sensor may be a time of flight (ToF) sensor, and the second image sensor may be an RGB sensor.
  • a method for controlling an electronic device includes: acquiring a first depth image and a confidence map corresponding to the first depth image by using a first image sensor; acquiring an RGB image corresponding to the first depth image by using a second image sensor; acquiring a second depth image based on the confidence map and the RGB image; and acquiring a third depth image by composing the first depth image and the second depth image based on a pixel value of the confidence map.
  • a grayscale image for the RGB image may be acquired, and the second depth image may be acquired by stereo matching the confidence map and the grayscale image.
  • the second depth image may be acquired by stereo matching the confidence map and the grayscale image based on a shape of an object included in the confidence map and the grayscale image.
  • a composition ratio of the first depth image and the second depth image may be determined based on the pixel value of the confidence map, and a third depth image may be acquired by composing the first depth image and the second depth image based on the determined composition ratio.
  • the first composition ratio and the second composition ratio may be determined so that the first composition ratio of the first depth image is greater than the second composition ratio of the second depth image for a region in which a pixel value is greater than a preset value among a plurality of regions of the confidence map, and the first composition ratio and the second composition ratio may be determined so that the first composition ratio is smaller than the second composition ratio for the region in which the pixel value is smaller than the preset value among a plurality of regions of the confidence map.
  • a depth value of the second depth image may be acquired as a depth value of the third depth image for a first region in which a depth value is smaller than a first threshold distance among a plurality of regions of the first depth image
  • a depth value of the first depth image may be acquired as a depth value of the third depth image for a second region in which a depth value is larger than a second threshold distance among a plurality of regions of the first depth image.
  • the acquiring of the third depth image may include identifying an object included in the RGB image; identifying each region of the first depth image and the second depth image corresponding to the identified object, and acquiring the third depth image by composing the first depth image and the second depth image at a predetermined composition ratio for each of the identified regions.
  • the electronic device may acquire distance information with improved short-distance accuracy compared to the conventional ToF sensor.
  • FIG. 1 is a diagram for describing a method of acquiring a depth image according to an embodiment of the disclosure.
  • An electronic device 100 may acquire a first depth image 10 by using a first image sensor 110. Specifically, the electronic device 100 may acquire the first depth image 10 based on a signal output from the first image sensor 110.
  • the first depth image 10 is an image indicating a distance from the electronic device 100 to an object, and a depth value (or distance value) of each pixel of the first depth image may refer to a distance from the electronic device 100 to the object corresponding to each pixel.
  • the electronic device 100 may acquire a confidence map 20 by using the first image sensor 110.
  • the confidence map (or the confidence image) 20 refers to an image representing reliability of depth values for each region of the first depth image 10.
  • the confidence map 20 may be an infrared (IR) image corresponding to the first depth image 10.
  • the electronic device 100 may determine the reliability of the depth values for each region of the first depth image 10 based on the confidence map 20.
  • the electronic device 100 may acquire the confidence map 20 based on a signal output from the first image sensor 110.
  • the first image sensor 110 may include a plurality of sensors that are activated with a preset time difference from one another.
  • the electronic device 100 may acquire a plurality of image data through each of the plurality of sensors.
  • the electronic device 100 may acquire the confidence map 20 from a plurality of acquired image data.
  • I1 to I4 denote the first to fourth image data, respectively.
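The disclosure does not spell out how the depth image and confidence map are computed from the four image data I1 to I4. The sketch below shows a generic 4-phase ToF demodulation for illustration; the 0°/90°/180°/270° sampling order, the function name, and the 100 MHz default (matching the example modulation frequency mentioned later for the light emitting unit) are assumptions, not the patent's stated method.

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def tof_depth_and_confidence(i1, i2, i3, i4, mod_freq_hz=100e6):
    """Generic 4-phase ToF demodulation (illustrative, not the patent's exact method).

    i1..i4 are image arrays sampled at 0, 90, 180, and 270 degree offsets
    of the modulation signal. Returns a depth image in meters and an
    amplitude map usable as a per-pixel confidence map.
    """
    i1, i2, i3, i4 = (np.asarray(a, dtype=np.float64) for a in (i1, i2, i3, i4))
    # Phase delay of the reflected light relative to the emitted light.
    phase = np.mod(np.arctan2(i2 - i4, i1 - i3), 2.0 * np.pi)
    # Depth is proportional to phase within the unambiguous range c / (2f).
    depth = (C / (2.0 * mod_freq_hz)) * phase / (2.0 * np.pi)
    # Amplitude of the returned modulation: strong returns -> high confidence.
    confidence = 0.5 * np.sqrt((i2 - i4) ** 2 + (i1 - i3) ** 2)
    return depth, confidence
```

The amplitude output doubles as the confidence map 20: weak or multi-path returns yield low amplitude, hence low reliability.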
  • the first image sensor 110 may be implemented as a time of flight (ToF) sensor or a structured light sensor.
  • the electronic device 100 may acquire an RGB image 30 using a second image sensor 120. Specifically, the electronic device 100 may acquire the RGB image based on a signal output from the second image sensor 120.
  • the RGB image 30 may correspond to the first depth image 10 and the confidence map 20.
  • the RGB image 30 may be an image captured at the same timing as the first depth image 10 and the confidence map 20.
  • the electronic device 100 may acquire the RGB image 30 corresponding to the first depth image 10 and the confidence map 20 by adjusting the activation timing of the first image sensor 110 and the second image sensor 120.
  • the electronic device 100 may generate a grayscale image 40 based on R, G, and B values of the RGB image 30.
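The disclosure only states that the grayscale image 40 is generated from the R, G, and B values of the RGB image 30. One standard way to do this, assumed here, is a weighted luma sum per pixel; the ITU-R BT.601 weights below are a common choice, not necessarily the one used in the disclosure.

```python
import numpy as np

def rgb_to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to an H x W grayscale image.

    The BT.601 luma weights are a standard choice, assumed here;
    the patent does not state which weighting is used.
    """
    rgb = np.asarray(rgb, dtype=np.float64)
    return rgb @ np.array([0.299, 0.587, 0.114])
```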
  • the second image sensor 120 may be implemented as image sensors such as a complementary metal-oxide-semiconductor (CMOS) and a charge-coupled device (CCD).
  • the electronic device 100 may acquire a second depth image 50 based on the confidence map 20 and the grayscale image 40.
  • the electronic device 100 may acquire the second depth image 50 by performing stereo matching on the confidence map 20 and the grayscale image 40.
  • the stereo matching refers to a method of calculating a depth value by detecting where an arbitrary point in one image is located in the other image and obtaining the amount by which the detected corresponding point is shifted (the disparity).
  • the electronic device 100 may identify a corresponding point in the confidence map 20 and the grayscale image 40. In this case, the electronic device 100 may identify a corresponding point by identifying a shape or an outline of the object included in the confidence map 20 and the grayscale image 40.
  • the electronic device 100 may generate the second depth image 50 based on a disparity between the corresponding points identified in each of the confidence map 20 and the grayscale image 40 and a length of a baseline (i.e., the distance between the first image sensor 110 and the second image sensor 120).
  • meanwhile, if the stereo matching were performed based on the confidence map 20, which is an IR image, and the RGB image 30, it may be difficult to find an exact corresponding point due to a difference in pixel values. Accordingly, the electronic device 100 may perform the stereo matching based on the grayscale image 40 instead of the RGB image 30. As a result, the electronic device 100 may more accurately identify the corresponding point, and the accuracy of the depth information included in the second depth image 50 may be improved. Meanwhile, the electronic device 100 may perform pre-processing, such as correcting a difference in brightness between the confidence map 20 and the grayscale image 40, before performing the stereo matching.
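As a concrete illustration of the stereo-matching step, here is a deliberately simple sum-of-absolute-differences block matcher together with the disparity-to-depth relation Z = f * L / d. The function names, the SAD cost, and the search window are illustrative assumptions; a real system would add regularization, sub-pixel refinement, and the brightness correction mentioned above.

```python
import numpy as np

def block_match_disparity(left, right, patch=5, max_disp=32):
    """Minimal SAD block matching between two rectified images.

    For each pixel of `left`, search along the same row of `right`
    for the best-matching patch and record the horizontal shift
    (disparity). Brute force, for illustration only.
    """
    left = np.asarray(left, dtype=np.float64)
    right = np.asarray(right, dtype=np.float64)
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(ref - cand).sum()
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def disparity_to_depth(disp, focal_px, baseline_m):
    """Depth from disparity: Z = f * L / d, with L the baseline length."""
    with np.errstate(divide="ignore"):
        return np.where(disp > 0, focal_px * baseline_m / disp, np.inf)
```

The second function makes the baseline trade-off visible: for a fixed disparity resolution, a shorter baseline L means a shorter maximum measurable depth, which is why the patent leans on the ToF sensor for far-field information.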
  • the ToF sensor has higher angular resolution (that is, the ability to distinguish two objects that are separated from each other) and distance accuracy than the stereo sensor outside a preset distance (e.g., 5 m from the ToF sensor), but may have lower angular resolution and distance accuracy than the stereo sensor within the preset distance.
  • the electronic device 100 may acquire a third depth image 60 having improved near-field accuracy compared to the first depth image 10 by using the second depth image 50 acquired through the stereo matching.
  • the electronic device 100 may acquire the third depth image 60 based on the first depth image 10 and the second depth image 50. Specifically, the electronic device 100 may generate the third depth image 60 by composing the first depth image 10 and the second depth image 50. In this case, the electronic device 100 may determine a first composition ratio α of the first depth image 10 and a second composition ratio β of the second depth image 50 based on at least one of the depth value of the first depth image 10 and the pixel value of the confidence map 20.
  • the first composition ratio α and the second composition ratio β may each have a value between 0 and 1, and the sum of the first composition ratio α and the second composition ratio β may be 1.
  • for example, when the first composition ratio α is 0.6, the second composition ratio β may be 0.4 (or 40%).
  • hereinafter, a method of determining the first composition ratio α and the second composition ratio β will be described in more detail.
  • FIG. 2 is a graph illustrating a first composition ratio and a second composition ratio according to a depth value of a first depth image according to an embodiment of the disclosure.
  • the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on a depth value D of the first depth image 10.
  • for a region in which the depth value D is smaller than a first threshold distance Dth1 (e.g., 20 cm) among the plurality of regions, the first composition ratio α may be determined to be 0, and the second composition ratio β may be determined to be 1. That is, the electronic device 100 may acquire the depth value of the second depth image 50 as the depth value of the third depth image 60 for such a region. Accordingly, the electronic device 100 may acquire the third depth image 60 with improved near-field accuracy compared to the first depth image 10.
  • for a region in which the depth value D is greater than a second threshold distance Dth2, the first composition ratio α may be determined to be 1, and the second composition ratio β may be determined to be 0. That is, the electronic device 100 may acquire the depth value of the first depth image 10 as the depth value of the third depth image 60 for such a region.
  • for a region in which the depth value D lies between the two threshold distances, the first composition ratio α and the second composition ratio β may be determined such that, as the depth value D increases, the first composition ratio α increases and the second composition ratio β decreases. Since the first image sensor 110 has higher far-field angular resolution than the second image sensor 120, the accuracy of the depth value of the third depth image 60 may be improved when the first composition ratio α increases with the depth value D.
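The piecewise-linear curve of FIG. 2 (the first composition ratio fixed at 0 below Dth1, 1 above Dth2, rising linearly in between) can be sketched as a small helper; the 20 cm and 5 m threshold values are illustrative stand-ins taken from the examples in the text:

```python
def depth_based_ratios(d, dth1=0.2, dth2=5.0):
    """Composition ratios versus ToF depth d in meters (FIG. 2 curve).

    Below dth1 only the stereo (second) depth contributes; above dth2
    only the ToF (first) depth; in between, alpha ramps up linearly.
    Threshold defaults are illustrative, not prescribed by the patent.
    """
    if d <= dth1:
        alpha = 0.0
    elif d >= dth2:
        alpha = 1.0
    else:
        alpha = (d - dth1) / (dth2 - dth1)
    return alpha, 1.0 - alpha
```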
  • the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on a pixel value P of the confidence map 20.
  • FIG. 3 is a graph illustrating the first composition ratio and the second composition ratio according to the pixel value of the confidence map according to an embodiment of the disclosure.
  • the electronic device 100 may identify a fourth region R4 in which the pixel value P is smaller than a first threshold value Pth1 among the plurality of regions of the confidence map 20.
  • for the fourth region R4, the electronic device 100 may determine the first composition ratio α as 0 and the second composition ratio β as 1. That is, when it is determined that the reliability of the first depth image 10 is smaller than the first threshold value Pth1, the electronic device 100 may acquire the depth value of the second depth image 50 as the depth value of the third depth image 60. Accordingly, the electronic device 100 may acquire the third depth image 60 with improved distance accuracy compared to the first depth image 10.
  • the electronic device 100 may identify a fifth region R5 in which the pixel value is greater than a second threshold value Pth2 among the plurality of regions of the confidence map 20.
  • for the fifth region R5, the electronic device 100 may determine the first composition ratio α as 1 and the second composition ratio β as 0. That is, when it is determined that the reliability of the first depth image 10 is greater than the second threshold value Pth2, the electronic device 100 may acquire the depth value of the first depth image 10 as the depth value of the third depth image 60.
  • the electronic device 100 may identify a sixth region R6 in which the pixel value P is greater than the first threshold value Pth1 and smaller than the second threshold value Pth2 among the plurality of regions of the confidence map 20.
  • the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that, as the pixel value P increases, the first composition ratio α increases and the second composition ratio β decreases. That is, the electronic device 100 may increase the first composition ratio α as the reliability of the first depth image 10 increases. Accordingly, the accuracy of the depth value of the third depth image 60 may be improved.
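The confidence-driven curve of FIG. 3 has the same shape as the depth-driven one and can be written vectorized over a whole confidence map; the Pth1/Pth2 defaults below are illustrative assumptions:

```python
import numpy as np

def confidence_ratios(p, pth1=50.0, pth2=200.0):
    """FIG. 3 curve: alpha is 0 below Pth1, 1 above Pth2, and rises
    linearly in between. Works element-wise on an array of
    confidence-map pixel values. Threshold defaults are illustrative.
    """
    p = np.asarray(p, dtype=np.float64)
    alpha = np.interp(p, [pth1, pth2], [0.0, 1.0])  # interp clamps outside the range
    return alpha, 1.0 - alpha
```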
  • alternatively, the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on both the depth value D of the first depth image 10 and the pixel value P of the confidence map 20.
  • for example, the electronic device 100 may consider the pixel value P of the confidence map 20 when determining the first composition ratio α and the second composition ratio β for the third region R3.
  • for a portion of the third region R3 in which the pixel value of the confidence map 20 is greater than a preset value, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is greater than the second composition ratio β.
  • conversely, for a portion in which the pixel value is smaller than the preset value, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is smaller than the second composition ratio β.
  • in addition, the electronic device 100 may increase the first composition ratio α as the pixel value of the confidence map 20 corresponding to the third region R3 increases. That is, the electronic device 100 may increase the first composition ratio α for the third region R3 as the reliability of the first depth image 10 increases.
  • the electronic device 100 may acquire the third depth image 60 based on the first composition ratio α and the second composition ratio β thus obtained.
  • the electronic device 100 may acquire the distance information on the object based on the third depth image 60.
  • the electronic device 100 may generate a driving path of the electronic device 100 based on the third depth image 60.
  • FIGS. 2 and 3 illustrate that the first composition ratio α and the second composition ratio β vary linearly, but this is only an example, and the first composition ratio α and the second composition ratio β may vary non-linearly.
  • FIG. 4 is a diagram for describing a method of acquiring a third depth image according to an embodiment of the disclosure.
  • the first depth image 10 may include a 1-1th region R1-1, a 2-1th region R2-1, and a 3-1th region R3-1.
  • the 1-1th region R1-1 may correspond to the first region R1 of FIG. 2
  • the 2-1th region R2-1 may correspond to the second region R2 of FIG. 2 . That is, a depth value D11 of the 1-1th region R1-1 may be smaller than the first threshold distance Dth1, and a depth value D12 of the 2-1th region R2-1 may be greater than the second threshold distance Dth2.
  • a 3-1th region R3-1 may correspond to the third region R3 of FIG. 2 . That is, a depth value D13 of the 3-1th region R3-1 may be greater than the first threshold distance Dth1 and smaller than the second threshold distance Dth2.
  • for the 1-1th region R1-1, the electronic device 100 may determine the first composition ratio α as 0 and the second composition ratio β as 1. Accordingly, the electronic device 100 may acquire a depth value D21 of the second depth image 50 as a depth value D31 of the third depth image 60.
  • for the 2-1th region R2-1, the electronic device 100 may determine the first composition ratio α as 1 and the second composition ratio β as 0. Accordingly, the electronic device 100 may acquire the depth value D12 of the first depth image 10 as a depth value D32 of the third depth image 60.
  • for the 3-1th region R3-1, the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on the confidence map 20. For example, if a pixel value P3 of the confidence map 20 corresponding to the 3-1th region R3-1 is smaller than a preset value, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is smaller than the second composition ratio β when composing the first depth image 10 and the second depth image 50 for the 3-1th region R3-1.
  • conversely, if the pixel value P3 is greater than the preset value, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is greater than the second composition ratio β. As described above, the electronic device 100 may acquire a depth value D33 of the third depth image 60 by applying the first composition ratio α to the depth value D13 of the first depth image 10 and the second composition ratio β to a depth value D23 of the second depth image 50.
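The three region rules of FIG. 4 can be combined into one per-pixel composition pass over whole images. The thresholds and the 0.7/0.3 blend weights below are assumed values for illustration; the patent only requires that α exceed β when the confidence is above the preset value, and vice versa.

```python
import numpy as np

def compose_third_depth(d1, d2, conf, dth1=0.2, dth2=5.0, pth=125.0):
    """Per-pixel sketch of the FIG. 4 composition (illustrative thresholds).

    - where the ToF depth d1 is below dth1, take the stereo depth d2;
    - where d1 is above dth2, take d1;
    - otherwise blend D3 = alpha * D1 + beta * D2, favoring d1 when the
      confidence exceeds pth (0.7/0.3 are assumed blend weights).
    """
    alpha = np.where(conf > pth, 0.7, 0.3)
    d3 = alpha * d1 + (1.0 - alpha) * d2
    d3 = np.where(d1 < dth1, d2, d3)
    d3 = np.where(d1 > dth2, d1, d3)
    return d3
```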
  • the electronic device 100 may acquire the third depth image 60 by applying a predetermined composition ratio to the same object included in the first depth image 10 and the second depth image 50.
  • FIG. 5 is a diagram illustrating an RGB image according to an embodiment of the disclosure.
  • the RGB image 30 may include a first object ob1 and a second object ob2.
  • the electronic device 100 may analyze the RGB image 30 to identify the first object ob1. In this case, the electronic device 100 may identify the first object ob1 using an object recognition algorithm. Alternatively, the electronic device 100 may identify the first object ob1 by inputting the RGB image 30 to a neural network model trained to identify an object included in the image.
  • for the region corresponding to the identified first object ob1, the electronic device 100 may apply a predetermined composition ratio. For example, the electronic device 100 may apply a 1-1th composition ratio α1 and a 2-1th composition ratio β1, which are fixed values, to the region corresponding to the first object ob1. Accordingly, the electronic device 100 may acquire the third depth image 60 in which the distance error for the first object ob1 is reduced.
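A fixed per-object ratio like this might be applied with a boolean object mask; the mask source (e.g., the segmentation output of the object-recognition step) and the alpha_obj value are assumptions for illustration:

```python
import numpy as np

def compose_with_object_mask(d1, d2, mask, alpha_obj=0.5):
    """Apply a fixed, predetermined composition ratio inside the region
    of an identified object (boolean mask), leaving the remaining
    pixels as the ToF depth d1. alpha_obj is a hypothetical fixed value.
    """
    return np.where(mask, alpha_obj * d1 + (1.0 - alpha_obj) * d2, d1)
```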
  • FIG. 6 is a flowchart illustrating a method for controlling an electronic device according to an embodiment of the disclosure.
  • the electronic device 100 may acquire the first depth image and the confidence map corresponding to the first depth image using the first image sensor (S610), and acquire the RGB image corresponding to the first depth image using the second image sensor (S620). As a detailed description thereof has been described with reference to FIG. 1 , a redundant description thereof will be omitted.
  • the electronic device 100 may acquire the second depth image based on the confidence map and the RGB image (S630).
  • the electronic device 100 may acquire a grayscale image for the RGB image, and acquire the second depth image by performing the stereo matching on the confidence map and the grayscale image.
  • the electronic device 100 may acquire the second depth image by performing the stereo matching on the confidence map and the grayscale image based on the shape of the object included in the confidence map and the grayscale image.
  • the electronic device 100 may obtain the third depth image by composing the first depth image and the second depth image based on the pixel value of the confidence map (S640).
  • the electronic device 100 may determine the composition ratio of the first depth image and the second depth image based on the pixel value of the confidence map, and compose the first depth image and the second depth image based on the determined composition ratio to acquire the third depth image.
  • the electronic device 100 may determine the first composition ratio and the second composition ratio so that the first composition ratio of the first depth image is greater than the second composition ratio of the second depth image for the region in which the pixel value is greater than a preset value among the plurality of regions of the confidence map.
  • the electronic device 100 may determine the first composition ratio and the second composition ratio so that the first composition ratio is smaller than the second composition ratio for the region in which the pixel value is smaller than a preset value among the plurality of regions of the confidence map.
  • FIG. 7 is a perspective view illustrating an electronic device according to an embodiment of the disclosure.
  • the electronic device 100 may include the first image sensor 110 and the second image sensor 120.
  • the distance between the first image sensor 110 and the second image sensor 120 may be defined as a length L of a baseline.
  • the conventional stereo sensor using two cameras has a limitation in that the angular resolution for a long distance is lowered because the length of the baseline is limited.
  • the conventional stereo sensor is difficult to miniaturize.
  • the electronic device 100 uses the first image sensor 110, which has a higher angular resolution for a long distance than the stereo sensor described above, and thus may acquire far-field information even if the length L of the baseline is not increased. Accordingly, the electronic device 100 has a technical effect of being easier to miniaturize than the conventional stereo sensor.
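Why a short baseline limits long-distance angular resolution follows from the pinhole stereo relation Z = f * B / d. The sketch below uses assumed focal length and baseline values purely for illustration:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Pinhole stereo model: Z = f * B / d.

    A short baseline B makes the disparity d of a distant object tiny, so
    small (sub-pixel) matching errors become large depth errors. This is
    why a stereo pair with a limited baseline length L loses long-range
    accuracy. The values below are illustrative assumptions.
    """
    return focal_px * baseline_m / disparity_px

# With f = 700 px and B = 0.05 m:
near = depth_from_disparity(700, 0.05, 10.0)  # 3.5 m at 10 px of disparity
far = depth_from_disparity(700, 0.05, 1.0)    # 35.0 m at only 1 px of disparity
```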
  • FIG. 8A is a block diagram illustrating a configuration of the electronic device according to the embodiment of the disclosure.
  • the electronic device 100 may include a light emitting unit 105, a first image sensor 110, the second image sensor 120, a memory 130, a communication interface 140, a driving unit 150, and a processor 160.
  • the electronic device 100 according to the embodiment of the disclosure may be implemented as a movable robot.
  • the light emitting unit 105 may emit light toward an object.
  • the light (hereinafter, emitted light) emitted from the light emitting unit 105 may have a waveform in the form of a sinusoidal wave.
  • the emitted light may have a waveform in the form of a square wave.
  • the light emitting unit 105 may include various types of laser devices.
  • the light emitting unit 105 may include a vertical cavity surface emitting laser (VCSEL) or a laser diode (LD).
  • the light emitting unit 105 may include a plurality of laser devices. In this case, a plurality of laser devices may be arranged in an array form.
  • the light emitting unit 105 may emit light of various frequency bands.
  • the light emitting unit 105 may emit a laser beam having a frequency of 100 MHz.
  • the first image sensor 110 is configured to acquire the depth image.
  • the first image sensor 110 may acquire reflected light reflected from the object after being emitted from the light emitting unit 105.
  • the processor 160 may acquire the depth image based on the reflected light acquired by the first image sensor 110.
  • the processor 160 may acquire the depth image based on a difference (i.e., the flight time of light) between the emission timing of the light emitted from the light emitting unit 105 and the timing at which the first image sensor 110 receives the reflected light.
  • the processor 160 may acquire the depth image based on a difference between a phase of the light emitted from the light emitting unit 105 and a phase of the reflected light acquired by the first image sensor 110.
  • the first image sensor 110 may be implemented as the time of flight (ToF) sensor or the structured light sensor.
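The phase-difference depth computation described above can be sketched as follows. The function names are hypothetical; the 100 MHz figure matches the example modulation frequency given for the light emitting unit 105:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth_from_phase(phase_rad: float, mod_freq_hz: float) -> float:
    """Indirect ToF: depth = c * dphi / (4 * pi * f).

    The light travels to the object and back, which is why the round trip
    appears as the factor of 2 folded into the 4*pi denominator.
    """
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz: float) -> float:
    """Maximum distance before the phase wraps around: c / (2 * f).

    At the 100 MHz example frequency this is about 1.5 m.
    """
    return C / (2.0 * mod_freq_hz)
```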
  • the second image sensor 120 is configured to acquire an RGB image.
  • the second image sensor 120 may be implemented as image sensors such as a complementary metal-oxide-semiconductor (CMOS) and a charge-coupled device (CCD).
  • the memory 130 may store an operating system (OS) for controlling a general operation of components of the electronic device 100 and commands or data related to components of the electronic device 100.
  • the memory 130 may be implemented as a non-volatile memory (e.g., a hard disk, a solid state drive (SSD), a flash memory), a volatile memory, or the like.
  • the communication interface 140 includes at least one circuit and may communicate with various types of external devices according to various types of communication methods.
  • the communication interface 140 may include at least one of a Wi-Fi communication module, a cellular communication module, a 3rd generation (3G) mobile communication module, a 4th generation (4G) mobile communication module, a 4th generation Long Term Evolution (LTE) communication module, and a 5th generation (5G) mobile communication module.
  • the electronic device 100 may transmit an image acquired using the second image sensor 120 to a user terminal through the communication interface 140.
  • the driving unit 150 is configured to move the electronic device 100.
  • the driving unit 150 may include an actuator for driving the electronic device 100.
  • the driving unit 150 may include an actuator for driving a motion of another physical component (e.g., an arm, etc.) of the electronic device 100.
  • the electronic device 100 may control the driving unit 150 to move or operate based on the depth information obtained through the first image sensor 110 and the second image sensor 120.
  • the processor 160 may control the overall operation of the electronic device 100.
  • the processor 160 may include a first depth image acquisition module 161, a confidence map acquisition module 162, an RGB image acquisition module 163, a grayscale image acquisition module 164, a second depth image acquisition module 165, and a third depth image acquisition module 166. Meanwhile, each module of the processor 160 may be implemented as a software module, but may also be implemented in a form in which software and hardware are combined.
  • the first depth image acquisition module 161 may acquire the first depth image based on the signal output from the first image sensor 110.
  • the first image sensor 110 may include a plurality of sensors that are activated with a preset time difference.
  • the first depth image acquisition module 161 may calculate a time of flight of light based on a plurality of image data acquired through a plurality of sensors, and acquire the first depth image based on the calculated time of flight of light.
  • the confidence map acquisition module 162 may acquire the confidence map based on the signal output from the first image sensor 110.
  • the first image sensor 110 may include a plurality of sensors that are activated with a preset time difference.
  • the confidence map acquisition module 162 may acquire a plurality of image data through each of the plurality of sensors.
  • the confidence map acquisition module 162 may acquire the confidence map 20 using the plurality of acquired image data.
  • the confidence map acquisition module 162 may acquire the confidence map 20 based on [Math Figure 1] described above.
  • the RGB image acquisition module 163 may acquire the RGB image based on the signal output from the second image sensor 120.
  • the acquired RGB image may correspond to the first depth image and the confidence map.
  • the grayscale image acquisition module 164 may acquire the grayscale image based on the RGB image acquired by the RGB image acquisition module 163. Specifically, the grayscale image acquisition module 164 may generate the grayscale image based on the R, G, and B values of the RGB image.
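As a minimal sketch of the grayscale conversion, assuming standard BT.601 luma weights (the patent only states that the grayscale image is generated from the R, G, and B values, without fixing the weights):

```python
import numpy as np

def rgb_to_gray(rgb: np.ndarray) -> np.ndarray:
    """Weighted grayscale from an H x W x 3 RGB image.

    The 0.299/0.587/0.114 weights are the common BT.601 luma choice,
    used here as an assumption; any weighted combination of the R, G,
    and B values would fit the module's description.
    """
    weights = np.array([0.299, 0.587, 0.114])
    return rgb.astype(np.float64) @ weights
```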
  • the second depth image acquisition module 165 may acquire the second depth image based on the confidence map acquired by the confidence map acquisition module 162 and the grayscale image acquired by the grayscale image acquisition module 164. Specifically, the second depth image acquisition module 165 may generate the second depth image by performing the stereo matching on the confidence map and the grayscale image. The second depth image acquisition module 165 may identify corresponding points in the confidence map and the grayscale image. In this case, the second depth image acquisition module 165 may identify the corresponding points by identifying the shape or outline of the object included in the confidence map and the grayscale image. In addition, the second depth image acquisition module 165 may generate the second depth image based on the disparity between the corresponding points identified in each of the confidence map and the grayscale image and the length of the baseline.
  • the second depth image acquisition module 165 may more accurately identify the corresponding points by performing the stereo matching based on the grayscale image instead of the RGB image. Accordingly, it is possible to improve the accuracy of the depth information included in the second depth image. Meanwhile, the second depth image acquisition module 165 may perform preprocessing such as correcting a difference in brightness between the confidence map and the grayscale image before performing the stereo matching.
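The stereo matching between the confidence map and the grayscale image could be sketched as naive sum-of-absolute-differences (SAD) block matching over rectified images. This is a simplified illustration of the corresponding-point search, not the module's actual implementation, and it omits the brightness-correction preprocessing the text mentions:

```python
import numpy as np

def block_match_disparity(left: np.ndarray, right: np.ndarray,
                          block: int = 3, max_disp: int = 8) -> np.ndarray:
    """Naive SAD block matching between two rectified single-channel images.

    Here 'left' plays the role of the confidence (IR) image and 'right'
    the grayscale image. For each pixel, the candidate patch in 'right'
    is slid leftward by d pixels and the disparity with the lowest SAD
    cost is kept. Purely illustrative; real systems use subpixel
    refinement and cost aggregation.
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, 0
            for d in range(0, min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                sad = np.abs(patch - cand).sum()  # matching cost
                if sad < best:
                    best, best_d = sad, d
            disp[y, x] = best_d
    return disp
```

The resulting disparity, together with the baseline length, gives the depth of each corresponding point.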
  • the third depth image acquisition module 166 may acquire the third depth image based on the first depth image and the second depth image.
  • the third depth image acquisition module 166 may generate the third depth image by composing the first depth image and the second depth image.
  • the third depth image acquisition module 166 may determine the first composition ratio for the first depth image and the second composition ratio for the second depth image based on the depth value of the first depth image.
  • the third depth image acquisition module 166 may determine the first composition ratio as 0 and the second composition ratio as 1 for the first region in which the depth value is smaller than the first threshold distance among the plurality of regions of the first depth image.
  • the third depth image acquisition module 166 may determine the first composition ratio as 1 and the second composition ratio as 0 for the second region in which the depth value is greater than the second threshold distance among the plurality of regions of the first depth image.
  • the third depth image acquisition module 166 may determine the composition ratio based on the pixel value of the confidence map for the third region in which the depth value is greater than the first threshold distance and smaller than the second threshold distance among the plurality of regions of the first depth image. For example, when the pixel value of the confidence map corresponding to the third region is smaller than a preset value, the third depth image acquisition module 166 may determine the first composition ratio and the second composition ratio so that the first composition ratio is smaller than the second composition ratio. When the pixel value of the confidence map corresponding to the third region is larger than a preset value, the third depth image acquisition module 166 may determine the first composition ratio and the second composition ratio so that the first composition ratio is greater than the second composition ratio. That is, the third depth image acquisition module 166 may determine the first composition ratio and the second composition ratio so that the first composition ratio increases and the second composition ratio decreases as the pixel value of the confidence map corresponding to the third region increases.
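The three-region composition logic above can be sketched per pixel as follows. All threshold numbers and the linear confidence ramp are illustrative assumptions; the patent only fixes the ordering of the ratios in each region:

```python
import numpy as np

def compose_depth(d1, d2, confidence,
                  dth1=0.2, dth2=5.0, conf_lo=0.3, conf_hi=0.7):
    """Per-pixel blend of the first (ToF) depth d1 and second (stereo) depth d2.

    - first region, d1 < dth1:  stereo only (alpha = 0, beta = 1)
    - second region, d1 > dth2: ToF only (alpha = 1, beta = 0)
    - third region, in between: alpha grows with the confidence value
    """
    # linear ramp of alpha with confidence inside [conf_lo, conf_hi]
    alpha = np.clip((confidence - conf_lo) / (conf_hi - conf_lo), 0.0, 1.0)
    alpha = np.where(d1 < dth1, 0.0, alpha)
    alpha = np.where(d1 > dth2, 1.0, alpha)
    return alpha * d1 + (1.0 - alpha) * d2
```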
  • the third depth image acquisition module 166 may compose the first depth image and the second depth image with a predetermined composition ratio for the same object.
  • the third depth image acquisition module 166 may analyze the RGB image to identify the object included in the RGB image.
  • the third depth image acquisition module 166 may apply a predetermined composition ratio to the first region of the first depth image and the second region of the second depth image corresponding to the identified object to compose the first depth image and the second depth image.
  • the processor 160 may adjust the sync of the first image sensor 110 and the second image sensor 120. Accordingly, the first depth image, the confidence map, and the second depth image may correspond to each other. That is, the first depth image, the confidence map, and the second depth image may be images for the same timing.
  • embodiments described above may be implemented in a computer or an apparatus similar to the computer using software, hardware, or a combination of software and hardware.
  • embodiments described in the disclosure may be implemented as a processor itself.
  • embodiments such as procedures and functions described in the specification may be implemented as separate software modules. Each of the software modules may perform one or more functions and operations described in the specification.
  • computer instructions for performing processing operations according to the diverse embodiments of the disclosure described above may be stored in a non-transitory computer-readable medium.
  • the computer instructions stored in the non-transitory computer-readable medium may cause a specific device to perform the processing operations according to the diverse embodiments described above when they are executed by a processor.
  • the non-transitory computer-readable medium is not a medium that stores data for a while, such as a register, a cache, a memory, or the like, but means a medium that semipermanently stores data and is readable by the device.
  • Specific examples of the non-transitory computer-readable medium may include a compact disk (CD), a digital versatile disk (DVD), a hard disk, a Blu-ray disk, a USB, a memory card, a read only memory (ROM), and the like.


Abstract

An electronic device is disclosed. The electronic device may comprise a first image sensor, a second image sensor, and a processor, wherein the processor may: acquire a first depth image and a confidence map by using the first image sensor; acquire an RGB image by using the second image sensor; acquire a second depth image on the basis of the confidence map and the RGB image; and acquire a third depth image by composing the first depth image and the second depth image on the basis of the pixel value of the confidence map.

Description

    [Technical Field]
  • The disclosure relates to an electronic device and a method for controlling the same, and more particularly, to an electronic device for acquiring a depth image and a method for controlling the same.
  • [Background Art]
  • In recent years, with the development of electronic technology, research on autonomous driving robots has been actively conducted. For smooth driving of the robot, it is important to obtain accurate depth information about the robot's surroundings. As a sensor for acquiring depth information, there are a time of flight (ToF) sensor that acquires a depth image based on flight time or phase information of light, a stereo camera for acquiring a depth image based on an image captured by two cameras, and the like.
  • On the other hand, the ToF sensor has superior angular resolution for a long distance compared to the stereo camera, but has a limitation in that the accuracy of near-field information is relatively low due to multiple reflections. In addition, although the stereo camera may acquire short-distance information with relatively high accuracy, the two cameras need to be far apart from each other for long-distance measurement, so the stereo camera has the disadvantage of being difficult to manufacture in a small size.
  • Accordingly, there is a need for a technique for acquiring a depth image with high accuracy of near-field information while being easy to miniaturize.
  • [Disclosure] [Technical Problem]
  • The disclosure provides an electronic device that is easy to miniaturize and has improved accuracy of distance information for a short distance.
  • Objects of the disclosure are not limited to the abovementioned objects. That is, other objects that are not mentioned may be obviously understood by those skilled in the art from the following description.
  • [Technical Solution]
  • According to an embodiment of the disclosure, an electronic device includes: a first image sensor; a second image sensor; and a processor, in which the processor acquires a first depth image and a confidence map corresponding to the first depth image by using the first image sensor, acquires an RGB image corresponding to the first depth image by using the second image sensor, acquires a second depth image based on the confidence map and the RGB image, and acquires a third depth image by composing the first depth image and the second depth image based on a pixel value of the confidence map.
  • The processor may acquire a grayscale image for the RGB image, and the second depth image may be acquired by stereo matching the confidence map and the grayscale image.
  • The processor may acquire the second depth image by stereo matching the confidence map and the grayscale image based on a shape of an object included in the confidence map and the grayscale image.
  • The processor may determine a composition ratio of the first depth image and the second depth image based on the pixel value of the confidence map; and acquire a third depth image by composing the first depth image and the second depth image based on the determined composition ratio.
  • The processor may determine the first composition ratio and the second composition ratio so that the first composition ratio of the first depth image is greater than the second composition ratio of the second depth image for a region in which a pixel value is greater than a preset value among a plurality of regions of the confidence map, and the first composition ratio and the second composition ratio may be determined so that the first composition ratio is smaller than the second composition ratio for the region in which the pixel value is smaller than the preset value among a plurality of regions of the confidence map.
  • The processor may acquire a depth value of the second depth image as a depth value of the third depth image for a first region in which a depth value is smaller than a first threshold distance among a plurality of regions of the first depth image, and acquire a depth value of the first depth image as a depth value of the third depth image for a second region in which a depth value is greater than a second threshold distance among a plurality of regions of the first depth image.
  • The processor may identify an object included in the RGB image, identify each region of the first depth image and the second depth image corresponding to the identified object, and acquire the third depth image by composing the first depth image and the second depth image at a predetermined composition ratio for each of the regions.
  • The first image sensor may be a time of flight (ToF) sensor, and the second image sensor may be an RGB sensor.
  • According to another embodiment of the disclosure, a method for controlling an electronic device includes: acquiring a first depth image and a confidence map corresponding to the first depth image by using a first image sensor; acquiring an RGB image corresponding to the first depth image by using a second image sensor; acquiring a second depth image based on the confidence map and the RGB image; and acquiring a third depth image by composing the first depth image and the second depth image based on a pixel value of the confidence map.
  • In the acquiring of the second depth image, a grayscale image for the RGB image may be acquired, and the second depth image may be acquired by stereo matching the confidence map and the grayscale image.
  • In the acquiring of the second depth image, the second depth image may be acquired by stereo matching the confidence map and the grayscale image based on a shape of an object included in the confidence map and the grayscale image.
  • In the acquiring of the third depth image, a composition ratio of the first depth image and the second depth image may be determined based on the pixel value of the confidence map, and a third depth image may be acquired by composing the first depth image and the second depth image based on the determined composition ratio.
  • In the determining of the composition ratio, the first composition ratio and the second composition ratio may be determined so that the first composition ratio of the first depth image is greater than the second composition ratio of the second depth image for a region in which a pixel value is greater than a preset value among a plurality of regions of the confidence map, and the first composition ratio and the second composition ratio may be determined so that the first composition ratio is smaller than the second composition ratio for the region in which the pixel value is smaller than the preset value among a plurality of regions of the confidence map.
  • In the acquiring of the third depth image, a depth value of the second depth image may be acquired as a depth value of the third depth image for a first region in which a depth value is smaller than a first threshold distance among a plurality of regions of the first depth image, and a depth value of the first depth image may be acquired as a depth value of the third depth image for a second region in which a depth value is larger than a second threshold distance among a plurality of regions of the first depth image.
  • The acquiring of the third depth image may include identifying an object included in the RGB image; identifying each region of the first depth image and the second depth image corresponding to the identified object, and acquiring the third depth image by composing the first depth image and the second depth image at a predetermined composition ratio for each of the identified regions.
  • Technical solutions of the disclosure are not limited to the abovementioned solutions, and solutions that are not mentioned will be clearly understood by those skilled in the art to which the disclosure pertains from the present specification and the accompanying drawings.
  • [Advantageous Effects]
  • According to various embodiments of the disclosure as described above, the electronic device may acquire distance information with improved accuracy of distance information for a short distance compared to the conventional ToF sensor.
  • In addition, the effects obtainable or predicted by the embodiments of the disclosure are to be disclosed directly or implicitly in the detailed description of the embodiments of the disclosure. For example, various effects predicted according to embodiments of the disclosure will be disclosed in the detailed description to be described later.
  • [Description of Drawings]
    • FIG. 1 is a diagram for describing a method of acquiring a depth image according to an embodiment of the disclosure.
    • FIG. 2 is a graph illustrating a first composition ratio and a second composition ratio according to a depth value of a first depth image according to an embodiment of the disclosure.
    • FIG. 3 is a graph illustrating the first composition ratio and the second composition ratio according to a pixel value of a confidence map according to an embodiment of the disclosure.
    • FIG. 4 is a diagram for describing a method of acquiring a third depth image according to an embodiment of the disclosure.
    • FIG. 5 is a diagram illustrating an RGB image according to an embodiment of the disclosure.
    • FIG. 6 is a flowchart illustrating a method for controlling an electronic device according to an embodiment of the disclosure.
    • FIG. 7 is a perspective view illustrating an electronic device according to an embodiment of the disclosure.
    • FIG. 8A is a block diagram illustrating a configuration of the electronic device according to the embodiment of the disclosure.
    • FIG. 8B is a block diagram illustrating a configuration of a processor according to an embodiment of the disclosure.
    [Best Mode for Carrying Out the Invention]
  • After terms used in the specification are schematically described, the disclosure will be described in detail.
  • General terms that are currently widely used were selected as terms used in embodiments of the disclosure in consideration of functions in the disclosure, but may be changed depending on the intention of those skilled in the art or a judicial precedent, the emergence of a new technique, and the like. In addition, in a specific case, terms arbitrarily chosen by an applicant may exist. In this case, the meaning of such terms will be mentioned in detail in a corresponding description portion of the disclosure. Therefore, the terms used in embodiments of the disclosure should be defined on the basis of the meaning of the terms and the contents throughout the disclosure rather than simple names of the terms.
  • Because the disclosure may be variously modified and have several embodiments, specific embodiments of the disclosure will be illustrated in the drawings and be described in detail in a detailed description. However, it is to be understood that the disclosure is not limited to specific embodiments, but includes all modifications, equivalents, and substitutions without departing from the scope and spirit of the disclosure. When it is decided that a detailed description for the known art related to the disclosure may obscure the gist of the disclosure, the detailed description will be omitted.
  • Terms 'first', 'second', and the like, may be used to describe various components, but the components are not to be construed as being limited by these terms. The terms are used only to distinguish one component from another component.
  • Singular forms are intended to include plural forms unless the context clearly indicates otherwise. It should be understood that terms "comprise" or "include" used in the present specification, specify the presence of features, numerals, steps, operations, components, parts mentioned in the present specification, or combinations thereof, but do not preclude the presence or addition of one or more other features, numerals, steps, operations, components, parts, or combinations thereof.
  • Hereinafter, embodiments of the disclosure will be described in detail with reference to the accompanying drawings so that those skilled in the art to which the disclosure pertains may easily practice the disclosure. However, the disclosure may be implemented in various different forms and is not limited to exemplary embodiments described herein. In addition, in the drawings, portions unrelated to the description will be omitted to obviously describe the disclosure, and similar reference numerals will be used to describe similar portions throughout the specification.
  • FIG. 1 is a diagram for describing a method of acquiring a depth image according to an embodiment of the disclosure.
  • An electronic device 100 may acquire a first depth image 10 by using a first image sensor 110. Specifically, the electronic device 100 may acquire the first depth image 10 based on a signal output from the first image sensor 110. Here, the first depth image 10 is an image indicating a distance from the electronic device 100 to an object, and a depth value (or distance value) of each pixel of the first depth image may refer to a distance from the electronic device 100 to the object corresponding to each pixel.
  • The electronic device 100 may acquire a confidence map 20 by using the first image sensor 110. Here, the confidence map (or the confidence image) 20 refers to an image representing reliability of depth values for each region of the first depth image 10. In this case, the confidence map 20 may be an infrared (IR) image corresponding to the first depth image 10. In addition, the electronic device 100 may determine the reliability of the depth values for each region of the first depth image 10 based on the confidence map 20.
  • Meanwhile, the electronic device 100 may acquire the confidence map 20 based on a signal output from the first image sensor 110. Specifically, the first image sensor 110 may include a plurality of sensors that are activated with a preset time difference. In this case, the electronic device 100 may acquire a plurality of image data through each of the plurality of sensors. In addition, the electronic device 100 may acquire the confidence map 20 from the plurality of acquired image data. For example, the electronic device 100 may acquire the confidence map 20 through [Math Figure 1].
    Confidence = |I2 - I4| + |I1 - I3|
  • Here, I1 to I4 denote first to fourth image data, respectively.
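Assuming [Math Figure 1] combines the two phase-pair differences additively (a common confidence measure for four-phase ToF data, where a large difference indicates a strong return signal), the computation can be sketched as:

```python
import numpy as np

def confidence_map(i1, i2, i3, i4):
    """Confidence from four phase-shifted ToF frames:

        Confidence = |I2 - I4| + |I1 - I3|

    i1..i4 are the first to fourth image data as arrays of equal shape.
    The additive combination is an assumption about the garbled formula.
    """
    return np.abs(i2 - i4) + np.abs(i1 - i3)
```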
  • Meanwhile, the first image sensor 110 may be implemented as a time of flight (ToF) sensor or a structured light sensor.
  • The electronic device 100 may acquire an RGB image 30 using a second image sensor 120. Specifically, the electronic device 100 may acquire the RGB image based on a signal output from the second image sensor 120. In this case, the RGB image 30 may correspond to each of the first depth image 10 and the confidence map 20. For example, the RGB image 30 may be an image for the same timing as the first depth image 10 and the confidence map 20.
  • The electronic device 100 may acquire the RGB image 30 corresponding to the first depth image 10 and the confidence map 20 by adjusting the activation timing of the first image sensor 110 and the second image sensor 120. In addition, the electronic device 100 may generate a grayscale image 40 based on R, G, and B values of the RGB image 30. Meanwhile, the second image sensor 120 may be implemented as image sensors such as a complementary metal-oxide-semiconductor (CMOS) and a charge-coupled device (CCD).
  • The electronic device 100 may acquire a second depth image 50 based on the confidence map 20 and the grayscale image 40. In particular, the electronic device 100 may acquire the second depth image 50 by performing stereo matching on the confidence map 20 and the grayscale image 40. Here, the stereo matching refers to a method of calculating a depth value by detecting where an arbitrary point in one image is located in the other image and obtaining the shifted amount of the detected point. The electronic device 100 may identify corresponding points in the confidence map 20 and the grayscale image 40. In this case, the electronic device 100 may identify the corresponding points by identifying a shape or an outline of the object included in the confidence map 20 and the grayscale image 40. Then, the electronic device 100 may generate the second depth image 50 based on the disparity between the corresponding points identified in each of the confidence map 20 and the grayscale image 40 and the length of the baseline (i.e., the distance between the first image sensor 110 and the second image sensor 120). Meanwhile, if the stereo matching were performed on the confidence map 20, which is an IR image, and the RGB image 30, it may be difficult to find exact corresponding points due to the difference in their pixel values. Accordingly, the electronic device 100 may perform the stereo matching based on the grayscale image 40 instead of the RGB image 30. As a result, the electronic device 100 may more accurately identify the corresponding points, and the accuracy of the depth information included in the second depth image 50 may be improved. Meanwhile, the electronic device 100 may perform pre-processing such as correcting a difference in brightness between the confidence map 20 and the grayscale image 40 before performing the stereo matching.
  • Meanwhile, the ToF sensor has higher angular resolution (that is, the ability to distinguish two objects that are separated from each other) and distance accuracy than the stereo sensor beyond a preset distance (e.g., 5 m from the ToF sensor), but may have lower angular resolution and distance accuracy than the stereo sensor within the preset distance. For example, when an intensity of reflected light is greater than a threshold value, a near-field virtual image may appear on a depth image due to a lens flare or ghost phenomenon. As a result, there is a problem in that the depth image acquired through the ToF sensor includes near-field errors. Accordingly, the electronic device 100 may acquire a third depth image 60 having improved near-field accuracy compared to the first depth image 10 by using the second depth image 50 acquired through the stereo matching.
  • The electronic device 100 may acquire the third depth image 60 based on the first depth image 10 and the second depth image 50. Specifically, the electronic device 100 may generate the third depth image 60 by composing the first depth image 10 and the second depth image 50. In this case, the electronic device 100 may determine a first composition ratio α of the first depth image 10 and a second composition ratio β of the second depth image 50 based on at least one of the depth value of the first depth image 10 and the pixel value of the confidence map 20. Here, the first composition ratio α and the second composition ratio β may each have a value between 0 and 1, and the sum of the first composition ratio α and the second composition ratio β may be 1. For example, when the first composition ratio α is 0.6 (or 60%), the second composition ratio β may be 0.4 (or 40%). Hereinafter, a method of determining the first composition ratio α and the second composition ratio β will be described in more detail.
  • FIG. 2 is a graph illustrating a first composition ratio and a second composition ratio according to a depth value of a first depth image according to an embodiment of the disclosure. Referring to FIG. 2, the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on a depth value D of the first depth image 10.
  • The electronic device 100 may determine the first composition ratio α to be 0 and the second composition ratio β to be 1 for a first region R1 in which the depth value D is smaller than a first threshold distance Dth1 (e.g., 20 cm) among the plurality of regions of the first depth image 10. That is, the electronic device 100 may acquire the depth value of the second depth image 50 as the depth value of the third depth image 60 for a region in which the depth value D is smaller than the first threshold distance Dth1 among the plurality of regions. Accordingly, the electronic device 100 may acquire the third depth image 60 with improved near-field accuracy compared to the first depth image 10.
  • The electronic device 100 may determine the first composition ratio α to be 1 and the second composition ratio β to be 0 for a second region R2 in which the depth value D is greater than a second threshold distance Dth2 (e.g., 3 m) among the plurality of regions of the first depth image 10. That is, the electronic device 100 may acquire the depth value of the first depth image 10 as the depth value of the third depth image 60 for a region in which the depth value D is greater than the second threshold distance Dth2 among the plurality of regions.
  • For a third region R3 in which the depth value D is greater than the first threshold distance Dth1 and smaller than the second threshold distance Dth2 among the plurality of regions of the first depth image 10, the electronic device 100 may determine the first composition ratio α and the second composition ratio β such that, as the depth value D increases, the first composition ratio α increases and the second composition ratio β decreases. Since the first image sensor 110 has higher far-field angular resolution than the second image sensor 120, increasing the first composition ratio α as the depth value D increases may improve the accuracy of the depth value of the third depth image 60.
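The piecewise rule of FIG. 2 can be summarized as a small function. The linear ramp in the third region matches the graphs as illustrated; the threshold defaults below simply reuse the example values from the text (20 cm and 3 m), and the function name is an assumption.

```python
def composition_ratios(depth_m, d_th1=0.2, d_th2=3.0):
    """Return (alpha, beta) for a pixel whose first-depth-image value is depth_m.

    alpha weights the first (ToF) depth image, beta the second (stereo)
    depth image; alpha + beta is always 1.
    """
    if depth_m < d_th1:
        alpha = 0.0                              # first region R1: stereo only
    elif depth_m > d_th2:
        alpha = 1.0                              # second region R2: ToF only
    else:
        alpha = (depth_m - d_th1) / (d_th2 - d_th1)  # third region R3: linear ramp
    return alpha, 1.0 - alpha
```

As the text notes, the ramp need not be linear; any monotonically increasing alpha over R3 would preserve the described behavior.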
  • Meanwhile, the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on a pixel value P of the confidence map 20.
  • FIG. 3 is a graph illustrating the first composition ratio and the second composition ratio according to the pixel value of the confidence map according to an embodiment of the disclosure.
  • The electronic device 100 may identify a fourth region R4 in which the pixel value P is smaller than a first threshold value Pth1 among the plurality of regions of the confidence map 20. In addition, when each region of the first depth image 10 and the second depth image 50 corresponding to the fourth region R4 is composed, the electronic device 100 may determine the first composition ratio α as 0 and the second composition ratio β as 1. That is, when it is determined that the reliability of the first depth image 10 is smaller than the first threshold value Pth1, the electronic device 100 may acquire the depth value of the second depth image 50 as the depth value of the third depth image 60. Accordingly, the electronic device 100 may acquire the third depth image 60 with improved distance accuracy compared to the first depth image 10.
  • The electronic device 100 may identify a fifth region R5 in which the pixel value P is greater than a second threshold value Pth2 among the plurality of regions of the confidence map 20. In addition, when each region of the first depth image 10 and the second depth image 50 corresponding to the fifth region R5 is composed, the electronic device 100 may determine the first composition ratio α as 1 and the second composition ratio β as 0. That is, when it is determined that the reliability of the first depth image 10 is greater than the second threshold value Pth2, the electronic device 100 may acquire the depth value of the first depth image 10 as the depth value of the third depth image 60.
  • The electronic device 100 may identify a sixth region R6 in which the pixel value P is greater than the first threshold value Pth1 and smaller than the second threshold value Pth2 among the plurality of regions of the confidence map 20. In addition, when each region of the first depth image 10 and the second depth image 50 corresponding to the sixth region R6 is composed, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that, as the pixel value P increases, the first composition ratio α increases and the second composition ratio β decreases. That is, the electronic device 100 may increase the first composition ratio α as the reliability of the first depth image 10 increases. Accordingly, the accuracy of the depth value of the third depth image 60 may be improved.
  • Meanwhile, the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on the depth value D of the first depth image 10 and the pixel value P of the confidence map 20. In particular, the electronic device 100 may consider the pixel value P of the confidence map 20 when determining the first composition ratio α and the second composition ratio β for the third region R3. For example, when the pixel value of the confidence map 20 corresponding to the third region R3 is greater than a preset value, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is greater than the second composition ratio β. On the other hand, when the pixel value of the confidence map 20 corresponding to the third region R3 is smaller than a preset value, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is smaller than the second composition ratio β. The electronic device 100 may increase the first composition ratio α as the pixel value of the confidence map 20 corresponding to the third region R3 increases. That is, the electronic device 100 may increase the first composition ratio α for the third region R3 as the reliability of the first depth image 10 increases.
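Combining the depth-based rule of FIG. 2 with the confidence-based rule of FIG. 3 might look like the sketch below. This is only one plausible way to merge the two criteria (the patent does not fix the exact combination), and every threshold value and name here is an assumption.

```python
def combined_alpha(depth_m, confidence, d_th1=0.2, d_th2=3.0,
                   p_th1=50.0, p_th2=200.0):
    """Composition ratio alpha for the first (ToF) depth image, using both
    the depth value D and the confidence-map pixel value P."""
    if depth_m < d_th1 or confidence < p_th1:
        return 0.0   # near range (R1) or low reliability (R4): use stereo depth
    if depth_m > d_th2 or confidence > p_th2:
        return 1.0   # far range (R2) or high reliability (R5): use ToF depth
    # Inside the third region R3, scale the depth ramp by normalized confidence,
    # so alpha grows with both the depth value and the reliability.
    depth_w = (depth_m - d_th1) / (d_th2 - d_th1)
    conf_w = (confidence - p_th1) / (p_th2 - p_th1)
    return depth_w * conf_w
```

Note that when the two criteria conflict (e.g., far range but low confidence), some priority order must be chosen; the sketch lets the low-confidence branch win, which is one design choice among several.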
  • The electronic device 100 may acquire the third depth image 60 based on the first composition ratio α and the second composition ratio β thus obtained. The electronic device 100 may acquire the distance information on the object based on the third depth image 60. Alternatively, the electronic device 100 may generate a driving path of the electronic device 100 based on the third depth image 60. Meanwhile, FIGS. 2 and 3 illustrate that the first composition ratio α and the second composition ratio β vary linearly, but this is only an example, and the first composition ratio α and the second composition ratio β may vary non-linearly.
  • FIG. 4 is a diagram for describing a method of acquiring a third depth image according to an embodiment of the disclosure. Referring to FIG. 4, the first depth image 10 may include a 1-1th region R1-1, a 2-1th region R2-1, and a 3-1th region R3-1. The 1-1th region R1-1 may correspond to the first region R1 of FIG. 2, and the 2-1th region R2-1 may correspond to the second region R2 of FIG. 2. That is, a depth value D11 of the 1-1th region R1-1 may be smaller than the first threshold distance Dth1, and a depth value D12 of the 2-1th region R2-1 may be greater than the second threshold distance Dth2. Also, a 3-1th region R3-1 may correspond to the third region R3 of FIG. 2. That is, a depth value D13 of the 3-1th region R3-1 may be greater than the first threshold distance Dth1 and smaller than the second threshold distance Dth2.
  • When the first depth image 10 and the second depth image 50 are composed for the 1-1th region R1-1, the electronic device 100 may determine the first composition ratio α as 0 and the second composition ratio β as 1. Accordingly, the electronic device 100 may acquire a depth value D21 of the second depth image 50 as a depth value D31 of the third depth image 60.
  • When the first depth image 10 and the second depth image 50 are composed for the 2-1th region R2-1, the electronic device 100 may determine the first composition ratio α as 1 and the second composition ratio β as 0. Accordingly, the electronic device 100 may acquire the depth value D12 of the first depth image 10 as a depth value D32 of the third depth image 60.
  • When the first depth image 10 and the second depth image 50 are composed for the 3-1th region R3-1, the electronic device 100 may determine the first composition ratio α and the second composition ratio β based on the confidence map 20. For example, if a pixel value P3 of the confidence map 20 is smaller than a preset value, when the first depth image 10 and the second depth image 50 are composed for the 3-1th region R3-1, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is smaller than the second composition ratio β. As another example, if the pixel value P3 of the confidence map 20 is greater than a preset value, when the first depth image 10 and the second depth image 50 are composed for the 3-1th region R3-1, the electronic device 100 may determine the first composition ratio α and the second composition ratio β so that the first composition ratio α is greater than the second composition ratio β. As described above, the electronic device 100 may acquire a depth value D33 of the third depth image 60 by applying the first composition ratio α to the depth value D13 of the first depth image 10, and the second composition ratio β to a depth value D23 of the second depth image 50.
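The per-region composition described above reduces to a weighted sum D3 = α·D1 + β·D2 with β = 1 − α. A minimal sketch, with hypothetical function and array names:

```python
import numpy as np

def compose_depth(first_depth, second_depth, alpha):
    """Blend the first (ToF) and second (stereo) depth images into the
    third depth image using a per-pixel composition ratio alpha."""
    alpha = np.asarray(alpha, dtype=np.float64)
    return alpha * first_depth + (1.0 - alpha) * second_depth

d1 = np.array([[2.0, 4.0]])    # first depth image (ToF)
d2 = np.array([[1.0, 3.0]])    # second depth image (stereo matching)
a = np.array([[0.5, 1.0]])     # alpha per region (R3-like, then R2-like)
d3 = compose_depth(d1, d2, a)  # → [[1.5, 4.0]]
```

With alpha = 1 the ToF value passes through unchanged (the second region R2), and with alpha = 0 the stereo value does (the first region R1), exactly as in the regions of FIG. 4.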
  • Meanwhile, the electronic device 100 may acquire the third depth image 60 by applying a predetermined composition ratio to the same object included in the first depth image 10 and the second depth image 50.
  • FIG. 5 is a diagram illustrating an RGB image according to an embodiment of the disclosure. Referring to FIG. 5, the RGB image 30 may include a first object ob1 and a second object ob2.
  • The electronic device 100 may analyze the RGB image 30 to identify the first object ob1. In this case, the electronic device 100 may identify the first object ob1 using an object recognition algorithm. Alternatively, the electronic device 100 may identify the first object ob1 by inputting the RGB image 30 to a neural network model trained to identify an object included in the image.
  • When the first depth image 10 and the second depth image 50 are composed for the region corresponding to the first object ob1, the electronic device 100 may apply a predetermined composition ratio. For example, the electronic device 100 may apply a 1-1th composition ratio α1 and a 2-1th composition ratio β1, which are fixed values, to the region corresponding to the first object ob1. Accordingly, the electronic device 100 may acquire the third depth image 60 in which the distance error for the first object ob1 is improved.
  • FIG. 6 is a flowchart illustrating a method for controlling an electronic device according to an embodiment of the disclosure.
  • The electronic device 100 may acquire the first depth image and the confidence map corresponding to the first depth image using the first image sensor (S610), and acquire the RGB image corresponding to the first depth image using the second image sensor (S620). As a detailed description thereof has been described with reference to FIG. 1, a redundant description thereof will be omitted.
  • The electronic device 100 may acquire the second depth image based on the confidence map and the RGB image (S630). The electronic device 100 may acquire a grayscale image for the RGB image, and acquire the second depth image by performing the stereo matching on the confidence map and the grayscale image. In this case, the electronic device 100 may acquire the second depth image by performing the stereo matching on the confidence map and the grayscale image based on the shape of the object included in the confidence map and the grayscale image.
  • The electronic device 100 may obtain the third depth image by composing the first depth image and the second depth image based on the pixel value of the confidence map (S640). The electronic device 100 may determine the composition ratio of the first depth image and the second depth image based on the pixel value of the confidence map, and compose the first depth image and the second depth image based on the determined composition ratio to acquire the third depth image. In this case, the electronic device 100 may determine the first composition ratio and the second composition ratio so that the first composition ratio of the first depth image is greater than the second composition ratio of the second depth image for the region in which the pixel value is greater than a preset value among the plurality of regions of the confidence map. The electronic device 100 may determine the first composition ratio and the second composition ratio so that the first composition ratio is smaller than the second composition ratio for the region in which the pixel value is smaller than a preset value among the plurality of regions of the confidence map.
  • FIG. 7 is a perspective view illustrating an electronic device according to an embodiment of the disclosure.
  • The electronic device 100 may include the first image sensor 110 and the second image sensor 120. In this case, the distance between the first image sensor 110 and the second image sensor 120 may be defined as a length L of a baseline.
  • The conventional stereo sensor using two cameras has a limitation in that the angular resolution for a long distance is lowered because the length of the baseline is limited. In addition, since the length of the baseline needs to be increased in order to raise the angular resolution for a long distance, the conventional stereo sensor is difficult to miniaturize.
  • On the other hand, as described above, the electronic device 100 according to the disclosure uses the first image sensor 110, which has a higher angular resolution for a long distance than the stereo sensor, to acquire the far-field information even when the length L of the baseline is not increased. Accordingly, the electronic device 100 has the technical effect of being easier to miniaturize than the conventional stereo sensor.
  • FIG. 8A is a block diagram illustrating a configuration of the electronic device according to the embodiment of the disclosure. Referring to FIG. 8A, the electronic device 100 may include a light emitting unit 105, a first image sensor 110, the second image sensor 120, a memory 130, a communication interface 140, a driving unit 150, and a processor 160. In particular, the electronic device 100 according to the embodiment of the disclosure may be implemented as a movable robot.
  • The light emitting unit 105 may emit light toward an object. In this case, the light (hereinafter, emitted light) emitted from the light emitting unit 105 may have a waveform in the form of a sinusoidal wave. However, this is only an example, and the emitted light may have a waveform in the form of a square wave. Also, the light emitting unit 105 may include various types of laser devices. For example, the light emitting unit 105 may include a vertical cavity surface emitting laser (VCSEL) or a laser diode (LD). Meanwhile, the light emitting unit 105 may include a plurality of laser devices. In this case, a plurality of laser devices may be arranged in an array form. Also, the light emitting unit 105 may emit light of various frequency bands. For example, the light emitting unit 105 may emit a laser beam having a frequency of 100 MHz.
  • The first image sensor 110 is configured to acquire the depth image. The first image sensor 110 may acquire reflected light reflected from the object after being emitted from the light emitting unit 105. The processor 160 may acquire the depth image based on the reflected light acquired by the first image sensor 110. For example, the processor 160 may acquire the depth image based on a difference (i.e., the flight time of light) between the emission timing of the light emitted from the light emitting unit 105 and the timing at which the first image sensor 110 receives the reflected light. Alternatively, the processor 160 may acquire the depth image based on a difference between a phase of the light emitted from the light emitting unit 105 and a phase of the reflected light acquired by the first image sensor 110. Meanwhile, the first image sensor 110 may be implemented as a time of flight (ToF) sensor or a structured light sensor.
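The phase-difference variant above has a closed form for a continuous-wave ToF sensor: d = c·Δφ / (4π·f), where Δφ is the measured phase shift and f the modulation frequency. A brief sketch, with the 100 MHz figure borrowed from the light emitting unit example; the function name is an assumption.

```python
import math

def tof_depth_from_phase(phase_shift_rad, mod_freq_hz=100e6,
                         c=299_792_458.0):
    """Depth from the phase difference between emitted and reflected light
    for a continuous-wave ToF sensor: d = c * dphi / (4 * pi * f).

    At f = 100 MHz the unambiguous range is c / (2 * f), about 1.5 m;
    phase shifts wrap beyond that distance.
    """
    return c * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)
```

This inverse dependence on f is why the modulation frequency trades off range against depth resolution in ToF designs.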
  • The second image sensor 120 is configured to acquire an RGB image. For example, the second image sensor 120 may be implemented as an image sensor such as a complementary metal-oxide-semiconductor (CMOS) or a charge-coupled device (CCD).
  • The memory 130 may store an operating system (OS) for controlling a general operation of components of the electronic device 100 and commands or data related to components of the electronic device 100. To this end, the memory 130 may be implemented as a non-volatile memory (e.g., a hard disk, a solid state drive (SSD), a flash memory), a volatile memory, or the like.
  • The communication interface 140 includes at least one circuit and may communicate with various types of external devices according to various types of communication methods. The communication interface 140 may include at least one of a Wi-Fi communication module, a cellular communication module, a 3rd generation (3G) mobile communication module, a 4th generation (4G) mobile communication module, a 4th generation Long Term Evolution (LTE) communication module, and a 5th generation (5G) mobile communication module. For example, the electronic device 100 may transmit an image acquired using the second image sensor 120 to a user terminal through the communication interface 140.
  • The driving unit 150 is configured to move the electronic device 100. In particular, the driving unit 150 may include an actuator for driving the electronic device 100. Also, the driving unit 150 may include an actuator for driving a motion of another physical component (e.g., an arm, etc.) of the electronic device 100. For example, the electronic device 100 may control the driving unit 150 to move or operate based on the depth information obtained through the first image sensor 110 and the second image sensor 120.
  • The processor 160 may control the overall operation of the electronic device 100.
  • Referring to FIG. 8B, the processor 160 may include a first depth image acquisition module 161, a confidence map acquisition module 162, an RGB image acquisition module 163, a grayscale image acquisition module 164, a second depth image acquisition module 165, and a third depth image acquisition module 166. Meanwhile, each module of the processor 160 may be implemented as a software module, but may also be implemented in a form in which software and hardware are combined.
  • The first depth image acquisition module 161 may acquire the first depth image based on the signal output from the first image sensor 110. Specifically, the first image sensor 110 may include a plurality of sensors that are activated with a preset time difference. In this case, the first depth image acquisition module 161 may calculate a time of flight of light based on a plurality of image data acquired through the plurality of sensors, and acquire the first depth image based on the calculated time of flight of light.
  • The confidence map acquisition module 162 may acquire the confidence map based on the signal output from the first image sensor 110. Specifically, the first image sensor 110 may include a plurality of sensors that are activated with a preset time difference. In this case, the confidence map acquisition module 162 may acquire a plurality of image data through each of the plurality of sensors. In addition, the confidence map acquisition module 162 may acquire the confidence map 20 using the plurality of acquired image data. For example, the confidence map acquisition module 162 may acquire the confidence map 20 based on [Math Figure 1] described above.
  • The RGB image acquisition module 163 may acquire the RGB image based on the signal output from the second image sensor 120. In this case, the acquired RGB image may correspond to the first depth image and the confidence map.
  • The grayscale image acquisition module 164 may acquire the grayscale image based on the RGB image acquired by the RGB image acquisition module 163. Specifically, the grayscale image acquisition module 164 may generate the grayscale image based on the R, G, and B values of the RGB image.
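Generating a grayscale image from the R, G, and B values is commonly done with a weighted sum; the sketch below uses the ITU-R BT.601 luma weights as one conventional choice, since the patent does not fix an exact weighting. The function name is hypothetical.

```python
import numpy as np

def rgb_to_grayscale(rgb):
    """Convert an H x W x 3 RGB image to an H x W grayscale image using
    BT.601 luma weights (0.299 R + 0.587 G + 0.114 B)."""
    weights = np.array([0.299, 0.587, 0.114])
    return rgb @ weights

# White maps to full brightness; a pure-green pixel keeps only its G weight.
gray = rgb_to_grayscale(np.array([[[255.0, 255.0, 255.0]]]))
```

Because the stereo matching compares the grayscale image against the IR-derived confidence map, a single-channel representation like this keeps the pixel values of the two inputs directly comparable.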
  • The second depth image acquisition module 165 may acquire the second depth image based on the confidence map acquired by the confidence map acquisition module 162 and the grayscale image acquired by the grayscale image acquisition module 164. Specifically, the second depth image acquisition module 165 may generate the second depth image by performing the stereo matching on the confidence map and the grayscale image. The second depth image acquisition module 165 may identify corresponding points in the confidence map and the grayscale image. In this case, the second depth image acquisition module 165 may identify the corresponding points by identifying the shape or outline of the object included in the confidence map and the grayscale image. In addition, the second depth image acquisition module 165 may generate the second depth image based on the disparity between the corresponding points identified in each of the confidence map and the grayscale image and the length of the baseline.
  • As such, the second depth image acquisition module 165 may more accurately identify the corresponding points by performing the stereo matching based on the grayscale image instead of the RGB image. Accordingly, it is possible to improve the accuracy of the depth information included in the second depth image. Meanwhile, the second depth image acquisition module 165 may perform preprocessing such as correcting a difference in brightness between the confidence map and the grayscale image before performing the stereo matching.
  • The third depth image acquisition module 166 may acquire the third depth image based on the first depth image and the second depth image. In detail, the third depth image acquisition module 166 may generate the third depth image by composing the first depth image and the second depth image. In this case, the third depth image acquisition module 166 may determine the first composition ratio for the first depth image and the second composition ratio for the second depth image based on the depth value of the first depth image. For example, the third depth image acquisition module 166 may determine the first composition ratio as 0 and the second composition ratio as 1 for the first region in which the depth value is smaller than the first threshold distance among the plurality of regions of the first depth image. In addition, the third depth image acquisition module 166 may determine the first composition ratio as 1 and the second composition ratio as 0 for the second region in which the depth value is greater than the second threshold distance among the plurality of regions of the first depth image.
  • Meanwhile, the third depth image acquisition module 166 may determine the composition ratio based on the pixel value of the confidence map for the third region in which the depth value is greater than the first threshold distance and smaller than the second threshold distance among the plurality of regions of the first depth image. For example, when the pixel value of the confidence map corresponding to the third region is smaller than a preset value, the third depth image acquisition module 166 may determine the first composition ratio and the second composition ratio so that the first composition ratio is smaller than the second composition ratio. When the pixel value of the confidence map corresponding to the third region is larger than a preset value, the third depth image acquisition module 166 may determine the first composition ratio and the second composition ratio so that the first composition ratio is greater than the second composition ratio. That is, the third depth image acquisition module 166 may determine the first composition ratio and the second composition ratio so that the first composition ratio increases and the second composition ratio decreases as the pixel value of the confidence map corresponding to the third region increases.
  • Meanwhile, the third depth image acquisition module 166 may compose the first depth image and the second depth image with a predetermined composition ratio for the same object. For example, the third depth image acquisition module 166 may analyze the RGB image to identify the object included in the RGB image. In addition, the third depth image acquisition module 166 may apply a predetermined composition ratio to the first region of the first depth image and the second region of the second depth image corresponding to the identified object to compose the first depth image and the second depth image.
  • Meanwhile, the processor 160 may synchronize the first image sensor 110 and the second image sensor 120. Accordingly, the first depth image, the confidence map, and the second depth image may correspond to each other. That is, the first depth image, the confidence map, and the second depth image may be images captured at the same timing.
  • Meanwhile, the diverse embodiments described above may be implemented in a computer or an apparatus similar to the computer using software, hardware, or a combination of software and hardware. In some cases, embodiments described in the disclosure may be implemented as a processor itself. According to a software implementation, embodiments such as procedures and functions described in the specification may be implemented as separate software modules. Each of the software modules may perform one or more functions and operations described in the specification.
  • Meanwhile, computer instructions for performing processing operations according to the diverse embodiments of the disclosure described above may be stored in a non-transitory computer-readable medium. The computer instructions stored in the non-transitory computer-readable medium may cause a specific device to perform the processing operations according to the diverse embodiments described above when they are executed by a processor.
  • The non-transitory computer-readable medium is not a medium that stores data for a while, such as a register, a cache, a memory, or the like, but means a medium that semipermanently stores data and is readable by the device. Specific examples of the non-transitory computer-readable medium may include a compact disk (CD), a digital versatile disk (DVD), a hard disk, a Blu-ray disk, a USB, a memory card, a read only memory (ROM), and the like.
  • Although the embodiments of the disclosure have been illustrated and described hereinabove, the disclosure is not limited to the specific embodiments described above, but may be variously modified by those skilled in the art to which the disclosure pertains without departing from the gist of the disclosure as disclosed in the accompanying claims. These modifications should also be understood to fall within the scope and spirit of the disclosure.

Claims (15)

  1. An electronic device, comprising:
    a first image sensor;
    a second image sensor; and
    a processor,
    wherein the processor acquires a first depth image and a confidence map corresponding to the first depth image by using the first image sensor,
    acquires an RGB image corresponding to the first depth image by using the second image sensor,
    acquires a second depth image based on the confidence map and the RGB image, and
    acquires a third depth image by composing the first depth image and the second depth image based on a pixel value of the confidence map.
  2. The electronic device as claimed in claim 1, wherein the processor acquires a grayscale image for the RGB image, and
    the second depth image is acquired by performing stereo matching on the confidence map and the grayscale image.
  3. The electronic device as claimed in claim 2, wherein the processor acquires the second depth image by performing stereo matching on the confidence map and the grayscale image based on a shape of an object included in the confidence map and the grayscale image.
  4. The electronic device as claimed in claim 1, wherein the processor determines a composition ratio of the first depth image and the second depth image based on the pixel value of the confidence map, and
    acquires a third depth image by composing the first depth image and the second depth image based on the determined composition ratio.
  5. The electronic device as claimed in claim 4, wherein the processor determines the first composition ratio and the second composition ratio so that the first composition ratio of the first depth image is greater than the second composition ratio of the second depth image for a region in which a pixel value is greater than a preset value among a plurality of regions of the confidence map, and
    the first composition ratio and the second composition ratio are determined so that the first composition ratio is smaller than the second composition ratio for the region in which the pixel value is smaller than the preset value among a plurality of regions of the confidence map.
  6. The electronic device as claimed in claim 1, wherein the processor acquires a depth value of the second depth image as a depth value of the third depth image for a first region in which a depth value is smaller than a first threshold distance among a plurality of regions of the first depth image, and
    acquires a depth value of the first depth image as a depth value of the third depth image for a second region in which a depth value is greater than a second threshold distance among a plurality of regions of the first depth image.
  7. The electronic device as claimed in claim 1, wherein the processor identifies an object included in the RGB image,
    identifies each region of the first depth image and the second depth image corresponding to the identified object, and
    acquires the third depth image by composing the first depth image and the second depth image at a predetermined composition ratio for each of the regions.
  8. The electronic device as claimed in claim 1, wherein the first image sensor is a time of flight (ToF) sensor, and
    the second image sensor is an RGB sensor.
  9. A method for controlling an electronic device, comprising:
    acquiring a first depth image and a confidence map corresponding to the first depth image by using a first image sensor;
    acquiring an RGB image corresponding to the first depth image by using a second image sensor;
    acquiring a second depth image based on the confidence map and the RGB image; and
    acquiring a third depth image by composing the first depth image and the second depth image based on a pixel value of the confidence map.
  10. The method as claimed in claim 9, wherein, in the acquiring of the second depth image, a grayscale image for the RGB image is acquired, and
    the second depth image is acquired by stereo matching the confidence map and the grayscale image.
  11. The method as claimed in claim 10, wherein, in the acquiring of the second depth image, the second depth image is acquired by stereo matching the confidence map and the grayscale image based on a shape of an object included in the confidence map and in the grayscale image.
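Claims 10 and 11 treat the confidence map and the grayscale image as a stereo pair and match them on object shape: both images expose the same silhouettes even though their intensities differ. A toy sum-of-absolute-differences block matcher illustrates the principle; a real device would use a hardened matcher, and the patch size and disparity range below are arbitrary assumptions:

```python
import numpy as np

def sad_disparity(left, right, patch=3, max_disp=16):
    """Tiny SAD stereo matcher (illustrative only).

    For each pixel in `left`, search up to `max_disp` pixels leftward in
    `right` for the patch with the lowest sum of absolute differences.
    Depth is then proportional to baseline * focal_length / disparity.
    """
    h, w = left.shape
    r = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                cost = np.abs(
                    left[y-r:y+r+1, x-r:x+r+1].astype(np.float32)
                    - right[y-r:y+r+1, x-d-r:x-d+r+1].astype(np.float32)
                ).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp
```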
  12. The method as claimed in claim 9, wherein, in the acquiring of the third depth image, a composition ratio of the first depth image and the second depth image is determined based on the pixel value of the confidence map, and
    the third depth image is acquired by composing the first depth image and the second depth image based on the determined composition ratio.
  13. The method as claimed in claim 12, wherein, in the determining of the composition ratio, a first composition ratio of the first depth image and a second composition ratio of the second depth image are determined so that the first composition ratio is greater than the second composition ratio for a region in which a pixel value is greater than a preset value among a plurality of regions of the confidence map, and
    the first composition ratio is smaller than the second composition ratio for a region in which the pixel value is smaller than the preset value among the plurality of regions of the confidence map.
  14. The method as claimed in claim 9, wherein, in the acquiring of the third depth image, a depth value of the second depth image is acquired as a depth value of the third depth image for a first region in which a depth value is smaller than a first threshold distance among a plurality of regions of the first depth image, and
    a depth value of the first depth image is acquired as a depth value of the third depth image for a second region in which a depth value is greater than a second threshold distance among a plurality of regions of the first depth image.
  15. The method as claimed in claim 9, wherein the acquiring of the third depth image includes,
    identifying an object included in the RGB image;
    identifying each region of the first depth image and the second depth image corresponding to the identified object; and
    acquiring the third depth image by composing the first depth image and the second depth image at a predetermined composition ratio for each of the identified regions.
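Taken together, the method of claims 9-15 can be sketched as a short driver. Every name here is an assumption; the grayscale conversion uses standard luma weights, the stereo matcher is injected as a callable, and the confidence map is reused directly as a linear blend weight:

```python
import numpy as np

def acquire_third_depth(depth_tof, confidence, rgb, stereo_match):
    """End-to-end sketch of the claimed method (names assumed).

    1. Convert the RGB image to grayscale (claim 10).
    2. Stereo-match the confidence map against it for a second depth
       image; `stereo_match(confidence, gray)` is any matcher.
    3. Compose the two depth images, with the confidence pixel value
       setting the per-pixel composition ratio (claims 9 and 12).
    """
    gray = rgb[..., :3] @ np.array([0.299, 0.587, 0.114])  # luma grayscale
    depth_stereo = stereo_match(confidence, gray)          # second depth image
    w = np.clip(confidence, 0.0, 1.0)                      # composition ratio
    return w * depth_tof + (1.0 - w) * depth_stereo        # third depth image
```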
EP21850284.7A 2020-07-29 2021-07-02 Electronic device and method for controlling same Pending EP4148671A4 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020200094153A KR20220014495A (en) 2020-07-29 2020-07-29 Electronic apparatus and method for controlling thereof
PCT/KR2021/008433 WO2022025458A1 (en) 2020-07-29 2021-07-02 Electronic device and method for controlling same

Publications (2)

Publication Number Publication Date
EP4148671A1 true EP4148671A1 (en) 2023-03-15
EP4148671A4 EP4148671A4 (en) 2024-01-03

Family

ID=80035582

Family Applications (1)

Application Number Title Priority Date Filing Date
EP21850284.7A Pending EP4148671A4 (en) 2020-07-29 2021-07-02 Electronic device and method for controlling same

Country Status (5)

Country Link
US (1) US20230177709A1 (en)
EP (1) EP4148671A4 (en)
KR (1) KR20220014495A (en)
CN (1) CN116097306A (en)
WO (1) WO2022025458A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115457099B (en) * 2022-09-09 2023-05-09 梅卡曼德(北京)机器人科技有限公司 Depth complement method, device, equipment, medium and product

Family Cites Families (5)

Publication number Priority date Publication date Assignee Title
US20120056982A1 (en) * 2010-09-08 2012-03-08 Microsoft Corporation Depth camera based on structured light and stereo vision
KR101706093B1 (en) * 2010-11-30 2017-02-14 삼성전자주식회사 System for extracting 3-dimensional coordinate and method thereof
KR101272574B1 (en) * 2011-11-18 2013-06-10 재단법인대구경북과학기술원 Apparatus and Method for Estimating 3D Image Based Structured Light Pattern
KR101714224B1 (en) * 2015-09-21 2017-03-08 현대자동차주식회사 3 dimension image reconstruction apparatus and method based on sensor fusion
CN110852134A (en) * 2018-07-27 2020-02-28 北京市商汤科技开发有限公司 Living body detection method, living body detection device, living body detection system, electronic device, and storage medium

Non-Patent Citations (3)

Title
LASANG PONGSAK ET AL: "Optimal depth recovery using image guided TGV with depth confidence for high-quality view synthesis", JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, ACADEMIC PRESS, INC, US, vol. 39, 12 May 2016 (2016-05-12), pages 24 - 39, XP029624008, ISSN: 1047-3203, DOI: 10.1016/J.JVCIR.2016.05.006 *
See also references of WO2022025458A1 *
WALAS KRZYSZTOF ET AL: "Depth data fusion for simultaneous localization and mapping - RGB-DD SLAM", 2016 IEEE INTERNATIONAL CONFERENCE ON MULTISENSOR FUSION AND INTEGRATION FOR INTELLIGENT SYSTEMS (MFI), IEEE, 19 September 2016 (2016-09-19), pages 9 - 14, XP033065780, DOI: 10.1109/MFI.2016.7849459 *

Also Published As

Publication number Publication date
US20230177709A1 (en) 2023-06-08
WO2022025458A1 (en) 2022-02-03
KR20220014495A (en) 2022-02-07
CN116097306A (en) 2023-05-09
EP4148671A4 (en) 2024-01-03

Similar Documents

Publication Publication Date Title
JP2011123071A (en) Image capturing device, method for searching occlusion area, and program
WO2021016854A1 (en) Calibration method and device, movable platform, and storage medium
CN111060101A (en) Vision-assisted distance SLAM method and device and robot
US10104359B2 (en) Disparity value deriving device, movable apparatus, robot, disparity value producing method, and computer program
US20120121126A1 (en) Method and apparatus for estimating face position in 3 dimensions
US10713810B2 (en) Information processing apparatus, method of controlling information processing apparatus, and storage medium
US20120044363A1 (en) Interaction control system, method for detecting motion of object, host apparatus and control method thereof
US11132804B2 (en) Hybrid depth estimation system
US20210004978A1 (en) Method for acquiring depth information of target object and movable platform
US20230177709A1 (en) Electronic device and method for controlling same
US11921216B2 (en) Electronic apparatus and method for controlling thereof
JP7206855B2 (en) Three-dimensional position detection device, three-dimensional position detection system, and three-dimensional position detection method
KR20200076628A (en) Location measuring method of mobile device, location measuring device and electronic device
CN111656404A (en) Image processing method and system and movable platform
JP5803534B2 (en) Optical communication apparatus and program
JP7347398B2 (en) object detection device
US20220018658A1 (en) Measuring system, measuring method, and measuring program
KR20210074153A (en) Electronic apparatus and method for controlling thereof
CN113189601B (en) Hybrid depth estimation system
JP7220835B1 (en) Object detection device and object detection method
US20230100249A1 (en) Information processing device, control method, and non-transitory computer-readable media
US20230229171A1 (en) Robot and control method therefor
CN111373222A (en) Light projection system
US20220291009A1 (en) Information processing apparatus, information processing method, and storage medium
KR102660089B1 (en) Method and apparatus for estimating depth of object, and mobile robot using the same

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20221208

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)
A4 Supplementary search report drawn up and despatched

Effective date: 20231204

RIC1 Information provided on ipc code assigned before grant

Ipc: G06T 7/521 20170101ALI20231128BHEP

Ipc: G06T 7/11 20170101ALI20231128BHEP

Ipc: G06T 5/50 20060101ALI20231128BHEP

Ipc: G06T 3/40 20060101ALI20231128BHEP

Ipc: G06T 7/90 20170101ALI20231128BHEP

Ipc: G06T 7/593 20170101AFI20231128BHEP

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20240415